Ken's Sandbox

Asset Creation Evolved…

Technical Reel 2018

Technical Demo Reel from 2018

Work up to the fall of 2017.

Reel 2015

Demo Reel from 2015

Work up to the fall of 2015.

Tool : Cabin Gen v0.1

This tool is designed to streamline the task of building and populating structures in a game. It was built over a few days in free time and has had maybe one major iteration so far. Based on rules, room types are created and appropriate furniture for each room is placed. First, a random floorplan is created based on the square footage the user requests. Walls and roofing are then placed around and over the floorplan. Rooms are assigned a type based on size, a hierarchy, and the presence of other rooms.

Appropriate room furnishings are only present in a valid room type, i.e. no toilets in the main living area, and no beds in the washroom. Furniture generally lives along walls, and placement areas become invalid when doorways, low windows, or other objects claim the space. Assuming there are multiple valid locations for a piece of furniture, the user can manually cycle through the options to find a more pleasing location. One of the interesting parts of the tool is the probability matrix that allows the user to control the mixture of assets; in this case the mixture of plain walls and two kinds of window treatments.
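At its core, the probability matrix is just a weighted random pick per wall segment. Something like this minimal Python sketch captures the idea (the variant names and weights are made up for illustration, not the tool's actual values):

```python
import random

# Hypothetical mixture weights for the wall variants: a plain wall
# and two kinds of window treatments. Tuned by the user.
WALL_VARIANTS = {
    "wall_plain":        0.5,
    "wall_window_small": 0.3,
    "wall_window_tall":  0.2,
}

def pick_wall_variant(rng=random):
    """Pick one wall module according to the user-set mixture."""
    names = list(WALL_VARIANTS)
    weights = list(WALL_VARIANTS.values())
    return rng.choices(names, weights=weights, k=1)[0]

# Choose a variant for each wall segment of the generated floorplan.
segments = [pick_wall_variant() for _ in range(20)]
```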

Lastly, the architecture can be stylized for art direction reasons, so walls and corners are not confined to perfect 90-degree angles, roofs can dip, and so on, all while remaining in a fully procedural, non-destructive system. This is still a work in progress; there are some minor roofing scenarios that could generate better results, as well as a missing connection piece for roof corners.


Tool : Tree Planter v0.1

This is a tool developed to lay out trees efficiently and at scale, while still being very artistically directable. The idea here is to place trees within a non-destructive closed curve. A maximum tree count is specified by the user, but if the curve's area is not large enough, or there is not enough valid terrain under the curve, a tree will not be placed. The trees, rocks, and grass all have their own rules as to what a valid slope is for growth to occur. In the case of trees, the user can also control how much of the terrain's normal is inherited, making the tree anywhere up to perpendicular with the uneven terrain. This assumes the slope of the terrain is valid for tree placement in the first place, which is another controllable setting.
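To make those growth rules concrete, here is a minimal Python sketch of the two per-type controls just described: a slope-validity test and a blend between world up and the terrain normal. The threshold and blend defaults are illustrative, not the tool's actual settings:

```python
import numpy as np

def valid_slope(terrain_normal, max_slope_deg=30.0):
    """A growth rule: the ground is plantable if its normal is within
    max_slope_deg of world up (the threshold is an illustrative default)."""
    up = np.array([0.0, 1.0, 0.0])
    n = terrain_normal / np.linalg.norm(terrain_normal)
    angle = np.degrees(np.arccos(np.clip(np.dot(n, up), -1.0, 1.0)))
    return angle <= max_slope_deg

def tree_up_vector(terrain_normal, inherit=0.5):
    """Blend world up toward the terrain normal: inherit=0 grows the
    tree straight up, inherit=1 makes it perpendicular to the ground."""
    up = np.array([0.0, 1.0, 0.0])
    n = terrain_normal / np.linalg.norm(terrain_normal)
    v = (1.0 - inherit) * up + inherit * n
    return v / np.linalg.norm(v)
```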

Multiple tree types, rocks, and grasses are fed into a probability matrix that allows the user to intimately control the composition of each growth type. This could possibly be used for changing seasons over time, or for performance gains by increasing the occurrence of cheaper trees. Trees are treated as bundles, so they have their own sort of biome of rocks and grass. They also claim their own space, so you will not see grass growing through rocks, or trees growing on rocks. Further to that, obstruction objects can be designated, which the trees, rocks, and grass will grow around and accommodate.
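The space-claiming behaviour can be approximated with simple dart throwing, where each accepted tree, rock, or grass clump claims a disc that later candidates must stay out of. A rough sketch, assuming the candidate points have already passed the slope rules (the per-type radii are made up):

```python
import random
import numpy as np

def scatter_with_claims(candidates, radii, max_count, rng=random):
    """Dart throwing with per-type claim radii. candidates is a list of
    (position, type) pairs; an item is rejected if its claim disc would
    overlap one already placed, so grass never grows through rocks and
    trees never grow on rocks."""
    placed = []
    for pos, kind in rng.sample(candidates, len(candidates)):
        pos = np.asarray(pos, dtype=float)
        if all(np.linalg.norm(pos - p) > radii[kind] + radii[k]
               for p, k in placed):
            placed.append((pos, kind))
            if len(placed) >= max_count:
                break
    return placed
```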

This tool is not meant to populate an entire level with one large curve; it is far more powerful when used to create many smaller tree clusters, where the location of the trees, the shape of the cluster, the composition of the trees, and the coverage can all be directed.

A cheap and responsive, yet visually rich preview mode allows an artist to work at very interactive speeds.


Tool : Skull Fitter v0.1, Houdini + OpenCV

I have been wanting to play with OpenCV for a while. While I have used it before, I had only done very basic things like augmenting images (basically things you could already do with Photoshop) and grabbing depth maps or segmenting images with my old Xbox 360 Kinect, not the interesting things like facial recognition, tracking objects, or extracting features and keypoints.

A pet project of mine for a while has been the idea of a 3D DCC having some sense of the data it is working on. It seems like the program knowing that you are working with a car, a quadruped, or in this case a human head would allow the software to make smarter, context-valid automations. This is a very early iteration of the tool. First, using OpenCV, I identify the face, eyes, nose, and mouth of the 3D model. This then allows more landmarks to be placed and validated on the mesh. A skull mesh is then placed, aligned, and scaled to fit within the volume of the head. Various anatomical skin depths are then taken into account over the head so that the skull is a reasonable distance from the surface at all locations.
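The 2D detection stage can be set up with OpenCV's stock Haar cascades, along these lines (the render path is illustrative, and OpenCV only bundles face and eye cascades out of the box; nose and mouth cascades are available separately):

```python
import cv2

# Detect the face and eyes in a frontal render of the head mesh.
img = cv2.imread("head_front_render.png")  # illustrative path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    # Only search for eyes inside the detected face region.
    roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi)
    # These 2D rectangles can then be back-projected onto the mesh
    # to seed and validate the 3D landmarks.
```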

The registration could still be better and will be refined further. The next phase is building veins and placing facial muscles and tendons between the skull and the skin surface, which again adapt to the scan the tool is fed.





Deep Learning Part 2

After a major release of the game I work on, some downtime was awarded, which meant more time to get back into the sandbox. Some housecleaning: moving from Keras 1.0 to Keras 2.x, and trying TensorFlow as my backend instead of Theano, which I had used exclusively before, along with some other updated Python libraries. Here are some new images with lots of improvements over the tests from 12+ months ago. Convergence time is much quicker: instead of potentially a few dozen iterations, usually under 20 is more than enough, with quickly diminishing returns by about the tenth iteration.


Deep_learning_Template_girl

Here is another batch, with the old pink image reworked for comparison. Not much difference, but a far quicker result. In general, more style input images seem to be successful in producing satisfactory results, basically meaning a higher success rate.

Deep_learning_Template_Ken

Playing in the Deep Learning Sandbox

This post is already out of date as I type this, but I guess having a child does that. Last spring and into the summer of 2016, I was bitten by the machine learning bug; enough to stop just reading books on it and begin trying to code things. Armed with Keras and a small grocery list of Python modules, I began doing a lot of experimentation with object recognition using CNNs and object detection using Haar cascades. I worked through lots of basic data science problems, but found I really enjoyed trying to improve my CNN model on the popular CIFAR-10 dataset. CIFAR-10 is an established computer-vision dataset used for object recognition. It is a subset of the 80 Million Tiny Images dataset and consists of 60,000 32x32 colour images, each containing one of 10 object classes, with 6,000 images per class.
I believe the Kaggle world record is around 97% accuracy at identifying which class an image belongs to over a 10,000-image test set. Humans score about 94% accuracy at the same task. I was quite happy with the 93.20% accuracy I eventually got by tuning my CNN and playing with different configurations and layer depths. It was a very interesting process to experiment with tuning models that learn very quickly but top out at, say, 80% accuracy, versus models that learn far slower and take much longer to train but can break 90% accuracy.
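For reference, a typical Keras 2-style CIFAR-10 baseline looks like the sketch below. This is not the exact model I ended up at 93.20% with, just a representative starting point for the kind of tuning described above:

```python
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.utils import to_categorical

# Load and normalize the 60,000-image dataset (50k train / 10k test).
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", padding="same",
           input_shape=(32, 32, 3)),
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D(),
    Dropout(0.25),
    Conv2D(64, (3, 3), activation="relu", padding="same"),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(),
    Dropout(0.25),
    Flatten(),
    Dense(512, activation="relu"),
    Dropout(0.5),
    Dense(10, activation="softmax"),  # one output per CIFAR-10 class
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=50,
          validation_data=(x_test, y_test))
```

Deeper variants of this, with more aggressive regularization, are the ones that train slowly but can break 90%.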

cifar-10 trans

My real interest in machine learning is how it can be applied to visual problems, and to tools that learn from the user, perhaps one day anticipating what the user wants to do. When the paper "A Neural Algorithm of Artistic Style" came out, I was very interested in the possibilities of quantifying an artistic style. Moving my training from the CPU to the GPU with Theano, cuDNN 5.1, and a GTX 1060, and using the pre-trained VGG16 network, I hacked together my own version of the very popular "Prisma" app running on my old Dell. The neatest thing about what I had was that the code could take in any style image I gave it and attempt to transfer it; with the Prisma app you are locked into a small pre-trained set of styles of their choosing. I need to get back to this area, as I have seen improvements on the web that I would love to incorporate for better, more consistent results than what I was getting last summer.

I had two initial uses for this technology. The first was to learn our concept artist's style and be able to transfer it to images from the web, generating my own concept art without the wait. He is a busy guy. The second use was for look dev. My idea here was to take an art style we liked and a "white box" version of a level, and envision in a few seconds what a game world might look like in that style, instead of having artists potentially spend days "arting-up" a level. This seemed like a very cheap litmus test to see if a style was worth further investigation.
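The heart of quantifying a style in that paper is the Gram matrix of a VGG layer's feature maps. The standard formulation, in Keras backend code (this follows the well-known Keras neural-style example rather than my exact implementation):

```python
from keras import backend as K

def gram_matrix(features):
    """Channel-by-channel feature correlations of one VGG16 layer;
    this is the statistic that captures a 'style'. features is an
    (height, width, channels) tensor."""
    f = K.batch_flatten(K.permute_dimensions(features, (2, 0, 1)))
    return K.dot(f, K.transpose(f))

def style_loss(style_feat, combo_feat, size, channels=3):
    """Mean squared distance between the Gram matrices of the style
    image and the image being synthesized, at one layer. size is the
    pixel count (rows * cols) of the feature map."""
    s = gram_matrix(style_feat)
    c = gram_matrix(combo_feat)
    return K.sum(K.square(s - c)) / (4.0 * (channels ** 2) * (size ** 2))
```

Minimizing this across several VGG16 layers, plus a content loss on one deeper layer, is what lets the code take an arbitrary style image rather than a canned set of styles.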

Example_1_trans Example_4_trans

For my future ML work, beyond making my Prisma hack more robust, I hope to explore the "enhancement" algorithms that are starting to pop up, which have been used to up-res textures, and to explore applying style transfer to textures and then dealing with the caveats of UV islands and tiling. I have seen companies like Artomatix having similar ideas, which is exciting and validating to see.

Tool : Component-Based Synthesis

Building on my previous armature work and inspired by the 2012 SIGGRAPH paper "A probabilistic model for component-based shape synthesis", I set out two months ago, on weekends and evenings, to explore the world of asset synthesis. I have been interested in this area for some time and intend to continue to explore this domain with organics, expand on some preliminary work with texture synthesis, and work towards more generalized solutions.

I ended up looking into the area of machine learning as a result of this project. I think ML has huge potential in the creation of intelligent tools and content generation. Procedural content is great, but it often lends itself to only certain types of assets and complexities before hitting diminishing returns, and it often lacks the aesthetics of handmade assets. To me, the holy grail of content generation is to enable a group of skilled artists to establish a style and a very high quality baseline that can then be replicated and expanded beyond what that team could otherwise produce, without sacrificing quality or artistic control. I feel that expanding upon human-loved assets is the way to do this, so learning from existing content seems crucial.

I learned quite a lot from this project. One valuable lesson was to not always look for a solution that works 100% of the time; there may not be one. Instead, consider multiple approaches and develop a process to measure and choose the best solution given the circumstance. Another concept that was new to me was the phenomenon of "overfitting" a problem. There was a lot of ML methodology I could relate to, as I always strive to make the most robust tools I can, using the most generalized methods I can. Still, it was enlightening to take a step back and realize that a solution that works 80% of the time on whatever you throw at it may be better than a solution that works 100% of the time, but only on a certain dataset. Overfitting is a constant struggle, and one that I think is very case-by-case based on the overall goal of the tool and its application. I think borrowing methods from data scientists has enhanced my own process and approach to tool construction; holding back data to test and debug the tool with came from these readings.

Building a tool like this, I had to develop a method to mine data from all over an asset, and also to compare between assets to establish the limits of plausibility, and from there to allow the end user to override and even dictate these bounds. All of this data often became a luxury to make decisions with. There is a lot of logic built into the tool, from simply counting how many axles a mesh has, to knowing to remove a trailer hitch when there is cargo present. Often a component's position relative to another needed to be considered and would dictate one path or another, or simply whether a transform needed to be positive or negative.

An area that was always challenging was the mating of parts that did not naturally fit together. Developing a system to measure the quality of fit between two components took multiple approaches before a robust one was found. This was crucial for choosing which method of fit was best for a particular combination of pieces. Traditional collision detection was slow and required small time steps to be robust; clipping, conforming the ends, or perhaps even filling in the gap may be a more performant approach.
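One plausible way to score fit: sample points along the two mating boundaries and use nearest-neighbour distances as a cheap proxy for gap size. This sketch is an illustrative stand-in, not the measure the tool actually settled on:

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_score(boundary_a, boundary_b):
    """Score how well two open boundaries mate. boundary_a/boundary_b
    are (N, 3) arrays of points sampled along each component's edge.
    Returns (mean gap, worst gap); lower is a better fit."""
    tree = cKDTree(boundary_b)
    dists, _ = tree.query(boundary_a)
    return dists.mean(), dists.max()

# Score each candidate strategy (clip, conform ends, fill the gap)
# for a pair of components and keep the one with the lowest mean gap.
```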

The tool could be optimized far more than it is. Complete asset construction takes about 10 seconds on average, with minor changes taking about 1-2 seconds. Based on crude testing, you could generate about 1,000 assets in just over 2 hours on one CPU. This is also with meshes that are generally in the 200-300K face range; I would expect near real-time results with lower-resolution assets. The system can also be expanded without too much effort. Adding additional seed meshes is about a 10-minute process, where a handful of expressions would need their bounds expanded to consider the new data. The real work is incorporating the new assets into realistic sizing, which could likely be done via Python dynamically spawning, or perhaps even removing, node branches in sync with the number of seed meshes (see the sketch below). The system could also generate more variations per seed if it became more granular, for example allowing tires, wheel covers, or the cab region to be mixed and matched instead of remaining sub-components of larger sections.
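A hedged sketch of what that Python could look like inside Houdini, using illustrative node and parameter names: one File SOP branch per seed mesh, kept in sync with a Merge SOP:

```python
import hou

def sync_seed_branches(parent_path, seed_files, merge_name="merge_seeds"):
    """Spawn (or reuse) one File SOP per seed mesh and wire them all
    into a single Merge SOP. Meant to run inside a Houdini session;
    the paths and names here are illustrative."""
    parent = hou.node(parent_path)
    merge = parent.node(merge_name) or parent.createNode("merge", merge_name)
    for i, path in enumerate(seed_files):
        name = "seed_%02d" % i
        file_sop = parent.node(name) or parent.createNode("file", name)
        file_sop.parm("file").set(path)
        merge.setInput(i, file_sop)
    parent.layoutChildren()

# e.g. sync_seed_branches("/obj/asset_synth", ["truck_a.bgeo", "truck_b.bgeo"])
```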

Going forward, I plan to investigate light fields as a means to potentially score an asset's visual uniqueness next to its peers; I think a lot of useful feedback and direction from the user to the tool could result. This project was great, but it is also a potentially bottomless pit of refinement. I am ready to move on and tackle the backlog of ideas I have on hold.

Tool : Helmet Tool WIP

This is a very early version of a character customization and asset creation tool. One of the main goals of the tool is to work with "offline" artist-created meshes in a meaningful way. Each element allows some degree of manipulation. A "carpet" tool allows meshes to conform and wrap to the surface they "creep" along. The visor is fully parameterized.

Going forward, the goals will be adding UVs, adding Substance-based textures that adapt to changes, and of course putting this in engine, along with cables that intelligently adjust to the surface. Currently this is just a glimpse.

Tool : Adaptive UVs

What if UVs were aware of the textures they were being used with? I have been thinking about this for a long time. I often wished UVs could easily adjust when I was consolidating assets and baking multiple texture maps into one consolidated UV space; I often wished I could set up a relationship between a UV island and a texture map. This is an attempt to do that.

The idea of "smart" UVs is actually the first piece of tech towards a much larger optimization tool I have had in the back of my head for a few years now. That tool would take a finished asset and massively reduce the texture resources needed for it with little to no quality loss, but I am not really ready to talk too much about that tool or process just yet. This version 0.1 of the SmartUV tool was more of a "can it be done?" exercise, and a look at what issues there are still to solve. The most interesting part of the tool is probably setting up the correspondence between the UV islands and the image. I plan to allow the user to manipulate this relationship, either to resolve situations where I got it wrong, or because of some user need or situation I have not thought of yet. There may also be reason to "lock" some UV islands out of manipulation.

Tool : Concept to Proxymesh v0.1

Initially, the thought of turning nothing but a drawing into a usable production mesh seemed like a crazy idea, but the more I thought about it, the more I had to try and see how far I could get. It turns out: quite far. This tool is still in its early days; I have several ideas on how to continue to improve the quality of the meshes it generates, as well as to increase the scope of the assets it can handle. I did the bulk of the implementation over a weekend, but came back a few times over the course of two weeks to try different approaches and to optimize the speed and stability of the tool. One of my early approaches was using the concept as a height map on the mesh, but this was both slow (30s to generate) and required a lot of memory. I suppose I could have tried intelligently pooling geometry, but eventually I discarded this approach, which put me on a much better path and resulted in the tool below, which runs at nearly real-time.

Tool : Armature Builder v0.1

The Armature Builder is really a framework on which to hang meshes and assemble more complex designs. The idea is that all of the pieces assemble and adjust in a sensible way; they are aware of and can adapt to each other. The template is like a puppet driving a much more complex machine. The robotic arm generated here is based on an input arm mesh, but it could really be anything from a race car, in which tires and spoilers are added, subtracted, or rearranged, to a head, on which horns, hair, or helmets are draped. I have a lot of ideas on how this technology could be pushed further and used to cater to whatever genre of game is being made. This tool could generate a lot of variation from a relatively small number of input meshes over the course of a production and into DLC.

Tool : Panel Generator v0.1

The Panel Generator tool was born out of a desire to automate the creation of fitted armour suits. I also see potential for this tool to generate organic scale patterns, or even just cobblestones with random extrusion amounts. With version 0.2 of this tool, I will be looking to provide more artistic direction to the patterns generated, as well as potentially emulating humanoid muscle groups, since that seems to be a common design choice.

Tool : Fibermesh to Game Hair v0.1

Authoring hair for video games has a lot of pain points (i.e., sorting hair cards, UV'ing, making adjustments, etc.). The idea behind this tool was to take data from Zbrush, an application most artists are comfortable in, and then create a fluid pipeline to convert a Fibermesh into a usable hair arrangement for a real-time engine. I personally have wanted to build a tool/script to lay out hair cards in a spiral for years, but no tech artist or graphics programmer seemed keen enough on the idea. I also wanted tools that would allow me to sort the hair based on an arbitrary object that I placed. This tool dynamically sorts hair cards, laying out UVs based on their location relative to the crown object (red is closest, violet is furthest away). Because all hair can be unwrapped in a consistent way, the artist only needs to create a master or generic tangency map, anisotropic map, etc.
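The sorting itself is simple once the crown object exists. A minimal sketch of the distance-rank idea in plain NumPy, rather than the actual SOP network:

```python
import numpy as np

def layout_cards_by_crown(card_centers, crown_pos, rows=8):
    """Order hair cards by distance to the crown object. Each card gets
    a normalized rank, 0.0 (red, closest) to 1.0 (violet, furthest),
    which can drive both its sort order and its UV row."""
    card_centers = np.asarray(card_centers, dtype=float)
    d = np.linalg.norm(card_centers - np.asarray(crown_pos), axis=1)
    order = np.argsort(d)
    rank = np.empty_like(d)
    rank[order] = np.linspace(0.0, 1.0, len(d))
    uv_row = np.minimum((rank * rows).astype(int), rows - 1)
    return order, rank, uv_row
```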

I created my own "polygon-based" creep SOP in order to have the crown object ride the meshes it is used on. My creep SOP can use either UVs (which the tool generates for itself) or a raycast-based method to stick to and travel along the mesh.

I have a lot of additional features in mind for this tool. Version 0.2 will likely be based out of H14 (I am waiting for more stable builds) and will feature some non-destructive hair grooming and creation tools, as well as the ability to transfer hair to other meshes. I also hope to expand the user's ability to edit hair cards by region.

Tool : Attribute Assignment v0.1

The Attribute Assignment tool was created because I use part naming a lot, and it can be quite tedious to set up, especially when trying to match other meshes that have come before. The tool takes a template mesh that has the part naming or UV assignment desired on the target mesh. The tool then does a crude alignment, moving the template into the same space as the target. A raycast is then done, and the greatest distance is kept to drive the search and transfer distance. The tool also checks the target mesh for existing groups and names, and weights the amount of existing correct attributes against the search distance in order to avoid stomping over correct values.
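Stripped of the raycasting and weighting details, the transfer step reduces to a capped nearest-neighbour lookup. An illustrative sketch, not the tool's actual code:

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_names(template_pts, template_names, target_pts,
                   target_names=None, max_dist=0.05):
    """Copy part names from template points to the nearest target
    points. max_dist stands in for the raycast-derived search distance;
    an existing target name is kept rather than stomped over."""
    tree = cKDTree(np.asarray(template_pts, dtype=float))
    dists, idx = tree.query(np.asarray(target_pts, dtype=float))
    out = list(target_names) if target_names else [None] * len(target_pts)
    for i, (d, j) in enumerate(zip(dists, idx)):
        if d <= max_dist and not out[i]:
            out[i] = template_names[j]
    return out
```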

Version 0.2 will do a better job of aligning meshes. I have done some part recognition for other tools, to find the head region on a mesh or the barrel of a gun; I will try to use this information to align individual regions and deal with orientation differences.

Scale Girl

Some character artwork done using Modo, Zbrush, and Mari, and then Modo again to render. This was my first serious project done with Mari, and I learned a lot. I have to say Mari is a wonderful painting app that I will be using for all my painted texture work now. It's rock solid and just laughs at any heavy data you throw at it. It was wonderful to be able to paint on my highest-level Zbrush sculpt fluidly. I think this was the experience I dreamed of the first time I used MetaCreations' "Painter 3D" in 1999 to paint a 50-poly apple, and the workflow I wish Bodypaint had grown into in the mid 2000s.

This was also my first real foray into UDIMs. I initially started the project in Ptex, but found performance and workflow a bit smoother when I baked my Ptex textures into UV shells (using the awesome transfer tools in Mari). Once you grasp the idea of setting up channels in Mari as your individual texture outputs, you can create a very non-destructive, iterative workflow. I was able to set up all of my numerous SSS-related maps in Mari and then just use adjustment layers to dial in the contrast and saturation amounts to get the render I wanted in Modo.



scale_renders_realB_censor scale_renders_realC_censor

The final image is a real-time version of the asset rendered in Marmoset Toolbag. The textures were consolidated into 0-1 space using Mari’s texture transfer tools. The triangle count is 1/100th, and the texel space is 1/8th that of the images above.

Scale_girl_realtime

Sunset Overdrive is Out!



I worked on this game from the very beginning, through all of its twists, turns, and reboots. I learned a lot on this project. It forced a lot of painful but positive growth at the studio, as it was by far the largest project the studio had ever taken on. Early on, we had aggressive goals for the number of characters on screen and for the size and detail of the world. This forced us to consider new ways of working.

I did a lot of R&D on this project in several areas, from level creation to look dev and heavy NPR rendering techniques.

This project drove me to explore and begin to learn Houdini, and really think about what future game productions could look like. It changed my path, and I am very thankful for that.


Sunset_Overdrive_Art_by_Vasili_Zorin_mini

Tool : Procedural Tint Mask Substance v0.1

With the amount of customization that happens in games now, in any genre, there are a ton of tint masks that need to be generated by artists. This tool's aim was to automate the generation of these masks. I start by taking the final colour maps and finding the albedo by removing shading and lighting info as best I can. I then identify the three dominant colours of the map. This test is a fairly worst-case scenario, where most colours in the texture are in the same family, so the gamut is small.
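The dominant-colour step can be done with a straightforward k-means over the albedo's pixels; the cluster labels then threshold directly into per-colour tint masks. A sketch with OpenCV, assuming the albedo has already been extracted:

```python
import cv2
import numpy as np

def dominant_colours(albedo_path, k=3):
    """Cluster an albedo map's pixels into k dominant colours and
    build one binary tint mask per colour."""
    img = cv2.imread(albedo_path)
    pixels = img.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)
    label_img = labels.reshape(img.shape[:2])
    masks = [(label_img == i).astype(np.uint8) * 255 for i in range(k)]
    return centers.astype(np.uint8), masks
```

A small gamut, as in the test above, just means the cluster centres sit close together, which makes the labelling noisier.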

Adaptive Pixelization

This Substance analyses an image and adaptively pixelates it. The image is rendered at three different rates of pixelation. Then, based on the contrast between neighbouring pixels, a large, medium, or fine pixel size is chosen to describe that area of the image.
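The same selection logic, sketched in Python with OpenCV rather than as a Substance graph; the contrast proxy and thresholds here are illustrative:

```python
import cv2
import numpy as np

def pixelate(img, block):
    """Downsample then upsample with nearest filtering = flat pixels."""
    h, w = img.shape[:2]
    small = cv2.resize(img, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_AREA)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

def adaptive_pixelate(img, blocks=(32, 16, 8), thresholds=(12.0, 30.0)):
    """Render three pixelation rates, then keep coarse pixels in flat
    regions and progressively finer pixels where local contrast rises."""
    coarse, mid, fine = (pixelate(img, b) for b in blocks)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    contrast = cv2.absdiff(gray, cv2.blur(gray, (9, 9)))
    out = coarse.copy()
    out[contrast > thresholds[0]] = mid[contrast > thresholds[0]]
    out[contrast > thresholds[1]] = fine[contrast > thresholds[1]]
    return out
```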

Adaptive pixelization of details

Reel 2014

Demo Reel from 2014

Work up to the summer of 2014.
