Ken's Sandbox

Asset Creation Evolved…
Nov 2015

Tool: Component-Based Synthesis

Building on my previous armature work and inspired by the 2012 SIGGRAPH paper “A Probabilistic Model for Component-Based Shape Synthesis”, I set out two months ago, on weekends and evenings, to explore the world of Asset Synthesis. I have been interested in this area for some time and intend to keep exploring this domain with organics, expand on some preliminary work with texture synthesis, and work towards more generalized solutions.

I ended up looking into machine learning as a result of this project. I think ML has huge potential in the creation of intelligent tools and content generation. Procedural content is great, but it often lends itself to only certain types of assets and complexities before hitting diminishing returns, and it often lacks the aesthetics of handmade assets. To me, the holy grail of content generation is to enable a group of skilled artists to establish a style and a very high-quality baseline that can then be replicated and expanded beyond what that team could otherwise produce, without sacrificing quality or artistic control. I feel that expanding upon human-loved assets is the way to do this, so learning from existing content seems crucial.

I learned quite a lot from this project. One valuable lesson was to not always look for a solution that works 100% of the time; there may not be one. Instead, consider multiple approaches and develop a process to measure and choose the best solution for the circumstances. Another concept that was new to me was the phenomenon of “overfitting” a problem. There was a lot of ML methodology I could relate to, and though I always strive to make the most robust tools I can, using the most generalized methods possible, it was enlightening to step back and realize that a solution that works 80% of the time on whatever you throw at it may be better than one that works 100% of the time, but only on a certain dataset. Overfitting is a constant struggle, and how much to tolerate is very much a case-by-case decision based on the overall goal of the tool and its application. I think borrowing methods from data scientists has enhanced my own process and approach to tool construction; the practice of holding back data to test and debug the tool against came directly from these readings.
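
To make that holdout idea concrete, here is a minimal sketch (the asset names, split ratio, and function are my own invention, not code from the tool) of reserving a slice of the seed assets that the rule-tuning loop never sees:

```python
import random

def split_holdout(asset_names, holdout_fraction=0.2, seed=42):
    """Shuffle the seed assets and reserve a fraction purely for testing.

    Tune and debug against the 'train' list only; the 'holdout' list is
    used afterwards to check that the rules generalize.
    """
    rng = random.Random(seed)
    shuffled = list(asset_names)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1.0 - holdout_fraction))
    return shuffled[:cut], shuffled[cut:]

train_assets, holdout_assets = split_holdout(["truck_a", "truck_b", "van_c", "rig_d", "cab_e"])
```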

Building a tool like this, I had to develop a method to mine data from all over an asset, and also to compare between assets to establish the limits of plausibility, and from there to allow the end user to override or even dictate those bounds. All this data often proved a luxury to make decisions with. There is a lot of logic built into the tool, from simply counting how many axles a mesh has, to knowing to remove a trailer hitch when cargo is present. Often a component's position relative to another needed to be considered and would dictate one path or another, or simply whether a transform needed to be positive or negative.
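
As a rough illustration of what I mean by mined bounds with a user override (the measurements, ranges, and helpers below are hypothetical, not lifted from the tool), per-component values pooled across the seed assets define a plausible range, and any proposed value is clamped to it unless the user dictates otherwise:

```python
def plausibility_bounds(measurements, user_override=None):
    """Derive a (min, max) range for a measurement (e.g. axle count or
    wheelbase) from values mined across the seed assets.

    A user-supplied (min, max) tuple takes precedence over the mined range.
    """
    if user_override is not None:
        return user_override
    return min(measurements), max(measurements)

def clamp(value, bounds):
    lo, hi = bounds
    return max(lo, min(hi, value))

# Axle counts mined from the seed meshes, with the user allowing one
# extra axle beyond what the data actually shows.
axle_counts = [2, 2, 3, 2, 4]
bounds = plausibility_bounds(axle_counts, user_override=(2, 5))
proposed = clamp(6, bounds)  # -> 5
```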

An area that was always challenging was the mating of parts that do not naturally fit together. Developing a system to measure the quality of fit between two components took multiple approaches before a robust one was found. This was crucial for choosing which method of fit was best for a particular combination of pieces. Traditional collision detection was slow and required small time steps to be robust; clipping, conforming the ends, or even filling in the gap can be a more performant approach.
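
For a flavour of how a fit score might drive that choice (a simplified sketch with invented thresholds, not the actual metric the tool settled on), one cheap measure is the mean nearest-point distance between sampled points on the two mating boundaries:

```python
import math

def _dist(p, q):
    """Euclidean distance between two (x, y, z) points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def fit_quality(boundary_a, boundary_b):
    """Mean nearest-point distance from boundary A to boundary B, where
    each boundary is a list of (x, y, z) sample points.

    Smaller values mean the parts already sit close together.
    """
    total = 0.0
    for pa in boundary_a:
        total += min(_dist(pa, pb) for pb in boundary_b)
    return total / len(boundary_a)

def choose_fit_strategy(gap, clip_threshold=0.01, conform_threshold=0.1):
    """Pick a mating strategy from the measured gap (in scene units)."""
    if gap < clip_threshold:
        return "clip"          # parts already touch or interpenetrate
    if gap < conform_threshold:
        return "conform_ends"  # deform the boundary rings to meet
    return "bridge_gap"        # generate fill geometry across the gap
```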

The tool could be optimized far more than it is. Complete asset construction takes on average about 10 seconds, with minor changes taking about 1-2 seconds. Based on crude testing, you could generate about 1,000 assets in just over two hours on one CPU, and that is with meshes generally in the 200-300K face range; I would expect near real-time results with lower-resolution assets. The system can also be expanded without too much effort. Adding additional seed meshes is about a ten-minute process, in which a handful of expressions need their bounds expanded to consider the new data. The real work is incorporating the new assets into realistic sizing, which could likely be done via Python dynamically spawning, or even removing, node branches in sync with the number of seed meshes. The system could also generate more variations per seed if it became more granular, for example by allowing tires, wheel covers, or the cab region to be mixed and matched instead of remaining sub-components of larger sections.
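
Below is a loose sketch of the kind of sync step I have in mind, assuming a Houdini-style network where each seed mesh gets its own branch under a subnet (the path, naming convention, and function are hypothetical):

```python
import hou  # Houdini's Python module; assumes the tool runs inside a Houdini session

def sync_seed_branches(subnet_path, seed_names):
    """Spawn or remove per-seed branches so the network matches the
    current list of seed meshes. Naming and layout are illustrative only.
    """
    subnet = hou.node(subnet_path)
    existing = {child.name(): child for child in subnet.children()
                if child.name().startswith("seed_")}
    wanted = {"seed_" + name for name in seed_names}

    # Create a branch for each newly added seed mesh.
    for name in sorted(wanted - set(existing)):
        subnet.createNode("geo", name)

    # Remove branches whose seed mesh has been taken out.
    for name in set(existing) - wanted:
        existing[name].destroy()

    subnet.layoutChildren()

# Usage: sync_seed_branches("/obj/asset_synth", ["truck_a", "van_c", "rig_d"])
```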

Going forward I plan to investigate light fields as a means to potentially score an asset's visual uniqueness next to its peers; I think this could yield a lot of useful feedback and let the user give more direction to the tool. This project was great, but it is also a potentially bottomless pit of refinement. I am ready to move on and tackle the backlog of ideas I have on hold.
