Building on my previous armature work, and inspired by the 2012 SIGGRAPH paper “A Probabilistic Model for Component-Based Shape Synthesis,” I set out two months ago, on weekends and evenings, to explore the world of asset synthesis. I have been interested in this area for some time and intend to continue exploring this domain with organics, expand on some preliminary work with texture synthesis, and work towards more generalized solutions.
I ended up looking into machine learning as a result of this project. I think ML has huge potential in the creation of intelligent tools and content generation. Procedural content is great, but it often lends itself to only certain types of assets and complexities before hitting diminishing returns, and it often lacks the aesthetics of handmade assets. To me the holy grail of content generation is to enable a group of skilled artists to establish a style and a very high quality baseline that can then be replicated and expanded beyond what that team could otherwise produce, without sacrificing quality or artistic control. I feel that expanding upon assets humans love is the way to do this, so learning from existing content seems crucial.
I learned quite a lot from this project. One valuable lesson was to not always look for a solution that works 100% of the time; there may not be one. Instead, consider multiple approaches and develop a process to measure and choose the best solution given the circumstances. Another concept that was new to me was the phenomenon of “overfitting” a problem. There was a lot of ML methodology I could relate to, since I always strive to make the most robust tools I can, using the most generalized methods I can. It was enlightening to take a step back and realize that a solution that works 80% of the time on whatever you throw at it may be better than one that works 100% of the time, but only on a certain dataset. Overfitting is a constant struggle, and I think it has to be weighed case by case based on the overall goal of the tool and its application. Borrowing methods from data scientists has enhanced my own process and approach to tool construction; holding back data to test and debug the tool came from these readings.
Building a tool like this, I had to develop a method to mine data from all over an asset, and also to compare between assets to establish the limits of plausibility, and from there to allow the end user to override or even dictate these bounds. All this data often became a luxury to make decisions with. There is a lot of logic built into the tool, from simply counting how many axles a mesh has, to knowing to remove a trailer hitch when cargo is present. Often a component's position relative to another needed to be considered, and would dictate one path or another, or simply whether a transform needed to be positive or negative.
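The "limits of plausibility" idea above can be sketched very simply: mine an attribute across the seed assets, derive padded min/max bounds, and let a user-supplied range take priority. This is an illustrative sketch, not the tool's actual code; the attribute name and padding value are made up.

```python
# Hypothetical sketch: mine an attribute (here, axle spacing) across
# seed assets, derive plausibility bounds, and allow a user override.

def mine_bounds(values, pad=0.1):
    """Derive a plausible min/max from observed values, padded slightly."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return lo - span * pad, hi + span * pad

def is_plausible(value, bounds, user_bounds=None):
    """User-supplied bounds take priority over mined ones."""
    lo, hi = user_bounds if user_bounds is not None else bounds
    return lo <= value <= hi

axle_spacings = [1.8, 2.1, 2.4, 2.0]   # mined from seed meshes
bounds = mine_bounds(axle_spacings)
print(is_plausible(2.2, bounds))        # within the mined range
print(is_plausible(5.0, bounds))        # implausible
print(is_plausible(5.0, bounds, user_bounds=(0.0, 10.0)))  # user dictates
```

The override path is what lets the artist push a generated asset outside anything seen in the seed data when the design calls for it.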
An area that was always challenging was the mating of parts that did not naturally fit together. Developing a system to measure the quality of fit between two components took multiple approaches before a robust one was found. This was crucial for choosing which method of fit was best for a particular combination of pieces. Traditional collision detection was slow and required small time steps to be robust; clipping, conforming the ends, or perhaps even filling in the gap may be a more performant approach.
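One way to frame the fit-measurement-then-strategy decision, assuming each interface is reduced to a set of sampled boundary points: score the gap as a mean nearest-point distance, then pick clip, conform, or bridge by threshold. The function names, thresholds, and strategy labels here are all illustrative assumptions, not the tool's actual ones.

```python
# Minimal sketch: score how well two component interfaces mate, then
# choose a mating strategy from the score. Thresholds are made up.
import math

def gap_score(points_a, points_b):
    """Mean nearest-point distance from A to B: lower means a better fit."""
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    return sum(nearest(p, points_b) for p in points_a) / len(points_a)

def choose_fit(score, clip_tol=0.05, conform_tol=0.5):
    """Pick a mating strategy based on how large the gap is."""
    if score < clip_tol:
        return "clip"      # near-touching: simple clipping suffices
    if score < conform_tol:
        return "conform"   # moderate gap: deform the ends to meet
    return "bridge"        # large gap: fill it with new geometry

ring_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
ring_b = [(0.0, 0.1, 0.0), (1.0, 0.1, 0.0)]
print(choose_fit(gap_score(ring_a, ring_b)))  # a 0.1 gap → "conform"
```

Because the score is just point sampling, it avoids the small time steps that made traditional collision detection too slow for this use.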
The tool could be optimized far more than it is. Complete asset construction takes on average about 10 seconds, with minor changes taking about 1-2 seconds. Based on crude testing, you could generate about 1,000 assets in just over 2 hrs on one CPU, and this is with meshes that are generally in the 200-300K face range. I would expect near real-time results with lower resolution assets. The system can also be expanded without too much effort: adding additional seed meshes is about a 10-minute process, where a handful of expressions need their bounds expanded to consider the new data. The real work is incorporating the new assets into realistic sizing, which could likely be done via Python dynamically spawning, or perhaps even removing, node branches in sync with the number of seed meshes. The system could also generate more variations per seed if it became more granular, for example by allowing tires, wheel covers, or the cab region to be mixed and matched instead of remaining sub-components of larger sections.
Going forward I plan to investigate light fields as a means to potentially score an asset's visual uniqueness next to its peers. I think a lot of useful feedback and direction from the user to the tool could result. This project was great, but it is also a potentially bottomless pit of refinement. I am ready to move on and tackle the backlog of ideas I have on hold.
This is a very early version of a character customization and asset creation tool. One of the main goals of the tool is to work with “offline” artist-created meshes in a meaningful way. Each element allows some degree of manipulation. A “carpet” tool allows meshes to conform and wrap to the surface they “creep” along. The visor is fully parameterized.
Going forward, the goals will be adding UVs, adding Substance-based textures that adapt to changes, and of course putting this in engine, along with cables that intelligently adjust to the surface. Currently this is just a glimpse.
What if UVs were aware of the textures they were being used with? I have been thinking about this for a long time. I often wished UVs could easily adjust when I was consolidating assets and baking multiple texture maps into one consolidated UV space, and that I could set up a relationship between a UV island and a texture map. This is an attempt to do that. The idea of “smart” UVs is actually the first piece of tech towards a much larger optimization tool I have had in the back of my head for a few years now. That tool would take a finished asset and massively reduce the texture resources needed for it with little to no quality loss, but I am not really ready to talk too much about that tool or process just yet. This version 0.1 of the SmartUV tool was more of a “can it be done?” exercise, and a way to find the issues still to solve. The most interesting part of the tool is probably setting up the correspondence between the UV islands and the image. I plan to allow the user to manipulate this relationship, either to resolve situations where I got it wrong, or because of some user need/situation I have not thought of yet. There may also be reason to “lock” some UV islands out of manipulation.
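The simplest form of that island-to-image correspondence is geometric: an island's UV bounding box maps directly to a pixel rectangle in the texture, which is the relationship a “smart” UV could carry around and keep updated. A minimal sketch, with a hypothetical function name and made-up island data:

```python
# Illustrative sketch: bind a UV island to the image region it samples
# by mapping its 0-1 UV bounding box to a pixel rectangle.

def island_pixel_rect(island_uvs, img_w, img_h):
    """Bounding pixel rect (x0, y0, x1, y1) covered by a UV island."""
    us = [u for u, v in island_uvs]
    vs = [v for u, v in island_uvs]
    return (int(min(us) * img_w), int(min(vs) * img_h),
            int(max(us) * img_w), int(max(vs) * img_h))

island = [(0.1, 0.2), (0.4, 0.2), (0.4, 0.6)]
print(island_pixel_rect(island, 1024, 1024))  # → (102, 204, 409, 614)
```

A real version would need per-polygon coverage rather than a bounding box, but the rect alone is enough to move or scale a texture region in sync with its island.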
Initially, the thought of turning nothing but a drawing into a usable production mesh seemed like a crazy idea. But the more I thought about it, the more I had to try and see how far I could get. It turns out, quite far. This tool is still in its early days. I have several ideas on how to continue to improve the quality of the meshes it generates as well as increase the scope of the assets it can handle. I did the bulk of the implementation over a weekend, but came back a few times over the course of two weeks to try different approaches and to optimize the speed and stability of the tool. One of my early approaches was using the concept as a height map on the mesh, but this was both slow (30s to generate) and required a lot of memory. I suppose I could have tried intelligently pooling geometry, but eventually I discarded this approach, which put me on a much better path and resulted in the tool below, which runs at nearly real time.
The Armature Builder is really a framework on which to hang meshes and assemble more complex designs. The idea is that all of the pieces assemble and adjust in a sensible way; they are aware of and can adapt to each other. The template is like a puppet driving a much more complex machine. The robotic arm that is generated here is based on an input arm mesh, but it could really be anything from a race car, in which tires and spoilers are added, subtracted, or rearranged, to a head, on which horns, hair, or helmets are draped. I have a lot of ideas on how this technology could be pushed further and used to cater to whatever genre of game is being made. This tool could generate a lot of variation from a relatively small number of input meshes over the course of a production and into DLC.
The Panel Generator tool was born out of a desire to automate the creation of fitted armour suits. I also see potential for this tool to generate organic scale patterns, or even just cobblestones with random extrusion amounts. With version 0.2 of this tool I will be looking to provide more artistic direction to the patterns generated, as well as potentially emulating humanoid muscle groups, since that seems to be a common design choice.
Authoring hair for video games has a lot of pain points (e.g., sorting hair cards, UV'ing, making adjustments). The idea behind this tool was to take data from ZBrush, an application that most artists are comfortable in, and then create a fluid pipeline to convert a Fibermesh into a usable hair arrangement for a real-time engine. I personally have wanted to build a tool/script to lay out hair cards in a spiral for years, but no tech artist or graphics programmer seemed keen enough on the idea. I also wanted tools that would allow me to sort the hair based on an arbitrary object that I placed. This tool dynamically sorts hair cards, laying out UVs based on their location relative to the crown object (red is closest, violet is furthest away). Because all hair can be unwrapped in a consistent way, the artist only needs to create a master or generic tangent map, anisotropic map, etc.
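The distance-based sort described above can be sketched in a few lines: order the cards by distance from the crown object, then hand each one an equal band of UV space so nearest cards land at one end of the layout and furthest at the other. The data and function name here are made up for illustration; the real tool works on actual card geometry in Houdini.

```python
# Sketch of the sorting idea: rank hair cards by distance to a "crown"
# object and assign each a V offset in the unwrap, nearest-first.
import math

def layout_cards(card_centers, crown):
    """Return (card_index, v_offset) pairs, sorted nearest-first."""
    order = sorted(range(len(card_centers)),
                   key=lambda i: math.dist(card_centers[i], crown))
    strip = 1.0 / len(order)            # each card gets an equal V band
    return [(idx, rank * strip) for rank, idx in enumerate(order)]

centers = [(0, 2, 0), (0, 0.5, 0), (0, 1, 0)]
crown = (0, 0, 0)
print(layout_cards(centers, crown))     # nearest card comes first
```

Mapping the rank to a red-to-violet ramp instead of a V offset gives the visualization described above, and the consistent ordering is what makes a single master tangent or anisotropic map reusable across every groom.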
I created my own “polygon-based” creep SOP in order to have the crown object ride the meshes it is used on. My creep SOP can use either UVs (which the tool generates for itself) or a raycast-based method to stick to and travel along the mesh.
I have a lot of additional features in mind for this tool. Version 0.2 will likely be based out of H14 (waiting for more stable builds) and will feature some non-destructive hair grooming and creation tools, as well as the ability to transfer hair to other meshes. I also hope to expand the user's ability to edit hair cards by region.
The Attribute Assignment tool was created because I use part naming a lot and it can be quite tedious to set up, especially when trying to match other meshes that have come before it. The tool takes a template mesh that has the part naming or UV assignment that is desired on the target mesh. The tool then does a crude alignment, moving the template into the same space as the target. A raycast is then done, and the greatest distance is kept to drive the search and transfer distance. The tool also checks the target mesh for existing groups and names, and weighs the amount of existing correct attributes against the search distance in order to try and avoid stomping over correct values.
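The overwrite decision above can be sketched as a single predicate, assuming (my interpretation, not the tool's actual logic) that trusted existing attributes shrink the acceptable match distance: a point with a plausible existing name only gets overwritten when the template match is comparatively close.

```python
# Hypothetical sketch of the transfer decision: only overwrite a target
# point's existing name when the template match is close enough relative
# to how much we trust what is already there.

def should_transfer(match_dist, search_dist, has_existing, trust=0.5):
    """Overwrite only when the match is close; trusted existing values
    shrink the acceptable distance so correct names are not stomped."""
    limit = search_dist * (1.0 - trust) if has_existing else search_dist
    return match_dist <= limit

print(should_transfer(0.3, 1.0, has_existing=False))  # True: nothing to lose
print(should_transfer(0.8, 1.0, has_existing=True))   # False: too far to risk it
print(should_transfer(0.2, 1.0, has_existing=True))   # True: very close match
```

Here `search_dist` plays the role of the greatest raycast distance the tool keeps, and `trust` stands in for the weighting of existing correct attributes.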
Version 0.2 will do a better job of aligning meshes. I have done some part recognition for other tools, to find the head region on a mesh or the barrel of a gun; I will try to use this information to align individual regions and deal with orientation differences.
Some character artwork done using Modo, ZBrush, and Mari, and then Modo again to render. This was my first serious project done with Mari, and I learned a lot. I have to say Mari is a wonderful painting app that I will be using for all my painted texture work now. It's rock solid and just laughs at any heavy data you throw at it. It was wonderful to be able to paint on my highest level ZBrush sculpt fluidly. I think this was the experience I dreamed of the first time I used MetaCreations' Painter 3D in 1999 to paint a 50-poly apple, and the workflow I wish BodyPaint had grown into in the mid 2000s.
This was also my first real foray into UDIMs. I initially started the project in Ptex, but found performance and workflow a bit smoother when I baked my Ptex textures into UV shells (using the awesome transfer tools in Mari). Once you grasp the idea of setting up channels in Mari as your individual texture outputs, you can create a very non-destructive, iterative workflow. I was able to set up all of my numerous SSS-related maps in Mari and then just use adjustment layers to dial in the contrast and saturation amounts to get the render I wanted in Modo.
The final image is a real-time version of the asset rendered in Marmoset Toolbag. The textures were consolidated into 0-1 space using Mari’s texture transfer tools. The triangle count is 1/100th, and the texel space is 1/8th that of the images above.
With the amount of customization that happens in games now, in any genre, there are a ton of tint masks that need to be generated by artists. This tool's aim was to automate the generation of these masks. I start by taking the final colour maps and recovering the albedo by removing shading and lighting info as best I can. I then identify the 3 dominant colours of the map. This test is a fairly worst-case scenario, where most colours in the texture are in the same family, so the gamut is small.
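One simple way to find dominant colours, sketched here with made-up pixel data, is coarse quantisation: bucket every pixel into a reduced RGB grid and keep the fullest buckets. The actual tool may well use proper clustering; this is just an illustration of the idea.

```python
# Illustrative sketch: find dominant colours by bucketing pixels into a
# coarse RGB grid and returning the centres of the n fullest buckets.
from collections import Counter

def dominant_colours(pixels, n=3, step=32):
    """Quantise to a step-sized RGB grid; return the n most common cells."""
    buckets = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    # Represent each winning cell by the colour at its centre.
    return [tuple(c * step + step // 2 for c in cell)
            for cell, _ in buckets.most_common(n)]

pixels = [(200, 30, 30)] * 5 + [(30, 200, 30)] * 3 + [(30, 30, 200)] * 2
print(dominant_colours(pixels))  # → [(208, 16, 16), (16, 208, 16), (16, 16, 208)]
```

Quantisation struggles exactly in the narrow-gamut, same-family case described above, since nearby colours collapse into one bucket, which is what makes that test a worst-case scenario.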
Demo Reel from 2014
Work up to the summer of 2014.