The Making of Disney/Pixar's "Soul" (Part 3)
Fox Renderfarm, an animation render farm, continues sharing 3D artist Carldrifter's breakdown of how Disney/Pixar's "Soul" was made. This is part three of the article.
2. Creating the Volumes
The team uses Houdini to create the volumes for each frame and to feed volume-type signals to the ID shader. To ensure the animation correctly reads the translucent volume information, most of the workflow is camera-based. Each character is composed of several separate volumes, and most of these volumes partially or completely self-mask, which produces a cleaner result without layering. For example, the teeth and tongue are precisely cropped relative to the camera to match the motion of the mouth opening. The eyeball is a surface model rather than a volume, so a mask map is created instead of cropping the eyes.
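The camera-based cropping described above can be sketched as a pinhole projection test against a 2D mask. This is a minimal illustration, not Pixar's actual tool: `camera_crop`, the fixed camera at the origin looking down -Z, and the boolean mask texture are all assumptions.

```python
import numpy as np

def camera_crop(points, mask, fov_scale=1.0):
    """Keep only the points whose pinhole projection lands on an 'on'
    texel of a 2D boolean mask. The camera sits at the origin looking
    down -Z (a simplification of a real camera transform)."""
    z = points[:, 2]
    # Perspective divide; valid only for points in front of the camera (z < 0).
    with np.errstate(divide="ignore", invalid="ignore"):
        u = (points[:, 0] / -z) * fov_scale * 0.5 + 0.5
        v = (points[:, 1] / -z) * fov_scale * 0.5 + 0.5
    h, w = mask.shape
    iu = np.clip((u * w).astype(int), 0, w - 1)
    iv = np.clip((v * h).astype(int), 0, h - 1)
    inside = (z < 0) & (u >= 0) & (u < 1) & (v >= 0) & (v < 1)
    return points[inside & mask[iv, iu]]

# Example: one point in front of the camera, one behind it.
mask = np.ones((4, 4), dtype=bool)
pts = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, 1.0]])
kept = camera_crop(pts, mask)  # only the point at z = -1 survives
```

The same mask-lookup idea applies to the eyeball case in the text: instead of cropping geometry, the projected position simply indexes into a mask map.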
Other facial features such as the eyes, eyebrows, cheeks, and lips are volume objects, many of them projected from the camera; the team uses these projections to adjust color and adapt the shader. The corresponding volume normals are then computed from the surface model normals and the camera projection. For soft volume normals, the team converts the surface to an SDF, extends the SDF, applies an average blur, and then converts the SDF back to a surface, mapping the resulting smooth normals back onto the original model.
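The surface-to-SDF smoothing trick can be sketched outside Houdini on a voxel grid: compute a signed distance field, blur it, and read smooth normals back from the blurred field's gradient. This is a minimal sketch, not Pixar's pipeline; `smooth_normals_via_sdf` and the SciPy-based voxel approach are assumptions standing in for Houdini's VDB tools.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def smooth_normals_via_sdf(occupancy, blur_sigma=2.0):
    """occupancy: boolean voxel grid (True = inside the surface).
    Returns the blurred SDF and a unit-normal field derived from it."""
    # Signed distance: positive outside the surface, negative inside.
    dist_out = distance_transform_edt(~occupancy)
    dist_in = distance_transform_edt(occupancy)
    sdf = dist_out - dist_in
    # Blurring the extended SDF smooths away fine surface detail.
    sdf_blurred = gaussian_filter(sdf, sigma=blur_sigma)
    # Smooth normals are the normalized gradient of the blurred field;
    # these can then be sampled back onto the original model's points.
    g = np.stack(np.gradient(sdf_blurred), axis=-1)
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    normals = g / np.maximum(norm, 1e-8)
    return sdf_blurred, normals

# Example: a voxelized sphere of radius 10 in a 32^3 grid.
z, y, x = np.mgrid[:32, :32, :32]
inside = (x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 < 10 ** 2
sdf, normals = smooth_normals_via_sdf(inside)
```

Blurring the distance field rather than the normals themselves is what keeps the result stable: the zero level set of the blurred SDF is a genuinely smoother surface, so its gradient gives consistent soft normals.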
For all background and other soul characters, the team built a Houdini network to achieve this effect, such as the two characters on the right in Figure 1. Only a few specific controls are needed: although each character has its own distinct form, most of them share the same topology, which allows large-scale batch management. The same applies to the remaining crowd soul characters.
For the expression lines, the examples on the left and in the middle of Figure 4 are generated by extruding the original mesh directly toward the camera. For the contour line, the dot product between the model normal and the camera direction is used.
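The contour-line test via the dot product can be sketched as follows. `contour_weight` and its `width` falloff are illustrative assumptions, not the production shader: points whose normal is nearly perpendicular to the view direction lie on the silhouette and get a weight near 1.

```python
import numpy as np

def contour_weight(normals, view_dirs, width=0.2):
    """Per-point silhouette weight from the normal/view dot product:
    1.0 on the contour (normal perpendicular to the view direction),
    falling to 0.0 where the surface faces toward or away from it."""
    ndotv = np.abs(np.einsum("...i,...i", normals, view_dirs))
    return np.clip(1.0 - ndotv / width, 0.0, 1.0)

# A normal facing the camera gets weight 0; one perpendicular to the
# view direction sits on the contour and gets weight 1.
n = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
v = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
w = contour_weight(n, v)  # → [0.0, 1.0]
```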
Their Houdini network generates many precise volume channels, which requires heavy and frequent computation, so a system called Houdini export was created to run batch parallel data calculations on the render farm. The system is highly adjustable, allowing different departments to choose different precision settings according to their needs.
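The batching idea can be sketched as below. All names here (`export_frame`, `export_sequence`, the `resolution` knob) are illustrative stand-ins, not Pixar's actual Houdini export system; on the real farm each frame would be a separate machine task, with a thread pool standing in for that here.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def export_frame(frame, out_dir, resolution=1.0):
    """Placeholder per-frame export: in production this would cook the
    Houdini network at the given resolution and write one volume file."""
    path = Path(out_dir) / f"volume.{frame:04d}.vdb"
    # ... cook the network and write the volume to `path` here ...
    return path

def export_sequence(first, last, out_dir, resolution=1.0, workers=8):
    """Frames are independent, so they can run in parallel; the
    `resolution` parameter lets each department trade precision
    for speed, as the text describes."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(export_frame, f, out_dir, resolution)
                   for f in range(first, last + 1)]
        return [fut.result() for fut in futures]
```

Writing one file per frame is also what makes the next step (stitching) necessary, since no single process owns the whole sequence.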
In early production, character artists used Pixar's existing visual effects pipeline. Visual effects artists also work with volumes that require heavy computation, and had already developed parallel computation and storage methods for large amounts of data. Other departments export an animated USD file with each frame stored separately, but that output cannot be used directly in parallel processing, because a race condition occurs when the files are written out. Instead, Pixar's visual effects artists used the USD Stitch Clips tool to stitch the per-frame files into USD value clips.
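USD ships a command-line tool, `usdstitchclips`, for exactly this assembly step. A hedged sketch of an invocation (the file paths and the `/World/volumes` prim path are illustrative, not from the film's pipeline):

```shell
# Stitch per-frame USD files into a single value-clip result.
# /World/volumes is an assumed prim path; volumes.*.usd are the
# per-frame files each worker wrote independently.
usdstitchclips --clipPath /World/volumes --out volumes.usda volumes.*.usd
```

Because each worker writes only its own per-frame file and the stitch happens once at the end, no two processes ever write the same file, which removes the race condition.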