How Disney/Pixar's "Soul" Was Made (Part 3)
Fox Renderfarm, a leading render farm for animated films, continues to share 3D artist Carldrifter's breakdown of how Disney/Pixar's "Soul" was made. This is part three of the article.
- Create volume
The staff used Houdini to create the volumes for each frame and feed volume-type signals to the IDriver shader. To ensure the animation reads the translucent volume information correctly, most of the workflow is camera-based. Each character is composed of several separate volumes, and most of these volumes partially or completely mask themselves, which produces a cleaner effect without layering. For example, the teeth and tongue are precisely cropped in camera space to match the opening and closing of the mouth. The eyeball is a surface model rather than a volume, so a mask map is created for the eyes instead of cropping them.
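The camera-space cropping described above can be sketched as a simple keep/cull test per point or voxel. This is an illustrative sketch, not Pixar's code; the `Camera` class and `crop_window` parameters are assumptions for the example.

```python
# Minimal sketch (assumed, not production code): decide per point whether a
# volume element is kept, by projecting it into camera space and testing it
# against a 2D crop window.
from dataclasses import dataclass
import math

@dataclass
class Camera:
    pos: tuple       # world-space camera position
    forward: tuple   # unit view direction
    right: tuple     # unit right axis
    up: tuple        # unit up axis
    fov_deg: float   # horizontal field of view

def to_screen(camera, p):
    """Project world point p to normalized screen coords, roughly [-1, 1]."""
    d = tuple(pi - ci for pi, ci in zip(p, camera.pos))
    z = sum(a * b for a, b in zip(d, camera.forward))
    if z <= 0:       # behind the camera: never visible
        return None
    x = sum(a * b for a, b in zip(d, camera.right))
    y = sum(a * b for a, b in zip(d, camera.up))
    half = math.tan(math.radians(camera.fov_deg) / 2)
    return (x / (z * half), y / (z * half))

def inside_crop(camera, p, crop):
    """crop = (xmin, xmax, ymin, ymax) in screen space; cull points outside."""
    s = to_screen(camera, p)
    if s is None:
        return False
    return crop[0] <= s[0] <= crop[1] and crop[2] <= s[1] <= crop[3]
```

In a real pipeline this test would run over the voxels of the teeth/tongue volumes, keeping only what the camera can see through the open mouth.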
Other facial features such as the eyes, eyebrows, cheeks, and lips are volume objects, many of them projected from the camera; these projections are used to drive color and shading adjustments. The corresponding volume normals are then calculated from the surface model normals and the camera projection. For soft volume normals, the surface is converted to an SDF, the SDF is dilated, an averaging blur is applied, and the SDF is converted back to a surface, yielding smooth normals that are mapped back onto the original model.
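The SDF-blur trick above can be demonstrated on a tiny 2D grid: build an SDF, blur it, and read normals from its gradient. This is a sketch of the general technique under assumed parameters (grid size, blur passes), not the studio's implementation.

```python
# Sketch: smooth normals by blurring an SDF and taking its gradient.
# 2D stand-in for the 3D volume workflow described in the article.
import math

N = 32

def circle_sdf(x, y, cx=16.0, cy=16.0, r=8.0):
    """Signed distance to a circle: negative inside, positive outside."""
    return math.hypot(x - cx, y - cy) - r

grid = [[circle_sdf(x, y) for x in range(N)] for y in range(N)]

def box_blur(g):
    """3x3 mean blur over the SDF grid; edges clamped."""
    out = [[0.0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            s, c = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    xx, yy = x + dx, y + dy
                    if 0 <= xx < N and 0 <= yy < N:
                        s += g[yy][xx]
                        c += 1
            out[y][x] = s / c
    return out

def normal(g, x, y):
    """Central-difference gradient of the SDF, normalized."""
    gx = (g[y][x + 1] - g[y][x - 1]) / 2
    gy = (g[y + 1][x] - g[y - 1][x]) / 2
    n = math.hypot(gx, gy) or 1.0
    return (gx / n, gy / n)

smooth = box_blur(box_blur(grid))   # more passes = softer normals
```

The gradient of the blurred SDF varies more slowly than the original surface normals, which is exactly the "soft normal" look the article describes.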
For all the background and other characters in the soul world, the staff built a Houdini network to achieve this effect, such as the two characters on the right in Figure 1, which only need a few specific controls. Although each character has its own specific form, most of them share the same topology, which makes large-scale batch management possible. The same applies to the remaining crowd of background soul characters.
For the expression lines, the lines on the left and in the middle of Figure 4 are generated by extruding the original mesh directly along the camera direction. For the contour lines, the dot product between the model normal and the camera view direction is used.
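The contour test above reduces to one comparison: a point lies on the silhouette when its normal is nearly perpendicular to the view direction, i.e. when |N · V| falls below a threshold. A minimal sketch (illustrative names and threshold, not production code):

```python
# Silhouette/contour detection via the normal-vs-view dot product.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = dot(v, v) ** 0.5 or 1.0
    return tuple(x / n for x in v)

def is_contour(normal, point, cam_pos, threshold=0.2):
    """True when the surface normal is nearly perpendicular to the view ray."""
    view = normalize(tuple(c - p for c, p in zip(cam_pos, point)))
    return abs(dot(normalize(normal), view)) < threshold
```

Raising the threshold thickens the extracted contour band; in the film this kind of control is what animators adjust per shot.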
Their Houdini network generates many precise volume channels, which requires heavy and frequent computation, so a system called Houdini Export was created to run the data calculations in parallel batches on the render farm. The system is highly adjustable, allowing different departments to choose different precision settings according to their needs.
In early production, character artists used Pixar's existing visual effects pipeline. Visual effects artists also use volumes that require heavy computation, and they had already developed parallel calculation and storage methods for large amounts of data. Other departments export animation as USD with each frame stored in a separate file, but these per-frame files cannot be used immediately in parallel processing because of race conditions when the files are written out. Instead, Pixar's visual effects artists used the USD Stitch Clips tool to stitch the per-frame files into USD value clips.
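The one-file-per-frame pattern described above avoids write races because each worker owns its own output file; a separate stitch step merges the results afterwards. The sketch below mimics that idea with plain text files and a thread pool; it is NOT the USD API, just an illustration of the pattern.

```python
# Illustration of race-free parallel per-frame export followed by a stitch
# step (conceptually similar to stitching per-frame USD files into value
# clips). File names and formats here are invented for the example.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def export_frame(outdir, frame):
    """Each worker writes only its own frame file, so there is no write race."""
    path = os.path.join(outdir, f"frame.{frame:04d}.txt")
    with open(path, "w") as f:
        f.write(f"frame {frame:04d} data\n")
    return path

def stitch(paths, out_path):
    """Stand-in for the separate stitch step run after all exports finish."""
    with open(out_path, "w") as out:
        for p in sorted(paths):
            with open(p) as f:
                out.write(f.read())

outdir = tempfile.mkdtemp()
with ThreadPoolExecutor(max_workers=4) as pool:
    paths = list(pool.map(export_frame, [outdir] * 5, range(1, 6)))
stitched = os.path.join(outdir, "all_frames.txt")
stitch(paths, stitched)
```

Because no two workers ever open the same file for writing, the export step parallelizes freely across farm nodes; only the cheap stitch step is serial.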
How Disney/Pixar's "Soul" Was Made (Part 2)
This time, Fox Renderfarm, your cloud rendering service provider, continues to share 3D artist Carldrifter's breakdown of how Disney/Pixar's "Soul" was made. This is part two of the article.
- Look Dev
The souls are designed with a prism-like interplay of light and shadow. The difficulty lies in balancing softness and lightness with enough clarity that shapes and expressions remain easily recognizable. This ensures the characters achieve a subsurface-scattering (SSS) look without appearing overly hard-surfaced.
They wrote a custom material node, IridescenceDriver (IDriver), which drives a dual Henyey-Greenstein volume shader to achieve this non-physical lighting effect. In general, the IDriver material reads a custom color via the model normal to set the albedo, which in turn drives fake diffuse and specular responses. To make the characters look softer, the fine normal details of the model are filtered out.
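The dual Henyey-Greenstein model mentioned above blends two lobes of the standard HG phase function, typically one forward-scattering and one backward-scattering. The sketch below shows the textbook formula; the lobe parameters (`g_forward`, `g_back`, `blend`) are illustrative assumptions, not values from the film.

```python
# Dual-lobe Henyey-Greenstein phase function sketch.
# p_HG(cos t, g) = (1 - g^2) / (4*pi * (1 + g^2 - 2*g*cos t)^1.5)
import math

def hg_phase(cos_theta, g):
    """Single Henyey-Greenstein lobe; g in (-1, 1) controls anisotropy."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

def dual_hg_phase(cos_theta, g_forward=0.7, g_back=-0.3, blend=0.6):
    """Blend a forward and a backward lobe for a soft, glowing volume look."""
    return (blend * hg_phase(cos_theta, g_forward)
            + (1.0 - blend) * hg_phase(cos_theta, g_back))
```

With g = 0 the lobe degenerates to isotropic scattering (1/4π); a positive g pushes light forward through the volume, and the small backward lobe adds the soft rim glow.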
For the smaller and more complex shapes of the face, they modeled and set up animated lines to highlight contours or wrinkles.
The rigging also outputs lines to enhance the details of hair and clothes. Contour lines were added to the characters' hands to make gestures and actions easier to read. These contour lines are extracted by computing the dot product between the vertex normal direction and the camera view direction. Animators have additional parameters to control the visibility, density, and thickness of these lines.
The protagonist Joe has some accessories that reinforce his identity in the soul world. His hat, for example, reads like a part of his body, so it is shaded the same way as the body. His glasses, though necessary, are inanimate, so they are colorless; their hard, translucent edges keep them clearly readable. The hair of the background crowd souls is set up like Joe's hat, and most accessories are desaturated with a hint of rainbow color, like Joe's glasses.
Animated Movie Rendering vs. Video Game Rendering
The first difference is between offline and real-time rendering.
Rendering an animated movie is extremely demanding on hardware. It generally runs for a long time on industrial-grade cluster workstations, anywhere from several hours to several weeks, before the final frames are ready to show to the audience.
The quality of animated movies is high: 2K and 4K productions are common, and showing them on IMAX is not a problem. This relies on powerful hardware and long render times.
Video game rendering is different: while the player is at the controls, each frame must be completed in tens of milliseconds.
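The "tens of milliseconds" figure follows from simple arithmetic: the per-frame budget is 1000 ms divided by the target frame rate.

```python
# Per-frame time budget for real-time rendering at a given frame rate.
def frame_budget_ms(fps):
    return 1000.0 / fps

# 30 fps -> ~33.3 ms per frame; 60 fps -> ~16.7 ms per frame.
```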
Game rendering, for both mobile and PC titles, targets screens far smaller than a theater's, so the requirements are much lower, generally not exceeding mainstream TV resolution. This is why people joke that a high-definition game makes you "smell the aroma of the graphics card": playing games is really exercising the graphics processing unit (GPU).
The second difference is the rendering engine.
Animated movie rendering pursues realism, using global illumination and fully simulated particle and fluid effects.
Video game rendering generally uses pre-optimized models, compressed materials, and simple local light sources. Pre-rendered cinematics in story-driven games are another matter: they are often downloaded separately and played back like video.
Those are the main differences between animated movie rendering and video game rendering.
Let’s talk about the difficulty of film and television rendering.
A 2K movie typically takes about one hour of render time per frame, while mainstream Hollywood films are produced at 2K, 4K, 6K, and 8K, and each step up in resolution roughly quadruples the render time. If a scene involves complex simulations such as particles and fluids, a single frame may take more than 10 hours to render.
The huge clusters of a render farm solve this problem well. The model is similar to conventional cloud computing: 3D scenes are rendered on remote servers, and user terminals access the resources over high-speed internet via the web or a client. Instructions are sent from the user terminal, the servers execute the corresponding rendering tasks (allocating multiple frames across multiple nodes simultaneously, which greatly shortens the rendering period), and the resulting images are transmitted back to the user terminal for display.
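The multi-node, multi-frame scheduling described above can be sketched with a few lines of arithmetic. This is an idealized model (no queueing, uniform frame cost), not any real farm's scheduler.

```python
# Idealized render-farm scheduling: frames split across nodes, nodes run in
# parallel, so wall-clock time shrinks roughly by the node count.
import math

def assign_frames(n_frames, n_nodes):
    """Round-robin assignment: node i renders frames i, i + n_nodes, ..."""
    return {i: list(range(i, n_frames, n_nodes)) for i in range(n_nodes)}

def wall_clock_hours(n_frames, hours_per_frame, n_nodes):
    """Each node renders its share sequentially; nodes run in parallel."""
    return math.ceil(n_frames / n_nodes) * hours_per_frame

# A 90-minute film at 24 fps is 90 * 60 * 24 = 129,600 frames. At 1 hour per
# frame, one machine would need ~15 years; 1,000 nodes need ~130 hours.
```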
A cloud render farm can be understood as a high-performance cloud computing cluster (thousands to tens of thousands of high-performance rendering nodes).
NE ZHA, The First Chinese Animated Film Released in IMAX
As the first Chinese animated film released in IMAX theaters in China, NE ZHA (哪吒之魔童降世) assembled China's top domestic animation talent and was three years in production. The film has more than 1,400 shots, nearly 80% of which are special effects shots, and it was released on July 26, 2019, in mainland China.
Ne Zha is one of the most rebellious young heroes in Chinese legend. The film NE ZHA is adapted from classic Chinese mythology, but it subtly subverts the traditional image to tell the story of the demon child Ne Zha, who refuses to accept his fate. The film is directed by Yu Yang (Dumpling), a brilliant director in the Chinese animated film industry, and launched by Horgos Caitiaowu Pictures Co., Ltd, Chengdu Coco Cartoon Co., Ltd. and Octmedia. Fox Renderfarm also provided cloud rendering services for the film, helping safeguard the final result of the production.
The animated film NE ZHA subverts the traditional image and has been well received since its release, and the production team was truly painstaking about it. As the release approached, official trailers and posters were also published. After Monkey King: Hero Is Back and White Snake: Origin, will NE ZHA become another great work in Chinese animated film history? Let's look forward to its box office.