Fazbear & Friends Episode #4 [VERSION A] is OUT now!!


CIRCUS BABY VS FREDDY! - Fazbear & Friends Episode #4 [VERSION A] is OUT now!! Let's give it a watch.

The episode was made by YouTuber ZAMination and rendered with Fox Renderfarm, the best cloud rendering service provider.

Fazbear and Friends is an animated series based on the main cast of "Five Nights at Freddy's". The show follows the wacky misadventures of Freddy, Bonnie, Chica, Foxy, and their many friends as they get themselves into trouble while trying to manage their pizzeria.

Welcome to join us and get a render farm free trial.

Recommended reading


How to Render High-quality Images in Blender

2020-03-09


How to render large scenes with Redshift in Cinema 4D

2019-10-15


Arnold vs V-Ray vs Octane vs Corona vs Redshift, Which Renderer Is The Best?

2019-04-15


How the Redshift Proxy Renders the Subdivision

2018-12-28


China Film Administration Releases New License Logo

2019-01-09


Maya 2018 Arnold's Method of Making Motion Vector Blur in AOV Channel

2018-12-26


Blender vs Maya vs Cinema 4D, Which One Is The Best 3D Software?

2020-01-16


Corona Renderer Learning - Denoising

2019-05-15



You may be interested in

A Sharing of How to Make of Disney/Pixar’s “Soul”(3)
Fox Renderfarm, the best animated film render farm, continues to share how Disney/Pixar's "Soul" was made, written by 3D artist Carldrifter. This is part three of the article.

2. Create Volume

The staff use Houdini to create the volumes for each frame and feed volume-type signals to the ID shader. To ensure that the animation reads the translucent volume information correctly, most of the workflow is camera-based. Each character is composed of several separate volumes, and most of these volumes have partial or complete self-masking, which creates a cleaner effect without layering. For example, the teeth and tongue are precisely cropped based on the camera to match the movement of the mouth opening. The eyeball is a surface model, not a volume, so they create a mask map instead of cropping the eyes.

Other facial features such as the eyes, eyebrows, cheeks, and lips are volume objects, and many are projected from the camera; this is used to adjust the color and adapt the shader. They then calculate the corresponding volume normals from the surface model normals and the camera projection. For the soft normal volume, they converted the surface to an SDF, extended the SDF, added an average blur, and then converted the SDF back to a surface to get smooth normals that map back to the original model.

For all background and other characters in the soul world, the staff created a Houdini network to achieve this effect, such as the two characters on the right in Figure 1; these only need certain specific controls. Although each character has its own specific form, most share the same topology, so large batches can be managed in a streamlined way. The same applies to the crowd soul characters in the remaining clusters.

For the expression lines, the left and middle examples in Figure 4 are extruded directly from the original mesh based on the camera. For the contour lines, the dot product between the model normal and the camera direction is used.

3. Pipeline

Their Houdini network generates many precise volume channels, which requires a lot of frequent calculation, so a system called Houdini Export was created to run batch parallel data calculation on the render farm. The system is highly adjustable, allowing different departments to set different precision levels according to their needs.

In early production, character artists used Pixar's existing visual effects process. Visual effects artists also work with volumes that require heavy calculation, and they have developed parallel computation and storage methods for large amounts of data. Other departments export a USD file with animation in which each frame is stored separately, but this cannot be used directly in parallel processing because there is a race condition when the file is written out. Instead, Pixar's visual effects artists used the USD Stitch Clips tool to stitch each frame into USD value clips (a minimal sketch of this stitching step follows this entry).
2021-01-25
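The per-frame USD export and stitching step described in the pipeline section above can be reproduced with the open-source USD toolchain. Below is a minimal, hedged sketch in Python using the pxr bindings: the file paths, prim path, and frame range are invented placeholders, and the exact UsdUtils.StitchClips signature can vary between USD releases, so treat it as an illustration of value-clip stitching rather than Pixar's actual pipeline code.

```python
# Minimal sketch: stitch per-frame USD files into USD value clips.
# Assumes the open-source USD Python bindings (pxr / usd-core) are
# installed; the file paths, prim path, and frame range below are
# hypothetical placeholders, not Pixar's production setup.
from pxr import Sdf, UsdUtils

# One USD file per frame, written independently by upstream departments.
clip_files = [f"export/volume.{frame:04d}.usd" for frame in range(1001, 1101)]

# The stitched layer references the per-frame files as value clips
# instead of copying their data, so per-frame writes never race.
result_layer = Sdf.Layer.CreateNew("export/volume.stitched.usda")

UsdUtils.StitchClips(
    result_layer,               # layer that will carry the clip metadata
    clip_files,                 # per-frame clip layers, in frame order
    Sdf.Path("/World/volume"),  # prim under which the clips are authored
    1001,                       # start frame
    1100,                       # end frame
)

result_layer.Save()
```

A consumer can then open volume.stitched.usda and read the animation as one continuous layer while each frame remains an independently written file.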
A Sharing of How to Make of Disney/Pixar’s “Soul”(2)
This time, Fox Renderfarm, your best cloud rendering service provider, continues to share how Disney/Pixar's "Soul" was made, written by 3D artist Carldrifter. This is part two of the article.

1. Look Dev

The look of the souls resembles the light and shadow of light refracting through a prism. The difficulty lies in creating an appearance that balances softness and lightness with enough clarity that shapes and expressions can be easily recognized, so the characters achieve a 3S (subsurface scattering) effect without looking overly hard-surfaced.

They wrote a custom material node, IridescenceDriver or IDriver, which drives a dual Henyey-Greenstein volume shader to achieve this non-physical lighting effect (a sketch of the underlying phase function follows this entry). In general, the IDriver material reads a custom color through the model normal to set the albedo color, which is used to fake the diffuse and specular. To make the characters look softer, the fine normal details of the model are filtered out.

For the smaller and more complex shapes of the face, they modeled and set up animated lines to highlight contours or wrinkles. The rigging also outputs lines to enhance the details of hair and clothes. They added contour lines to the characters' hands to make gestures and actions easier to read. These contour lines are extracted by a dot product between the vertex normal and the camera direction, and animators have additional parameters to control their visibility, density, and thickness.

The protagonist Joe has some accessories that reinforce his identity in the soul world. For example, his hat is treated like a part of his body, so it is shaded the same way as the body. His glasses, although necessary, are inanimate, so they are colorless; their hard edges keep them translucent and clearly readable. The hair of the crowd souls in the background is set up like Joe's hat, and most accessories are desaturated with a hint of rainbow color, like Joe's glasses.
2021-01-22
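The dual Henyey-Greenstein volume shader driven by IDriver, mentioned in the look-dev section above, blends two lobes of the standard Henyey-Greenstein phase function. Pixar's node itself is proprietary, but the phase function is textbook math; the Python sketch below only illustrates that underlying formula, and the anisotropy values and blend weight are arbitrary demo numbers, not production settings.

```python
# Minimal sketch of a dual-lobe Henyey-Greenstein phase function.
# The real IDriver/IridescenceDriver node is proprietary; this only
# illustrates the underlying math. g_forward, g_backward and blend
# are arbitrary demo values, not Pixar's production settings.
import math

def henyey_greenstein(cos_theta: float, g: float) -> float:
    """Single-lobe HG phase function for a scattering angle theta."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

def dual_henyey_greenstein(cos_theta: float,
                           g_forward: float = 0.7,
                           g_backward: float = -0.3,
                           blend: float = 0.6) -> float:
    """Blend a forward-scattering lobe with a backward-scattering lobe."""
    return (blend * henyey_greenstein(cos_theta, g_forward)
            + (1.0 - blend) * henyey_greenstein(cos_theta, g_backward))

if __name__ == "__main__":
    # Sample the blended phase function over a few scattering angles.
    for angle_deg in (0, 45, 90, 135, 180):
        c = math.cos(math.radians(angle_deg))
        print(f"{angle_deg:3d} deg -> {dual_henyey_greenstein(c):.4f}")
```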
Animated Movies Rendering VS Video Games Rendering
This comes down to the difference between offline and real-time rendering.

Rendering animated movies is very demanding on hardware. It generally takes a long time on industrial-grade cluster workstations, often several hours to several weeks, before the final work reaches the audience. The quality of animated movies is high: there are many 2K and 4K productions, and showing them on IMAX is not a problem. This relies on powerful hardware and long rendering times.

Video game rendering is different. While the player sits in front of the game, each frame must be completed in tens of milliseconds. For game rendering, whether on mobile or PC, the screen is far smaller than a theater screen, so the requirements are much lower and generally do not exceed mainstream TV resolution. That is why we joke that a high-definition game lets you "smell the aroma of the graphics card"; for mobile and PC games alike, playing games is really a workout for the graphics processing unit (GPU).

The engines also differ. Animated movie rendering pursues realism, using global illumination and full calculation of particle and fluid effects. Video game rendering generally uses pre-optimized models, material compression techniques, and simple local light sources. The cinematics in story-driven games are another matter: they are pre-rendered, downloaded separately, and watched like a video.

That is the difference between animated movie rendering and video game rendering. Now let's talk about the difficulty of film and television rendering. A 2K movie usually needs about one hour of rendering time per frame, while mainstream Hollywood movies are produced at 2K, 4K, 6K, and 8K, and each step up in quality multiplies the time by roughly four (a small back-of-the-envelope calculation follows this entry). If a scene involves complex calculations such as particles and fluids, a single frame may take more than 10 hours to render.

The huge cluster of a render farm solves this problem well. Its model is similar to conventional cloud computing: 3D scenes are rendered on remote servers, and user terminals access the resources over high-speed internet via the web or a client. Instructions are sent from the user terminal, the servers execute the corresponding rendering tasks (distributing many frames across many nodes simultaneously, which greatly shortens the rendering period), and the resulting images are sent back to the user terminal for display. A cloud render farm can be understood as a high-performance cloud computing cluster with thousands to tens of thousands of high-performance rendering nodes.
2018-12-06
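To put the figures in the entry above into perspective, here is a small back-of-the-envelope calculator in Python. It takes the article's rough numbers at face value (about one hour per frame at 2K, with each resolution step multiplying the time by roughly four) and shows how spreading frames across many render nodes shortens the wall-clock period; the frame counts and node counts are made-up examples.

```python
# Back-of-the-envelope render time estimate, using the rough figures
# from the article: ~1 hour per frame at 2K, and each step up in
# resolution (2K -> 4K -> 6K -> 8K) multiplying the time by ~4.
# Frame counts and node counts below are made-up examples.

RESOLUTION_STEPS = {"2K": 0, "4K": 1, "6K": 2, "8K": 3}
BASE_HOURS_PER_FRAME_2K = 1.0

def hours_per_frame(resolution: str) -> float:
    """Estimated render hours for one frame at the given resolution."""
    return BASE_HOURS_PER_FRAME_2K * 4 ** RESOLUTION_STEPS[resolution]

def wall_clock_days(total_frames: int, resolution: str, nodes: int) -> float:
    """Wall-clock days when frames are spread across render nodes,
    assuming one frame per node at a time and perfect scheduling."""
    total_hours = total_frames * hours_per_frame(resolution)
    return total_hours / nodes / 24.0

if __name__ == "__main__":
    frames = 90 * 60 * 24  # a 90-minute film at 24 fps
    for nodes in (1, 100, 5000):
        days = wall_clock_days(frames, "4K", nodes)
        print(f"{nodes:>5} nodes -> about {days:,.1f} days of wall-clock time")
```

Even with these idealized assumptions, the sketch shows why a single workstation would take decades of frame-hours while a farm with thousands of nodes finishes the same work in days.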