What is the difference between pre-rendering and real-time rendering?

Black Myth: Wukong

Pre-Rendering vs Real-time Rendering

"Avatar" directed by James Cameron spent four years and nearly 500 million US dollars to open a new world of science fiction for everyone. The CGI characters, Names, in the film look exactly the same as the people in the real world. And the realistic sci-fi scenes are shocking. However, these wonderful images are inseparable from the CG artists and pre-rendering technology.

Avatar

To handle the rendering workload of "Avatar", Weta Digital's supercomputer processed up to 1.4 million tasks per day, running 24 hours a day with 40,000 CPUs, 104 TB of memory and 10 Gbps of network bandwidth; the whole job took about a month. Each frame of "Avatar" needed several hours to render, and the film plays back at 24 frames per second. Hence, a powerful rendering cluster is really important to a CG studio.

What is pre-rendering?

Pre-rendering is used to create realistic images and films, where each frame can take hours or days to complete; it is also used by programmers when debugging complex graphics code. Pre-rendering starts with modelling: points, lines, surfaces, textures, materials, light and shadow, visual effects and other elements are combined to build realistic objects and scenes. Then computing resources calculate the visual image of the model under factors such as viewpoint, lighting and motion trajectory, according to the predefined scene settings. This process is called pre-rendering. After rendering is complete, the frames are played back in sequence to achieve the final effect.
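To make the idea concrete, here is a minimal sketch of an offline render loop in Python. The `render_frame` function is a hypothetical stand-in for a real renderer; the point is only that every frame is computed in advance and stored, completely decoupled from playback speed.

```python
import time

FPS = 24                    # playback rate of the finished film
SHOT_SECONDS = 10           # length of this shot
TOTAL_FRAMES = FPS * SHOT_SECONDS

def render_frame(frame_index: int) -> bytes:
    """Stand-in for an expensive offline render (ray tracing, global
    illumination, motion blur...). A real frame can take hours."""
    time.sleep(0.01)        # placeholder for hours of computation
    return b"...image data..."

for frame in range(TOTAL_FRAMES):
    image = render_frame(frame)
    # Each finished frame is written to disk; playback at 24 fps
    # happens later, no matter how long the rendering itself took.
    with open(f"shot01_{frame:04d}.exr", "wb") as f:
        f.write(image)
```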

It is mainly used in fields such as architectural visualization, film and television, animation and commercials, with the focus on art and visual effects. To obtain ideal visual effects, modellers sculpt all kinds of model details during production; animators bring the characters to life; lighting artists create various artistic atmospheres; and visual effects artists make the effects look realistic. Commonly used pre-rendering software includes 3ds Max, Maya, Blender, Cinema 4D, etc. Their common characteristic is that the scene must be arranged in advance and the relevant rendering parameters (such as shadows, particles and anti-aliasing) set, after which a PC or render farm performs the calculation unattended, as in the sketch below.
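As one example of setting parameters in advance and rendering unattended, here is a short sketch using Blender's Python API (bpy). The specific values are illustrative, not recommendations; it could be run headless with `blender -b scene.blend -P render_settings.py`.

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'          # Blender's path-tracing engine
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.cycles.samples = 256              # more samples: cleaner but slower
scene.render.filepath = "//frames/shot01_"

# Render the whole animation with no supervision; each frame is
# written to the output path as it finishes.
bpy.ops.render.render(animation=True)
```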

By the way, you can use either a local machine or a cloud render farm for rendering. Fox Renderfarm can provide rendering technical support for all of the software mentioned above.

Every frame in a pre-rendered scene is computed in advance. Once rendering starts, each frame takes seconds, minutes or even hours to finish, and a large amount of memory, CPU/GPU and storage is consumed along the way, making this a compute-intensive application. Film and television projects in particular usually run on a schedule, so rendering tasks must be completed within a specified time. Today such tasks are mostly submitted to cloud render farms. Cloud render farms, such as Fox Renderfarm, are professional services that provide massively parallel computing clusters.
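A back-of-the-envelope estimate shows why parallel clusters matter. The per-frame cost and node count below are assumed numbers for illustration, not benchmarks:

```python
FPS = 24
FILM_MINUTES = 100
HOURS_PER_FRAME = 2.0          # assumed average per-frame render cost
NODES = 1000                   # render nodes working in parallel

total_frames = FPS * FILM_MINUTES * 60
total_hours = total_frames * HOURS_PER_FRAME
print(f"{total_frames:,} frames, {total_hours:,.0f} compute-hours")
# -> 144,000 frames, 288,000 compute-hours (about 33 years on one machine)

# Frames are independent of each other, so they parallelize almost
# perfectly across a render farm:
print(f"~{total_hours / NODES / 24:.0f} days on {NODES} nodes")
# -> ~12 days on 1000 nodes
```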

After pre-rendering, the result is essentially a finished work. If you want a scene to be calculated and displayed on the fly, as in an interactive online service or an online game, we have to talk about real-time rendering.

What is real-time rendering?

In August 2020, a live gameplay demonstration of the action role-playing game "Black Myth: Wukong", produced by the Chinese studio Game Science, went viral on Chinese social networks. The top-notch visuals, rich details, immersive combat experience and well-developed story in the demonstration recreate an oriental fantasy world. Every beautiful scene in the game is rendered in real time.

Black Myth: Wukong

Real-time rendering is used to render a scene interactively, as in 3D computer games, where each frame must generally be rendered within a few milliseconds. In other words, the computer displays the image at the same time as it calculates it. Typical representatives are Unreal Engine and Unity; "Black Myth: Wukong" is built with Unreal Engine 4. The characteristic of real-time rendering is that it can be controlled in real time, which makes interaction very convenient. The disadvantage is that it is limited by the capacity of the system: when necessary, final quality, including models, lighting, shadows and textures, is sacrificed to meet the real-time requirement. Real-time rendering is currently applied in 3D games, 3D simulations, 3D product configurators and more.
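The "few milliseconds per frame" constraint is easiest to see in a game loop. Below is a minimal sketch; `process_input`, `update` and `render` are hypothetical placeholders for engine work, and the loop simply enforces a per-frame time budget.

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS       # ~16.7 ms per frame

def process_input():   # read touch / mouse / keyboard state
    pass

def update(dt):        # advance the game world by dt seconds
    pass

def render():          # draw the current state to the screen
    pass

last = time.perf_counter()
for _ in range(TARGET_FPS * 10):      # ten seconds of frames, then exit
    start = time.perf_counter()
    process_input()
    update(start - last)
    render()                          # must fit inside the budget
    last = start

    # Sleep off whatever is left of the budget; if render() overran,
    # the frame rate drops, or quality must be sacrificed to recover.
    elapsed = time.perf_counter() - start
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)
```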

Real-time rendering focuses on interactivity and responsiveness. Scenes generally need to be optimized to speed up frame calculation and reduce latency. For the user, every operation, such as a finger touch or a click on the screen, causes the image to be recalculated, and the feedback must arrive in real time, so low latency is critical. In simulation applications, data shows that only when latency is kept within 100 ms do people fail to notice any inconsistency between video and audio.
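A rough budget shows how quickly that 100 ms threshold is consumed. The pipeline stages and their costs below are assumptions for illustration, not measurements of any particular engine:

```python
FRAME_TIME_MS = 1000 / 60          # ~16.7 ms per frame at 60 fps

budget_ms = {
    "input sampling":       FRAME_TIME_MS,  # wait for the next frame
    "simulation + render":  FRAME_TIME_MS,  # compute the new image
    "display scanout":      FRAME_TIME_MS,  # monitor refresh
}

total = sum(budget_ms.values())
print(f"input-to-display latency: ~{total:.0f} ms (threshold: 100 ms)")
# -> ~50 ms: comfortably inside the threshold at 60 fps, while a
#    30 fps title (~33 ms frames) would already land near ~100 ms.
```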

In recent years, with the improvement of GPU performance, real-time calculation has become faster and the computed images more accurate. Especially with the adoption of ray tracing and other technologies, the results of real-time rendering have become more realistic. These cutting-edge technologies are also clear trends in future production. If you want to learn more about real-time rendering, please feel free to contact us.
