Pre-Rendering vs Real-time Rendering
"Avatar", directed by James Cameron, took four years and nearly 500 million US dollars to bring a new science-fiction world to the screen. The CGI characters in the film, the Na'vi, look just as lifelike as real people, and the realistic sci-fi scenes are stunning. These wonderful images, however, would be impossible without CG artists and pre-rendering technology.
To handle the rendering for "Avatar", Weta Digital's supercomputer processed up to 1.4 million tasks per day, running 24 hours a day with 40,000 CPUs, 104 TB of memory, and 10 Gb of network bandwidth; the rendering took about a month in total. Each frame of "Avatar" needed several hours to render, at 24 frames per second of footage. Hence, a powerful rendering cluster is really important to a CG studio.
What is pre-rendering?
Pre-rendering is used to create realistic images and films, where each frame can take hours or days to complete, or to debug complex graphics code. Pre-rendering starts with modelling: points, lines, surfaces, textures, materials, light and shadow, visual effects and other elements are used to build realistic objects and scenes. Then, computing resources calculate the visual image of the model under factors such as viewpoint, lighting, and motion trajectory, according to the predefined scene settings. This process is called pre-rendering. After rendering is complete, the frames are played back in sequence to achieve the final effect.
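The workflow above can be sketched as a simple loop: compute every frame in full first, store the results, and play them back later. The following is a minimal illustration in Python; `render_frame` is a hypothetical stand-in for a real renderer, not an API from any of the tools discussed here.

```python
# Minimal sketch of an offline (pre-rendering) pipeline.
# render_frame() is a hypothetical stand-in for a real renderer;
# here it just "computes" a solid-shade image for each frame.

def render_frame(frame_index, width=4, height=2):
    """Pretend to render one frame: return rows of pixel values."""
    shade = frame_index % 256  # stand-in for expensive light/shadow math
    return [[shade] * width for _ in range(height)]

def prerender(total_frames):
    """Render every frame up front; playback happens later from storage."""
    frames = []
    for i in range(total_frames):
        frames.append(render_frame(i))  # may take hours per frame in production
    return frames

frames = prerender(24)  # one second of footage at 24 fps
print(len(frames))      # 24 pre-computed frames, ready for playback
```

The key point of the sketch is that all the expensive work happens before playback, so playback itself is just reading finished frames.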
Pre-rendering is mainly used in fields such as architectural visualization, film and television, animation, and commercials, with a focus on art and visual effects. To obtain the ideal visual result, modelers sculpt the fine details of each model; animators bring the characters to life; lighting artists create various artistic atmospheres; and visual effects artists make the effects look believable.
Commonly used pre-rendering software includes 3ds Max, Maya, Blender, Cinema 4D, etc. These tools require the scene to be arranged in advance and the relevant rendering parameters (such as shadows, particles, and anti-aliasing) to be set, after which a PC or render farm renders the frames unattended.
By the way, you can render either on a local machine or on a cloud render farm; Fox Renderfarm provides rendering technical support for all of the software mentioned above.
In pre-rendering, every frame of the scene is computed in advance. Once rendering starts, each frame takes seconds, minutes, or even hours to finish. The process consumes large amounts of memory, CPU/GPU time, and storage, making it a computing-resource-intensive application. Film and television projects in particular usually run on tight schedules, so rendering tasks must be completed within a specified time. Today, such tasks are mostly submitted to cloud render farms. Cloud render farms, such as Fox Renderfarm, are professional service companies that provide massively parallel computing clusters.
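A back-of-envelope calculation shows why a massively parallel cluster is essential. The figures below are illustrative assumptions (a 160-minute film at 24 fps, averaging 2 hours of compute per frame), not the actual "Avatar" numbers:

```python
# Back-of-envelope render-farm estimate with assumed, illustrative figures:
# a 160-minute film at 24 fps, averaging 2 hours of compute per frame.
film_minutes = 160
fps = 24
hours_per_frame = 2

total_frames = film_minutes * 60 * fps          # 230,400 frames
total_cpu_hours = total_frames * hours_per_frame  # 460,800 CPU-hours

# Single machine vs. an (idealized, perfectly scaling) 40,000-core farm:
years_single = total_cpu_hours / 24 / 365
days_farm = total_cpu_hours / 40000 / 24

print(total_frames)          # 230400
print(total_cpu_hours)       # 460800
print(round(years_single))   # ~53 years on a single core
print(round(days_farm, 2))   # ~0.48 days on the farm (idealized)
```

Real farms never scale perfectly, but even this idealized estimate makes the gap clear: decades on one machine versus well under a day on a cluster.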
The output of pre-rendering is essentially a finished work. If you want a scene to be computed and displayed in real time, as in an interactive online service or an online game, we have to talk about real-time rendering.
What is real-time rendering?
In August 2020, a gameplay demonstration of the action role-playing game "Black Myth: Wukong", produced by the Chinese studio Game Science, went viral on Chinese social networks. The top-notch visuals, rich detail, immersive combat, and well-developed story in the demonstration bring an Eastern fantasy world to life. Every beautiful scene in the game is rendered in real time.
Real-time rendering is used to render a scene interactively, as in 3D computer games, where each frame must generally be rendered within tens of milliseconds (about 16.7 ms per frame at 60 fps). In other words, the computer displays the image while it is still calculating the next one. Typical engines are Unreal and Unity; "Black Myth: Wukong", for example, is built with Unreal Engine 4. The characteristic of real-time rendering is that it can be controlled in real time, which makes interaction very convenient. The disadvantage is that it is limited by the performance of the system: when necessary, it sacrifices final quality, including model detail, lighting, shadows and textures, to meet the real-time constraint. Real-time rendering is currently applied in 3D games, 3D simulations, 3D product configurators and more.
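The frame-budget constraint can be illustrated with a toy main loop: update the simulation, render, measure how long the frame took, and stay within the budget. All names here are illustrative; real engines like Unreal and Unity structure this very differently, but the budget idea is the same.

```python
import time

FRAME_BUDGET_S = 1 / 60  # ~16.7 ms per frame for 60 fps

def update(state, dt):
    """Advance the simulation; a stand-in for game logic and physics."""
    state["x"] += 1.0 * dt  # move at 1 unit per second
    return state

def render(state):
    """Stand-in for drawing; a real engine hands this work to the GPU."""
    return f"frame at x={state['x']:.3f}"

def run(frames):
    state = {"x": 0.0}
    for _ in range(frames):
        start = time.perf_counter()
        state = update(state, FRAME_BUDGET_S)
        render(state)
        elapsed = time.perf_counter() - start
        # A real engine would cut detail (level of detail, shadow
        # quality, resolution) if `elapsed` kept exceeding the budget.
        if elapsed < FRAME_BUDGET_S:
            time.sleep(FRAME_BUDGET_S - elapsed)  # wait out the frame
    return state

final = run(60)  # one second of simulated gameplay at 60 fps
print(round(final["x"], 6))  # object has moved ~1.0 unit
```

This is the essential trade-off the paragraph describes: the loop must finish every frame on time, so when the work does not fit the budget, quality is what gives way.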
Real-time rendering focuses on interactivity and responsiveness. Scenes generally need to be optimized to increase rendering speed and reduce latency. Every user operation, such as a finger touch or a click on the screen, causes the image to be recalculated, and the feedback must arrive in real time, so low latency is very important. In simulation applications, data shows that if latency is kept within about 100 ms, people do not noticeably perceive any mismatch between video and audio.
In recent years, with improvements in GPU performance, real-time calculation has become faster and the rendered images more accurate. Especially with the adoption of ray tracing and other technologies, real-time rendering has become more and more realistic. These cutting-edge technologies are also a clear trend in future production. If you want to learn more about real-time rendering, please feel free to contact us.