What is a GPU render farm
Put simply, a GPU render farm is a render farm that supports GPU rendering. At the NVIDIA conference on Wednesday, December 18, 2019, Fox Renderfarm announced that it would launch cloud rendering services powered by NVIDIA RTX the following year. RTX brings a breakthrough in price/performance for cloud rendering on Fox Renderfarm: up to 12 times faster at roughly one-seventh the cost. After all, as of 2019, the top three films in Chinese box-office history were all rendered on Fox Renderfarm.
We know that, under the same conditions, CPU rendering is much slower than GPU rendering. Fox Renderfarm is the largest render farm in Asia that supports both CPU and GPU rendering. Summing up results across multiple rendering projects, scenes that originally took 485 hours to render on CPUs can now be rendered on GPUs in under 40 hours. GPU rendering increases speed while saving customers money.
GPU rendering is steadily maturing in film production and animation. During the production of "Deadpool", the well-known Blur Studio used GPU rendering to create a test short that looked very close to a finished film. "Deadpool" was originally not favored by film investors, but after the short leaked, it was enthusiastically embraced online, which finally convinced the producers to greenlight and shoot the film.
"We wanted to put the best preview in front of Fox Film," Margo said.
Blur Studio built the CG assets from scratch in Autodesk 3ds Max, then rendered them with Chaos Group's GPU renderer V-Ray RT, which runs only on NVIDIA GPUs and was about 15 times faster than the CPU renderer.
This is just one small example of GPU rendering in action, but it shows how much room GPU rendering has in the commercial rendering market.
So what are the advantages of GPU rendering, and how is it different?
Why does the GPU exist at all? When we talk about GPU rendering, this question cannot be bypassed. The GPU, or Graphics Processing Unit, was first named by NVIDIA with the release of the GeForce 256 graphics chip in 1999. Its core technologies included hardware T&L (transform and lighting), cubic environment mapping, vertex blending, texture compression, bump mapping, and a dual-texture, four-pixel, 256-bit rendering engine. The birth of the GPU, and its continuous evolution since, has been driven by increasingly complex graphics workloads and the demand for a more realistic, vivid real-time experience.
[Figure: CPU architecture (left) vs. GPU architecture (right)]
With hardware T&L, the GPU performs the bulk of coordinate and light-source transformations in hardware, so the CPU no longer has to compute this lighting data itself; better performance comes directly from the graphics card. At the same time, 3D models can be described with more polygons, producing more delicate results.
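To make the idea concrete, here is a minimal sketch in Python (with NumPy standing in for the hardware) of the per-vertex work that hardware T&L offloads from the CPU: a 4x4 matrix transform of each vertex plus a simple diffuse lighting term. The function name `transform_and_light` and the single-triangle data are illustrative assumptions, not any real driver API.

```python
import numpy as np

def transform_and_light(vertices, normals, mvp, light_dir):
    """Per-vertex transform and diffuse lighting -- the workload
    that hardware T&L moved from the CPU to the GPU."""
    # Append w=1 and apply the 4x4 model-view-projection matrix.
    v4 = np.hstack([vertices, np.ones((len(vertices), 1))])
    clip = v4 @ mvp.T
    # Simple Lambertian term per vertex: max(0, N . L).
    l = light_dir / np.linalg.norm(light_dir)
    intensity = np.clip(normals @ l, 0.0, None)
    return clip, intensity

# One triangle facing +z, lit head-on by a light along +z.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
norms = np.tile([0., 0., 1.], (3, 1))
mvp = np.eye(4)  # identity transform for the sketch
clip, shade = transform_and_light(verts, norms, mvp, np.array([0., 0., 1.]))
print(shade)  # each vertex fully lit: [1. 1. 1.]
```

On 1999-era hardware this loop over vertices is exactly what moved off the CPU: the GeForce 256 executed the transform and lighting stages in fixed-function silicon, freeing the CPU for game and application logic.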
NVIDIA GPUs helped Tim Miller of Blur Studio create realistic images. GPU rendering therefore holds an advantage in hardware support, and combined with ever-maturing software support, GPU-based render farm solutions have a very broad market ahead of them.