Three Aspects to See the Differences Between GPU and CPU Rendering (1)
The Background of the Graphics Card
Around the year 2000, the graphics card was still commonly called a graphics accelerator. When something is labeled an "accelerator", it is usually not a core component; think of Apple's M7 coprocessor. As long as a machine had basic graphics output, it could drive a monitor perfectly well. At that time, only a few high-end workstations and home game consoles had separate graphics processors. Later, as PCs grew in popularity and games and near-monopoly software such as Windows developed, hardware manufacturers began to streamline their designs, and dedicated graphics processors, that is, graphics cards, gradually became mainstream.
There are about as many GPU manufacturers as CPU manufacturers, but only three of them (AMD, NVIDIA, and Intel) are widely known and recognized.
To see the differences between GPU and CPU rendering, let's start with the following three aspects.
1. The Functions of GPU and CPU
To understand the differences between GPU and CPU, we first need to understand what each of them is responsible for. A modern GPU's functionality covers every aspect of graphics display. You may have seen the picture below before: it is the test scene from an old version of DirectX, a rotating cube. Displaying such a cube takes several steps. Let's start from a simple case.
Imagine the cube unfolded into a cross-shaped net, then strip away the connecting lines and keep only the eight corner points of the cube. Now the problem of rotating the cube becomes the problem of rotating those eight points. When you first create the cube, you define the coordinates of its eight points, which are usually represented as vectors, three-dimensional vectors at least. In linear algebra, a rotation is represented by a matrix, and rotating a vector means multiplying that matrix by the vector. Rotating the eight points therefore means performing eight matrix-vector multiplications. It is not complicated; it is nothing more than computation, but a large amount of it: eight points mean eight multiplications, and 2,000 points mean 2,000 multiplications. This is part of the GPU's work, the transformation of vertices, and it is also the simplest step. But there is much more to it than that.
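The vertex transformation described above can be sketched in a few lines of Python. This is only an illustration of the math (the function and variable names are invented for this example, and a real GPU does the same multiplications in hardware, massively in parallel):

```python
import math

# The eight corner vertices of a unit cube, each a 3D vector.
CUBE_VERTICES = [
    (x, y, z)
    for x in (-1.0, 1.0)
    for y in (-1.0, 1.0)
    for z in (-1.0, 1.0)
]

def rotation_matrix_y(angle_rad):
    """3x3 matrix representing a rotation around the Y axis."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [
        ( c,  0.0,   s),
        (0.0, 1.0, 0.0),
        (-s,  0.0,   c),
    ]

def rotate(matrix, v):
    """One matrix-vector multiplication: transforms a single vertex."""
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in matrix)

# Rotating the cube is just repeating the same multiplication
# for every vertex: 8 vertices, 8 multiplications.
m = rotation_matrix_y(math.radians(90))
rotated = [rotate(m, v) for v in CUBE_VERTICES]
```

With 2,000 vertices the loop simply runs 2,000 times; the per-vertex work is identical and independent, which is exactly the kind of workload a GPU parallelizes.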
In conclusion, CPU and GPU are designed for different tasks and applied in different scenarios. The CPU is built for generality and logical control: it handles a wide variety of data types, works through large amounts of branching logic, and services interrupts, all of which makes its internal architecture extremely complex. The GPU, by contrast, specializes in large-scale data that is highly uniform and mutually independent, in an intensive computing environment that does not need to be interrupted. As a result, CPU and GPU have very different architectures (see the schematic diagram).