Three Aspects to See the Differences Between GPU and CPU Rendering (2)

2. The Magic Inside the CPU and GPU

The picture above comes from the NVIDIA CUDA documentation. Green represents computing units, orange-red represents storage units, and orange represents control units.

The GPU employs a large number of computing units and an extremely long pipeline, and it needs only very simple control logic with little or no cache. The CPU, by contrast, is largely occupied by cache, complex control logic, and many optimization circuits; compared with the GPU, pure computing power takes up only a small part of the CPU.

That is what most of the GPU's work looks like: a heavy load of computation that is repetitive rather than skill-intensive. It is like being given the task of doing arithmetic within 100 a hundred million times; the best way is probably to hire dozens of elementary school students and divide the work among them, since the task only involves repetitive calculation rather than high-level skill. The CPU, on the other hand, is like a prestigious professor who is proficient in advanced mathematics. His ability equals that of 20 elementary school students, and of course he is paid more. But whom would you hire if you were Foxconn's recruiting officer? The GPU works like that: it accumulates simple computing units to complete a large number of computing tasks, on the premise that the task given to student A does not depend on the task given to student B.

Many compute-heavy problems have exactly this characteristic, such as password cracking, mining, and many graphics calculations: they can be decomposed into many identical simple tasks, and each task can be assigned to one elementary school student. But some tasks involve dependencies between the parties. For example, on a blind date it takes both parties' willingness for things to develop further; there is no way to go as far as getting married when at least one of you is against it. The CPU usually takes care of more complicated, interdependent issues like this.

Usually, we use the GPU to solve a problem whenever it is possible. To use the previous analogy, the speed of GPU computing depends on how many students are employed, while the speed of CPU computing depends on how capable the employed professor is. The professor's ability to handle complex tasks far exceeds that of the students, but for less complex tasks with a larger workload, one professor still cannot compete with dozens of elementary school students. Of course, today's GPU can also do some more complicated work, roughly equivalent to the ability of a junior high school student, but the CPU is still the brain that controls the GPU, assigns it tasks, and feeds it the partially processed data.


3. Parallel Computing

First of all, let's talk about the concept of parallel computing: it is a kind of computation in which many calculations or execution processes are performed simultaneously. Such a computation can usually be divided into smaller tasks, which are then solved at the same time. A GPU used for parallel computing works as a co-processor alongside the CPU (the host); it has its own internal memory and can even launch a thousand threads at the same time.

When using the GPU to perform computing, the interaction with the CPU mainly involves the following (a small sketch is given after the list):

Data exchange between CPU and GPU

Data exchange on the GPU
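A minimal CUDA sketch of these two kinds of data exchange is shown below. The array size and variable names are only illustrative; error checking is omitted for brevity.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const int n = 4;
    float hostData[n] = {1.0f, 2.0f, 3.0f, 4.0f};
    float *deviceA = NULL, *deviceB = NULL;

    // Allocate two buffers in the GPU's own memory.
    cudaMalloc((void **)&deviceA, n * sizeof(float));
    cudaMalloc((void **)&deviceB, n * sizeof(float));

    // 1) Data exchange between CPU and GPU: copy host memory to device memory.
    cudaMemcpy(deviceA, hostData, n * sizeof(float), cudaMemcpyHostToDevice);

    // 2) Data exchange on the GPU: copy from one device buffer to another.
    cudaMemcpy(deviceB, deviceA, n * sizeof(float), cudaMemcpyDeviceToDevice);

    // Copy the result back so the CPU can continue working with it.
    cudaMemcpy(hostData, deviceB, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("first element after the round trip: %f\n", hostData[0]);

    cudaFree(deviceA);
    cudaFree(deviceB);
    return 0;
}
```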

In general, only one task can be performed at a time on one CPU or GPU computing core (which we usually call a "core"). With Hyper-Threading technology, one computing core may perform multiple computing tasks at the same time; for example, on a dual-core, four-thread CPU, each computing core can perform two computing tasks simultaneously without interrupting each other. What Hyper-Threading essentially does is double each computing core's logical computing capacity. We often see a CPU running dozens of programs at the same time, but from a microscopic point of view these dozens of programs still rely on serial computing: on a four-core, four-thread CPU, only four operations can be performed at any instant, and those dozens of programs can only take turns on the four computing cores. However, because the switching is so fast, what appears at the macroscopic level is the impression that these programs are running "simultaneously".

The most prominent feature of the GPU is its large number of computing cores. A CPU usually has only four or eight cores, and the count generally does not exceed double digits, while a GPU built for scientific computing may have more than a thousand cores. Thanks to this huge advantage in core count, the number of computations the GPU can perform far exceeds the CPU's, so for calculations that can be done in parallel, using the GPU can greatly improve efficiency.

Here let me explain serial versus parallel computation a little. In general, serial computing does the calculations one by one, while parallel computing does several of them simultaneously. For example, to calculate the product of a real number a and the vector B = [1 2 3 4], the serial approach is to first calculate a*B[1], then a*B[2], then a*B[3], and finally a*B[4] to obtain a*B. The parallel approach is to calculate a*B[1], a*B[2], a*B[3], and a*B[4] at the same time and obtain the result of a*B at once. If there is only one computing core, the four independent computing tasks cannot be executed in parallel and can only be calculated serially one by one; but if there are four computing cores, the four independent tasks can be divided among the cores and executed at the same time. That is the advantage of parallel computing. Because the GPU has a large number of computing cores, the scale of parallel computing can be very large, and for problems that can be solved by parallel computing it shows performance far superior to the CPU. For example, when cracking a password, the task is decomposed into several pieces that can be executed independently; each piece is allocated to one GPU core, and many cracking tasks are performed at the same time, thereby speeding up the whole process.
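To make the a*B example concrete, here is a minimal CUDA sketch (the kernel and variable names are illustrative) that computes the product both ways: a serial CPU loop, and a parallel kernel in which each element is handled by its own GPU thread, just as each "student" handles one independent sub-task.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Parallel version: each GPU thread scales one element of B.
__global__ void scaleKernel(float a, const float *B, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = a * B[i];   // all elements are computed at the same time
    }
}

int main(void) {
    const int n = 4;
    const float a = 2.0f;
    float B[n] = {1.0f, 2.0f, 3.0f, 4.0f};
    float serialResult[n], parallelResult[n];

    // Serial version: one core computes a*B[1], then a*B[2], and so on.
    for (int i = 0; i < n; ++i) {
        serialResult[i] = a * B[i];
    }

    // Parallel version: copy B to the GPU, launch n threads, copy the result back.
    float *dB = NULL, *dOut = NULL;
    cudaMalloc((void **)&dB, n * sizeof(float));
    cudaMalloc((void **)&dOut, n * sizeof(float));
    cudaMemcpy(dB, B, n * sizeof(float), cudaMemcpyHostToDevice);

    scaleKernel<<<1, n>>>(a, dB, dOut, n);   // n independent tasks run in parallel

    cudaMemcpy(parallelResult, dOut, n * sizeof(float), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i) {
        printf("a*B[%d] = %.1f (serial) / %.1f (parallel)\n",
               i + 1, serialResult[i], parallelResult[i]);
    }

    cudaFree(dB);
    cudaFree(dOut);
    return 0;
}
```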

But parallel computing is not a panacea. It rests on the premise that a problem can be decomposed into several independent tasks; only then can it be executed in parallel, and many problems cannot be decomposed this way. For example, if a problem has two steps and the second step depends on the result of the first, the two parts cannot be executed in parallel and can only be run one after the other. In fact, our everyday computing tasks often have complex dependencies that cannot be parallelized, and this is the big disadvantage of the GPU.
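Here is a small sketch of such a dependency (the recurrence and the numbers are purely illustrative): each step needs the result of the previous step, so the iterations cannot simply be handed out to separate GPU threads.

```cuda
#include <stdio.h>

int main(void) {
    // Step i depends on the result of step i-1, so this recurrence
    // cannot be split as-is into independent tasks for GPU threads.
    float x[5] = {1.0f, 0.0f, 0.0f, 0.0f, 0.0f};
    for (int i = 1; i < 5; ++i) {
        x[i] = x[i - 1] * 0.5f + 1.0f;   // must wait for x[i-1] to finish
    }
    printf("x[4] = %f\n", x[4]);
    return 0;
}
```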

As for GPU programming, there are several mainstream methods, such as NVIDIA's CUDA mentioned above.

  
