What is Rendering? How to Accelerate the Rendering Speed?


3D Rendering

Before a 3D project can be displayed visually at all, it has to be rendered. Although this step is carried out by computer processors, it is very time-consuming because of the enormous computing power it requires.

But what exactly happens during rendering? Put simply: a finished single image, or an entire scene of moving images, is calculated pixel by pixel from the raw data of a 3D model previously created in a suitable program. With moving images, keep in mind that one second of the finished scene consists of several "frames"; a movie, for example, runs at 24 frames per second. During rendering, each individual frame is calculated as a single image.

How long these frames take to calculate depends very much on the quality and level of detail of the project, but above all on the lighting setup and the corresponding surface reflections, the chosen perspective and, of course, any errors the scene may contain. In the end, the calculations are extremely complex and build on the preceding work steps such as modeling, texturing, shading, lighting and animation. Rendering is therefore also the last big step in the realization of a 3D project.

Over 100 years of rendering time for a character from "The Lord of the Rings"?

To give you an idea of what that means in practice: in "The Lord of the Rings: The Two Towers", winner of the 2003 Oscar for Best Visual Effects, the character "Treebeard", an animated tree creature, is on screen for 13:45 minutes in the extended version. That means a total of 19,800 frames (i.e. single images) had to be rendered. For this character, the calculation of a single frame took about 48 hours, which adds up to 950,400 hours of computation for all frames together. For a single computer processor, that would be the equivalent of 39,600 days, or well over 100 years.
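The arithmetic above is easy to verify; here is a quick back-of-the-envelope sketch using only the figures quoted in the text:

```python
# Render-time arithmetic for the "Treebeard" example.
FPS = 24                        # cinema frame rate
screen_time_s = 13 * 60 + 45    # 13:45 minutes of screen time, in seconds
hours_per_frame = 48            # quoted single-processor render time per frame

frames = FPS * screen_time_s            # 19,800 single images
total_hours = frames * hours_per_frame  # 950,400 hours of computation
total_days = total_hours / 24           # 39,600 days
total_years = total_days / 365          # roughly 108.5 years

print(frames, total_hours, total_days, round(total_years, 1))
```

Running the numbers this way makes clear why a single processor is out of the question for production work.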

No wonder, then, that the VFX studio in charge, Weta Digital, expanded its already enormous technical equipment of around 2,250 processor cores by another 500 while production was still running. Even with this concentrated computing power, rendering "Treebeard" alone is estimated to have taken about 14 days. And the extended version of the film runs almost four hours and contains a great deal of animated content, so it is hard to imagine how long rendering the entire movie must have taken. Mind you, this does not even include the artistic work, only the calculation of the finished images.

60 Million rendering hours for the Disney film "Frozen"

But anyone who assumes that technological progress automatically speeds up rendering is only partially correct, because growing technical possibilities naturally also raise the quality bar of the projects. The production team of the world-famous Disney animated film "Frozen" from 2013 racked up a total of 60 million rendering hours. And although Walt Disney Animation Studios had increased its number of processor cores from 5,300 to 30,000 within two years, giving it one of the world's largest data centres, rendering the entire film still took over 83 days with all that computing power running 24 hours a day without a break. Since Disney never works on just one project at a time, it probably took even longer. One more small comparison: a single computer from that era would have needed more than 900 years to render the film.
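The 83-day figure follows directly from the quoted totals, assuming perfectly parallel use of all cores:

```python
# Sanity check on the "Frozen" figures quoted above.
total_render_hours = 60_000_000   # quoted total rendering hours
cores = 30_000                    # quoted processor core count

wall_clock_hours = total_render_hours / cores  # 2,000 hours if perfectly parallel
wall_clock_days = wall_clock_hours / 24        # about 83.3 days
print(round(wall_clock_days, 1))
```

Real farms never reach perfect parallelism, which is one reason the actual schedule was likely longer.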

These are, of course, examples from professional productions with the highest standards. But what about you and your studio: how can you accelerate your rendering speed? A cloud rendering service with enormous computing power, such as Fox Renderfarm, is worth considering. Fox Renderfarm was founded in 2011 and is now the largest provider in this field in Asia. In a so-called "render farm", many physical servers, and thus many processors, are combined into a gigantic "supercomputer". Fox Renderfarm, for example, currently operates over 20,000 physical servers, used for rendering by over 1.5 million end users from over 50 countries. In total, Fox Renderfarm renders 1 petabyte of finished graphics per month for its customers, which corresponds to one million gigabytes. Join and try the $25 free trial now!


Anima Render Farm Services, Welcome to Fox Renderfarm


3D Rendering

We are happy to announce that Fox Renderfarm now supports anima®!

anima® is a 3D people animation application developed specifically for architects and designers, ideal for creating amazing 3D animated people quickly and easily!

With anima® and its native plugins for 3ds Max, Maya, Cinema 4D and UE4, you can easily add 3D people and crowds with facial gestures and clothing movement to your CG scenes!

As the authorized render farm of anima®, Fox Renderfarm supports the most popular 3D software, renderers and plugins all over the world, including 3ds Max, Maya, Cinema 4D, Blender, V-Ray, Redshift, etc.

With enormous computing power, Fox Renderfarm will continue to provide customers with a faster, safer, cheaper, smarter, and easier cloud rendering service, helping CG artists realize their dreams.

FGT Art 2021 January Winner Announced


3D Rendering

Let us introduce you to the first FGT Art of the Month 2021: Eternity (一笑一尘缘,一念一清静。)!

This amazing artwork, Eternity (一笑一尘缘,一念一清静。), is created by our friend Kay John Yim, an excellent Chartered Architect based in London. The artwork also has a beautiful caption:

To see a World in a Grain of Sand

And a Heaven in a Wild Flower,

Hold Infinity in the palm of your hand

And Eternity in an hour.

Who's our next FGT Art winner? Show your talent and win great prizes! Hit the link to learn more.

3ds Max Tutorials: The Production and Sharing of "Sci-Fi Guns"(3)


3ds Max Render Farm

The best render farm, Fox Renderfarm, continues to share with you the making of "Sci-Fi Guns", a work created with 3ds Max and Substance Painter 2019 by 3D artist Zikai Wu. This is part three of the series.


Next comes texture production. First, divide the whole model into several large layers and create selection areas based on the ID map made earlier: create a layer group, add a color selection mask to it, and then make the texture of each part inside its own layer group.

When making textures, my workflow is to lay down the colors first and then add details. Start with a basic color, then duplicate the layer to create the darker areas and add a Dirt generator to it; if the result does not meet your requirements, you can modify the generator parameters and paint in the areas you want with a brush. I usually add an Edge generator to the bright edges to increase the sense of volume and keep the overall read clear. For example, for the small white particles on part of the surface: create a new fill layer, add a black mask to it, add a fill layer inside the mask, then find a suitable image in the grayscale material library, drop it on top, and adjust the parameters to control the particle effect.

After the texture is finished, I will create a new layer on the top and add sharpening to enhance the clarity of the details.

For the gun, I used some glow textures and halo effects. The halos start from basic layers: use a mask to draw the shape, keep only the emissive (self-illumination) channel, and change it to blue. Then duplicate the layer, add a Blur filter to the mask, and adjust the blur intensity to achieve the halo effect.

The other parts of the texture use the same method, and the textures are exported once production is done. When exporting textures, pay attention to texture size, texture location, channels, and format.


Once the textures are produced and exported, the project enters the rendering stage. We also need to export the environment map used in Substance Painter for use in Marmoset Toolbag 3.

Import the low poly, add the Marmoset Toolbag 3 shader, and assign the exported textures to the matching channels, paying attention to each channel's name. On the normal channel you need to flip the Y axis, which is a single click. Because this model also has an emissive map, the emissive channel must be enabled in Marmoset Toolbag 3. After the textures comes the lighting. My lighting setup: one main light source (warmer) and one or two auxiliary lights (colder). The main light sits 45 degrees above the front, and the auxiliary lights sit at the side and back, mainly to lift the dark areas and bring out the silhouette.

After all the settings are adjusted, you can render. Below is the final rendering effect:

The above is the whole process of making this gun. In production, I keep trying to learn and find the most suitable way. I hope this tutorial can be of some help to you.

3ds Max Tutorials: The Production and Sharing of "Sci-Fi Guns"(2)


3ds Max Render Farm

The best render farm, Fox Renderfarm, shares with you the making of "Sci-Fi Guns", a work created with 3ds Max and Substance Painter 2019 by 3D artist Zikai Wu. This is part two of the series.

Unwrapping UV

After the low-poly model is made, the next job is splitting the UVs. Before splitting the UVs, you first need to assign smoothing groups. My rule of thumb: edges sharper than 90 degrees get separate smoothing groups, while edges of 90 degrees or less share a uniform smoothing group. Any large black shading artifacts that remain after this can be separated individually. When splitting the UVs, first use the built-in tools to unfold along the smoothing groups, then manually adjust the more complicated structures. Keep shells that should be straight as straight as possible, which makes packing easier. The second concern is the allocation of UV space: the more important components should occupy more space, and the utilization of the UV space should be maximized. Because most of this gun is symmetrical, half of the UVs overlap.

Normal map

Baking is done in Substance Painter. Before baking the normal map, the high polys and low polys must match, especially along the edges of the model. The second requirement is model naming: for example, the low poly is named qiang01_low and its high poly qiang01_high. The low-poly export format is fixed.
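Match-by-name baking silently fails for any low poly whose high-poly partner is missing, so it can be worth checking the names before exporting. A minimal sketch of such a check; the helper function and the example mesh names (beyond the `qiang01` pair above) are hypothetical, not part of the original workflow:

```python
# Verify that every *_low mesh name has a matching *_high partner,
# as required by Substance Painter's match-by-name baking.
def unmatched_low_meshes(names):
    lows = {n[: -len("_low")] for n in names if n.endswith("_low")}
    highs = {n[: -len("_high")] for n in names if n.endswith("_high")}
    return sorted(lows - highs)

# Hypothetical scene: one low poly is missing its high-poly partner.
scene = ["qiang01_low", "qiang01_high", "qiang02_low"]
print(unmatched_low_meshes(scene))  # -> ['qiang02']
```

In practice you would feed this the object names exported from 3ds Max rather than a hand-written list.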

Worth noting here is the export of the high polys. If you export all the models in one go, it takes a long time, and if small problems show up when baking the normal map, you have to go back to 3ds Max, fix them, and re-export the entire model, which wastes a lot of time. After some testing, I split the model into several parts and export them separately, which makes modifications much easier.

With these preparations done, start baking the textures in Substance Painter using the PBR workflow: import the high polys, set the parameters, select matching by name, and bake a vertical (unaveraged) normal first. A vertical normal bake gives a clear internal structure, and small details such as screws are not offset, but there will be some defects along the edges. So it is also necessary to bake an averaged normal and combine the two to keep the advantages of each: the edges of the averaged normal and the interior of the vertical normal.
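The author combines the two bakes inside Substance Painter; purely as an illustration of the underlying idea, the blend can be expressed per pixel. A minimal numpy sketch, assuming the two maps are already loaded as float arrays and `edge_mask` (a hypothetical name) is 1.0 where the averaged bake should win:

```python
import numpy as np

def combine_normals(vertical, average, edge_mask):
    """Blend two baked normal maps: averaged normal on edges, vertical elsewhere.

    vertical, average: (H, W, 3) float arrays, the two baked normal maps
    edge_mask:         (H, W) float array, 1.0 where edges are
    """
    m = edge_mask[..., None]          # broadcast the mask over the RGB channels
    return m * average + (1.0 - m) * vertical

# Tiny 1x2 example: left pixel is an edge, right pixel is interior.
vert = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])
avg  = np.array([[[0.5, 0.5, 1.0], [0.5, 0.5, 1.0]]])
mask = np.array([[1.0, 0.0]])
print(combine_normals(vert, avg, mask))
```

The edge pixel takes the averaged normal; the interior pixel keeps the vertical one, which is exactly the "edges from the average, interior from the vertical" rule described above.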

Then re-import this combined normal into Substance Painter for use, and check it again carefully for problems the first time you use it.

For small edge flaws like this, you need to go back into Substance Painter and touch up the normal along the edge.

After the normal problems are solved, bake the ID map. To bake the ID map, select the option to bake based on texture color; this works because I assigned differently colored materials to the high-poly layers in 3ds Max. Once you have the normal and ID maps, you can bake the remaining maps.

3ds Max Tutorials: The Production and Sharing of "Sci-Fi Guns"(1)


3ds Max Render Farm

The best 3ds Max render farm, Fox Renderfarm, shares with you the making of "Sci-Fi Guns", a work created with 3ds Max and Substance Painter 2019 by 3D artist Zikai Wu. This is part one of the series.

In this tutorial, he explains and shows in detail how to make the high- and low-poly models in 3ds Max 2018, how to bake the normal map and ID map, how to make the texture maps in Substance Painter, and how to render the final result in Marmoset Toolbag 3.

Let me show you the original concept art first. Its color scheme and style are very beautiful, which is why I wanted to make a 3D version the moment I saw it. The overall style is distinctive, with rich details.


Because the concept art of this gun shows only one front view, the side and back have to be filled in from additional references or from imagination. When modeling, you can import the concept art into 3ds Max 2018 as a background reference, which helps keep the shape of the gun accurate, especially in the corners and structural details.

To keep the model close to the concept art, I used the Line tool to build the model from the outline of the concept art.

Once the shape of the gun is settled, details can be added on this basis and the proportions refined. For some of the bumps on the gun body, I used basic 3ds Max tools to extrude and push-pull the model. The disadvantage of this approach is that the topology becomes messy and dense. On a friend's suggestion, I instead modeled the surface details separately and later transferred them onto the model by baking a normal map. This also keeps the model's topology as simple and tidy as possible.

The model in this case is a hard-surface model, so while modeling, adjust all the edge loops and assign smoothing groups. First, all models are grouped by similarity, then the Chamfer modifier is applied, and supporting edges are added according to the smoothing groups.

Then add TurboSmooth on top to check the model. If corners get pulled out of shape or other problems appear, go back to the base level and adjust the smoothing groups. If nothing changes, add an Edit Poly modifier above the Chamfer modifier and adjust the corner edges there. Note that after adding this modifier you cannot simply drop back to the first level to make changes, because this modifier was added last.

After refining the model at a high face count, some checks are needed, for example the position of the seams between components, to see whether the parts fit together cleanly.

For the low-poly model, I use a plug-in directly on the high-poly model to reduce its edge count. The low-poly model has strict requirements on the number of faces, especially on edge usage: keep only the edges that are actually useful and delete the rest. Keep the straight lines of the model as straight as possible, so that the overall result is simple and clean.


3D Tutorial: How to Make a Stylized Character(2)


3D Rendering

Continuing from part 1 of 3D Tutorial: How to Make a Stylized Character, the smarter and cheaper cloud render farm, Fox Renderfarm, shares the rest of the tutorial on making a stylized character.

Topology, UV, and baking

This character has two parts that need retopology: the character's head and the mechanical headset. The topology of the mechanical part mainly uses 3ds Max's Ribbon tools and a modeling plug-in, while the character head uses the ZBrush topology plug-in ZWrap 1.0.

During retopology, the model alignment mode is selected, and snapping points are added according to the structure so that the points of the low poly correspond to the high poly.

Then we can calculate the match and complete the topology. All that is left is to adjust the low-poly details in 3ds Max and add models with details such as eyes.


After the topology is completed, it is time for the UV work. I use the 3ds Max plug-ins Unwrap Pro and PolyUnwrapper v4.3.5, and the UV work on the face is mainly done with Unwrap Pro.


  1. Draw the vertex color map with ZBrush
  2. Bake normals and color maps with Marmoset Toolbag

Toolbag supports OBJ or FBX model formats. The following are the low-poly OBJ export settings,

A total of 7 textures have been output, as shown in the following figure,

Realistic skin texturing is done in Substance Painter. Import the OBJ model into Substance Painter and assign all the baked maps to it. Then turn on the SSS effect and adjust the skin texture until the result is satisfactory. Finally, export the PBR textures for the skin.

Marmoset Toolbag 3 real-time rendering

The lighting setup has two parts: the environment and the lights. First, an HDRI environment map is used, then the main light source and a blue fill light are placed at the side. There is also a fill light at the front, which makes the overall facial lighting softer. The lighting effects:

Final render:

3D Tutorial: How to Make a Stylized Character(1)


3D Rendering

1 month rendered in 2 hours: the fastest GPU & CPU cloud render farm, Fox Renderfarm, continues to share 3D tutorials with you. This project is made by the 3D character artist JFD. In this tutorial, he shows the process of making a realistic character: ZBrush and 3ds Max are used for modeling, Marmoset Toolbag for baking and rendering, Substance Painter for textures, and Photoshop for final compositing.

Software used: ZBrush, 3ds Max, Marmoset Toolbag, Substance Painter, Photoshop

Gathering Reference

I wanted to make a realistic character to practice what I had just learned. While gathering references, I wanted to add a bit of technology to the character to increase his sense of mystery, so I chose to add a pair of sci-fi headphones.


The whole work is divided into three parts: blocking, the high-poly head, and the high-poly mechanical parts. First, I started modeling the character's head. I took a basic head model in ZBrush, modified it as needed, and sculpted the high poly on that basis. I keep and organize models from previous work so they can serve as base models.

In this process, first use ZBrush to sculpt the head's structure, then use skin brushes to sculpt the details of the skin. I used ZBrush's built-in basic brushes throughout, and for the skin-pore details I used XYZ skin alphas.

After the production of the head model is completed, it is the high-poly production of the mechanical part. This part of the model is first made in Zbrush to make the main shape and then placed in 3ds Max for topology and subsequent smoothing.

Hair making

Hair making is a challenge. This part includes four parts, hair, beard, eyebrows, and fine hair on the body. I found some good references.

The hair is mainly made with ZBrush's own hair system, FiberMesh, which turns hair into solid geometry; the hair brushes are then used to comb the hair, beard, eyebrows, eyelashes, and fine body hair. All the brushes I used can be seen in the picture below.

To make the hair, we first group the model. Grouping is mainly used to select the hair growth areas; adding a mask there generates the hair, which is automatically grouped accordingly, making the shape easy to adjust. Another benefit of grouping is that the length of the hair can be better controlled per group.

Before making the beard, the model is grouped in the same way and the beard is made with the FiberMesh brushes: generate the beard first, then adjust its position, length, and shape. The fine body hair, eyebrows, and eyelashes are made the same way.

Below is a high-poly test render using ZBrush's BPR renderer. The flow of the hair and beard is clearly visible; all that remains is the texturing.

3ds Max Tutorials: Making of firearms(2)


3ds Max Tutorials

Continuing 3ds Max Tutorials: Making of firearms(1), the smarter and cheaper cloud rendering services provider, Fox Renderfarm, shares part two with you.

Low polygon

When making the low poly, because these are relatively simple hard-surface parts, we can reduce the edges of the high poly directly. If the structure is complex, we need a tool such as Topogun. When making the low poly, pay particular attention to the gun barrels: their edges must be straight to make the later UV work and normal map baking easier. If they are not straight enough, the parts may end up twisted.

In addition, we must make sure the low poly and high poly match before baking, otherwise the baked normal map may have problems.

Another approach is to press "0" (which opens 3ds Max's Render to Texture panel) each time a low-poly part is finished, pick the matching high poly for the low poly in the baking panel, bake the normal, and test whether the normal map is correct.

If there is a problem with the normal map, you can use plug-ins to transfer normals and fix it.


A total of two textures are used here. Note that the low poly is fully triangulated before being imported into Substance Painter; this keeps the result consistent, so the chance of errors is relatively small.

Why are the UVs so broken up? Because in games there is a rule that the UVs must be split wherever the model has hard edges; otherwise black seams appear when rendering in the engine, which is not what we want. There is also a certain gap between UV shells because textures are made of pixels: if the gap is too small, texture pixels bleed between shells, making the seams very difficult to handle when painting the texture later.

Finally, apply a checkerboard to check that the UV scale is consistent, so that each UV shell gets an equal share of pixels. Important parts can be scaled up in UV space to occupy a larger proportion and gain higher texture resolution.
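"Equal pixels per UV shell" is usually quantified as texel density, which one common estimate computes as the square root of the ratio of a shell's UV area to its surface area, times the texture resolution. A sketch under that assumption; the function and the island numbers are hypothetical illustrations, not from the tutorial:

```python
import math

def texel_density(uv_area, surface_area, texture_px):
    """Estimated texels per world unit for a UV shell.

    uv_area:      shell area in 0-1 UV space
    surface_area: shell area on the model, in world units squared
    texture_px:   texture resolution (assumed square)
    """
    return math.sqrt(uv_area / surface_area) * texture_px

# Hypothetical shells on a 2048 px texture: the second shell was scaled up
# 4x in UV area, so it gets twice the texel density of the first.
a = texel_density(0.01, 1.0, 2048)   # ~204.8 texels per unit
b = texel_density(0.04, 1.0, 2048)   # ~409.6 texels per unit
print(round(a, 1), round(b, 1))
```

This is exactly the trade-off described above: scaling an important shell up in UV space buys it more texture resolution at the expense of the rest.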


There are many ways to bake normals. I use Substance Painter's match-by-name baking. First, name the high-poly and low-poly models in 3ds Max; the name of each high poly must correspond to its low poly, such as jujiqiang00_low and jujiqiang00_high.

Then export the low poly as FBX and the high poly as OBJ, and they can be baked in Substance Painter.

Note that when baking the normals, first bake the averaged normals, then bake the vertical normals, and finally combine and repair them in Substance Painter. When baking ID maps, if the baked image comes out black, it means the material color is not enabled as the color source; the high polys must be given different material colors in advance in 3ds Max.


Here, I mainly explain the production of the metal gun body part.

Add a mask to the group, then add a color selection to the mask and sample the color of the part you want to work on from the ID map, as shown below. Here I simply changed the color; the next step is to add various masks for the details.

I will analyze the layers from bottom to top, because the upper layers build on the lower ones. At the bottom layer, I adjusted the basic material: the base color of this metal.

The picture below shows the effect of the dark and bright colors we added, which give the metal some color variation.

Next, make some small metal particles and dirt on top, then the frosted finish on the metal. I used a material found on the Internet: blend it into the base color, add another height layer on top, and adjust its transparency through the height channel. The height of this layer could actually be combined with the layer below, but here it is kept separate.

Now we can export the textures by pressing Ctrl+Shift+E. The next step is to import the files into 3ds Max, adjust the lights, and save the rendered output for later adjustment.

The above is a brief 3D tutorial for making this gun. Thank you for reading and hope you can learn more from it.

3ds Max Tutorials: Making of firearms(1)


3ds Max Tutorials

Your TPN-accredited CPU/GPU cloud rendering service, Fox Renderfarm, continues to share 3ds Max tutorials with you. In this tutorial, I will talk about using 3ds Max 2018, ZBrush, Substance Painter, and other software to make a game-style firearm. I will cover not only the process of making the high poly and low poly in 3ds Max, but also how to bake the normal map and ID map, how to make the texture maps in Substance Painter, and the final rendering. Finally, I would like to thank my good friend "LAO MENG" for his great help in preparing this tutorial.


The picture below is a "Call of Duty" related work I found on the Internet. I like these firearms very much; they are excellent in both shape and texture!

A work of "Call of Duty"

Take the works in this tutorial: although they look quite ordinary at first glance, they hold our attention. The work looks real, the details are rich, the shape follows the basic structure of a gun, and the materials are distinct, with metal, leather, matte surfaces, and so on.

We can clearly read its roughness, highlights, metalness, and other material attributes. The combination of materials makes the whole gun look harmonious, and the metal shows signs of long use: the metal parts carry dust and wear. The overall style is in line with current game requirements. I also gathered photos of real guns as references to analyze the metal's texture and the traces left by use.


Blocking out the model is very simple; the focus is on proportion. If you cannot match the correct scale directly, break the reference image down. I divided the gun into three big pieces: the butt, the body, and the barrel. Once the proportions of these three parts set the overall size, the proportions of the small parts, screws, and so on can be determined from their size in the original painting.


More details need to be added at the mid-poly stage, but always keep an eye on the overall composition.

High polygon

The next step is high-poly sculpting. The areas below need to be sculpted because adjusting material parameters in Substance Painter alone cannot produce the best results, so details that need sculpting cannot be skipped.

When sculpting the leather, it needs to show the leather's flexibility and how snugly it fits against the gun. Because the metal parts are machined industrial products, there is no need to add irregular scratches to them; the metal high poly is made in 3ds Max, and it must be split into groups at the low-poly stage. Small scratches, abrasions, and dust can all be done in the textures.

The Zoo Crew Is Gearing Up For Halloween


3D Rendering

The zoo crew is gearing up for Huluween.

Madagascar: A Little Wild - A Fang-tastic Halloween premieres October 21.

Join Alex, Marty, Gloria and Melman as they head to the Central Park Fright Fest to celebrate Halloween!

What is the difference between pre-rendering and real-time rendering?


3D Rendering

Pre-Rendering vs Real-time Rendering

"Avatar" directed by James Cameron spent four years and nearly 500 million US dollars to open a new world of science fiction for everyone. The CGI characters, Names, in the film look exactly the same as the people in the real world. And the realistic sci-fi scenes are shocking. However, these wonderful images are inseparable from the CG artists and pre-rendering technology.

To handle the rendering workload of "Avatar", Weta Digital's supercomputer processed up to 1.4 million tasks per day, running 24 hours a day with 40,000 CPUs, 104 TB of memory and 10 Gb network bandwidth, for a total of one month. Each frame of "Avatar" needed several hours of rendering, at 24 frames per second. A powerful rendering cluster is therefore really important to a CG studio.
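A rough cross-check makes the scale concrete. The roughly 162-minute theatrical runtime is my assumption, not a figure from the text:

```python
# Rough frame-count arithmetic for a feature film at cinema frame rate.
FPS = 24
runtime_min = 162                 # approximate theatrical runtime of "Avatar"
frames = FPS * runtime_min * 60   # total frames, before retakes and revisions
print(frames)
```

At several render-hours per frame, well over 200,000 frames explains why a month of round-the-clock cluster time was needed.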

What is pre-rendering?

Pre-rendering is used to create realistic images and movies, where each frame can take hours or days to complete; it is also used by programmers for debugging complex graphics code. Pre-rendering starts with modelling: points, lines, surfaces, textures, materials, light and shadow, visual effects and other elements are used to build realistic objects and scenes. Computing resources then calculate the visual image of the model under factors such as viewpoint, light, and motion trajectory, according to the predefined scene settings; this process is called pre-rendering. After rendering is complete, the frames are played back in sequence to achieve the final effect.

It is mainly used in fields such as architectural visualization, film and television, animation, and commercials, where the focus is on art and visual effects. To obtain the ideal visuals, modelers sculpt fine model details, animators breathe life into the characters, lighting artists create the right artistic atmosphere, and visual effects artists make the effects look realistic. Commonly used pre-rendering software includes 3ds Max, Maya, Blender, Cinema 4D, etc. What they have in common is the need to set up the scene in advance, configure the relevant rendering parameters (such as shadows, particles, and anti-aliasing), and then let a PC or render farm run the calculation unsupervised.

BTW, you can use a local machine or a cloud render farm for rendering. Fox Renderfarm can provide rendering technical support for the software mentioned above.

In pre-rendering, every frame is predetermined. Once rendering starts, each frame takes seconds, minutes or even hours to compute, consuming large amounts of memory, CPU/GPU, and storage along the way; it is a computing-resource-intensive workload. In film and television projects in particular, there are usually schedule requirements, so rendering tasks need to finish within a fixed time. Today, such tasks are mostly submitted to cloud render farms, such as Fox Renderfarm, which are professional service companies providing massively parallel computing clusters.

The output of pre-rendering is essentially a finished, fully rendered work. If instead you want the scene computed and displayed live, as in an interactive online service or an online game, we have to talk about real-time rendering.

What is real-time rendering?

In August 2020, a live gameplay demonstration of the action role-playing game "Black Myth: Wukong", produced by the Chinese studio Game Science, went viral on Chinese social networks. The demo's top-notch visuals, rich detail, immersive combat, and substantial story bring an Eastern fantasy world to life, and every beautiful scene in the game is rendered in real time.

Real-time rendering is used to render a scene interactively, as in 3D computer games, where each frame generally must be rendered in a few milliseconds: the computer displays each frame as it computes it. Typical representatives are Unreal and Unity; "Black Myth: Wukong" is built with Unreal Engine 4. The strength of real-time rendering is that it can be controlled live and is very convenient for interaction. The drawback is that it is limited by what the system can sustain: when necessary, it sacrifices final quality, including model, light, shadow and texture detail, to meet the real-time constraint. Real-time rendering is currently applied to 3D games, 3D simulations, 3D product configurators, and more.

Real-time rendering focuses on interactivity and responsiveness. Scenes generally need to be optimized to speed up frame calculation and reduce latency, because every user action, such as a tap or click on the screen, causes the frame to be recomputed, and the feedback must arrive in real time. In simulation applications, data shows that people do not noticeably perceive audio-video inconsistency as long as latency is kept within 100 ms.
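The "few milliseconds per frame" constraint follows directly from the target frame rate; at 60 fps the renderer has under 17 ms per frame, comfortably inside the 100 ms perception threshold mentioned above:

```python
# Per-frame time budget for common real-time frame rates.
for fps in (30, 60, 120):
    budget_ms = 1000.0 / fps
    print(f"{fps} fps -> {budget_ms:.1f} ms per frame")
```

This budget must cover everything: simulation, scene traversal, and the draw itself, which is why real-time engines trade quality for speed.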

In recent years, with improving GPU performance, real-time calculation has been getting faster and the rendered images more accurate. Especially with the adoption of ray tracing and related technologies, real-time rendering is becoming ever more realistic. These top-end technologies are also a clear trend in future production. If you want to learn more about real-time rendering, please feel free to contact us.

Welcome to join us


Recommended reading

How to render large scenes with Redshift in Cinema 4D


Arnold, V-Ray, Octane, Corona, Redshift: Which Renderer Is The Best?


How to Render High-quality Images in Blender


How the Redshift Proxy Renders the Subdivision


China Film Administration Release New License Logo


Maya 2018 Arnold's Method of Making Motion Vector Blur in AOV Channel


Corona Renderer Learning - Denoising


Redshift for Cinema 4D Tutorial: Interior Design Rendering




Powerful Render Farm Service

    Business Consulting

    Global Agent Contact: Gordon Shaw

    Email: gordon@foxrenderfarm.com

    Marketing Contact: Rachel Chen

    Email: rachel@foxrenderfarm.com