An Amazing CGI Personal Project: Recreating Liu Yifei's Likeness as Mulan
Arnold for Maya
Realistic performance is not the whole of 3D art, but 3D technology is constantly developing, and this is an important driving force for 3D character artists. In film and television work, the visual development of characters is the most crucial step. Jung-Won Park is a 3D character artist from South Korea, specializing in 3D modeling, character modeling and rendering. This time, based on the 2020 movie Mulan, we will walk through techniques such as creating realistic, delicate skin textures and high-quality 3D character rendering. The main software and renderers used are Maya, Arnold, ZBrush, Substance Painter, and Mari.
1. Model creation
Before model creation, reference material must be collected and the base model adjusted. Then perform UV unwrapping of the base model, retopology, and detail sculpting. This step is very important.
2. Texture Drawing
Paint and refine the model's color textures: the head and neck, face, forehead, arms and other body parts, then the clothing, accessories, and sword. If you don't know the material of an item, check your reference pictures or find a satisfactory reference.
3. Displacement map
The displacement map is the most important part of the entire texture painting process. ZBrush and Mari are used together here, along with an in-depth analysis of TexturingXYZ displacement map settings.
4. Camera principle and lighting
Camera principles and lighting adjustment: this step can bring the work infinitely close to the real world. Properly set and test the scene-to-real-world scale, the camera parameters, and the basic lighting.
5. Highlight texture creation
Create the highlight texture, including marking the model's highlight positions, making primary and secondary specular reflections, and refining the highlight texture.
6. Skin material
The skin material is built to match the behavior of light as closely as possible.
7. Eye creation
Eye creation is believed to be difficult for many students. First create the eyeball model and iris material and perform UV splitting, then use Photoshop to create the sclera texture and material. Finally, handle the detailing of the inner eye corner.
8. XGen fur creation
XGen is used for the fur, which is the only way to pursue delicate results. Before production, first analyze the approach to the fur. Then create the upper eyelashes, eyebrows, peach fuzz on the face, the top of the hair, the forehead bangs, and the fine hair on the back of the neck. Among these, using FiberMesh for hair styling, applying random expressions to hair distribution, increasing hair volume, and creating the ponytail will be the focus of learning.
9. Body and clothing materials
Use UDIM to create displacement textures for the apparel and gloves, and master the Substance-to-Arnold workflow. Paint the arm's displacement and highlight textures, and refine the hand's color material.
10. Rigging and pose adjustment
Model rigging and posture adjustment: adjust the pose in ZBrush, and learn to use Maya's transfer tools.
11. Light rendering
Light rendering can be combined with advanced renderers such as Arnold. Apply HDRI lighting, add light layouts, adjust depth of field and chromatic aberration, and handle render noise to finish.
Skin, detail, light
These are the three core elements of a character's visual performance. How high-quality images are obtained in the texturing and rendering stages is where a character artist's level shows most clearly, and Jung-Won Park's skin, hair, and lighting work, from SSS skin texture production to final rendering, is at a craftsman's level.
Hello, Arnold GPU
Arnold render farm
Arnold is one of the most popular renderers in animation and film production, and it is among the software and renderers supported by Fox Renderfarm; we are also an Arnold render farm. With a recent Arnold release, the first beta version of Arnold GPU has been officially released. Here we share some results and thoughts from our early tests of the Arnold GPU.
The principle of Arnold GPU: More than two years ago, Marcos Fajardo (the author of Arnold) talked about the possibility of an Arnold GPU for the first time at SIGGRAPH 2015. At the time, a full-platform GPU renderer based on the OpenCL computing framework was planned. However, the current Arnold GPU is based on NVIDIA's OptiX rendering architecture, with CUDA underneath. CUDA is much friendlier than OpenCL, so development is certainly faster, and most importantly CUDA is faster and more stable. With the release of the RTX series of graphics cards, ray intersection is getting faster and faster and better suited to GPU ray tracing algorithms.
Arnold CPU and GPU effects: The original goal of Arnold GPU rendering was to keep CPU and GPU results close to each other. Below are the effects and speeds of the beta's different rendering modes. Test environment:
Katana 3.0, KtoA (Arnold GPU beta), GeForce GTX 1080, using 8 render threads.
Parameter settings:
AA samples = 6
GI diffuse samples = 3
GI specular samples = 3
GI transmission samples = 3
GI diffuse depth = 2
GI specular depth = 3
GI transmission depth = 8
Light sampling is the default.
Metal (CPU on the left, GPU on the right)
Rendering time: CPU - 2m13s, GPU - 10s
In summary, the metal BRDF material does not require much computing performance and maps well onto the GPU, so both the CPU and GPU results have little noise, and GPU rendering is much faster. When a scene contains many hard-surface materials, the GPU shows its speed and performance without losing rendering quality.
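A side note on the parameter settings above: Arnold squares each sample setting, so AA = 6 means 36 camera rays per pixel, and each GI setting contributes its own squared factor per camera ray. A rough first-bounce ray-count estimate (a simplified sketch, not Arnold's actual accounting; real counts also depend on ray depth, light samples, and scene content) can be written as:

```python
def rays_per_pixel(aa, diffuse, specular, transmission):
    """Estimate first-bounce rays per pixel: Arnold squares each
    sample setting, and secondary rays spawn per camera ray."""
    camera = aa * aa
    secondary = camera * (diffuse**2 + specular**2 + transmission**2)
    return camera + secondary

# With the settings above (AA=6, GI samples 3/3/3):
# 36 camera rays + 36 * 27 secondary rays = 1008 rays per pixel.
```

This makes it clear why raising AA samples is so expensive: every secondary count is multiplied by the squared camera sample count.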
Glass (CPU on the left, GPU on the right)
Rendering time: CPU - 31m57s, GPU - 17s
For the glass BTDF material, ray tracing renderers are slow and inefficient. Although the Arnold GPU renders extremely fast here, it visibly loses rendering detail and the noise is dense. Rendering slowly on the CPU is still more reliable.
Subsurface scattering (CPU on the left, GPU on the right)
Rendering time: CPU - 6m39s, GPU - 36s
Subsurface scattering (SSS), also known as BSSRDF, is the biggest surprise of the Arnold GPU across all these tests. The Arnold GPU implements the random_walk BSSRDF in standard_surface. Arnold 5 ships with two types of SSS: diffusion and random_walk. Diffusion is an empirically based SSS model that renders faster because many of its parameters come from lookup tables; this empirical model is still commonly used today. Random_walk is a complete light-transport calculation based on a real physical model. The Arnold GPU does not support diffusion but uses random_walk directly. The results are excellent: the noise is small, and the render closely matches the CPU. For actual production, this fully meets film-level CG requirements.
Cornell_box (CPU on the left, GPU on the right)
Rendering time: CPU - 23m37s, GPU - 1m27s
The result is excellent; although the rendering is slow, the effect is amazing. Compared with Redshift, the Arnold GPU loses badly on speed but wins on quality: Redshift's biggest problem is that indirect lighting tends to be too bright and dark details are insufficient, while the Arnold GPU's result is basically identical to the CPU's. On this point it is the best among the renderers we know (we have not tested PRMan). Note that although GPU rendering is significantly faster than the CPU at identical sampling settings, the GPU result contains noticeably more noise than the CPU result.
Arnold GPU beta limitations
Because Arnold GPU is still an early beta, many features are unavailable, and it is a little early to use it in production. The bigger drawbacks:
General restrictions:
- At the same sampling, GPU rendering produces more noise than the CPU
- GPU rendering reads all textures into memory and video memory; streaming textures are not supported
- GPU rendering does not support bucket rendering; all AOVs are kept in memory
- GPU rendering does not support OpenVDB
Shader restrictions:
- OSL shaders are not supported yet
- Third-party shaders are not supported at this time
- AOV writes (write_aov) are not supported
Light restrictions:
- cylinder_light, disk_light, and mesh_light are not supported yet
- light_links and light_filters are not supported at this time
The future of Arnold GPU
Arnold wants GPU rendering to be usable for film-level projects, but because of the GPU's own limitations, many renders must still be computed on the CPU, which is more efficient for them. Therefore, the ability to switch freely between CPU and GPU rendering is very important.
Based on this, Arnold can provide high-quality CPU unidirectional ray tracing while also providing efficient GPU OptiX ray tracing.
One set of APIs for CPU and GPU: Arnold was originally designed around a single API compatible with both CPU and GPU rendering. Arnold GPU is now beginning to be compatible with some of the Maya native materials available in MtoA; at least here, Arnold has made progress.
Support for OSL shaders: the latest news from the developer forum is that NVIDIA is working with Sony to develop GPU-based OSL compatibility, including several important OSL features, closures and LPEs, which will join the Arnold GPU soon.
Rendering consistency: for now, Arnold does a good job of matching results. GPU rendering tries to stay close to CPU rendering, even though the two rendering architectures are different and do not extend in lockstep.
Reference: MIYAZAKI
Arnold Render Farm | Fox Render Farm
Fox Renderfarm is a global technology company specializing in cloud rendering services for the entertainment industry. We support the most popular 3D software, renderers and plugins worldwide, such as 3ds Max, Maya, Cinema 4D, SketchUp, Blender, LightWave, V-Ray, Redshift, Corona, and more. Supported internal and external plugins for 3ds Max, Maya, and Cinema 4D are integrated into our Arnold support, and both GPU and CPU rendering are available on our Arnold render farm. Fox Renderfarm can fire up thousands of rendering nodes instantly with high availability and no waiting, is easy to use, follows MPAA security standards, and offers flexible render farm pricing. More reasons to choose our Arnold render farm: the partner of Oscar-winning production teams, with years of Hollywood-level production experience; 24/7 live customer care and technical support with a 10-15 minute response time, available via Skype/Email; prices as low as $0.04 per core hour, with volume discounts up to 60%. Fox Renderfarm has an outstanding team with over 20 years' experience in the CG industry, with members from Disney, Lucasfilm, DreamWorks, Sony, and others. With professional services and industry-leading innovations, they serve leading special effects companies and animation studios from over 50 countries and regions, including two Oscar winners. In 2015, Fox Renderfarm formed a global strategic partnership with Aliyun (Alibaba Cloud Computing) to provide global visual cloud computing services. Get a $20 free trial: register and render your artwork online now!
How To Use The PBR Process In Film And Television Rendering
Arnold render farm
In the last two years, the game art industry has ignited a fierce PBR boom, as hot as the "next-gen" wave of its day. Many friends have doubts about a process that almost redefines game art: for example, why does PBR look so good, and is it necessary to learn and use the PBR process in the film and television industry? If so, how do we use it?
This article discusses these issues in detail from the perspective of film and television practitioners, so that everyone can understand the ins and outs. How do we understand PBR with film-and-television logic? PBR is an abbreviation for physically based rendering. However, this is just a name: it is close to physics to some extent, but there is still a gap from real physics. Even so, the PBR process describes most materials we encounter in daily life very well. For many friends working in film and television, the properties of a PBR material take some getting used to, because the vocabulary differs somewhat from the material balls we use. PBR mainly uses four attributes: diffuse, roughness, metallic, and normal (the discussion here covers the most popular metal-roughness model, not the specular-glossiness model or others). Of these, diffuse, roughness, and normal are used consistently with the film and television industry, referring to the material's diffuse color, the blurriness of its reflections, and its normal. The hardest to understand, and the key difference, is the metallic attribute. Metallic translates as metalness. According to some documents this is a very simple concept that merely indicates whether the material is metal, and in principle the value accepts only zero or one. When metallic is zero, the material is not metal; it is called a dielectric. When metallic is one, the material is metal. (Almost everything except metal is a dielectric.) However, this is only an intuitive description. For a rendering artist with an established knowledge system, the parameter seems unreasonable, and almost no existing experience maps onto it.
In language film and television practitioners can understand, what this parameter actually controls is F0: the reflectivity, in the Fresnel reflection, of surfaces perpendicular to the camera's line of sight. It is called F0 because the angle between the camera's line of sight and the normal of such an area is 0 degrees.
Image source: Allegorithmic official help documentation. When the metallic value is 0, F0 is 0.04. When the metallic value is 1, F0 is 1. When metallic is between 0 and 1, F0 also lies between 0.04 and 1, and the corresponding interpolation appears to be linear. Throughout this process, F90 does not change and stays constant at 1 (that is, the edge area circled in the figure, marked as 100%). As for this F0 attribute: have we really never used it? Actually we have. In aiStandard, the default material of the Arnold renderer, there is a property called Reflectance at Normal, literally the reflectivity in the normal direction. The "normal direction" here is the one mentioned above, where the angle between the camera's line of sight and the face normal is 0 degrees; so this property is in fact F0.
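The mapping just described can be illustrated with a tiny sketch (an illustration of the linear interpolation only, not any renderer's actual implementation; the 0.04 dielectric constant and the fixed F90 come from the text above):

```python
def f0_from_metallic(metallic, f0_dielectric=0.04, f0_metal=1.0):
    """Linearly interpolate F0 between the dielectric constant (0.04)
    and the metal value (1.0), following the metal-roughness model."""
    return f0_dielectric + (f0_metal - f0_dielectric) * metallic

# F90 stays constant at 1.0 regardless of the metallic value.
```

So a metallic value of 0.5 would give an F0 of 0.52, halfway between the dielectric and metal endpoints.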
Seen this way, the textures output by the PBR process do correspond to the renderers we usually use, so in principle we can use PBR textures too. How do we use PBR textures in our familiar renderers? The method itself is not complicated, but there are two situations to discuss, because the renderers used by film and television artists handle reflections differently and need different hookup schemes. I will cover two representative renderers, Arnold and V-Ray. 1) Using PBR maps in Arnold. First, export the PBR textures for the Arnold renderer. The texturing software I use here is Substance Painter, since it provides a variety of flexible export presets that speed up the workflow. On export, an Arnold texture preset is available.
Among them, the diffuse texture connects to the diffuse color channel, with sRGB color space. The specular texture connects to the specular color channel, with sRGB color space. The roughness map connects to the specular roughness channel, with linear color space. The F0 map connects to the Reflectance at Normal channel, with linear color space (check the Fresnel option here).
The normal map connects to the bump mapping channel, with linear color space, and the bump2d node is set to tangent-space normals. The specular weight stays constant at 1 throughout. Other textures are not discussed in this article. When the textures are connected and set correctly, and the renderer's color space is also set correctly, Arnold's render will be very close to the screenshot in Substance Painter.
The effect of the screenshot in Substance Painter
Arnold rendered effect. Of course, Substance Painter's viewport uses many approximations — for example, reflections are approximated quickly by blurring the environment image and mapping it onto the model, rather than by real ray tracing — so the final visual result will differ from a renderer's output. However, the difference is entirely acceptable, and you can close the gap by previewing with the Iray GPU renderer. 2) Using PBR maps in V-Ray. Similarly, first export a set of textures from Substance Painter using the V-Ray preset.
Most of the content is similar, except that reflection replaces specular; both describe the color of the reflection, under different names. Glossiness is the opposite of roughness: both describe how rough the reflection is, but one measures clarity and the other measures blur. The most puzzling texture is the one called ior; a friend encountering it for the first time will certainly be confused. Ior is an abbreviation for index of refraction. From middle-school physics, the refractive index belongs to transparent things such as glass and should be an inherent property of a transparent object — how can it become a texture, and what role can it play? Comparing with the earlier Arnold export, you will find that the ior map here corresponds to the F0 map in the Arnold output. A simple guess, then: is there an intrinsic link between the two textures? The answer is yes. We can find such a formula in the Allegorithmic help documentation:
Don't be afraid of the formula; this one is very simple: F0 = ((n − 1) / (n + 1))². It describes the relationship between F0 and ior: the left side is F0, and the n on the right is the ior. Through this formula we can see that F0 and ior correspond intrinsically. Try a simple calculation: when ior equals 1.5, F0 works out to exactly 0.04. Does that 0.04 look familiar? It already appeared earlier: "When the metallic value is 0, F0 is 0.04." 0.04 is the F0 value mandated for all non-metallic objects in the PBR process, and the corresponding ior is 1.5. From the film-and-television rendering knowledge system, we know the ior attribute can also describe reflection intensity: generally we use an ior between 1.3 and 1.6 for non-metallic materials and an ior above 20 for metals. A small aside: careful readers may ask — when we do rendering we usually vary the ior, but in PBR every non-metal uses a constant ior of 1.5. Isn't the reflection intensity then always the same? Can the materials still be correct? Look at the opening scene of Uncharted 4: it was made with this same system, and nothing is wrong. Why is a constant ior of 1.5 fine? Because the F0 of all non-metallic substances in nature only varies within roughly 0.02-0.05; the difference is tiny, so even if everything uses 0.04 there is little visible change. Some software also considers this and adds a parameter on top of the basic PBR set to slightly control F0, giving the artist more room to operate.
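The formula and its inverse are easy to check in code (a small sketch; the function names are my own, and the inverse simply solves the same equation for n):

```python
import math

def f0_from_ior(n):
    """F0 = ((n - 1) / (n + 1))^2: reflectivity at 0 degrees incidence."""
    return ((n - 1.0) / (n + 1.0)) ** 2

def ior_from_f0(f0):
    """Inverse of the formula above: n = (1 + sqrt(F0)) / (1 - sqrt(F0))."""
    s = math.sqrt(f0)
    return (1.0 + s) / (1.0 - s)

# The PBR dielectric default: an ior of 1.5 gives F0 = 0.04.
```

Plugging in ior = 1.5 gives ((0.5) / (2.5))² = 0.2² = 0.04, exactly the constant the PBR process assigns to non-metals.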
For example, the material in the Unreal 4 game engine has a specular attribute, but this specular is completely different from the traditional one: it actually lets F0 vary from 0.02 to 0.06. You can click the link at the end of this article; the Unreal 4 help page has a slider you can drag to observe the effect.
Connecting the PBR textures to a V-Ray material is almost identical to the Arnold method, except that the ior texture connects to the material's ior channel. One very noteworthy point: the ior texture we have been discussing is, strictly speaking, a 1/ior texture.
Look at the output of this 1/ior texture and imagine the alternative: if it output raw ior values — 1.5 for non-metals and above 20 for metals, all greater than 1 — the map would clip to a sheet of pure white rather than the grayscale image it is.
Do we then need to invert the 1/ior texture in the 3D software to get the real ior value before plugging it into the ior channel? With the V-Ray renderer there is no need: V-Ray automatically recognizes the value plugged into the ior channel, and if it is less than 1, it is automatically treated as 1/ior. The above sorts out the use of F0 and ior textures through the Arnold and V-Ray renderers; other renderers all use one of these two methods. One interesting thing: the Redshift renderer borrows heavily from other renderers, so its shader inherits both the F0 and ior interfaces. Its newer rsMaterial (an upgraded version of the open-source alSurface material) even adds a metalness attribute, which can hook up to the PBR process directly like a game engine — quite trendy.
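The auto-detection behavior described above amounts to a one-line rule (a hypothetical helper for illustration, not V-Ray's actual code):

```python
def resolve_ior(sampled_value):
    """Interpret a value plugged into the ior slot: values below 1 are
    assumed to come from a 1/ior map and are inverted (a value of 0
    would be invalid input and is not handled here)."""
    if sampled_value < 1.0:
        return 1.0 / sampled_value
    return sampled_value
```

So a 1/ior texel of 0.5 resolves to an ior of 2.0, while a direct ior value of 1.5 passes through unchanged.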
The source of PBR material realism
Many friends wonder why a game's image quality improves so much after adopting the PBR process, even beginning to rival film-level results. Is the process used in film and television inferior to PBR? From the previous section we found that textures output by the PBR process can also be used in various renderers, with results that are close or even better. That is, the look of the game PBR process can be fully reproduced or surpassed by film production methods, but not the other way around: the PBR process only defines the two material classes, dielectric and metal, well, and there are still big gaps for transparent and SSS materials. So why does the PBR process still strike the public as more realistic than the film and television process? Notice one thing from the description above: the PBR process allows only two modifications to the reflection parameters — F0 and roughness. That seems very limited, stifling the artist's creative space. By contrast, the materials we usually use give unlimited control over reflection: we can adjust the intensity of F0 and F90, and even the transition curve from F0 to F90. This makes our work seem freer and more creative, but in fact it is precisely this excess of freedom that lets us make mistakes. If our aim is to create realism, this freedom runs counter to it. The PBR process does not even give you the chance to adjust reflection intensity: no matter how you adjust, you can never break energy conservation, eliminate Fresnel reflection, or create a material that does not exist in nature.
It is this seemingly harsh restriction that improves the accuracy of the PBR process and makes it better suited to industry needs in most cases. Often it is not that game artists are now more skilled, but that they are given almost no chance to make mistakes. Rendering artists in the film and television industry can achieve the same quality, but the learning cost is higher: only by understanding all the knowledge behind it can you restrain yourself and avoid mistakes. Hope this helps.
The Essential Thinking Of Roughness And Anisotropy (4)
Arnold render farm
In the article "The Essential Thinking Of Roughness And Anisotropy (3)", we saw that the essence of roughness is bump. Now we can finally start this issue's theme: using roughness to restore the bump effect of ring scratches.
Using a rendered flowmap image, we run the map through a series of node transformations and finally plug it into the anisotropy rotation channel so that the anisotropic highlight rotates as intended. In essence, a flowmap records plane-coordinate direction: the XY values are stored in the red and green channels. Converting it to an anisotropy rotation map means computing an angle from each point's direction and using that angle to drive the rotation. The core is a function called atan2 (some tools have it, such as Arnold and Substance Designer; some do not, such as V-Ray and Redshift).
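The per-texel conversion can be sketched in a few lines (a hedged illustration: the exact remapping depends on the renderer's rotation convention — Arnold's anisotropy rotation, for instance, uses a 0-1 range for a full turn, which is what this sketch assumes):

```python
import math

def rotation_from_flowmap(r, g):
    """Decode one flowmap texel: red/green in [0, 1] encode a 2D
    direction in [-1, 1]; atan2 gives the angle, remapped to [0, 1)."""
    vx = r * 2.0 - 1.0
    vy = g * 2.0 - 1.0
    angle = math.atan2(vy, vx)              # range [-pi, pi]
    return (angle % (2.0 * math.pi)) / (2.0 * math.pi)
```

A texel of (1.0, 0.5) points along +X and decodes to a rotation of 0; (0.5, 1.0) points along +Y and decodes to a quarter turn.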
When attempting to render this effect with a non-Arnold renderer, the conversion from flowmap to anisotropy rotation map cannot be done in-shader for lack of this mathematical tool, so I tried using Substance Designer as an intermediate conversion step. Below is a screenshot of the pixel-processor nodes in SD, whose main function is to convert the flowmap into an anisotropy rotation map.
The result of the conversion looks like this; you can see the black-and-white scratches (some of the scratch directions are negative, so the whole picture looks very dark):
Finally, using the anisotropy rotation map generated in SD directly, we can render the effect we want.
Arnold\V-Ray\Octane\Corona\RedShift, Which Renderer Is The Best?
V-ray cloud rendering
Nowadays, many people use 3D software to create their works, so a renderer is essential. There are many popular renderers, including V-Ray, Octane, Arnold, Corona, and RedShift. But for many new 3D artists, which renderer is the best?
The principles of these renderers are basically the same, but the operation methods differ somewhat, and each has its own focus. Let's take a look. Basic introduction to the renderers:
The runner-up work from Evermotion 2017, rendered by V-Ray.
V-Ray: The V-Ray renderer has focused on interior and outdoor design for 20 years. Its biggest strength is architectural visualization, where it accounts for roughly half the field, and the various features of the latest 4.0 version are very polished — many works can pass for photographs. The downside: perhaps because V-Ray has so many parameters, material adjustment and lighting are difficult and not very friendly to novices!
Arnold official showcase. Arnold: Arnold is very powerful. This physics-based ray tracing CPU renderer is the most stable! Of course, when the CPU is not strong enough, rendering speed suffers, especially for transparent objects such as glass; combined with its powerful node-based workflow, novices both love and hate it! As an attitude toward studying renderers, Arnold is highly recommended for learning!
RedShift: Next, RedShift, a GPU-based renderer. The most immediate impression is its fast real-time preview and render speed, plus a powerful node system like Arnold's. It is well suited to artistic creation, especially animation. As for the realism of the results, that takes a lot of practice and accumulated knowledge.
Octane: Octane is also a GPU-based renderer. Its SSS and displacement are quite good, its speed is fast, its light is softer, and its renders look very comfortable. However, with slightly larger scenes solved with PL, efficiency drops and it is no longer fast. It also has noise problems, though that noise can sometimes become one of the features of a work! PS: The current version of Octane does not support AMD cards, nor can it exclude lights.
The first-place work from Evermotion 2017, rendered by Corona. Corona is also a highly recommended renderer. It descends from V-Ray and can be seen as a simplified version of it. LightMix lighting removes the hassle of adjusting lights in post, and the material system works well by default. It is very compact and practical, and quick to pick up. Of course, the official version has only just been released; a node system is said to arrive next year, and animation support is still somewhat weak! For indoor and outdoor design and still-frame rendering it works well — after all, ray-traced still frames are what it is best at! In the end, the renderer is just one step of the work; we can choose different renderers for different jobs and find the one that suits us best.
How Does Maya Use Arnold To Render Transparent Textures?
Maya render farm
In Maya production scenes, transparent maps are common — for leaves, for example. When making a leaf, we generally do not sculpt the leaf geometry; instead, a texture on a single patch does the job, saving modeling time. In large scenes this matters, because small items such as leaves should not take too much time to process.
Create a patch in the scene
- Create a camera and lights, then render the gray patch directly with Arnold.
- Find a leaf texture downloaded from the Internet — this one has no alpha channel. Let's apply it and see: create an aiStandardSurface shader.
- Put the leaf texture on the color channel and assign this shader to the patch.
- Now the render looks as below; we put an object under the patch for easier observation.
- Then consider how to remove the extra white around the leaf. We need to handle transparency: this parameter accepts a black-and-white texture, where black means no data (a value of 0) and pure white means 1, so the black areas become transparent. So we process the leaf image in Nuke — make everything we don't need black and everything we need white. Note the version difference here: lower versions of Arnold handle semi-transparency through Transmission Weight, as in the picture; I am using a newer version of Arnold.
- We put the processed image into the opacity channel. Now we have the leaf, with no extra edges.
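The black-and-white mask step above can be sketched in plain Python (a toy illustration, not a Nuke or Arnold workflow: pixels are assumed to be (r, g, b) tuples in [0, 1], and the near-white background is keyed to 0 while everything else becomes 1):

```python
def opacity_mask(pixels, white_threshold=0.9):
    """Build an opacity map from RGB pixels: near-white background
    pixels -> 0.0 (fully transparent), everything else -> 1.0
    (fully opaque). The threshold is a tunable assumption."""
    mask = []
    for (r, g, b) in pixels:
        is_background = min(r, g, b) > white_threshold
        mask.append(0.0 if is_background else 1.0)
    return mask
```

In practice you would paint or key this mask in an image tool rather than threshold it; the sketch only shows what the finished opacity map encodes.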