Using Redshift to Render a Work: Zen Bamboo
This time, Fox Renderfarm, a cost-effective render farm for VFX, animation, and CGI projects worldwide, uses a case study to show the powerful rendering features of Redshift.
In this case, Cinema 4D and Redshift are used to render a photorealistic picture of fresh bamboo. The main elements of the picture are the bamboo and the glass bottle, its most prominent features. The difficulty lies in the bamboo, especially the bamboo texture.
The scene elements are simple: a table, books, a clock, a bottle, and plants in the foreground. Some existing models can be used directly, with the camera angle chosen to show off the bamboo. Seen as a whole, the project is still quite simple.
The bottle uses an existing model, with a layer of water inside it.
One problem remains: the bottle is empty and needs some stones and bamboo. To make the stones, add an emitter under Simulate > Particles > Emitter.
Create a sphere as a stone, duplicate it into different shapes, drag all the stones under the emitter, play the simulation until they pile up to a suitable height, and then convert the stones into objects. Any stones left floating can be hidden.
Create a bamboo using a Tube.
Create other items for the scene, the wall and the wooden table.
Place the camera:
The materials start from the simplest. The props do not use Redshift materials but Cinema 4D's built-in shaders.
A default material is added to the background wall before the lights are placed.
Add a wood-grain texture to the board.
Next, the bottle uses the default Glass material. After adjusting its values, copy the glass material to the water layer, switch it to the Water preset, and continue adjusting the values.
The stone material uses a random color so the stones look more varied.
The bamboo material uses a gradient to adjust the exposure.
Build the bamboo shading with Vertex Attribute and Ramp nodes.
Lighting and adding other items
In addition to the Dome Light, add an Infinite Light to adjust the brightness and keep the scene from being overly bright. Adding a plant from the built-in plant library to the foreground, you can see the shot already looks very good, but achieving the best result requires constant experimentation.
Add the remaining items to the scene: a clock, books, a milk carton, and a photo frame, all using existing models and materials. Then render a test output.
Adjust brightness, contrast, and exposure, and tweak the colors.
How to render large scenes with Redshift in Cinema 4D
Redshift render farm
How to manage large scenes in Cinema 4D's Redshift renderer, how to optimize the objects in a scene, and how to optimize the render settings: knowing how to optimize these is very helpful for scene management.
- Proxy file
With the Redshift renderer you can generate proxy files from scene objects and export them before rendering. Scenes full of trees, flowers, and grass are especially well suited to this workflow. This is the topic that Fox Renderfarm, a leader in the CG industry, brings to you in this article.
How do you generate a proxy file? Export a Redshift Proxy (.rs) file to wherever it is needed. In the proxy settings, when 'Add Proxy to Scene' is checked, the exported proxy is placed back into the scene at the export location. If the exported object is animated, 'Range' must be changed to 'All Frames'. A proxy file does not occupy many scene resources, so the software stays responsive, and duplicating proxy files also consumes fewer system resources than duplicating the original geometry.
In a scene like the one below, most of the models are proxied, so the rendering resources needed are more economical. By contrast, take the following scene as an example: its polygon count is very high and it stutters during operation, and since none of its models use proxy files, the render settings must be optimized instead.
Before optimizing, you need some understanding of how Redshift works. By default, Redshift does not trace rays from the lights onto objects to produce light and shade. Instead, rays are emitted from the camera, hit the objects, and the shading is then resolved by looking back toward the light sources, which differs somewhat from other renderers.
The two values in the picture below are generally used to reduce noise. Samples Min means the minimum number of rays emitted from the camera is four; the values below Samples Min adjust the secondary bounces off objects.
If Samples Max is set too high, the camera-ray calculations consume more resources and render time increases substantially. In general a value around 64 is enough, but with motion blur enabled it should be at least around 256.
Samples Min=4, Samples MAX=256
Samples Min=4, Samples MAX=16
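The Samples Min/Max behavior described above can be pictured like this (a minimal sketch, not Redshift's actual algorithm; the 1/sqrt(N) noise model and the ray-doubling strategy are assumptions made for illustration):

```python
def rays_for_pixel(noise_estimate, samples_min=4, samples_max=64, threshold=0.01):
    """Sketch of adaptive unified sampling: every pixel gets at least
    samples_min camera rays; noisy pixels get more, capped at samples_max."""
    rays = samples_min
    # Monte Carlo error falls roughly as 1/sqrt(N); keep doubling the ray
    # count while the estimated noise stays above the threshold.
    while noise_estimate / (rays ** 0.5) > threshold and rays < samples_max:
        rays = min(rays * 2, samples_max)
    return rays

# A clean pixel stays at the minimum; a noisy one climbs toward the cap.
print(rays_for_pixel(0.001))                  # low noise  -> 4
print(rays_for_pixel(1.0))                    # high noise -> 64
print(rays_for_pixel(1.0, samples_max=256))   # motion-blur preset -> 256
```

This is why raising Samples Max only costs time in the noisy regions of the frame: clean pixels never leave the minimum.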
If the noise is still obvious, try raising 'Brute Force GI > Number of Rays' under GI.
Under GI, 'General > Primary GI Engine' handles the primary bounce, and the Secondary GI Engine controls the additional bounces.
Setting the Primary GI Engine to Irradiance Caching makes calculating the scene's GI faster. Num Rays sets the number of rays, and Adaptive Amount is the attenuation ratio of the divergent rays: if Num Rays is 200, an Adaptive Amount of 0.8 decays the ray count by 80%.
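Numerically, the Num Rays / Adaptive Amount relationship described above can be sketched as follows (assuming, per the text, that Adaptive Amount is simply the fraction of rays decayed away):

```python
def effective_rays(num_rays, adaptive_amount):
    """Adaptive Amount as the fraction of GI rays discarded adaptively:
    0.0 keeps all rays, 0.8 decays the count by 80%."""
    return num_rays * (1.0 - adaptive_amount)

print(effective_rays(200, 0.8))  # 200 rays decayed by 80% -> ~40
print(effective_rays(200, 0.0))  # no decay -> 200.0
```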
The above are several ways to optimize large scenes and rendering. By optimizing the scene and rendering settings, you can save more time and resources during rendering, and better complete the scene production.
Redshift for Cinema 4D Tutorial: Interior Design Rendering
This time, Fox Renderfarm, the CG industry's leading cloud rendering service provider, brings you a Redshift for Cinema 4D case study on interior rendering. As a basic interior-rendering case, it uses an existing interior model, and the focus is on lighting adjustment and render settings.
The effect in the image below was rendered with the Octane renderer; it will now be recreated using Redshift.
Before making the lights, turn on GI and choose the most appropriate method for this scene: the Primary GI Engine is set to Irradiance Cache, and the Secondary GI Engine to Irradiance Point Cloud.
To match the light sources, an Area Light is placed in each window to simulate skylight. The advantage of three separate lights is that each can be adjusted individually, which also helps rendering efficiency.
Cinema 4D's default material renders rather dark, so before the render test create a new Redshift material, raise the Color Picker value to about 70%, and turn off the Weight. Assign the material to the model and the render test comes out much brighter.
The overall light color is now white, so the colors need adjusting: first switch all the lights' modes to Temperature, make the window light bluish and the room light warm, and add a fill light on the left side.
Adjust the brightness of the lights to the right level. If the picture is still dark, you can also raise the camera's Film Speed (ISO). Now the lighting is no longer a problem.
On to the glass. First adjust the glass material; you can directly use Cinema 4D's preset glass, making one material and copying it as the basis for the other glass materials. Next, the toilet, sink, and bathtub use a ceramic material, based on the preset Chrome ball here.
The hangers and door handles are metal; you can also create a preset metal ball, assign it to them, and adjust from there.
The mirror uses the Material shader, with Reflection > Fresnel Type changed to IOR and the IOR raised to 30, which gives essentially a mirror effect.
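Why does an IOR of 30 behave like a mirror? For a dielectric, the face-on reflectance grows with IOR, which a standard Fresnel calculation (plain math, not a Redshift API) shows:

```python
def fresnel_r0(ior):
    """Reflectance at normal incidence for a dielectric in air:
    R0 = ((n - 1) / (n + 1))^2  (the base term of Schlick's approximation)."""
    return ((ior - 1.0) / (ior + 1.0)) ** 2

print(round(fresnel_r0(1.5), 3))   # ordinary glass: ~4% face-on reflection
print(round(fresnel_r0(30.0), 3))  # IOR 30: ~87.5%, reflective at nearly all angles
```

With reflectance this high at every angle, the surface reads as a mirror.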
The remaining materials: the wall uses a tile-like texture, and the floor has wood grain. Because these materials come with the model, you only need to reapply them, though the details can be adjusted.
Now the materials of the large models are done, leaving only some details such as the baskets, the branches in the vases, the books, and the bath towels.
If the scene is overexposed, adjust the camera's Exposure > Tone Mapping > Allowed Overexposure value, which controls how much overexposure the camera allows.
After rendering, bring the image into Photoshop for the final result.
Say Goodbye to Model Chamfering: An Introduction to RoundCorners
The demos in this article are all based on the Redshift 2.623 postFX special-build renderer, but other mainstream renderers (Arnold, V-Ray, etc.) have similar functions. For Arnold this is a recently added feature that requires the latest version; it is available from MtoA 3.1.1, Arnold Core 126.96.36.199.
1) Basic introduction of RoundCorners technology
The round-corners technique was first used in the mental ray era, before 2010, which means the technology itself is very old. However, it rarely appears in rendering tutorials, so its adoption is quite low. Recent tests show that it is very useful in many cases and seriously underestimated.
Its main functions fall into two parts:
1. Giving originally hard edges a smooth chamfered look.
2. Creating a natural fillet at the junction of two intersecting objects.
The specific method is to create a RoundCorners node and connect its output to the bump input of rsMaterial to produce the edge-chamfer effect. The Radius value controls the size of the chamfer.
If 'Consider Same Object Only' is unchecked, every object carrying this material gets the chamfer effect, and a blended chamfer forms between different objects, as in Figure 3. If it is checked, no chamfer forms between different objects, as in Figure 2.
Because RoundCorners occupies the bump input, a material that already has its own bump needs a bump-blending step: create a Bump Blender node, mix in the procedural bump, and enable additive mode to get a clean overlay.
2) The principle of producing good-looking highlights
When modeling, everyone knows the model needs chamfered edges and cannot have perfectly hard ones. Perhaps the explanation you have heard is that real-life objects are never that sharp; there is always some rounding, differing only in size.
This statement is of course correct, but it is not a conclusion drawn from the rendering perspective. The figure below shows the reflection area of an unchamfered square box; the dark diagonal lines mark the approximate reflection zones.
The green area, labeled in the figure, is a reflection blind zone: when a light sits in this zone, no highlight can be observed on the model, and the surface looks very much like a Lambert shader, which is a very bad look.
If the model is chamfered, this reflection dead zone disappears: every light in the 'original reflection blind zone' forms a slender, attractive highlight along the chamfer, which conveys the texture of the material well. (The chamfer covers a very wide range of reflection directions, so it easily picks up highlights from lights all around it.)
The contrast between the two figures again shows the difference that chamfering makes. Note in the image below that the highlights appear almost exclusively in the chamfered areas.
This section explained the principle behind good-looking highlights and the flaw of hard edges. The solution is to chamfer the model, or, cheaper and more convenient, the RoundCorners technique introduced in this article.
3) Limitations of RoundCorners technology
The chamfer produced by RoundCorners is generated through bump mapping: it changes the normal direction of the model's surface without changing the model's actual structure. So when the viewing angle turns such that the chamfered area becomes part of the model's silhouette, an abnormal visual effect appears.
Think of it this way: RoundCorners' chamfer is equivalent to visually rounding off the hard corners, but the real geometry is unchanged, and the conflict between the two produces incorrect visuals.
Some may have thought of solving this by using the RoundCorners node to drive displacement instead. Unfortunately, neither the Arnold nor the Redshift renderer supports that.
At present this problem cannot be solved, yet the technique is still very convenient. To be clear: we use RoundCorners for very small chamfers, and when the chamfer is small the problem described in this section is hard to notice. The technique is best applied to the mid-ground and background of a render.
Extreme close-ups should still use modeled chamfers for maximum precision, and when the required chamfer radius is relatively large, it also needs to be modeled directly.
4) RoundCorners with displacement
Because displacement is used in so many situations in film and television rendering, I also ran tests combining it with RoundCorners, and the effect is satisfying.
The renders of the following three images use two planes: one displaced with a black-and-white checkerboard, the other with a water surface. The RoundCorners node controls the tension between the water surface and the wall, forming a curved meniscus.
As the final results show, it works: RoundCorners responds very well to the displacement effect.
In the three figures, the radius values of the RoundCorners node are 0, 0.02, and 0.1 respectively.
As you can see, 0 and 0.02 produce an essential difference in the highlights along the contact line, exactly as explained in the previous section.
The effect at 0.1 is actually wrong: the curvature of the water surface is obviously too large, though this does make the highlight easier to see.
Radius = 0
Radius = 0.02
Radius = 0.1
5) RoundCorners practical application (with scanning material)
Finally, an application with scanned materials; the effect here is closer to actual production use in a project, showing what level of quality RoundCorners can achieve.
Arnold, V-Ray, Octane, Corona, Redshift: Which Renderer Is the Best?
V-ray cloud rendering
Nowadays many people use 3D software to create their works, so a renderer is essential. There are many popular renderers, including V-Ray, Octane, Arnold, Corona, and Redshift. But for a new 3D artist, which renderer is the best?
The principles of these renderers are basically the same, but their operation differs somewhat, and each has its own focus. Let's take a look, starting with a basic introduction to each renderer:
The work of the Evermotion 2017 runner-up, rendered with V-Ray.
V-Ray: the V-Ray renderer has focused on interior and exterior design for 20 years. Its biggest strength is architectural visualization, where it accounts for almost half the work done, and the various functions of the latest 4.0 version are very polished; many works could pass for photographs. The disadvantage: V-Ray has so many parameters that adjusting materials and lighting can be difficult and not very novice-friendly.
Arnold (official showcase): Arnold is very powerful. As a physically based ray-tracing CPU renderer it is the most stable. Of course, if the CPU is not strong enough the render speed is agonizing, especially on transparent objects such as glass, and together with its powerful node-based workflow, novices both love and hate it. As a renderer to study seriously, Arnold is highly recommended.
Redshift: next, Redshift, a GPU-based renderer. The most immediate impression is its fast real-time preview and render speed, along with a powerful node system like Arnold's. It is well suited to artistic creation, especially animation. As for the realism of the results, that takes plenty of practice and accumulated knowledge.
Octane: Octane is also a GPU-based renderer. Its SSS and displacement are quite good, it is very fast, the light feels softer, and the renders look very comfortable. On somewhat larger scenes, however, the efficiency drops and it is no longer so fast. It also has noise issues, which can sometimes even become a stylistic feature of the work. PS: the current version of Octane does not support AMD cards, nor can it exclude lights per object.
The first-place work of Evermotion 2017, rendered with Corona. Corona is also a highly recommended renderer. It comes from the V-Ray family and can be seen as a simplified V-Ray: LightMix removes the trouble of re-lighting in post, and the default material system needs little worry. It is small, practical, and quick to pick up. Of course, the official release is brand new; a node system is said to be coming next year, and its animation support is somewhat weak. For indoor and outdoor design and still-frame rendering it is an excellent choice, since that is what its ray-tracing engine does best. In the end, the renderer is just one step in the work: choose different renderers for different jobs and find the one that suits you best.
The Essential Thinking Of Roughness And Anisotropy (2)
As a leading cloud rendering service provider and a Redshift render farm, we published "The Essential Thinking Of Roughness And Anisotropy (1)" last week. If you follow its argument, you might conclude that the roughness parameter is not needed at all: to make a reflection rougher, simply raise the bump strength of a fine texture. Let's continue the discussion of roughness and anisotropy from there.
In theory this statement is correct; it is very physical and fits real-life experience. But making roughness out of actual bumps has a professional name that anyone who plays with rendering will recognize: brute force. Just as with brute-force GI, the microstructure you express may be very accurate, but the rendering is extremely slow.
maxAA = 4
maxAA = 117
maxAA = 1024
maxAA = 8192
As this set of images shows, when the roughness parameter is used to adjust reflection roughness, there is no noise even at very low sampling. With bump control, very high sampling is required to reduce the noise effectively, and even with the samples at the system maximum of 8192, obvious noise remains. (The images were rendered with the Redshift renderer; Redshift's sample unit is the square of V-Ray-style subdivs, so 8192 samples corresponds to subdivs of about 90. In addition, maxAA is raised rather than the light samples because the noise is not caused by insufficient light sampling: the texture is so fine that the camera must supersample it per pixel.)
In other words, this idea is only theoretically correct; in production and industrial pipelines it is a big mistake. That is why the roughness parameter was introduced: to solve exactly the problem discussed above.
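The samples-to-subdivs relationship mentioned in the parenthesis above is just a square root:

```python
import math

def subdivs_from_samples(samples):
    """Redshift specifies sample counts directly; V-Ray-style 'subdivs'
    are the square root of that count (samples = subdivs^2)."""
    return math.sqrt(samples)

for s in (4, 1024, 8192):
    print(s, "samples ~", round(subdivs_from_samples(s), 1), "subdivs")
# 8192 samples -> ~90.5 subdivs, matching the ~90 quoted in the text.
```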
How the Redshift Proxy Renders the Subdivision
I think this is a very common problem, often encountered in film and animation projects, so today I am writing a tutorial on subdivision.
Everyone knows that in a Maya scene, if you want to create a proxy, you must first export it and then create it. So let's first talk about exporting the proxy: Redshift > Proxy > Export; be sure to click the small option box behind Export. Then keep General Options > File Type set to Redshift Proxy (the default), meaning this proxy is a Redshift proxy.
Next is the important part we need to know: File Type Specific Options > General, where there is an Export Polygon Connectivity Data option. What does it mean? The official Redshift documentation explains: "The Export Polygon Connectivity Data option should be used if you plan on applying any tessellation & displacement to the proxy. Enabling this option does increase the size of the proxy file, so should only be used if necessary."
In other words, if you want the Tessellation and Displacement options in your proxy's attributes to work, you must check Export Polygon Connectivity Data when exporting the proxy; otherwise, no matter how the proxy is set up, the proxy model will render with no subdivision or displacement. There is one exception: you have already set up all the displacement and subdivision on the model before exporting the proxy. I can guarantee that displacement is set up in every project, but subdivision is not necessarily. For now, just know that this option is the key to whether a Redshift proxy can render subdivision.
The last command is Sequence > Export Sequence, which, as the name says, exports an animation sequence.
After finishing the export, create the proxy: Redshift > Proxy > Create, then load the proxy file you exported.
Under your proxy's attributes, check Overrides > Tessellation & Displacement. Then find the Redshift Proxy Placeholder Shape panel, go to Redshift > Tessellation > Enable, and check Enable. As for the subdivision parameters, adjust them to your own needs.
The Essential Thinking Of Roughness And Anisotropy (1)
This is a technical article of a scientific nature whose ultimate goal is to achieve a ring-scratch effect. Along the way, the two concepts that must be understood, Roughness and Anisotropy, are analyzed in essence.
Outline:
- The essence of Roughness
- The efficiency problem of roughness-via-bump
- The nature of anisotropy
- From flowmap to aniRotationMap
- The principle of making the aniRotationMap directly, bypassing the flowmap
- A simple implementation in Substance Designer
- The remaining problem: why rotate 90 degrees?
- Switching to a fast renderer: Redshift
- Summary

The essence of Roughness. Let's first think about why reflections differ between clear and blurry: because of microstructure. Some objects have a neat microstructure and a clear reflection; others have irregular microstructures, and their macroscopic reflection is blurred.
The keyword is "uneven". What CG attribute does that word bring to mind? Yes: bump. Roughness == Bump. You may not believe it, but the evidence is here. I made two spheres in Maya. The left ball uses the roughness parameter to control the change in reflection roughness.
The right ball uses a whiteNoise texture in the bump channel, and the roughness change is achieved by controlling the bump strength (its roughness value is held constant at 0).
This whiteNoise was generated in the Substance Designer software.
In Maya, the whiteNoise is tiled 100 times in UV so the texture is dense enough to approach a true microstructure. It is also very important to turn off the texture's own filtering, to keep the texture from being blurred when rendered small (that would destroy the microstructure, and the test in this article could not be reproduced). Take the Redshift renderer as an example:
In this comparison you will find that even though the right ball does not use the roughness parameter at all, the final result looks very close to the ball with the normally adjusted roughness parameter. But is roughness-via-bump too slow? In our previous production process, anisotropy was only used on certain materials where it is particularly noticeable; this may be one of the missing pieces of our materials. Whatever you do, be brave and experiment! Reference: Daiwei
The Essential Thinking Of Roughness And Anisotropy (3)
Redshift render farm
We know that the nature of roughness lies in the unevenness of the microscopic surface; anisotropy, too, cannot be separated from that microscopic unevenness.
An ordinary microscopic surface, because its irregularities are random, spreads the highlight uniformly in all directions; we used a whiteNoise to simulate that effect. Look at this picture and feel the object's microscopic surface roughness: random and uniform.
But some surfaces take another form, such as:
This microscopic surface has extremely strong regularity and directionality. Once such a structure is present, the object's macroscopic reflection becomes anisotropic: this kind of microstructure is the essence of anisotropy. We again use the same brute-force approach to reproduce the process in the renderer. The shader on the left only has its anisotropy value adjusted, from 0 to 1. (Each renderer may differ; Arnold, for example, needs the roughness value adjusted along with it, and adjusting the anisotropy parameter alone does nothing. Only the principle matters here, so the details are not expanded.)
The shader on the right uses a bump map just generated with the anisotropic_noise node in Substance Designer, likewise tiled 300 times and plugged into the bump to force the reflection blur; the animation again keys the bump strength. This time, the reflection blur is anisotropic!
Let's look at the picture and see what this bump map looks like on the model.
The Essential Thinking Of Roughness And Anisotropy (4)
Arnold render farm
In the article "The Essential Thinking Of Roughness And Anisotropy (3)", we understood that the essence of roughness is bump. Now we can finally start this issue's theme: using roughness and anisotropy to restore the bump effect of ring scratches.
The idea: render a flowmap image, run it through a series of node transformations, and finally plug the result into the anisotropy rotation channel so the anisotropic highlights rotate as desired. The essence of a flowmap is that it records plane-coordinate directions: the XY values are stored in the red and green channels. Converting it to an anisotropy rotation map means computing an angle from each point's direction and using that angle to control the rotation. The core is a function called atan2 (some tools have it, such as Arnold and Substance Designer; some do not, such as V-Ray and Redshift).
When trying to render this effect with a non-Arnold renderer, the flowmap-to-anisotropy-rotation conversion cannot be done in the shader for lack of the math nodes, so I used Substance Designer as an intermediate conversion tool. Below is a screenshot of the pixel-processor node graph in SD; its main function is to convert the flowmap into an anisotropy rotation map.
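For illustration, the per-texel math that such a conversion performs can be sketched in a few lines (a sketch of the idea only; the [0,1] to [-1,1] channel remap and the exact wrap handling are assumptions, not the actual SD graph):

```python
import math

def flow_to_ani_rotation(r, g):
    """Convert one flowmap texel to an anisotropy-rotation value.

    The red/green channels encode a direction in [0, 1]; remap to [-1, 1],
    take the angle with atan2, normalize a full turn to the 0..1 range,
    then subtract 0.25 (a 90-degree shift, explained in part 6) and wrap."""
    x, y = r * 2.0 - 1.0, g * 2.0 - 1.0
    angle = math.atan2(y, x)                    # radians in (-pi, pi]
    rotation = (angle / (2.0 * math.pi)) % 1.0  # 0..1 == 0..360 degrees
    return (rotation - 0.25) % 1.0

# A flow vector pointing along +X (r=1, g=0.5):
print(round(flow_to_ani_rotation(1.0, 0.5), 2))  # -> 0.75
```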
The result of the conversion looks like this; you can see black and white scratches (some of the rotation values are negative, so the whole picture looks very dark):
Finally, use the anisotropy rotation map generated in this SD directly, you can also render the effect we want.
The Essential Thinking of Roughness and Anisotropy (6)
We put the textures we just made in SD into the renderer and try to render them. Several serious problems remain; let's solve them one by one.
First, if you plug this image directly into the anisotropy rotation channel, adjust all the other parameters correctly, and render with the Arnold renderer, the result is very strange.
Why? The previous article left one big question unexplained: in the flowmap-to-aniRotation conversion, the earlier steps are easy to understand, and the final operation of subtracting 0.25, as everyone should now see, means a rotation of 90 degrees. But why perform that rotation at all? We built a simple scene to illustrate. In the figure, a row of cylinders lies horizontally while their highlights run vertically: the two are perpendicular to each other. This figure is also a brief display of the nature of anisotropy.
It illustrates that the direction in which the anisotropic highlight stretches and the structural direction of the groove itself are perpendicular. With lateral grooves, the highlight pattern we want is vertical. The black line in the figure represents the groove, and the white line the highlight.
And what happens where our aniRotate texture is pure black? Right: by default, the horizontal highlight is not rotated at all. Looking back at our earlier process, horizontal grooves always came out black, so the highlight direction ended up parallel to the groove!
So we must rotate the highlights by 90 degrees. In Arnold, add 0.25 to the texture value (subtracting 0.25 is equivalent, since the rotation wraps around).
We got this effect!
Switching to a fast renderer: Redshift. In Arnold we have produced the benchmark effect, but the slow render speed is hard to bear, so we try to reproduce the technique in GPU renderers; this article mainly analyzes the problems in the Redshift renderer. Because Redshift's anisotropic rotation direction is the opposite of Arnold's, an inversion operation is required after performing the same steps as above.
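Assuming the inversion is a simple mirroring of the rotation value (an assumption for illustration; the article does not spell out the exact operation), the per-texel fix-up would look like:

```python
def arnold_to_redshift_rotation(value):
    """Hedged assumption: Redshift rotates the anisotropic highlight in the
    opposite direction to Arnold, so mirror the rotation value and wrap."""
    return (1.0 - value) % 1.0

print(arnold_to_redshift_rotation(0.25))  # a 90-degree turn becomes 270 degrees
```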
Although the scratches are there, the results are still very strange.
Convert the original scratch map into a mask in which only the scratches are pure white, and output this mask through the image's alpha channel.
Then, in the renderer, plug the mask into the anisotropy attribute, so only the scratched areas are anisotropic while the unscratched areas keep ordinary highlights. A remapHSV node can slightly adjust the anisotropy intensity of the scratched areas; it is purely an effect-tuning node, used as individual needs dictate.
With the mask we can fine-tune many parameters separately. After the adjustments, the effect is quite good.
What matters most is not that the problem itself is solved, but the ideas behind the solution. Although I understood the essence behind roughness and anisotropy, my grasp of their characteristics was still not thorough enough; as with the method described in this article, I had seen these practices before but neither understood nor tried them at the time. Using roughness together with anisotropy really can simulate and replace the effect of bump or normal maps to a large extent. This makes rendering far more efficient, and even effects that previously failed because a parameter hit its upper limit become possible. Many reflections in nature are somewhat anisotropic; how much depends on how much of the structure the artist puts into the bump and how much into roughness. As long as roughness carries some of it, the anisotropy will show to a greater or lesser degree.
The Essential Thinking of Roughness and Anisotropy (5)
In this article, we first learn the principle of creating the anisotropy rotation map directly, bypassing the flowmap. Since the flowmap is not very intuitive and cannot be used directly in some renderers (V-Ray, Redshift), we can try making the anisotropy rotation map directly.
The tool is of course the procedural-texture overlord: Substance Designer. First, clarify the goal: know exactly what texture effect to achieve, then make a plan based on the features already verified as feasible. We reuse the earlier pixel-processor node graph in SD, but use +0.25 instead of -0.25, so the final output brightness falls entirely within the 0-1 range and the image's characteristics are easy to observe. (Good observability is the only purpose of this change.)
Now the picture looks like this: much brighter, with all values between 0 and 1.
Next we use SD's Histogram Select node to inspect the picture, which shows how scratches of different brightness are distributed. The figure below sweeps the original image from 0 to 1.
Here we can find two characteristics of the picture. Stare carefully at one end of a line and you will find that along its length the line rotates a full 360 degrees; and at the same gray level, the scratch directions are basically uniform. Why can such features create ring-like scratches? It comes down to the shader: the material has an anisotropy rotation property that controls the rotation direction of the anisotropic highlight. Its input range is 0 to 1, and the value maps linearly to an angle:

0 : 0°
0.25 : 90°
0.5 : 180°
0.75 : 270°
1.00 : 360°

The figure below shows the effect of anisotropy rotation changing from 0 to 1: the highlight turns a full 360 degrees.
So we only need to make a scratch map that satisfies these two characteristics to reach the goal: the brightness of each scratch changes with its angle, and scratches at the same angle have the same brightness, as shown in the diagram (rotate 0 = rotate 1).
Knowing the principle and the goal, we need a way to achieve it: a simple implementation in Substance Designer. Here I only discuss the idea, since very few readers will use SD; if you understand the idea above, it can in theory be done in Photoshop, though with considerably more trouble. The node graph is relatively simple.
The main trick is to use a Tile Sampler for the scattering; the shape node is a square, flattened to make scratch elements, and a Gaussian noise randomizes the spread. The same Gaussian noise controls both the rotation and the brightness of each scratch: the darker the noise, the smaller the rotation angle and the lower the scratch brightness. The resulting aniRotation texture is, as SD output always is, naturally seamless.
Then check the effect with the Histogram Select node. Because we use a black background, a large area of the image is selected at brightness 0; that does not matter, and the overall effect matches our expectation:
Because this is the simplest implementation, the scratches are nothing fancy: just straight lines. The core principle is what was described above.