How to Use VFace and Make Effects in Arnold
In this article, Fox Renderfarm, the CG industry's leading cloud rendering service provider and render farm, will share how to use VFace and how to reproduce its effects in the Arnold renderer. The author is CaoJiajun.

First, I purchased some VFace materials from the official website. These files are the main materials we will use in this sharing to build high-quality facial detail. VFace provides two types of facial models: a head with open eyes and a head with closed eyes; choose one according to your needs. If you are making a model that will be animated with expressions later, I recommend the closed-eyes model, as the open-eyes model will stretch the eyelids when you animate a blink. For still-frame work you don't need to worry about this.

Let's start with the production process. It is actually very simple: wrap your own model with a VFace model in Wrap or ZWrap, transfer the maps, and finally render in Maya or other 3D software. The process is simple, but many details need attention along the way, otherwise the facial detail will not render correctly.
1 Model Cleaning

First we load the model provided by VFace into ZBrush and match it to our sculpted model. Then we move into ZWrap or Wrap for wrapping. Finally, the wrapped model is imported back into ZBrush to replace the VFace model. In ZBrush we use the Project brush to match the face of the wrapped model more precisely to our own sculpt. Once matched, you will have a model that fits your sculpt perfectly, and we can go into Mari for the map transfer.

2 Using Mari to Transfer the Maps

In Mari we first set up the project: import our own sculpted model or the wrapped and matched XYZ model, then remove the other channels in Channels and keep only the Basecolor channel; we can customize the channels as we wish. Next we need to set up the custom channels (DIFF, DISP, UNITY) to import the VFace maps. First, the DIFF channel is set at the original size of 16K with Depth at 16-bit (this leaves more room for color grading later; it can of course be set to 8-bit). The key point is the color space: when the depth is 16-bit or 32-bit it needs to be set to linear, and at 8-bit to sRGB. Keep the displacement map at 16K as well.
I recommend setting the Depth to 32-bit, which preserves more displacement detail; keep the color space linear and tick Scalar Data (the displacement map is a color map with three RGB channels, and we want its values treated as data rather than color). The blend map settings are the same as the color map, but with Scalar Data also ticked (this map is used as a mask for toning or as a weighting mask).

Next we use the Objects panel to add our own model in preparation for the map transfer. Right-click on any channel and select the Transfer command in the pop-up menu to bring up the transfer dialog. In the dialog, select the channel to be transferred in the first step, set the target object in the second step, click the arrow in the third step, set the size in the fourth step, and finally click OK. I generally recommend transferring one channel at a time, as it is very slow and takes a long time. For size I usually choose 4K for color, 8K for displacement and 4K for the blend channels. This step requires a lot of patience!

After the transfer we can export the maps. Pay attention to the color space settings on export: the color channel was created as linear and should also be exported as linear. The displacement and blend maps are a little unusual: we set their color space to linear when creating the channels, but they need to be exported as sRGB, since both are a combination of the R, G and B channels forming a color map.
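To see why the bit depth dictates the color space, here is a minimal Python sketch (not part of the original workflow) of the standard sRGB transfer function: 8-bit maps are conventionally stored sRGB-encoded to spend their limited precision where the eye is most sensitive, while 16/32-bit maps have enough precision to stay linear.

```python
# Standard sRGB transfer function, as published in IEC 61966-2-1.
# Encoding a map with the wrong curve visibly brightens or darkens it,
# which is exactly the "too bright or too dark is an error" check below.

def linear_to_srgb(c):
    """Encode a linear value in [0, 1] with the sRGB curve."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * (c ** (1 / 2.4)) - 0.055

def srgb_to_linear(c):
    """Inverse of the sRGB transfer function."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

mid = 0.5
encoded = linear_to_srgb(mid)
print(round(encoded, 3))                   # linear mid-grey encodes to roughly 0.735
print(round(srgb_to_linear(encoded), 3))   # and round-trips back to 0.5
```

If a map that should be linear is accidentally exported through this curve (or vice versa), every mid-tone shifts by that ~0.24 difference, which is why the exported maps must visually match the originals VFace ships.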
Finally click the Export button and it's done. In short, your exported maps need to match the colors of the maps provided by VFace; either too bright or too dark is an error.

3 Arnold Rendering

At this point we can go into Maya and render the VFace maps we have created (we won't go into the lighting and shading setup here; we will focus on how the displacement map is connected). First we import the transferred VFace maps and render with default settings to see what we get. Obviously the result is ugly, so how do we set it up correctly?

Here we add an aiSubtract node (which you can think of as a subtraction or exclusion node). The default mid-value of the VFace displacement is 0.5, while Arnold expects a displacement map with a mid-value of 0. So we plug the VFace map into input1 and set input2 to a flat color with a luminance of 0.5. This subtracts 0.5 from VFace's default 0.5 mid-value, giving us a displacement centred on 0.

After setting the mid-value we can add an aiMultiply node. It works like Maya's own multiplyDivide node and controls the overall strength of the VFace displacement. We output the color of the aiSubtract node into input1 of the aiMultiply and adjust the overall strength of the VFace detail using the black, grey and white values of input2 (multiplying a value by 1 leaves it unchanged, multiplying by 0 zeroes it out; every color we see on a computer is just numbers to the computer, so simple arithmetic on those numbers changes the strength of the map, which is exactly why a multiply node can control displacement strength).

Next we add an aiLayerRgba node. The R, G and B channels of the aiMultiply are connected to the R channels of input1, input2 and input3 of the aiLayerRgba, and through this node's mix attributes we can control the displacement intensity of each of the three VFace channels individually. After this series of settings we get a correct and controllable render of the VFace displacement.

Although we now have a correct and controllable VFace displacement, it is not yet combined with the displacement we sculpted in ZBrush; we need to merge the two to get our final displacement. Here I used an aiAdd node to add the two displacement maps together, giving us the VFace + ZBrush displacement (you can also use Maya's plusMinusAverage node). It doesn't matter how many displacement elements you have (a scar on the face, and so on); you can stack them all through aiAdd nodes to get a composite displacement. The advantage of building it this way is that you can adjust the strength of each displacement element at any time without re-importing and re-exporting between packages. It is a very clean, linear workflow.

Finally we plug the transferred color map into the subsurface color. By default we get a very dark result, which is not wrong; the VFace default model is the same color. We can then correct the skin tone with the hue, saturation and lightness controls of a colorCorrect node.
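The displacement node chain described above is just per-pixel arithmetic. Here is a minimal Python sketch of it; `vface_displacement` is a hypothetical stand-in for the aiSubtract → aiMultiply → per-channel mix → aiAdd network, not an Arnold API.

```python
# Numeric sketch of the displacement node chain, one pixel at a time:
#   aiSubtract recentres the 0.5 midpoint to 0,
#   aiMultiply scales the overall detail strength,
#   the per-channel mix weights each of the three VFace detail channels,
#   aiAdd merges in the ZBrush sculpted displacement.

def vface_displacement(rgb, strength=1.0, mix=(1.0, 1.0, 1.0), zbrush=0.0):
    # aiSubtract: move the VFace mid-grey (0.5) to a 0 midpoint
    centred = [c - 0.5 for c in rgb]
    # aiMultiply: overall detail strength
    scaled = [c * strength for c in centred]
    # aiLayerRgba-style mix: weight each detail channel, then combine
    detail = sum(c * m for c, m in zip(scaled, mix))
    # aiAdd: combine with the ZBrush displacement
    return detail + zbrush

# Mid-grey input produces no displacement once recentred:
print(round(vface_displacement((0.5, 0.5, 0.5)), 3))                  # 0.0
# A brighter red channel pushes outward; the mix weight can mute it:
print(round(vface_displacement((0.7, 0.5, 0.5), mix=(0.5, 1, 1)), 3))  # 0.1
```

The first print shows why the aiSubtract step matters: without it, a flat 0.5 map would inflate the whole head instead of leaving it untouched.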
The grading headroom is why I chose to bake 16-bit color: it gives more control over the colors and a correct result after color correction (the current result is just a rough pass, and the maps can be worked further for a better result). As a powerful render farm offering Arnold cloud rendering services, Fox Renderfarm hopes this article has been of some help. Source: Thepoly
The Best Arnold Render Farm | Fox Renderfarm
Fox Renderfarm is a global technology company specializing in cloud computing services for rendering in the entertainment industry. We support the most popular 3D software, renderers and plugins worldwide, including 3ds Max, Maya, Cinema 4D, SketchUp, Blender, V-Ray, Redshift, Corona, and more. Internal and external plugins for 3ds Max, Maya and Cinema 4D are integrated into our Arnold support, and both GPU and CPU rendering are available on our Arnold render farm. Fox Renderfarm can fire up thousands of rendering nodes instantly, with high availability, no waiting, ease of use, security that refers to MPAA standards, and flexible render farm pricing. More reasons to choose us for Arnold rendering: - The partner of Oscar-winning production teams. - Years of Hollywood-level production experience. - 24/7 live customer care and technical support. - Response time: 10-15 minutes. - Contact available via Skype/Email. - As low as $0.036 per core hour. - Volume discounts available up to 50%. Fox Renderfarm has an outstanding team with over 20 years' experience in the CG industry, with members from Disney, Lucasfilm, DreamWorks, Sony, etc. With professional services and industry-leading innovations, they serve leading special effects companies and animation studios from over 50 countries and regions, including two Oscar winners. In 2015, Fox Renderfarm formed a global strategic partnership with Aliyun (Alibaba Cloud Computing) to provide global visual cloud computing services. Register for a $25 free trial and render your artwork online with the best cloud rendering farm now!
The Method to Make LightGroup of Arnold 5.1
A friend asked me how to make a LightGroup in Arnold 5.1, because with LightGroups we no longer need to split lights into separate render layers. The method in Arnold 5.1 is not the same as in previous versions. I remember that the LightGroup property used to be added on the light itself; that method only worked for alShader materials. You would fill in the light's AOV Light Group field, add a LightGroup1 AOV yourself, and apply the L.'LightGroup' command. After Arnold 5.1, this method no longer works; even if you add the L.'LightGroup' command to the AOV, it still fails. So here is how Arnold 5.1 makes LightGroups. Step one: create a light, and in its AOV Light Group field give the LightGroup whatever name you want; I named it LGT_A here. Step two: add an RGBA channel to the AOVs (note that it must be uppercase). Why the RGBA channel? Because I found that only the RGBA channel exposes a working Light Group List; if you add a LightGroup channel instead, that property stays greyed out, and the RGBA really must be uppercase, which is quite frustrating. Then open the RGBA channel's properties and select the LightGroups you added in the Light Group List. My scene has three lights, so there are three. The LightGroups appear after rendering. Here is the render farm test render:
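As a comp-side illustration of why light groups are worth the setup, here is a small Python sketch. It assumes the property that makes light-group AOVs useful: the per-group RGBA AOVs of a pixel sum to its beauty value. The pixel values and group names below are made up for the example.

```python
# Hypothetical per-light-group contributions for a single pixel (R, G, B),
# named after the LGT_A-style groups created on each light above.
RGBA_LGT_A = (0.20, 0.10, 0.05)
RGBA_LGT_B = (0.05, 0.05, 0.30)
RGBA_LGT_C = (0.10, 0.20, 0.10)

def add(*pixels):
    """Sum pixels channel-wise (rounded to tame float noise)."""
    return tuple(round(sum(ch), 6) for ch in zip(*pixels))

def scale(pixel, k):
    """Scale a pixel's channels by k, e.g. dimming one light in comp."""
    return tuple(round(ch * k, 6) for ch in pixel)

# The group AOVs reassemble the beauty pass:
beauty = add(RGBA_LGT_A, RGBA_LGT_B, RGBA_LGT_C)
print(beauty)  # (0.35, 0.35, 0.45)

# Dim light B to 50% in comp, no re-render and no extra render layers:
regraded = add(RGBA_LGT_A, scale(RGBA_LGT_B, 0.5), RGBA_LGT_C)
print(regraded)
```

This additivity is exactly what replaces the old per-light render layers: each light can be redialled in comp independently.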
Maya 2018 Arnold's Method of Making Motion Vector Blur in AOV Channel
Yesterday I found that Arnold in Maya 2018 renders the motion vector blur made in the AOV channel with the wrong direction. The motion blur method I wrote about earlier applies to Maya 2016 and earlier versions, so I spent a little time on it today. The problem is now solved, and I will use a small case to explain the steps. Step 1: Create a ball as a simple animated scene. Step 2: Turn on motion blur and render a reference for it. Step 3: Follow the method in the official Arnold documentation: change the camera's Shutter Start and Shutter End to 0.5 (0.5 because my motion blur type is Center On Frame). Step 4: Create the AOV motion blur channels. Here I create two: the first is the motionvector channel that comes with Arnold; the second is a custom motion vector AOV using the motion vector shader (assigning an aiMotionVector shader in the AOV channel). Note: MBlur is named according to your preference; the aiMotionVector settings were covered in a previous article. Step 5: Render the image and compare the two motion blur channels. Both the motionvector and MBlur channels in our AOVs point in the wrong direction. Step 6: Now for the fix, from the beginning. A: Turn on the Motion Blur button in the render settings; I chose Center On Frame mode here. B: Change Shutter Start and Shutter End in the camera's Arnold attributes to 0.5 each, matching the Center On Frame mode. (Note: when rendering the motion vector AOV, Shutter Start and Shutter End should be the same value.) C: Create a new MBlur_2 channel in the AOVs (to compare with the previous MBlur). D: Open the material editor and find the aiMotionVector and aiVectorMap nodes under Arnold's material panel. 1) Connect the outColor of the aiMotionVector node to the input of the aiVectorMap node.
2) Connect the outValue of the aiVectorMap node to the Default Value in the Shader section of the MBlur_2 channel in the AOVs. 3) Change the Order in the aiVectorMap node from XYZ to YZX and untick Tangent Space. 4) In the aiMotionVector node, enable Encode Raw Vector. E: After the above steps are completed, render the image and view it in Nuke. As shown in the figure, Maya's default motion blur is basically the same as the MBlur_2 channel we made in the AOVs, and matches the motion blur applied in Nuke. I also found a bug in the Arnold renderer: its motion vector channel does not account for the shadow of the moving object, as can be seen in the large image attached below. Fox Renderfarm hopes this is of some help to you. As is well known, Fox Renderfarm is an excellent CPU & GPU cloud render farm in the CG world, so if you need a render farm, why not try Fox Renderfarm, which offers a free $25 trial for new users? Thanks for reading!
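Going back to step D above, the Order change in the aiVectorMap node is just a channel swizzle; here is a toy Python sketch of it (the vector values are hypothetical, chosen only to make the reordering visible).

```python
# A motion vector stored as (x, y, z): changing Order from XYZ to YZX
# simply reorders which component lands in which AOV channel, which is
# what lets Nuke read the vectors with the expected channel layout.

def reorder(vec, order="YZX"):
    """Swizzle a 3-vector's components according to an axis-order string."""
    axes = {"X": 0, "Y": 1, "Z": 2}
    return tuple(vec[axes[c]] for c in order)

mv = (0.8, -0.2, 0.0)         # hypothetical raw motion vector
print(reorder(mv))             # (-0.2, 0.0, 0.8)
print(reorder(mv, "XYZ"))      # unchanged: (0.8, -0.2, 0.0)
```

No information is lost in the swizzle, which is why the fix is purely a matter of matching the layout the compositing package expects.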
How to Quickly Create an Object ID For Arnold in Maya
When a friend was working on a project at his company, he ran into a problem: there were four or five characters with the same material in the scene, and to make adjustment easier, each character needed to be distinguished by a different ID. Each character has its own materials and displacements, so Fox Renderfarm, the best CPU & GPU cloud rendering service provider, will show how to quickly create object IDs and distinguish characters that share a material.

To create Arnold object IDs in Maya the usual way, you need to assign a separate shader per object, and if displacement is involved, manually connect the displacement to the SG node of each ID shader. Here are the steps for a quicker approach:
1. In the object's attributes, under the Arnold menu, switch the Arnold Translator to Mesh Light.
2. Set the R, G, B color values you need. Change Exposure to 20; depending on the brightness of the self-illumination, you can increase it.
3. Tick Light Visible.
4. Untick Cast Shadows (so it produces no shadows).
5. Set Diffuse, Specular, SSS, Indirect and Volume to 0.

As everyone knows, this uses the object's self-illumination, so first we have to let the object show its flat color, which is why Light Visible must be ticked. Second, since it is self-illuminating it would affect the surrounding scene, so we turn off the Diffuse, Specular, SSS, Indirect and Volume contributions, and untick Cast Shadows so no shadows are produced. This method also fully preserves displacement, Normal and Bump. (This method may not be needed in newer versions of Arnold together with Render Setup layering; newer versions of Arnold have a crypto-style method for distinguishing IDs.)

Summary
That's the solution Fox Renderfarm has brought you. We hope you were able to create the object ID for Arnold in Maya successfully.
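Once the flat ID pass is rendered, pulling a per-character matte in comp is a simple color match. Here is a minimal Python sketch with made-up pixel data; `matte_for` is a hypothetical helper, not a Maya or Arnold function.

```python
# Each character renders as one pure flat colour in the ID pass (that is
# what the emission-only Mesh Light setup above produces), so a matte is
# just "1 where the pixel matches the character's colour, 0 elsewhere".

ID_PASS = [
    [(1, 0, 0), (1, 0, 0), (0, 1, 0)],
    [(0, 0, 1), (0, 1, 0), (0, 1, 0)],
]  # hypothetical 2x3-pixel ID render: red, green, blue characters

def matte_for(id_pass, colour):
    """Binary mask selecting the pixels that belong to one character."""
    return [[1 if px == colour else 0 for px in row] for row in id_pass]

green_matte = matte_for(ID_PASS, (0, 1, 0))
print(green_matte)  # [[0, 0, 1], [0, 1, 1]]
```

In production the match would use a tolerance for anti-aliased edges, which is one reason newer Arnold versions prefer a crypto-style ID workflow.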
As an excellent cloud render farm, Fox Renderfarm will continue to bring the latest and most complete information to 3D artists.
Arnold Renders Motion Vector Channel
Arnold's motion vector channel has always been buggy. Today the best render farm shares another way to get a correct Arnold motion vector channel. Step 1: Turn on Motion Blur. Step 2: Manually create an MBlur channel (named according to your preference). Step 3: Create two nodes, aiUtility and aiMotionVector. Tick aiMotionVector's Encode Raw Vector, assign aiMotionVector to aiUtility's color, and change aiUtility's Shade Mode to Flat. Step 4: Change Shutter Start and Shutter End to 0.5 under the Arnold menu of the camera you are rendering with. This ensures that the rendered beauty layer is free of motion blur while the AOV channel carries the correct motion vectors. Note: the values of Shutter Start and Shutter End under the camera properties differ for the three motion blur modes. Fox Renderfarm hopes this is of some help to you. As you know, Fox Renderfarm is an excellent cloud rendering farm in the CG world, so if you need a render farm, why not try Fox Renderfarm, which offers a free $25 trial for new users? Thanks for reading!