
Ballerina: A CGI Fantasy Created by Kay John Yim

Last edited on: 2023-11-22

As an architect, Kay John Yim is busy with his daily work, but driven by his passion for CG art he has taught himself more than 30 CG software packages and plug-ins within 2-3 years and has created many fantastic CG works in his spare time. His artworks are rich in detail, magnificent, delicate, and full of romantic imagination.

Collection of Works

John's recent works © Kay John Yim


Kay John Yim

Chartered Architect & CGI Artist

John grew up in Hong Kong and graduated from the University of Bath (UK) with a degree in Architectural Studies; he was also an exchange student in Architecture at Delft University of Technology (Netherlands). After graduation, he studied architecture at the Architectural Association School of Architecture. He is currently an architect at Spink Partners, a well-known British architectural design firm.

Kay John Yim’s personal site: https://johnyim.com/

ArtStation: https://www.artstation.com/johnyim

The following making-of tutorial for "Ballerina" was written by Kay John Yim for Fox Renderfarm, a leading cloud rendering service provider and GPU & CPU render farm:

Project "Ballerina" is a 30-second full CG animation, my first personal project to feature an animated photorealistic CG character staged within a grand Baroque rotunda lounge.

Ballerina © Kay John Yim

The animation is a representation of my inner struggles in all artistic pursuits, both metaphorically and literally.

Ballet, an art form widely known for its stringent standards of beauty and high susceptibility to public and self-criticism, is a metaphor for my daily professional and artistic practice. As an architect by day, I work on architectural visualizations, where every detail is scrutinized by my colleagues, senior architects and clients. As an artist by night, I work on personal CG projects, on which I do hundreds or even thousands of iterations to get the perfect compositions and color schemes. No matter how proficient I become in my professional and artistic skills, the inner struggle never fades away.

FINAL_RENDER_STILL_IMAGE_1.jpg

Ballerina © Kay John Yim

The project was also literally a technical struggle - every step of the CG character creation process was alien to me. When I started working on the project, I struggled to find a comprehensive guide for creating photorealistic character animation - almost every article or tutorial I came across was either too specialized or too impractical for an indie CG artist to follow.

Through months of trial and error, I have since learned a lot about efficient character animation and rendering. This article is an intermediate guide for any indie artist like myself who wants to take their CG art to the next level. As much as I would love to tailor the guide to everyone, it is practically impossible to cover the nuts and bolts of every piece of software I use, hence I have included links to tutorials and resources wherever possible for beginners to follow along.

FINAL RENDER STILL IMAGE 2

Ballerina © Kay John Yim

The guide is divided into 4 main parts:

  • The Architecture
  • The Character
  • The Animation
  • Rendering

The software I used included:

  • Rhino
  • Moment of Inspiration 4 (MOI)
  • Cinema4D (C4D)
  • Redshift (RS)
  • Character Creator 3 (CC3)
  • iClone
  • ZBrush & ZWrap
  • XNormal
  • Marvelous Designer 11 (MD)
  • Houdini

1. THE ARCHITECTURE

My primary software for architectural modeling is Rhino.

There are many different ways to approach architectural modeling. Having used dozens of CAD and DCC applications as an architect, I consider Rhino arguably the best architectural modeling software for its accuracy and versatility. Rhino's main advantage over more popular DCCs like Cinema4D (C4D) or Houdini is its ability to handle very detailed curves in large quantities.

As an architect, I start almost every model with a curve - usually the profile of a wall, cornice or skirting section - swept along another curve taken from the plan. Rhino's command list might seem overwhelming at first, but I almost exclusively used about a dozen commands to turn curves into 3D geometry (a minimal scripted equivalent follows the list below):

  • Rebuild
  • Trim
  • Blend
  • Sweep
  • Extrude
  • Sweep 2 Rails
  • Flow Along Surface
  • Surface from Network of Curves
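
For anyone who prefers scripting to clicking, a minimal rhinoscriptsyntax sketch of the same profile-along-plan idea could look like the following. This is only an illustration, not the exact setup used in the project; the selection prompts and the commented-out extrusion alternative are assumptions.

# Minimal rhinoscriptsyntax sketch (run in Rhino's Python editor), assuming a
# skirting profile curve and a plan curve already exist in the scene.
import rhinoscriptsyntax as rs

profile = rs.GetObject("Select skirting profile curve", rs.filter.curve)
rail = rs.GetObject("Select plan curve to sweep along", rs.filter.curve)

if profile and rail:
    # Sweep the profile along the plan curve (interactive equivalent: Sweep1)
    swept = rs.AddSweep1(rail, [profile])
    # Alternatively, a simple extrusion of the profile along a path curve:
    # extruded = rs.ExtrudeCurve(profile, rail)
    if swept:
        print("Created {} swept surface(s)".format(len(swept)))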

The key to architectural modeling is to always use references wherever possible. I always have PureRef open in the bottom right corner of my screen to make sure I model with correct proportions and scale; my references usually include actual photos and architectural drawings.

For this particular project I used the Amalienburg hunting lodge in Munich as my primary reference for the architecture.

SCREENSHOT 1

PureRef board for the project

I downloaded as many high-res references as possible, which included photos of different camera angles, different lighting and weather conditions. This gave me a wide range of details to work with, as well as a general idea of the space relative to human scale.

While the architecture consisted of 3 parts - the rotunda, the hallway and the end wall - they were essentially the same module. Hence I initially modeled a single wall module consisting of a mirror and a window, then duplicated and bent it along a circle to get the walls of the rotunda.

GIF_1

Rhino modeling always begins with curves

GIF 2

wall module duplicated and bent along a curve

The module was reused for both the hallway and the end wall to save time and (rendering) memory.

Having built up a library of architectural profiles and ornaments over the past year, I was able to reuse and recycle profiles and ornaments for the modeling of the architecture.

Ornament modeling can be a daunting task, but once I had a couple of ornaments modeled, I simply duplicated and rearranged them geometrically to create unique shapes.

GIF_3

Rhino ornament placement

The ceiling ornament, for instance, was basically a single ornament covering 1/8 of the dome surface, radially duplicated 8 times to cover the entire ceiling. The same technique also applied to the modeling of the chandelier.
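
As a small scripted illustration of that radial duplication (not the method actually used, which was done interactively), a rhinoscriptsyntax sketch might look like this; the dome centre at the world origin is an assumption for the example.

# Minimal rhinoscriptsyntax sketch: rotate-copy a 1/8 ornament around the dome axis.
import rhinoscriptsyntax as rs

ornament = rs.GetObject("Select the 1/8 ceiling ornament")
if ornament:
    center = [0, 0, 0]          # assumed dome centre
    for i in range(1, 8):       # 7 rotated copies + the original = 8 segments
        rs.RotateObject(ornament, center, 45.0 * i, axis=[0, 0, 1], copy=True)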

All the objects within Rhino were then assigned to different layers by material; this made material assignment a lot easier later on in C4D.

assigning objects to layers by material

Notes:

The best way to get familiar with Rhino navigation is to model small-scale objects. Simply Rhino has a great beginner's series on modeling a teapot in Rhino:

I have posted a few WIP montages on my YouTube channel; while they are not meant to be tutorials, they should give an overview of my modeling process: https://www.youtube.com/c/jyjohnyim

A detailed Rhino tutorial for modeling ornaments:

For anyone in a pinch, there are pre-built ornaments for purchase on 3D model stores like Textures.com; some ornament manufacturers have free models available for download on Sketchfab and 3dsky.

Exporting from Rhino to C4D

After 4 days of architectural modeling, the Rhino model eventually consisted of 50% NURBS and 50% mesh. I used NURBS mostly for the primary architectural elements (walls, cornices, skirtings) and mesh for the ornaments.

Rhino is primarily NURBS (Non-Uniform Rational B-Spline) software; although NURBS models are very accurate in representing curve and surface data, most render engines and DCCs do not support NURBS.

For this reason I exported the NURBS and meshes to .3dm and .FBX respectively, and used Moment of Inspiration (MOI) to convert the NURBS model to a mesh.

MOI has the best NURBS-to-quad-mesh conversion (better than Rhino or any other DCC) - it always gives a clean mesh that can then be easily edited or UV-mapped for rendering.

exporting from MOI

Importing into C4D

Importing the FBX file into C4D was relatively straightforward, but there were a couple of things I paid attention to, notably the import settings, the model orientation and the file unit, listed below in order of operation (a minimal scripted sketch follows the list):

  1. open up a new project in C4D (project unit in cm);
  2. merge FBX;
  3. check "Geometry" and "Material" in the merge panel;
  4. change the imported geometry orientation (P) by -90 degrees in the Y-axis;
  5. use script "AT Group All Materials" to automatically organize Rhino materials into different groups.
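
Steps 2-4 can also be scripted. The sketch below is a hypothetical Cinema4D Python (Script Manager) equivalent, not part of the original workflow: the FBX path is made up, it assumes an otherwise empty scene so the merged FBX root is the first object, and the "AT Group All Materials" script (step 5) is still run separately.

# Minimal Cinema4D Python sketch of steps 2-4 (merge FBX, then rotate P by -90 degrees).
import c4d
from c4d import documents, utils

FBX_PATH = "C:/projects/ballerina/architecture.fbx"  # hypothetical path

def main():
    doc = documents.GetActiveDocument()
    # Merge geometry and materials (mirrors the "Geometry"/"Material" checkboxes)
    documents.MergeDocument(doc, FBX_PATH,
                            c4d.SCENEFILTER_OBJECTS | c4d.SCENEFILTER_MATERIALS)
    obj = doc.GetFirstObject()            # assumes the merged root is first
    if obj is not None:
        rot = obj.GetRelRot()             # rotation vector is (H, P, B)
        rot.y = utils.DegToRad(-90.0)     # set P to -90 degrees
        obj.SetRelRot(rot)
    c4d.EventAdd()

if __name__ == "__main__":
    main()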

importing FBX exported from MOI

GIF 7

importing FBX exported directly from Rhino

I modeled half of the architecture in Rhino and then mirrored it as an instance in C4D, since everything is symmetrical.

GIF 8

C4D instance & mirroring

The floor (Versailles parquet tiles) was modeled using the photo-texturing method most widely touted by CG artist Ian Hubert. I applied a Versailles parquet tile photo as a texture on a plane, then sliced up the plane with the Knife tool to get reflection roughness variations along the tile grouts. This allowed me to add subtle color and dirt variations with Curvature in Redshift.

The floor tile was then placed under a Cloner to be duplicated and spread across the entire floor.
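
For reference, parenting the tile under a Cloner can also be done in a few lines of Cinema4D Python. The sketch below is only an illustration: the object name "Parquet_Tile" is hypothetical, and the clone mode and counts are still set manually in the Attribute Manager afterwards.

# Minimal Cinema4D Python sketch: create a MoGraph Cloner and parent the tile under it.
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()
    tile = doc.SearchObject("Parquet_Tile")   # hypothetical object name
    if tile is None:
        return
    cloner = c4d.BaseObject(1018544)          # MoGraph Cloner object ID
    cloner.SetName("Floor_Cloner")
    doc.InsertObject(cloner)
    tile.Remove()
    tile.InsertUnder(cloner)                  # tile becomes the cloned child
    c4d.EventAdd()

if __name__ == "__main__":
    main()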

GIF_9

Cloning floor tiles

Notes:

C4D and Rhino use different Y and Z orientations, hence FBX directly exported from Rhino has to be rotated in C4D.

Download link for "AT Group all materials" script: http://www.architwister.com/portfolio/c4d-script-group-materials/

Ian Hubert's Youtube Channel has a lot of useful and efficient CG techniques, photo-texturing being one of the most popular:

https://www.youtube.com/c/mrdodobird/videos

Architectural Shading (Cinema4D + Redshift)

Since I had grouped all the meshes by material in advance, assigning materials was as simple as dragging and dropping them onto the material groups as cubic or tri-planar maps.

I used Textures.com, Greyscalegorilla's EMC material pack and Quixel Megascans as base materials for all my shaders.

For ACES to work correctly within Redshift, every texture has to be manually assigned to the correct color space in the RS Texture Node; generally diffuse/albedo maps belong to "sRGB", and the rest (roughness, displacement, normal maps) belong to "Raw".
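
As a small illustration of that rule of thumb (not part of the original workflow), a hypothetical Python helper could sort texture file names into the two color spaces as a checklist before setting each RS Texture Node by hand. The filename keywords below are assumptions about naming conventions.

# Hypothetical helper: diffuse/albedo maps -> "sRGB", everything else -> "Raw".
import os

SRGB_KEYWORDS = ("albedo", "diffuse", "basecolor", "base_color", "col")

def color_space_for(texture_path):
    name = os.path.basename(texture_path).lower()
    return "sRGB" if any(k in name for k in SRGB_KEYWORDS) else "Raw"

if __name__ == "__main__":
    textures = [
        "wall_plaster_albedo.png",
        "wall_plaster_roughness.png",
        "wall_plaster_normal.png",
        "wall_plaster_displacement.exr",
    ]
    for t in textures:
        print("{:<35} -> {}".format(t, color_space_for(t)))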

My architectural shaders were mostly a 50/50 mix of photo texture and "dirt" texture to give an extra hint of realism.

RS Shader Graph of the wall material

2. THE CHARACTER

The base character was created in Character Creator 3 (CC3) with the Ultimate Morphs and SkinGen plugins - both of which were very artist-friendly, with self-explanatory parameters.

Ultimate Morphs provided precise slider controls to every bone and muscle size of the character, while SkinGen gave a wide range of presets for skin color, skin texture detail and makeup.

I also used CC3's Hair Builder to apply a game-ready hair mesh to my character.

GIF 10

CC3 morphing & Hair Builder

Face Texturing

The face was one of the most important parts of the CG character and required extra attention. The best workflow I found for adding photorealistic detail was the "Killer workflow" using Texturing XYZ's VFace model and ZWrap.

VFACE is a collection of state-of-the-art photogrammetry human head models produced by Texturing XYZ; every VFACE comes with 16K photoscanned skin textures, displacement and utility maps. ZWrap is a ZBrush plugin that automatically fits a pre-existing topology to a custom model.

The "Killer workflow" essentially matches the VFACE mesh shape to the CC3 head model; using the Killer workflow, I was able to bake all the VFACE details down to the CC3 head model once the 2 mesh shapes are matched up.

My adaptation of the "Killer workflow" can be broken down as follows:

  1. export T-posed character from CC3 to C4D;
  2. delete all polygons except the head of the CC3 character;
  3. export both CC3 head model and VFACE model to ZBrush;
  4. use the Move/Smooth brushes to maneuver the VFACE model to fit as closely as possible to the CC3 head model;
  5. launch ZWRAP, click and match as many points as possible, notably around the nose, eyes, mouth and ears;
  6. let ZWRAP process the matched up points;
  7. ZWrap should then output a VFACE model that matches the CC3 head model perfectly;
  8. feed both models into XNormal and bake the VFACE textures to the CC3 head model.

GIF_11

matching points of VFACE (left) & CC3 HEADS (right) in ZWRAP

Notes:

Full "Killer Workflow" Tutorial on Textureing.XYZ's official Youtube channel:

I recommend saving the matching points in ZWrap before processing.

I also recommend baking all the VFACE maps individually in XNormal as they are very high-res and could crash XNormal when baked in batch.

Skin Shading (Cinema4D + Redshift)

Once I had the XYZ texture maps ready, I then exported the rest of the character texture maps from CC3.

After that, I imported the character into C4D, and converted all the materials to Redshift materials.

At the time of writing, Redshift unfortunately did not yet support random walk SSS (a very realistic and physically accurate subsurface scattering model found in other renderers like Arnold), so rendering skin required a lot more tweaking.

The 3 levels of subsurface scattering were driven by a single diffuse material with different "Color Correct" settings.

SCREENSHOT 3

RS Shader Graph of "Leg" material

The head shader was a mix of both the CC3 textures and VFACE textures; the VFACE multichannel displacement was blended with the "microskin" CC3 displacements map.

SCREENSHOT 4

RS Shader Graph of "Head" material

GIF 12

Character look-dev

SCREENSHOT 5

close-up render of the character

A “Redshift Object” was applied to the character to enable displacement - only then would the VFACE displacements show up in render.

Note:

Skin shading is one of the most advanced aspects of rendering. Linked below is one of the most helpful tutorials for Redshift skin shading:

Hair Shading

Having experimented with grooming using C4D Ornatrix, Maya Xgen and Houdini, I decided that using the baked hair mesh from CC3 for project "Ballerina" was leaps and bounds more efficient down the line.

I used a Redshift "glass" material with the CC3 hair texture maps fed into the "reflection" and "refraction" color slots, as hair (in real life) reacts to light like tiny glass tubes.

Note:

For anyone interested in taking the CC3 hair to the next level of realism, CGcircuit has a great vellum tutorial dedicated to hair generation and simulation.

GIF_13

early test of CC3 mesh hair to hair geometry conversion in Houdini

3. THE ANIMATION

Character Animation (iClone)

I then exported the CC3 Character to iClone for animation.

I considered a number of ways to approach realistic character animation; these included:

  1. using off-the-shelf mocap data (Mixamo, Reallusion Actorcore);
  2. commissioning a mocap studio to do bespoke mocap animation;
  3. using a mocap suit (e.g. Rokoko or Xsens) for custom mocap animation;
  4. old-school keyframing.

Having experimented with various off-the-shelf mocap data, I found Mixamo mocap data way too generic - most of it looked very robotic; Reallusion ActorCore had some very realistic motions, but I could not find exactly what I needed for the project.

With no budget and (my) very specific character motion requirements, options 2 and 3 were out of the picture. This led me to old-school keyframing.

First I screen-captured videos of ballet performances and laid them out frame by frame in PureRef. I then overlaid the PureRef reference (in half opacity) over iClone, and adjusted every character joint to match my reference using “Edit Motion Layer”.

GIF_14

Pose 1

Pose 2

GIF 16

final character animation

The animated characters were then exported to Alembic files.

NOTE:

While my final project concept depicted ballerinas in slow motion, my original idea was actually to keyframe a 20-second ballet dance, which I very quickly realized was a bad idea for a number of reasons:

  1. in slow motion a lot of frames could be interpolated, whereas real-time motion involved a lot of unique frames and hence required a lot more tweaking (see the sketch after this list);
  2. subsequently more unique frames meant more rendering problems (flickering, tessellation issues etc.).
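
To illustrate point 1 with a toy example: with only two keyed poses, the in-between frames of a slow-motion shot can simply be interpolated. The joint names and angles below are made-up numbers, not values from the project.

# Toy illustration: interpolating 160 in-between frames from two keyed poses.
def lerp(a, b, t):
    return a + (b - a) * t

pose_a = {"knee": 10.0, "hip": 45.0}    # degrees, hypothetical
pose_b = {"knee": 95.0, "hip": 120.0}
frames = 160

for frame in range(frames + 1):
    t = frame / frames
    knee = lerp(pose_a["knee"], pose_b["knee"], t)
    hip = lerp(pose_a["hip"], pose_b["hip"], t)
    if frame % 40 == 0:                 # print a few sample frames
        print("frame {:3d}: knee {:6.1f}, hip {:6.1f}".format(frame, knee, hip))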

GIF 17

early test render of my original idea

Since this was my first character animation project, I decided to do a slow-motion style sequence instead - 2 unique poses with 160 frames of motion each.

Garment Simulation

Cloth simulation was by far the most challenging part of the project.

The two major cloth simulation/solvers that I considered were Marvelous Designer (MD) and Houdini Vellum.

While Houdini Vellum was much more versatile and reliable than Marvelous Designer, I personally found it way too slow and therefore impractical without a farm (one frame of cloth simulation could take up to 3 minutes in Houdini Vellum vs. 30 seconds in Marvelous Designer on a Threadripper PRO 3955WX with 128 GB of RAM).

Cloth simulation in MD, while generally a lot quicker to set up than Houdini Vellum, was not as straightforward as I had imagined.

Simulated garments in MD always came with some form of glitch; these included cloth jittering, piercing through the character, or complete dislocations. Below are some of the settings I tweaked to minimize glitches:

  1. using "Tack" to attach parts of the garment to the character;
  2. increasing cloth "Density" and "Air Damping" to prevent the garment from moving too fast and subsequently out of place;
  3. simulating parts of the garment in isolation - though not physically accurate, this allowed me to iterate and debug a lot quicker.

I also reduced "Gravity" in addition to the above tweaks to achieve a slow-motion look.

SCREENSHOT 7

MD Simulation Settings

GIF 18

MD simulation

Note:

Due to the license agreement of a sewing pattern I used, I am not able to share screenshots of my garment creation process. However the official Marvelous Designer Youtube channel has a lot of garment modeling livestreams which I find to be the most helpful resource for learning MD:

Alternatively, there are a lot of ready-made 3D garments available online (notably on Marvelous Designer's official site and the ArtStation Marketplace), which I have used as a basis for a lot of my projects.

MD is extremely prone to crashing, and there is a bug in both MD10 and MD11 that prevents saving of simulated garments 90% of the time, so always export the simulated garment as Alembic files rather than relying on MD to save the simulation.

Simulation Clean-up

After dozens of simulations, I imported the MD-exported Alembic files into Houdini, where I did a lot of manual clean-up (see the sketch after this list for a minimal node setup), including:

  1. manually fixing cloth-character collisions with "Soft Transform";
  2. reducing simulation glitches with "Attribute Blur";
  3. blending together preferable simulations from different alembic files with "Time Blend".
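
A minimal Houdini Python sketch of that clean-up chain (Alembic import into Attribute Blur into Soft Transform) is shown below. It is only a starting point, not the project setup: the file path is hypothetical, and the blur and transform parameters are then dialled in interactively on each problem frame.

# Minimal Houdini Python sketch: build the clean-up SOP chain for an MD Alembic export.
import hou

geo = hou.node("/obj").createNode("geo", "cloth_cleanup")

abc = geo.createNode("alembic", "md_garment")
abc.parm("fileName").set("$HIP/sim/garment_v03.abc")  # hypothetical path

blur = geo.createNode("attribblur", "reduce_glitches")  # Attribute Blur SOP
blur.setInput(0, abc)

soft = geo.createNode("softxform", "fix_collisions")    # Soft Transform SOP
soft.setInput(0, blur)

soft.setDisplayFlag(True)
soft.setRenderFlag(True)
geo.layoutChildren()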

GIF 19

cleaning up simulated cloth in Houdini with "Soft Transform"

There are two tutorials that explain the Houdini cloth cleanup process in great detail, which I watched on a loop while working on the project:

Cloth Production in Houdini: https://www.cgcircuit.com/tutorial/houdini-cloth-in-production

Houdini Vellum Creature Setup: https://www.cgcircuit.com/tutorial/houdini-vellum-creature-setup

The cleaned-up cloth simulation was then exported as Alembic to C4D.

Alternative to Garment Simulation

For anyone frustrated by the impractical Houdini Vellum cloth simulation times and MD glitches, an alternative would be to literally attach the garment to the character's skin in CC3 - a technique most commonly found in game production.

attaching garment to character in CC3

While this is a great time-saving alternative, garments created in CC3 lack realistic cloth movement and wrinkles; I recommend using this method only for objects tightly attached to the character (shoes), or as a last resort for garments if MD cloth simulation keeps failing.

Note:

Linked below is Reallusion's official guide for creating game-ready garments: https://manual.reallusion.com/Character_Creator_3/ENU/3/Content/Character_Creator_3/3/08_Cloth/Creating_Custom_Clothes_OBJ.htm

Garment Baking and Shading

Once I was done with cloth simulation in MD and clean-up in Houdini, I imported the Alembic file into C4D.

MD Alembic files always show up in C4D as a single Alembic object without any selection sets, which makes material assignment impossible.

This is where C4D baking came into play - a process I used for converting the Alembic file into a C4D object with PLA (Point Level Animation):

  1. drag the alembic object into C4D timeline;
  2. go to "Functions";
  3. "Bake Objects";
  4. check "PLA";
  5. then bake.

By going through the steps above, I was able to get a baked-down C4D object on which I could easily select polygons and assign multiple materials using selection sets.

I then exported an OBJ file from MD with materials, imported it into C4D and dragged its selection sets directly onto the baked-down garment object. This eliminated the need to manually reassign materials in C4D.

I used a blend of linen texture maps (from Quixel Megascans Bridge) and the Redshift Car Shader to emulate the sequin fabric (think "blink") found in a lot of professional ballet tutu dresses.

SCREENSHOT 8

close-up render of the fabric material

Note: YouTuber Travis Davis has a tutorial demonstrating the exact procedure:

WARNING: do not use AO or Curvature nodes for the simulated garment materials (or any animated object), as they could potentially produce glitches in final renders.

4. RENDERING

Lighting & Environment

Although I tried to keep my lighting as minimal as possible, project "Ballerina" inevitably required a lot of tinkering due to the nighttime setting.

The nighttime HDRI did not provide sufficient ambient light for the interior space, and the chandelier bulbs were way too dim to be the primary light source. Ultimately I placed an invisible spot light under the center chandelier and used a fake light that affected only the architectural ornaments. The fake light provided an extra level of bounce light that gave just the right amount of illumination without ruining the moody atmosphere.

I also added a "Redshift Environment", controlled along the Z axis and multiplied with "Maxon Noise", to give more depth to the scene.

Exterior-wise, I scattered 2 variations of dogwood trees with the C4D "Matrix" object in the surrounding area. They were lit from the ground up to give the scene extra depth.

In summary, the lighting of the scene included:

  1. Dome light (nighttime HDRI) x 1
  2. chandelier (mesh lights) x 3
  3. Spot Light (center) x 1
  4. exterior Area Lights x 4
  5. fake Area Light positioned under chandelier (includes architectural ornaments only)

SCREENSHOT 9

RS lights

Notes:

Redshift has a very good tutorial on Youtube on controlling the Redshift Environment:

The trees were generated with SpeedTree.

Lighting takes a lot of consistent practice to master; apart from my daily CG practice, I spent a lot of time watching B-rolls and breakdowns of movies - for instance, I took a lot of inspiration from Roger Deakins's lighting and cinematography, as well as Wes Anderson's frame composition and color combinations.

Camera Movements

All my camera movements were very subtle. These included dolly, camera-roll and panning shots, all driven with Greyscalegorilla's C4D plugin Signal.

I personally prefer using Signal for its non-destructive nature, but old-school key-framing would work just fine for similar camera movements.
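
For anyone going the keyframing route, a minimal Cinema4D Python sketch of a subtle dolly move is shown below. It is only an illustration of the idea: the camera name, frame range and distances are hypothetical, not values from the project.

# Minimal Cinema4D Python sketch: keyframe a 20 cm dolly on the camera's Z position.
import c4d

def add_position_key(obj, frame, fps, axis_id, value):
    # Build the DescID for one component of the relative position vector
    desc = c4d.DescID(
        c4d.DescLevel(c4d.ID_BASEOBJECT_REL_POSITION, c4d.DTYPE_VECTOR, 0),
        c4d.DescLevel(axis_id, c4d.DTYPE_REAL, 0),
    )
    track = obj.FindCTrack(desc)
    if track is None:
        track = c4d.CTrack(obj, desc)
        obj.InsertTrackSorted(track)
    curve = track.GetCurve()
    key = curve.AddKey(c4d.BaseTime(frame, fps))["key"]
    key.SetValue(curve, value)

def main():
    doc = c4d.documents.GetActiveDocument()
    cam = doc.SearchObject("Camera")       # hypothetical camera name
    if cam is None:
        return
    fps = doc.GetFps()
    add_position_key(cam, 0, fps, c4d.VECTOR_Z, 0.0)     # start position
    add_position_key(cam, 160, fps, c4d.VECTOR_Z, 20.0)  # 20 cm dolly over 160 frames
    c4d.EventAdd()

if __name__ == "__main__":
    main()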

SCREENSHOT 10

Signal Graph

Draft Renders

Once I had the character animations, cloth simulations and camera movements ready, I began doing low-res test renders to make sure I would not get any surprises during the final renders; this included:

  1. flipbook (OpenGL) renders to ensure the timing of the animations was optimal;
  2. low-res low-sample full sequence renders to ensure there were no glitches;
  3. full-res (2K) high-sample still renders with AOVs (diffuse, reflection, refraction, volume) to check what, if anything, contributed to prevalent noise;
  4. submitting test renders to Fox Renderfarm to ensure the final renders would match up with my local renders.

This process lasted over 2 months, with iteration after iteration of renders and corrections.

GIF_21

close-up shot I

GIF_22

close-up shot II

GIF_23

final shot

Final Renders & Denoising

I used a relatively high-sample render setting for the final renders, as interior scenes in Redshift were generally prone to noise.

SCREENSHOT 11

SCREENSHOT 12

SCREENSHOT 13

SCREENSHOT 14

RS final render settings

I also had motion blur and bokeh turned on for the final renders - in general, motion blur and bokeh look better (more physically accurate) rendered in-engine than added in compositing.

Half of the final 2K sequence was rendered on a local workstation, while the rest was rendered on Fox Renderfarm, totalling about 6840 hours of render time on dual RTX 3090 machines.

I used Neat Video to denoise the final shot, whereas the close-up shots were denoised using Altus Single (in Redshift).

Note:

Always turn “Random Noise Pattern” off under Redshift “Unified Sampling” when using “Altus Single” for denoising.

Redshift Rendering GI Trick

Redshift's GI Irradiance Cache calculation can be quite costly; my final renders, for instance, averaged 5 minutes of GI Irradiance Caching time per frame.

In V-Ray there is an option in the IR/LC settings named "Use camera path", designed specifically for scenes where the camera moves through a still scene. Once "Use camera path" is enabled, V-Ray calculates only one frame of GI cache for the entire sequence.

There is a Redshift Forum post written by Andrian that explains how he was able to replicate the same function in Redshift.

Borrowing a page from V-Ray, I used the following motion blur settings to calculate the first frame of the Irradiance Cache:

RS rendering GI trick motion blur setting

That single Irradiance Cache was then used to render the entire sequence. Two shots of the project were rendered using a single GI cache, resulting in a roughly 10% faster render time overall.

NOTE:

The GI trick only applies to shots with very little motion; when I applied it to the 2 close-up shots of project "Ballerina", for example, I got light patches and ghosting on the character's skin.

Conclusion

Having spent months working on the project, I have gained an appreciation for traditional character animators - I never realized the amount of effort involved in crafting character animations, and the subtlety of detail required to bring convincing CG characters to life.

Though I would not consider myself a character artist, I personally think character animation is really powerful in making CG environments relatable, and it will therefore remain an essential part of my personal CG pursuits moving forward.
