The Making of “Kagura”, A Photorealistic CG Animation Created by Kay John Yim
This June, John brought us a tutorial on the making of "Ballerina". This time, he shares another in-depth article on how he made his new personal project "Kagura", which was also rendered with Fox Renderfarm, a powerful 3D animation render farm.
Kay John Yim
Chartered Architect & CGI Artist
Kay John Yim's personal site: https://johnyim.com/
Project "Kagura" is a one-minute full CG animation, my second personal project to feature an animated photorealistic CG character.
The project concept is a fantasized version of Kagura (神楽) - a type of Shinto ritual ceremonial dance in Japan. Traditionally, the dancer herself turns into a god during the performance - here depicted as the dancer's ballerina tutu dress transforming into a hakama as she dances on the floating stage, purifying spirits of nature.
© Kay John Yim
The article focuses primarily on shots 3 & 4 of project "Kagura", where I detail my design and technical process across the 4 main aspects below:
- The Architecture
- The Animation
- The Transformation
- The Rendering
The software I used includes:
- Moment of Inspiration (MOI)
- Cinema 4D (C4D)
- Redshift (RS)
- Character Creator (CC)
- Marvelous Designer 11 (MD)
© Kay John Yim
© Kay John Yim
© Kay John Yim
1. THE ARCHITECTURE
The architecture is loosely based on Ookawaso Hotel's lobby in Fukushima Prefecture, Japan.
PureRef board for the project
It was probably one of the most challenging interior spaces I have ever modeled, due to the following:
a. Most photographs available online focus on the floating stage and thus were quite limited in showing the actual space;
b. With no access to architectural drawings, I had to eyeball all the measurements from photographs;
c. The space does not conform to a single orthogonal grid, for instance, the stairs and the 1F walkway do not align with the columns.
I first gauged the size of the space by the balustrade height - as a rule of thumb, balustrades are usually 1.1 meters tall (varying slightly between exterior and interior spaces and by country building regulations).
By estimation, the distance between columns is about 7.7 meters.
estimating measurements from photo
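As a rough sketch of how this estimation works - note that the pixel values below are hypothetical placeholders, not actual measurements from my reference photos - a known element like the balustrade gives you a meters-per-pixel scale factor:

```python
# Estimate real-world distances from a reference photo using a known
# element (the ~1.1 m balustrade) as a scale reference.
# All pixel values below are hypothetical placeholders.

BALUSTRADE_HEIGHT_M = 1.1  # rule-of-thumb balustrade height

def estimate_length(ref_px: float, target_px: float,
                    ref_length_m: float = BALUSTRADE_HEIGHT_M) -> float:
    """Scale a pixel measurement to meters via a known reference length."""
    meters_per_px = ref_length_m / ref_px
    return target_px * meters_per_px

# e.g. the balustrade measures 55 px in the photo, the column spacing 385 px
spacing = estimate_length(ref_px=55, target_px=385)
print(f"Estimated column spacing: {spacing:.1f} m")  # ~7.7 m
```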
Looking at the orientation of the floating stage and the columns, I assumed that the space was designed with 2 sets of grids - a construction grid that aligned with the columns (which structurally holds up the space) and a secondary grid diagonal to the construction grid (which serves only as a design grid).
I drew up the construction grid uniformly (7.7 x 7.7 meters), and placed columns accordingly. Then I drew diagonal lines on top of the construction grid to get the secondary grid. This gave me a starting point for the floating stage as well as the 1F walkway.
drawing up the grids, stairs and 1F walkway
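The two-grid setup can be sketched numerically - a hypothetical snippet to illustrate the idea, not part of my actual workflow (which was done by drawing in Rhino):

```python
import math

SPACING = 7.7  # construction-grid spacing in meters

def construction_grid(nx, ny, spacing=SPACING):
    """Column positions on the orthogonal construction grid."""
    return [(i * spacing, j * spacing) for i in range(nx) for j in range(ny)]

def secondary_axis(x, y, angle_deg=45.0):
    """Rotate a construction-grid direction into the diagonal
    secondary (design) grid, which runs along the diagonals of
    the construction grid."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

columns = construction_grid(4, 4)           # 16 column positions
diag = secondary_axis(1.0, 0.0)             # unit direction of the design grid
```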
A large portion of the architectural elements then instantly fell into place according to the grids I drew up.
Having said that, the modeling process was not exactly straightforward. Given the lack of references (especially for the corner details), I spent most of the time re-designing and tweaking wall panel sizes and wall stud positions to get to proportions that were aesthetically pleasing.
most elements fell into place according to the grids
modeling by grid
I then exported the Rhino model as .3dm, opened it in MOI and exported it again as FBX. Doing so gave me clean quad meshes that I could easily edit and UV-map in C4D.
While the majority of the space took less than a week to model, I spent an additional month solely on fine-tuning the details, tweaking the lighting, and framing a composition that I was satisfied with.
the final composition
2. THE ANIMATION
1) Character Animation
The character animation was based on a mocap animation, which can be found on the Reallusion Marketplace.
I kept my animation workflow as simple as possible - in fact, I used exclusively "Set Speed" and "Edit Motion Layer" in iClone to get to the final character animation.
First, I imported my CC character into iClone, applied the mocap animation onto the character (by drag & drop), and altered the speed with "Set Speed" to create a slow-motion look.
slowing down mocap animation in iClone with "Set Speed"
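Conceptually, "Set Speed" retimes the clip by stretching keyframe times - a minimal sketch of the idea (not iClone's actual implementation):

```python
def set_speed(keyframes, speed):
    """Retime (frame, value) keyframe pairs by a speed factor.
    speed=0.5 plays the clip at half speed, i.e. twice as long."""
    if speed <= 0:
        raise ValueError("speed must be positive")
    return [(frame / speed, value) for frame, value in keyframes]

# a hypothetical 24-frame clip, slowed to 50% for the slow-motion look
clip = [(0, 0.0), (12, 1.0), (24, 0.0)]
slowmo = set_speed(clip, 0.5)
# -> [(0.0, 0.0), (24.0, 1.0), (48.0, 0.0)]
```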
Please see my previous article for CG Character creation: Ballerina: A CGI Fantasy Created by Kay John Yim
Altering the speed, however, exaggerated a lot of movements that looked distracting, so I played the character animation on loop and deleted the keyframes I found unnecessary.
I then used "Edit Motion Layer" to lift up the arms and modify the finger positions.
edit motion layer
2) Garment prep
Once I got a decent character animation, I moved on to MD and CC to prepare the garments for animation/simulation.
Cloth simulation in Marvelous Designer is extremely finicky - multiple layers of clothing too close together cause a lot of jittering, which can take countless simulation passes to resolve. For that reason, I split the two sets of Marvelous Designer garments (ballet tutu & hakama) into 2 categories - Skintight vs Loose garments.
The Skintight garments would be animated in CC & iClone, a technique most commonly used in game production. This technique excels in speed but falls short in simulating loose garment details compared to MD. The Skintight garments in this project included:
a. ballet tutu leotard
b. hakama inner layer
The remaining Loose garments would be simulated in MD:
a. ballet tutu skirt
b. hakama excluding inner layer
Garment creation is beyond the scope of this article; I highly recommend Marvelous Designer's official YouTube channel for anyone getting started in MD:
3) Skintight Garment Animation
My preparations for the garments in CC were as follows:
a. export the garment from MD to FBX in T-pose
b. import the FBX into CC via "Create Accessories"
c. assign "Skin Weight"
d. export to iClone
The Skintight garments would then be automatically applied to the animated character in iClone.
ballet tutu leotard animation in iClone
4) Loose Garment Simulation
In general, MD simulates garments more reliably on CPU (over GPU) when there are multiple layers of clothing. Having separated the tutu leotard from the tutu skirt in this particular case, however, I found that GPU simulation actually gave cleaner and faster results than CPU alone.
ballet tutu skirt simulation
For the hakama, I wanted to create a calm but otherworldly aesthetic, so I reduced the "gravity" under "simulation settings" to 0 and upped the "air damping" to 5. This resulted in constantly floating sleeves and a clear silhouette throughout the animation.
With all the garments animated and simulated, I exported all of them as separate Alembic files.
The Character was exported as an animated FBX from iClone.
5) Post-simulation clean-up in Houdini
Garments simulated in MD can sometimes come out with too much detail or with messy polygon connectivity. The former I personally found distracting, and the latter would cause problems down the line in C4D when used in combination with "Cloth Surface".
I imported the Alembic files into Houdini and used "Attribute Blur" to smooth out the garment, eliminating extra wrinkles.
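Under the hood, this kind of smoothing amounts to nudging each point toward the average of its connected neighbors - a simplified sketch of the idea (not Houdini's actual implementation):

```python
def blur_positions(points, neighbors, iterations=1, strength=0.5):
    """Laplacian-style smoothing, conceptually similar to what a
    point-position blur does: each point moves toward the average
    of its neighbors. `neighbors[i]` lists indices connected to i."""
    pts = [list(p) for p in points]
    for _ in range(iterations):
        new_pts = []
        for i, p in enumerate(pts):
            nbrs = neighbors[i]
            if not nbrs:
                new_pts.append(p[:])
                continue
            avg = [sum(pts[n][k] for n in nbrs) / len(nbrs)
                   for k in range(len(p))]
            new_pts.append([p[k] + strength * (avg[k] - p[k])
                            for k in range(len(p))])
        pts = new_pts
    return pts

# a tiny 2D "wrinkle" of 3 points: the middle spike gets flattened
smoothed = blur_positions([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]],
                          [[1], [0, 2], [1]])
```

Raising the iteration count (or strength) removes more wrinkles but also more legitimate folds, which is why I only blurred enough to eliminate the distracting extra detail.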
3. THE TRANSFORMATION
1) Setting up the Camera
Having imported the character FBX and all the Alembic files into C4D, I then moved on to setting up my camera based on the character animation. This prevented me from spending extra time on details that would not be visible in the final shot.
I used "PSR" under "Constraint" to bind the camera's height to the character's "neck" position - doing so stabilized the camera and avoided distracting movements.
2) Tutu Dress to Hakama
The transformation of the tutu dress into hakama was driven by a combination of "PolyFx" and animated Fields within C4D.
working with "PolyFX"
C4D's "PolyFx" breaks down objects by their polygons - any Mograph effectors assigned thereafter will then affect the object on a per-polygon basis rather than affecting the object itself as a whole.
I assigned a "PolyFx", a "Random Effector", a "Plain Effector" and a "Spherical Field" to each of the following parts:
- tutu leotard
- tutu skirt
- hakama sleeve
- hakama top (outer layer)
- hakama top (inner layer)
- hakama bottom
Each of the "Spherical Field"s was then bound to the character's skeleton "pelvis", specific steps detailed as follows:
a. create a "PolyFx";
b. group the "PolyFx" with the garment;
c. assign a "Plain Effector" to the "PolyFx", set scale parameter to -1;
d. assign a "Spherical Field" to the "Plain Effector";
e. assign a "Random Effector" to the "PolyFx", drag the same "Spherical Field" created above under the "Random Effector", and set all rotation parameters to 360;
f. create a Null;
g. assign a "Constraint" to the Null;
h. under Constraint check "PSR";
i. search for "pelvis" in the Object Manager and drag it under the "PSR";
j. place a "Spherical Field" under the Null;
k. now the "Spherical Field" will move along with the character.
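The "Plain Effector" + "Spherical Field" combination effectively scales each polygon by the field's falloff weight - a simplified numerical sketch of that behavior (hypothetical values, not C4D API code):

```python
import math

def field_weight(poly_center, field_center, radius, falloff=0.5):
    """Spherical field weight: 1 inside the inner radius,
    fading linearly to 0 at the outer radius."""
    d = math.dist(poly_center, field_center)
    inner = radius * (1.0 - falloff)
    if d <= inner:
        return 1.0
    if d >= radius:
        return 0.0
    return 1.0 - (d - inner) / (radius - inner)

def polygon_scale(weight, effector_scale=-1.0):
    """A Plain Effector with scale -1 shrinks a fully affected
    polygon to zero size: final scale = 1 + weight * (-1)."""
    return 1.0 + weight * effector_scale

# a polygon at the field center vanishes; one outside is untouched
center_scale = polygon_scale(field_weight((0, 0, 0), (0, 0, 0), 2.0))  # 0.0
outside_scale = polygon_scale(field_weight((5, 0, 0), (0, 0, 0), 2.0))  # 1.0
```

Animating the field's radius then sweeps this zero-scale region across the garment, which is what makes the polygons appear to dissolve away.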
With the "Spherical Fields" bound to the character, I animated their sizes and tweaked the timing for the different garment parts so that they gradually scaled down/up by their polygon divisions.
a breakdown of the animated ballet tutu dress under object manager
When in doubt, press "SHIFT-C" and type in the MoGraph object or function you are looking for - I use "SHIFT-C" all the time in C4D.
garment animation driven by "PolyFx"
3) Tutu Skirt to Butterfly
In addition to the garment transformation driven by "PolyFX", I added an extra layer of animation with a "Cloner" of animated butterflies - this created the illusion of the tutu skirt disintegrating into a swarm of butterflies and flying away.
I used an animated butterfly created by Travis David (download linked below), cloned it onto the simulated tutu skirt, and drove its scale with a "Plain Effector" to make the butterflies appear and disappear in sync with the "PolyFx" animation.
garment transformation with butterfly "Cloner"
For the final rendering, I added "Cloth Surface" and "Subdivision" to each garment part to break the polygons into even smaller pieces - this created the illusion of the tutu dress disintegrating and subsequently reintegrating into the hakama.
Technically speaking, it was a relatively simple animation - the most challenging parts were the timing and developing an aesthetic that flowed naturally with the character movement. The 10 seconds of transformation alone took me more than 2 months to get to the final version; I was constantly adjusting the Spherical Fields' animation through the plugin "Signal", rendering the viewport sequence, then tweaking and re-rendering over and over again. "Cloth Surface" and "Subdivision" are computationally expensive - each viewport frame took at least 2 minutes to process, totaling about 10 hours per viewport sequence render.
final shot 3 breakdown
4. THE RENDERING
1) Texturing
I kept my texturing workflow fairly simple - apart from the characters, I used Megascans materials and foliage in the final renders.
2) Redshift Limitations & Workarounds
Though Redshift is my favorite offline renderer for its unmatched rendering speed, there were a few limitations regarding Motion Blur and Cloner/Matrix that I had to work around in preparation for the final rendering.
"Motion Blur" - "Deformation Blur" to be specific - contributes to the realism of CG animation. However, Redshift has a known limitation of automatically disabling "Deformation Blur" on "PolyFX" objects, which causes glitches in the final render (objects looking as if they pass through each other) when "Deformation Blur" is turned on globally. So while keeping global "Deformation Blur" on, I added a Redshift Object tag to every character and garment object and unchecked "Deformation Blur" on those RS Object tags.
On the other hand, while "Cloner" and "Matrix" both serve the same purpose of cloning objects, they differ in viewport feedback and rendering speed.
Using "Cloner" has the advantage of what-you-see-is-what-you-get viewport feedback, as opposed to "Matrix", where you have to render out the frame to see the final result.
Rendering-wise, "Matrix" is rendered by Redshift much more efficiently than "Cloner" - taking Shot 4 for instance, the final render took 3 hours per frame using exclusively "Cloner", as opposed to 2.5 hours using exclusively "Matrix".
Hence, I used "Cloner" while working on the shot composition and used "Swap Cloner/Matrix" to replace all "Cloner" with "Matrix" for the final render.
"Cloner" viewport feedback
"Matrix" viewport feedback
3) Redshift Environment
I used Redshift Environment to give all the shots an atmospheric and mysterious look; it also helped to convey the depth of the scene, especially in a busy composition like Shot 4.
The Redshift Environment's Volume Material was driven in height by two "Nulls"; a fake "Spot Light" directly above the dancing character and two "Area Lights" from below the stage also contributed to the Redshift Environment.
Please see Redshift’s official tutorial on driving Redshift Environment with "Nulls":
4) Redshift Proxies
Having finalized the look of the shots, I exported as many objects as possible into Redshift Proxies for rendering efficiency. I used "Redshift Proxy Exporter" to batch export objects - this saved me a lot of time, especially when exporting foliage.
Redshift Proxy Exporter link:
With everything replaced by Redshift Proxies, my final render time per frame dropped from 2.5 hours to 2 hours.
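The two optimizations compound over a whole shot - a back-of-envelope calculation, assuming a hypothetical 240-frame shot (the frame count is my assumption, not a figure from the article):

```python
# Back-of-envelope render-time savings from the two optimizations:
# Cloner -> Matrix, then Redshift Proxies on top.
FRAMES = 240  # hypothetical shot length

hours_per_frame = {
    "Cloner only": 3.0,
    "Matrix swap": 2.5,
    "Matrix + Proxies": 2.0,
}

baseline = hours_per_frame["Cloner only"] * FRAMES
for setup, hpf in hours_per_frame.items():
    total = hpf * FRAMES
    print(f"{setup}: {total:.0f} h total, saves {baseline - total:.0f} h")
```

At those per-frame times, each half-hour shaved off a frame saves days of render-farm time across a shot.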
"Kagura" is by far the most challenging personal project I have ever done, since I had little to no experience in motion graphics or character animation half a year ago. I learned along the way as I worked on "Kagura" and "Ballerina", all through trial and error, rendering iteration after iteration throughout the past 6 months.
With Reallusion and Fox Renderfarm's support, I eventually brought "Kagura" to life, and this has been the most rewarding project since I began my CGI journey.
For any self-taught CG artists out there like myself, who are constantly struggling to improve their quality and skill set, I would like to share a quote by American novelist Anne Lamott - it originally refers to writing, but it deeply resonates with me as an artist:
Creating art is like driving a car at night. "You can see only as far as your headlights, but you can make the whole trip that way." You don't have to see where you're going, you don't have to see your destination or everything you will pass along the way. You just have to see two or three feet ahead of you.