The making of "Metamorphosis" was written by Kay John Yim, an architect by day and CGI artist by night. Despite a busy architectural practice, he pursues his passion for CG through self-study in his free time, which has led to the creation of many wonderful CG artworks.
Kay John Yim
Chartered Architect & CGI Artist
Kay John Yim's personal site: https://johnyim.com/
ArtStation: https://www.artstation.com/johnyim
As the leading cloud rendering service provider and render farm in the CG industry, Fox Renderfarm is committed to supporting the development of CG artists. Now let's take a look at how John created this masterpiece "Metamorphosis".
OPENING
Project "Metamorphosis" came into being while I was exploring more efficient ways to model existing architecture as well as faster workflows to create heavy motion graphics.
It is a 40-second slow-motion CGI animation that depicts a solitary figure in the grandiose setting of a Neo-Gothic rotunda; as she waves her arms and twists her legs, golden branching patterns gradually reveal themselves on her dress and on the ceiling of the rotunda, creating a harmonious spectacle of light and motion.
At its core, the golden patterns that gradually appear on the dress and the ceiling symbolize the interconnectedness and interdependence of all things. "Metamorphosis" was inspired by my visit to the "Tribune" at Strawberry Hill House - a Gothic Revival villa in London begun almost three centuries ago - and it reflects my appreciation for Neo-Gothic architecture, as well as the passage of time and the fleeting nature of existence.
This article is a making-of journal that details everything I discovered and learned since "Ballerina" & "Kagura". While this is not meant to be a beginner-friendly tutorial, I will provide links to external tutorials or resources wherever possible.
WORKFLOW OVERVIEW
The software and workflows that I used throughout every stage of the project can be summarized as follows:
1. MODELING
Polycam > Rhino > Moment of Inspiration (MOI) > Cinema4D (C4D) > Redshift
2. CHARACTER
Character Creator 4.1 (CC) > iClone 8.1 (iC) > Houdini (for retime) > C4D > Redshift
3. CLOTH SIMULATION
Marvelous Designer (MD) > Houdini / C4D
4. CLOAK GROWTH ANIMATION
C4D > Redshift
5. DRESS GROWTH ANIMATION
Houdini > C4D > Redshift
6. CEILING ANIMATION
Rhino > MOI > Houdini > C4D > Redshift
7. RENDERING
Redshift > Neat Video (for denoising) > Premiere Pro (Magic Bullet Looks)
1. MODELING
The scene was modeled on the interior of Strawberry Hill House, a Gothic Revival villa in Twickenham, London, built in the 18th century. I was inspired by the "Tribune" rotunda, with its elaborate decorations. Since references for modeling the space were limited, I visited the house in person and used Polycam's Lidar mode to photoscan the rotunda. The scanned model served as a reference for my modeling in Houdini and Rhino.
Processing photoscan in Polycam
Once I exported the .obj model from Polycam, I opened it up in Houdini and used the "Clip" node to extract the parts I wanted to focus on, one at a time, which were then brought into Rhino.
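For anyone who prefers to script this step, below is a minimal Python sketch of the clip setup in Houdini. The file path is a hypothetical stand-in for the Polycam export; rotating the clip direction then isolates each wedge of the rotunda in turn:

import hou

# Build a small SOP network that loads the Polycam scan and clips it.
geo = hou.node("/obj").createNode("geo", "rotunda_scan")

scan = geo.createNode("file")
scan.parm("file").set("$HIP/scans/rotunda.obj")  # hypothetical path to the Polycam export

# Clip SOP: keep the geometry on one side of a plane through the origin.
clip = geo.createNode("clip")
clip.setInput(0, scan)
clip.parmTuple("dir").set((1, 0, 0))  # plane normal
clip.parm("dist").set(0)              # plane offset along the normal

# Two clips at 45 degrees to each other isolate a 1/8 wedge;
# rotating "dir" then extracts each segment in turn.
clip.setDisplayFlag(True)
clip.setRenderFlag(True)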
My modeling process could be summarized as follows:
- using Polycam's "Lidar" mode to photoscan the room;
- clipping 1/8 of the model in Houdini with the "Clip" node;
- extracting the 3D silhouette of the model by manually selecting edge groups and using "Edge Group to Lines";
- using Rhino to trace and resample curves, creating the base geometry of the room with "Loft";
- flattening the ceiling into a plane using "Flow Along Surface", tracing the ornaments, and using "Sweep" to create a 3D model;
- conforming the plane back to the ceiling 3D geometry with "Flow Along Surface";
- exporting the model as .3dm, importing it into MOI, and saving as .FBX;
- importing .FBX into C4D;
- duplicating and mirroring the model 8 times to form the entirety of the rotunda.
Houdini model clipping
Rhino modeling
from Rhino to C4D
The modeling process took around 4 days, but without the Polycam model as a reference, it could have taken twice as long.
2. CHARACTER
While my female character was originally created for "Ballerina" using CC3, I have since adopted CC4 and Redshift's Randomwalk Subsurface Scattering (SSS) to speed up my CG character rendering workflow. CC4's integrated "Skin-gen" allows for faster generation of realistic skin textures without using additional displacement maps from Texturing XYZ. My character texturing workflow has been simplified to a drag-and-drop process, with textures automatically exported when exporting to .FBX.
While the character creation process has become a lot easier in CC4, creating a convincing character purely from imagination is still very much impossible for me. I use the two websites below as my go-to references while adjusting character bone structures, face silhouettes and skin textures:
The first website generates a non-existent person's face using AI every time the page is refreshed, while the latter has a collection of high-resolution (16k+) face photographs with good lighting.
The character was first posed in CC4 with a PureRef reference overlay, which I then brought over to iC and used "Edit Motion Layer" to add subtle animation to the arms and legs - essentially 2 keyed poses automatically blended together.
Posing with reference in CC4
iClone Edit Motion Layer
I then brought the character and the textures into C4D Redshift.
Redshift's Randomwalk SSS (since v3.5.06) offers a more realistic model without sacrificing rendering speed. Randomwalk SSS simplifies skin material setup and produces better results under various lighting conditions. Previously, Redshift's ray-traced SSS required multiple texture layers and manual tweaking to achieve decent skin material, which made character preparation time-consuming and required constant adjustments for animation sequences with drastic light changes. Randomwalk SSS has been available in Arnold Renderer for years, but the Redshift update makes Randomwalk SSS much more efficient and therefore more feasible for animation.
I use Redshift's "Standard" material for the skin materials, applying the CC textures straightforwardly - Diffuse map goes to the diffuse slot, roughness map to the reflection roughness slot, etc. I set "Reflection Weight" to 0.5 and "Reflection IOR" to 1.38; for the "Coat", I set the "Weight" to 0.3 and the "Roughness" to 0.6.
Under "Subsurface," I use the Skin Diffuse map from CC as the color and set "Radius" to a salmon color (similar to the color of one's underskin when viewed under direct illumination). I set "Scale" to 0.1 to represent the thickness of the skin, and use "Random Walk" mode with "Include Mode" set to "All Objects."
Redshift material graph
3. CLOTH SIMULATION
I used Marvelous Designer (MD) for garment creation and simulations. Despite improvements in C4D and Houdini's cloth simulations, MD still offers the most control and the best quality, with the downside of being prone to crashing. To prevent accidental progress loss, I follow these rules when using MD:
- always save the scene without the avatar (heavier scenes are more likely to crash)
- delete unwanted simulations before saving or exporting
- avoid using GPU simulation, which frequently fails (likely due to VRAM allocation) and requires relaunching MD
- import only the character's body into MD for the cloth sim, using Houdini or C4D to delete the hair, eyes, limbs etc. beforehand
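As a concrete example of that last rule, here is a minimal Houdini Python sketch that strips a character cache down to the body before it goes to MD. The file path and group names are hypothetical and depend entirely on how the character was exported:

import hou

# Load the animated character cache (path is hypothetical).
geo = hou.node("/obj").createNode("geo", "md_body")
abc = geo.createNode("alembic")
abc.parm("fileName").set("$HIP/export/character_anim.abc")

# Blast SOP: with "Delete Non-Selected" off, the listed groups are removed.
# The group names below are placeholders for whatever the export produces.
blast = geo.createNode("blast")
blast.setInput(0, abc)
blast.parm("group").set("hair eyes teeth nails")
blast.setDisplayFlag(True)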
My garment is based on fishtail wedding dresses. I gathered existing sewing patterns, mostly from fashion magazines and Etsy, and placed them on the 2D plane in MD. I then traced them out, sewed them together in 3D space, and altered the patterns to fit my character.
Throughout the garment creation process, I kept my character in either an "A" or "T" pose, which would later be replaced with the animated character exported from iC for cloth simulation.
I kept most of the default simulation settings, but changed a few for better performance: I set the garment's "Particle Distance" to 15, the material's "Internal Damping" to 100, and the material's "Friction" to 99. This eliminated jittering and improved simulation speed.
MD settings
MD sim
The simulated cloth was then exported as Alembic and brought into Houdini.
4. CLOAK GROWTH ANIMATION
The cloak growth animation is an improved version of the "Hakama Transformation" using "PolyFX" in C4D. I added noise to the polygon movement using "Target" and "Random Field" for a more organic aesthetic.
My Mograph setup includes:
- making "PolyFX" a child of the cloak;
- applying a "Plain" effector to "PolyFX" to affect the scale of individual polygons;
- adding a "Random Field (1)" to the Falloff of the "Plain" effector;
- applying another "Plain" effector to "PolyFX" to affect the position of individual polygons, using the same "Random Field (1)" for Falloff;
- adding a "Target" effector and creating a new "Random Field (2)" as the target;
- disconnecting all polygons of the cloak and adding a Vertex Weight tag to the cloak, set to use Fields with "Random Field (1)" as input;
- exporting as Alembic and applying a Redshift material, using the "RS Vertex Attribute" node with the cloak's Vertex Weight tag to drive Emission.
C4D mograph setup
Cloak growth iterations
Cineversity has a great tutorial, "Dissolve Effect Using PolyFX and Mograph Tools", which goes more in-depth and explains every step of the effect in great detail.
5. DRESS GROWTH ANIMATION
The dress pattern is generated in Houdini with "findshortestpath" and animated with "carve", which can be broken down as follows:
- bringing the MD-simulated dress into Houdini using the "Alembic" node;
- using "timeshift" to freeze the dress at its first frame;
- creating a point group named "start", and selecting points where the growth path will start;
- creating another point group named "end", and selecting points where the growth path will end;
- using "findshortestpath" node and assigning the created groups above to the "Start Points" and "End Points", the dress pattern will be almost instantly generated at this point;
- assigning a "smooth" node after "findshortestpath" to make the pattern look more organic;
- assigning a "carve" node and animate "second U" - this will animate the dress patterns;
- assigning "ray" and "pointdeform" to project the animated dress pattern from the frozen dress back to the animated dress;
- using a "sweep" node to create geometry out of the dress pattern curves;
- using a "Transform" node to scale final geometry 10X and export as Alembic*
Houdini Node Graph
Houdini
I highly recommend checking out Junichiro Horikawa's tutorial video on "Shortest Path Growth", where he covers everything about shortest-path growth, from the basics to advanced techniques:
*Note: Houdini's default unit is "meters" while C4D's default is "centimeters", hence geometry exchanged between the two DCCs has to be scaled accordingly every time; I do not recommend changing the default units, since Redshift lighting and many other parameters only work correctly with the defaults.
6. CEILING ANIMATION
The ceiling animation could - in theory - have been achieved with C4D's "PolyFX", but C4D's inability to handle high-poly meshes made that impractical. Instead, I used Houdini and MOPs to replicate "PolyFX".
MOPs (Motion Operators for Houdini) is an open-source Houdini toolkit that introduces a set of versatile nodes for quickly iterating on motion graphics, much like Cinema4D's Mograph tools. For anyone who has not used MOPs, I recommend watching the following introductory video:
There are many ways to optimize the geometry before animating it with MOPs; the primary steps I took are detailed as follows, with a Python sketch of the optimization steps after the list:
- converting the meshes to VDBs and back to meshes to get evenly distributed polygons, using "VDB from Polygons" and "Convert VDB" respectively;
- using a "Divide" node to further divide the polygons into triangular or hexagonal patterns;
- using "Facet" after "Divide", check "unique points", turn on "rotate to template";
- assigning an "Attribute Wrangle" node and input the following:
addpoint(0, @P);
removeprim(0, @primnum, 1);
this adds a point at the position of each primitive (polygon), then removes the primitive and all points connected to it;
- assigning a "MOPs_Shape_Falloff", which adds a falloff to control scale and position of polygons, similar to the "Plain" effector in C4D;
- animating the falloff;
- assigning a "MOPs_Noise_Modifier", which adds an additional layer of noise to the animation;
- assigning a "Primitive", check "Do Transformation", then plugin the first slot with geometry from "facet" node and second slot with "MOPs_Noise_Modifier".
The animation is essentially complete at this point, subject to a lot of additional tweaking. I also added an "attributetransfer" to transfer the "Cd" from "MOPs_Transform_Modifier" to the animated geometry, which I used later on in Redshift with "RS Vertex Attribute" to add emission.
Houdini Node Graph
Houdini Timelapse
I highly recommend checking out Liam's "Professionally Incorrect" YouTube channel, where I learned how to create this particular effect, along with many other effects that replicate C4D's Mograph tools in a more efficient manner:
7. RENDERING
"Metamorphosis" had the heaviest scene of all my animation projects, with the architecture model totaling 4GB and alembic files over 1TB. To reduce geometry loading times while rendering, I exported each alembic frame into RS Proxies, cutting loading time to seconds. I also used a single GI Irradiance Cache file for each sequence to further optimize rendering time. This "GI Trick" only applies to sequences with slow animation and camera movement, a common technique I use for architectural visualization:
- set the render frame to the first frame of the sequence;
- set "Motion Blur" > "Transformation Steps" to 16;
- set "Frame Duration" to the total number of frames in the sequence x 2 - for instance, I would input 240 for a 120-frame sequence;
- set "Start" to 0;
- set "Global Illumination" > "Mode" to "Rebuild (Prepass Only)" and set it to write a single file;
- render.
Motion Blur settings for preparing IPC
Once the IPC is rendered following the steps above, I "Load" the IPC file under "Irradiance Point Cloud" and render out the sequence with just that single IPC file.
For the final renders I used all the default settings, with the exception of setting the "Automatic Sampling" threshold to 0.03.
This resulted in relatively noisy renders, which I then imported as sequences into Premiere Pro and used Neat Video 5 for denoising.
I kept most Neat Video settings at their default, "automatic" values. The one thing to pay attention to is to right-click on Premiere Pro's viewport and make sure that "Playback Resolution" is set to "Full" - otherwise Neat Video would sample the renders at a reduced resolution. Neat Video's UI is fairly straightforward, but I recommend checking out their official tutorials to take full advantage of it.
Premiere Pro "Playback Resolution"
Neat Video
With the render sequences denoised, I used Red Giant Magic Bullet Looks in Premiere Pro to add imperfections - the kind one would find in video footage or film - to the final animation: primarily chromatic aberration, halation and digital noise/film grain. This helped make the final animation appear more "cinematic" and nostalgic.
Magic Bullet Looks
Note: At the time of writing, a persistent NVIDIA driver issue caused Redshift to crash on long sequence renders. One of the long discussion threads around the issue is linked below: https://www.facebook.com/groups/RedshiftRender/posts/1504427980006407/?mibextid=6NoCDW
While some artists found that downgrading to NVIDIA driver 462.59 worked, the only fix for me was disabling half of my GPUs.
CONCLUSION
"Metamorphosis" was created amid the rise of text-to-image AI, a time when many artists, including myself, were anxious about the future of visual arts. With AI able to produce beautiful images quickly and efficiently, I questioned the value of creating art manually.
This project allowed me to clear my mind and explore new techniques. It took over three months and thousands of iterations to complete, during which I developed a deeper appreciation for Neo-Gothic architecture and fashion design. Perhaps more importantly, I came to realize the value of the creative journey and my intention behind creating CG art. Unlike AI-generated images, every pixel in my work - including "Metamorphosis" - was carefully considered and deliberately designed to express my passion for architecture, fashion and fantasy.
I hope this project will inspire others to pursue their own artistic endeavors and express themselves with CG, despite the rise of AI. I hope it will remind them that the process of creating art is a uniquely human experience, and that the journey is often more important than the end result.