The CG animated work "Henshin!" introduced in this article was created by 3D artist Kay John Yim, a long-time partner of Fox Renderfarm, the leading render farm in the CG industry. John has rendered numerous CG works using our cloud rendering services. Let's discover more about him and his amazing artwork.
About Kay John Yim
Kay John Yim’s personal site: https://johnyim.com/
John grew up in Hong Kong and graduated from the University of Bath (UK) with a degree in Science/Architectural Studies. He was an exchange student in Architecture at Delft University of Technology (Netherlands). After graduation, he studied architecture at the Architectural Association School of Architecture. He is currently an architect at Spink Partners, a well-known British architectural design firm.
Intro
Having transitioned from Cinema 4D to Houdini a year ago, I felt it was time to dive into a project different from the slow-paced animations I'd been creating. I wanted to tackle something that would be impossible without a procedural workflow. Anime dances immediately came to mind.
Anime has always been a major inspiration for my personal projects, even more so than live-action movies or CG animations. As someone who works with CG daily, I find the 2D linework and distinctive shading in anime incredibly refreshing. They offer a welcome escape from the complex, photorealistic visuals I typically work with. Of all the anime series I've watched, if I had to choose one setting or scene to represent the essence of anime, it would undoubtedly be high school dance scenes. These moments capture a unique energy and silliness that can rarely be found in real life, and I wanted to bring that same tone into my CGI fantasies.
Thus, project "Henshin!" (変身!) was born.
© Kay John Yim
"Henshin," a beloved term in Japanese anime for initiating electrifying transformation scenes, fittingly translates to "transformation", symbolizing my artistic and metaphorical transformation from Cinema 4D to the dynamic world of Houdini.
I kickstarted the project by searching for character animations on Reallusion ActorCore. Using a combination of iClone, Marvelous Designer, Rhino, and Houdini, I established my procedural workflow, summarized as follows:
- Character animation editing in iClone
- Cloth preparation in Marvelous Designer
- Vellum Simulation in Houdini
- FX in Houdini
- Procedural Modeling in Houdini
- Rendering with Houdini & Redshift
- Post in Premiere Pro
Character Animation
This is the one process in my entire workflow that I still prefer doing outside of Houdini. While APEX in Houdini 20 looks very promising for character animation, I lack a traditional background in rigging or animating characters, so I'll tackle that challenge another day. iClone is simply easier to use for manipulating characters, even though it isn't exactly procedural.
Initially, I selected two sets of ActorCore motions and blended them together.
I frequently use the 'Edit Motion Layer' and 'Motion Correction' functions in iClone; in fact, these are my primary tools. 'Edit Motion Layer' allows me to manually keyframe character movements using IK joints, which is especially useful for adjusting hand contact positions with clothing.
I also rely heavily on 'Motion Correction.' Although ActorCore motions are well-cleaned motion captures, iClone doesn’t account for changes in character height when shoes are added, causing the feet to clip through the ground occasionally.
'Motion Correction' is my first go-to for fixing foot clipping or sliding issues. When it fails, I place a box at ground level, switch my camera to orthographic and painstakingly adjust the feet and ground contact frame by frame.
Before exporting my character from iClone, I always set the first frame to an A-pose or T-pose and add a 15-frame buffer before the first frame of the animation.
The A-pose is crucial for both garment fitting in Marvelous Designer and point deforming parts of the garment on the characters in Houdini. For the initial export from iClone, I always export in Alembic. Houdini 20 has a known bug that sometimes causes iClone FBX accessories, such as shoes or hair, to appear incorrectly positioned. Although Alembic files take up more disk space than FBX, they are the safest export option.
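For reference, a minimal sketch of that import step in Houdini's Python shell might look like this (the file path and node names are hypothetical, not from John's scene):

```python
# Minimal sketch: bring the iClone Alembic into a fresh geo container.
import hou

obj = hou.node('/obj')
char = obj.createNode('geo', 'character')
abc = char.createNode('alembic', 'iclone_in')    # Alembic SOP
abc.parm('fileName').set('$HIP/abc/dancer.abc')  # assumed export path
```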
Cloth Preparation
I use Marvelous Designer for creating all the garments, typically through a kitbashing process of tracing sewing patterns found on Etsy, modifying the default templates that come with Marvelous Designer, or combining pre-made patterns discovered online.
The most important points I keep in mind during this process are to avoid overlapping cloth as much as possible—Houdini Vellum does not handle overlapping cloth well—and to skip details such as the hem of a dress or a sleeve, which simplifies the workflow later.
Additionally, I always maintain a particle distance above 10 in Marvelous Designer. While particle distances below 10 can add more detail, and many tutorials recommend values as low as 2, I prefer to stick to 10 as the minimum to keep simulation times reasonable.
Once I have all the garments fitted to my character’s A-pose, I convert them into quad meshes and export them as welded OBJ files into Houdini.
OBJ is the only file format that exports material data from Marvelous Designer. Unless I am exporting animations, I always use OBJ.
Marvelous Designer Tips (TLDR):
- Keep garments simple; add details like cloth thickness, stitching, and buttons in Houdini after the Vellum simulation.
- Always export in quad mesh.
- Maintain particle distance above 10.
- Always export as 'welded' and 'thin'.
For the character's hair, I used iClone’s Hair Mesh as a foundation. Initially, I had a complex setup for converting the hair mesh into curves, but later in the project I discovered 'Cards to Curves' on GitHub, a more elegant method for creating hair from a hair mesh that I highly recommend everyone explore.
Cards to Curves: https://github.com/jamesrobinsonvfx/cards_to_curves
Vellum Simulation
Once all character animations in Alembic and garment OBJ files are imported into Houdini, my first step is always a Vellum simulation on the character itself. During animations, body parts like the armpits and the backs of the knees may intersect. In reality, when an armpit folds against a T-shirt, the cloth gets caught and released by friction; in a Vellum sim, however, once meshes intersect, resolving them becomes nearly impossible. To address this, I paint a mask around the character’s armpits, defining the masked area as the only region for simulation and leaving the rest unaffected. This simulation resolves mesh intersections and prevents future issues.
This workflow is also outlined in detail on SideFX’s site: 'Vellum Cloth: Tips & Tricks'
It’s important to note that Houdini defaults to meters, while iClone, Marvelous Designer, and most other DCCs use centimeters. Therefore, scaling down imported files in Houdini is crucial, as all simulations depend on units.
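Building on the hypothetical import above, a Transform SOP appended in Python handles the unit conversion; the 0.01 factor is the only essential detail:

```python
# Minimal sketch: scale centimeter data from iClone/Marvelous Designer
# down to Houdini's meters (node paths are hypothetical).
import hou

char = hou.node('/obj/character')
abc = char.node('iclone_in')                 # Alembic SOP from the import step
xform = char.createNode('xform', 'cm_to_m')  # Transform SOP
xform.setFirstInput(abc)
xform.parm('scale').set(0.01)                # 1 cm -> 0.01 m
xform.setDisplayFlag(True)
xform.setRenderFlag(True)
```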
My Vellum setup involves separating the imported OBJ by material, applying different Vellum cloth properties, and painting masks to specify simulation areas. Typically, I mask out the entire back and waist, using 'Point Deform' to adhere the OBJ to character animations. This ensures painted areas stick and deform with character movements, while the remainder undergoes simulation.
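As a rough sketch of how a painted mask can drive this (the 'mask' attribute name, node names, and 0.5 threshold are my assumptions, not John's exact setup), a wrangle can promote the paint into a point group that a 'Vellum Constraints' node set to 'Pin to Target' then references:

```python
import hou

geo = hou.node('/obj/garment')     # hypothetical container
paint = geo.node('attribpaint1')   # Attribute Paint with a float 'mask'

# Convert the painted mask into a point group for 'Pin to Target'.
wr = geo.createNode('attribwrangle', 'mask_to_pingroup')
wr.setFirstInput(paint)
wr.parm('class').set('point')      # run over points
wr.parm('snippet').set(
    'if (f@mask > 0.5) setpointgroup(0, "pinned", @ptnum, 1);')
```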
According to SideFX’s official documentation, it’s recommended to use a 'Remesh’ node to convert garments into triangular meshes before Vellum simulation, as triangular meshes capture more details with fewer points. Personally, I also test Vellum simulations with quad meshes and various mesh resolutions. I find that quad meshes yield more realistic results for thicker fabrics like leather, where excessive detail in wrinkles may not be desirable.
While Vellum simulation is straightforward in theory, it’s challenging in practice because Vellum parameters don’t correspond to real-world metrics and vary with mesh density. Therefore, achieving the behavior of a specific fabric requires experimenting with different parameters rather than applying a one-size-fits-all approach across projects.
This may seem daunting compared to simulating in Marvelous Designer, but Houdini offers the advantage of unlimited simulation duration based on available hard drive space, whereas Marvelous Designer's simulations are constrained by RAM or VRAM limitations.
In Marvelous Designer, simulations adhere strictly to physics, even when certain parts of a garment, like the neckline of a T-shirt or the waistline of a skirt, should remain static in subtle animations. In contrast, Houdini allows me to creatively manipulate simulations by painting multiple masks, making them easier to art-direct and more efficient to iterate on.
Ultimately, achieving the final look I was satisfied with took over a month and involved 40-50 iterations. Some Vellum simulations exhibited jitter on idle frames, which I resolved using a 'post-Vellum’ node or occasionally a 'Temporal Smooth’ node from AELib.
From this point, it's a matter of using 'Point Deform’ to attach buttons, sewing lines, and stitches onto the simulated clothes.
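The wiring for that step is straightforward; a minimal sketch with hypothetical node names:

```python
import hou

geo = hou.node('/obj/garment')      # hypothetical container
buttons = geo.node('buttons_rest')  # details modeled on the rest pose
rest = geo.node('garment_rest')     # garment in the A-pose
sim = geo.node('vellum_result')     # simulated garment

# Point Deform: input 0 = geometry to deform,
# input 1 = rest lattice, input 2 = deforming lattice.
pd = geo.createNode('pointdeform', 'attach_buttons')
pd.setInput(0, buttons)
pd.setInput(1, rest)
pd.setInput(2, sim)
```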
Here are some Vellum tips that I always keep in mind (TLDR):
- Increase Cache memory to around 50GB to allow pausing and checking the simulation without restarting from frame 1.
- Always use a separate 'Vellum Constraint' node for 'Pin to Target'.
- Experiment with both quad and triangular meshes for Vellum simulations; results can vary significantly.
- Higher mesh density isn't always better; lower mesh density can surprisingly work well for thick fabrics like leather.
- Use 'post-Vellum' node or AELib's 'Temporal Smooth’ node to eliminate jitter.
FX
With all simulations completed, I moved on to the Cloth Transformation, a blend of uniform disintegration into petals and the emergence of gothic attire from fire. This look has been in development since my days with Cinema 4D. Back then, disintegration was a nightmare; in Houdini, it's essentially a 'Solver’ and a Pyro sim.
Applied Houdini’s Particles IV tutorial covers the disintegration and petal-spawning effect in great detail: https://www.appliedhoudini.com/blog/2018/9/5/particles-iv-physical-instances
In retrospect, I would opt for the disintegration effect using MOPs, as it is simpler to use and quicker to set up.
MOPS: https://github.com/toadstorm/MOPS
I used Axiom Solver for the Pyro sims, although Houdini’s native Pyro Solver would have performed equally well.
I used all the cached Pyro sims as volume advection sources for POP sims to generate 'embers,' and instanced butterflies with a looped flying animation onto the 'ember' particles to add an enchanting touch to the scene.
A great intro to Axiom Solver, as well as how to generate embers: https://www.youtube.com/watch?v=nJa5RbQK9zU&t=2s
The dragon fire, the largest and most time-consuming simulation in the entire project, was the last FX I worked on. The main challenge lay in the simulation's duration and iteration time. To expedite the process, I uploaded multiple files with varying pyro simulation parameters to Fox Renderfarm. This allowed me to utilize their resources while freeing up my local workstation, significantly enhancing efficiency. I highly recommend leveraging a powerful render farm service, such as Fox Renderfarm, for simulations if your local setup proves inadequate.
The simulated dragon fire was then used to ignite the tree fires through an Attribute Transfer, which in turn triggered the two tree-fire pyro simulations.
Voxyde VFX has a free tutorial that goes through setting up a dragon fire-breath simulation in Houdini:
SideFX Content Library has a 'Village Fire’ setup that I used as a foundation for the tree fire simulations:
Environment
The environment is modeled with reference to the altar space of Albi Cathedral in France.
During the design phase of my environment, I was uncertain about how the characters would interact within the space, especially since I was initially working with a single set of character animations. Therefore, I aimed to keep the environment procedural and flexible, allowing me to adjust proportions, size, and extensions to fit my camera animation later on.
To begin, I traced the altar screen in Rhino; while Houdini can perform similar tasks, it lacks the precision of a CAD program. Next, I created several profile curves and exported them from Rhino as NURBS curves in IGES format.
In Houdini, I imported these curves and used them as backbone curves for a 'Sweep’ operation, incorporating the profile curves I had drawn. This formed the foundation of my model.
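In node terms, the core of this step is a single Sweep. A minimal sketch under assumed node names (Sweep 2.0 takes the backbone on its first input and the cross-section on its second):

```python
import hou

geo = hou.node('/obj/altar_screen')  # hypothetical container
backbone = geo.node('iges_backbone') # curve imported from Rhino
profile = geo.node('profile_curve')  # drawn cross-section

# Sweep 2.0: input 0 = backbone curve, input 1 = cross section.
sweep = geo.createNode('sweep::2.0', 'moulding')
sweep.setInput(0, backbone)
sweep.setInput(1, profile)
```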
Learning Houdini is never straightforward, but I've found procedural modeling to be particularly challenging, especially in the beginning. While many simulations offer shelf tools and presets that can be dissected and tweaked, procedural modeling lacks a standard recipe. However, the immediate feedback in the SOPs environment allows me to quickly iterate and test different methods and workflows for modeling the same object.
My workflow revolves around curve manipulation, relying heavily on a few key nodes.
One invaluable node is the 'Match Size’ node, which I frequently use to center objects to the origin or align them with each other. It's also handy for quickly scaling objects to specific sizes, a task much faster than using 'Transform’.
The second node I frequently rely on is 'Labs Extract Silhouette’. This node generates a 2D outline of the input geometry, which I often use to extract a backbone curve later used in 'Sweep’ operations. I find this node indispensable because it closely resembles duplicating curves or extracting border curves in Rhino, a workflow I've employed for many years.
Moreover, I often apply 'Polyextrude’ to the extracted silhouette. This allows me to use it as a 'Boolean’ shape for creating window openings, doors, and similar architectural features.
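A sketch of that silhouette-to-Boolean chain, assuming the silhouette and wall already exist as nodes (all names hypothetical):

```python
import hou

geo = hou.node('/obj/facade')            # hypothetical container
silhouette = geo.node('window_outline')  # output of Labs Extract Silhouette
wall = geo.node('wall_geo')

# Extrude the 2D outline into a closed cutter, then subtract it from the wall.
cutter = geo.createNode('polyextrude::2.0', 'cutter')
cutter.setFirstInput(silhouette)
cutter.parm('dist').set(1.0)       # extrusion depth in meters
cutter.parm('outputback').set(1)   # close the back so the cutter is solid

cut = geo.createNode('boolean::2.0', 'window_openings')
cut.setInput(0, wall)
cut.setInput(1, cutter)
cut.parm('booleanop').set('subtract')
```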
Another essential node in my workflow is 'PolyExpand2D’, which offsets curves by a specified distance. This is invaluable for adding procedural architectural details such as tile borders, skirtings, and trim.
I also frequently utilize nodes like 'Sweep’, 'Polyextrude’, and 'Boolean’, which I've mentioned previously.
Assembly, Look-dev & Shot-framing
With the primary environment elements modeled, I began experimenting with different camera angles and animations.
Framing the shot proved to be quite challenging. Initially, I considered wrapping up the project with the two characters transforming against a black backdrop. However, after incorporating the environment, adjusting lighting, and experimenting with camera movements, I realized the scene lacked excitement and felt too brief.
To enhance the visual impact, I explored duplicating the characters using Instance objects. I positioned them symmetrically and asymmetrically, aiming to maximize the use of the two characters and their animations.
After numerous flip books and camera tests, I settled on the final animation. This version introduces two additional sets of character animations and features a dragon descending into the frame from off-screen.
Simulating additional characters did take time, but since the characters and their clothing were mostly identical, I could reuse the entire setup with minimal manual effort.
Once I had fine-tuned all the simulations, lighting became a significant challenge. Typically, cathedrals feature pendant lights and floor chandeliers as primary light sources. However, floor chandeliers, with their bulbs at approximately eye level, can be distracting. I strategically used them only during the first dance sequence to draw attention, but they weren’t suitable for the rest of the animation.
I explored the use of floor lanterns as an alternative. Traditional floor lanterns, however, appeared too minimalistic and did not match the atmosphere I aimed to convey. After conducting further research, I discovered fashion photo shoots online where set designers placed chandeliers on the ground. Inspired by this approach, I adopted a similar technique. By positioning chandeliers on the ground, the light attenuates from bottom to top, emphasizing the characters’ visual presence in the camera frame.
To enrich the scene further, I repurposed assets from iClone’s French Style BuildingGen Content Pack as floor lanterns. I separated out the window panes and used them as Redshift Mesh Lights. This addition provided visual interest without overly affecting global illumination.
iClone’s French Style BuildingGen Content Pack: https://www.reallusion.com/ContentStore/iClone/pack/building-gen/3d-building-french-style/default.html
Initially, the floor featured a checkerboard marble pattern. As the animation progressed, I noticed the white squares became overly prominent and distracting. To mitigate this, I applied a color gradient across the entire floor and blended it with the marble materials.
For added visual interest, I introduced kintsugi crack lines on the floor. These cracks suggest a stage that has weathered many rehearsals, perhaps under the weight of the dragon.
In order to accommodate the dragon, I used forced perspective: I pushed the rear wall of the cathedral further back but scaled it up much larger, so that it appears closer and smaller than it actually is.
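The baseline arithmetic of forced perspective is just a distance ratio: scaling the wall by new distance over old distance keeps its on-screen size constant, and scaling beyond that ratio makes it read as nearer. A toy example with made-up numbers:

```python
# Forced perspective: keep the wall's apparent size after pushing it back.
old_dist = 30.0              # original camera-to-wall distance (assumed, meters)
new_dist = 75.0              # pushed-back distance (assumed)
scale = new_dist / old_dist  # uniform scale that preserves the silhouette
print(scale)                 # 2.5x larger, yet the same size on screen
```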
In terms of depth and atmosphere, I experimented with Redshift scattering, fog attenuation, VDB uniform fog and VDB clouds. Notably, Redshift's fog attenuation did not interact well with VDBs; activating Redshift fog significantly altered the appearance of other VDB elements. Therefore, I opted to use a uniform fog VDB with subtle noise.
Redshift Mesh Lights received a significant upgrade in version 3.6.0.1. Previously, I often needed additional fake area lights to supplement realistic lighting sources. Now, using the chandelier bulbs themselves suffices. I used 'Object Merge' to bring all the light bulbs into a single geo container, then used MOPs' 'linear falloff' and 'clip by attribute' to animate all the lights collectively, streamlining the process.
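A minimal sketch of the gather step (object paths are hypothetical; the MOPs animation layering would sit downstream of this):

```python
import hou

obj = hou.node('/obj')
bulbs = obj.createNode('geo', 'all_bulbs')

# Gather every chandelier's bulbs into one geo container,
# preserving their world-space positions.
merge = bulbs.createNode('object_merge', 'bulbs_in')
merge.parm('numobj').set(2)
merge.parm('objpath1').set('/obj/chandelier_a/OUT_bulbs')  # assumed paths
merge.parm('objpath2').set('/obj/chandelier_b/OUT_bulbs')
merge.parm('xformtype').set('local')  # 'Into This Object'
```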
For additional realism, I incorporated subtle random camera shakes through the parent Null of the camera. Additionally, I introduced extra noise, synchronized with the dragon's fire-breathing sequences. I kept the camera movements minimal, mimicking the stability of heavy cinema cameras and avoiding the disorienting effect of chaotic, handheld filming.
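One possible way to wire such a shake, though not necessarily John's exact setup (the null path, amplitude, and speed are all guesses):

```python
import hou

# Subtle Hscript noise on the camera's parent null; noise() returns
# values roughly in 0..1, so we recenter around zero.
null = hou.node('/obj/cam_parent')  # assumed parent Null of the camera
for axis, seed in (('x', 0), ('y', 13), ('z', 27)):
    null.parm('t' + axis).setExpression(
        '0.01 * (noise($F * 0.15 + %d, 0, 0) - 0.5)' % seed)
```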
Rendering
From this point onward, my days were filled with overnight renders and fine-tuning based on the results. This phase was both enjoyable and incredibly time-consuming, as my workflow could only progress as quickly as my renders.
One recurring challenge I faced was with motion blur—specifically, my character's skin would sometimes penetrate through their clothing when I increased the Deformation Blur step beyond one. While I'm unsure of the official recommendations from Redshift or Houdini, I resolved this issue by transferring velocity data from the character to all garments. This simple adjustment effectively mitigated the problem.
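A sketch of that fix with assumed node names: an Attribute Transfer copies point velocity from the body onto the garment by proximity, so Redshift blurs skin and cloth consistently:

```python
import hou

geo = hou.node('/obj/garment')     # hypothetical container
cloth = geo.node('vellum_result')  # simulated garment
body = geo.createNode('object_merge', 'body_in')
body.parm('objpath1').set('/obj/character/OUT_skin')  # assumed path

# Copy the 'v' point attribute from the body onto the cloth.
xfer = geo.createNode('attribtransfer', 'v_from_body')
xfer.setInput(0, cloth)            # destination
xfer.setInput(1, body)             # source
xfer.parm('pointattriblist').set('v')
```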
Another persistent issue towards the project's end was light splotches or flickering caused by irradiance caches. Redshift advises blending multiple frames or using Brute Force with Brute Force global illumination on specific objects to address this. However, these solutions added considerable rendering time atop an already demanding project that took up to 30 minutes per frame. Instead, I applied a technique I've used in architectural visualization projects—a single-frame irradiance cache with motion blur set to twice the total frame count. While not flawless, this method saved me significant trouble and rendering time.
Here are some key rendering tips I always keep in mind (TLDR):
- Always enable 'Motion Blur' settings when exporting animated OBJs as Redshift Proxies.
- Transfer velocity from characters to garments to prevent skin piercing through clothes when Deformation Blur is enabled.
- Keep Redshift's 'Sample Filtering' as low as possible to minimize fireflies; higher intensities increase render times.
- Use a single-frame irradiance cache when feasible to prevent light splotches or flickering.
The final animation was rendered on Fox Renderfarm using their cloud rendering service. I had assembled a fairly complex scene incorporating numerous custom HDAs, and Fox Renderfarm generously accommodated me by installing all of them to maintain consistency between their renders and my local setup. However, I do want to note that Fox Renderfarm's Houdini 20 is configured with an OCIO that differs from Redshift's official OCIO included with Houdini 20, resulting in minor but noticeable color variations. I had to reach out to Fox Renderfarm to customize my rendering settings to align with Redshift’s OCIO.
Post
After rendering the entire sequence, I used Neat Video in Premiere Pro for noise reduction and applied Red Giant Looks for subtle color correction. To enhance the final render with a softer, more cinematic feel, I introduced lens artifacts like halation and chromatic aberration.
Afterthoughts
Transitioning to Houdini has unlocked numerous possibilities for both my personal and professional work. The level of iteration this project required to reach its final form would have been impossible without a procedural setup. Had I attempted it in Cinema 4D, the software's constraints would have made even two characters a struggle to handle, likely leading to frequent crashes.
However, Houdini's potential can also present artistic challenges. While working in Cinema 4D imposed strict limitations on project scope and time, Houdini allowed me to continually expand this project—from one character animation to three sets, and from 7 seconds to 25 seconds. I even incorporated a fire-breathing dragon and delved into how the fire interacts with its environment. What started as a weekend experiment evolved into a four-month endeavor.
In my early days with Houdini, its procedural nature sometimes transformed what should have been an artistic experiment into a technical puzzle. I found myself frequently Googling solutions, troubleshooting issues, and optimizing topology and performance—tasks that were once foreign to me and diverted my focus from achieving the final render I envisioned.
Despite these technical hurdles, project "Henshin!" represents a significant milestone for me. Technically, it marks my transition from a destructive workflow to nearly full proceduralism. Artistically, it is my first exploration of modern, fast-paced dance movements juxtaposed against the backdrop of gothic fashion and architecture—a thematic and technical "transformation" that has profoundly shaped my approach to future projects.