
The Making of "Henshin!", A CGI Anime Fantasy Created by Kay John Yim

Last edited on: 2024-08-09

The CG animated work "Henshin!" introduced in this article was created by 3D artist Kay John Yim, a long-time partner of Fox Renderfarm, the leading render farm in the CG industry. John has rendered numerous CG works using our cloud rendering services. Let's learn more about him and his amazing artwork.

About Kay John Yim

John.jpg

Kay John Yim’s personal site: https://johnyim.com/

John grew up in Hong Kong and graduated from the University of Bath (UK) with a BSc in Architectural Studies. He was an exchange student in Architecture at Delft University of Technology (Netherlands). After graduation, he studied architecture at the Architectural Association School of Architecture. He is currently an architect at Spink Partners, a well-known British architectural design firm.

Intro

Having transitioned from Cinema 4D to Houdini a year ago, I felt it was time to dive into a project different from the slow-paced animations I'd been creating. I wanted to tackle something that would be impossible without a procedural workflow. Anime dances immediately came to mind.

Anime has always been a major inspiration for my personal projects, even more so than live-action movies or CG animations. As someone who works with CG daily, the 2D linework and distinctive shading in anime are incredibly refreshing. They offer a welcome escape from the complex, photorealistic visuals I typically work with. Of all the anime series I've watched, if I had to choose one setting or scene to represent the essence of anime, it would undoubtedly be high school dance scenes. These moments capture a unique energy and silliness that can rarely be found in real life, and I wanted to bring that same tone into my CGI Fantasies.

Thus, project "Henshin!" (変身!) was born.

Final_Still_1

@ Kay John Yim

"Henshin," a beloved term in Japanese anime for initiating electrifying transformation scenes, fittingly translates to "transformation", symbolizing my artistic and metaphorical transformation from Cinema 4D to the dynamic world of Houdini.

I kickstarted the project by searching for character animations on Reallusion ActorCore. Using a combination of iClone, Marvelous Designer, Rhino, and Houdini, I established my procedural workflow, summarized as follows:

  1. Character animation editing in iClone
  2. Cloth preparation in Marvelous Designer
  3. Vellum Simulation in Houdini
  4. FX in Houdini
  5. Procedural Modeling in Houdini
  6. Rendering with Houdini & Redshift
  7. Post in Premiere Pro

Character Animation

This is the one process in my entire workflow that I still prefer doing outside of Houdini. While APEX in Houdini 20 looks very promising for character animation, I lack a traditional background in rigging or animating characters, so I'll tackle that challenge another day. iClone is simply easier to use for manipulating characters, even though it isn't exactly procedural.

Initially, I selected two sets of ActorCore motions and blended them together.

2_Routine_04

3_Routine_06

I frequently use the 'Edit Motion Layer' and 'Motion Correction' functions in iClone; in fact, these are my primary tools. 'Edit Motion Layer' allows me to manually keyframe character movements using IK joints, which is especially useful for adjusting hand contact positions with clothing.

 

I also rely heavily on 'Motion Correction.' Although ActorCore motions are well-cleaned motion captures, iClone doesn’t account for changes in character height when shoes are added, causing the feet to clip through the ground occasionally.

'Motion Correction' is my first go-to for fixing foot clipping or sliding issues. When it fails, I place a box at ground level, switch my camera to orthographic and painstakingly adjust the feet and ground contact frame by frame.

Before exporting my character from iClone, I always set the first frame to an A-pose or T-pose and add a 15-frame buffer before the first frame of the animation.

6_iClone_Timeline

The A-pose is crucial for both garment fitting in Marvelous Designer and point deforming parts of the garment on the characters in Houdini. For the initial export from iClone, I always export in Alembic. Houdini 20 has a known bug that sometimes causes iClone FBX accessories, such as shoes or hair, to appear incorrectly positioned. Although Alembic files take up more disk space than FBX, they are the safest export option.

Cloth Preparation

I use Marvelous Designer for creating all the garments, typically through a kitbashing process of tracing sewing patterns found on Etsy, modifying the default templates that come with Marvelous Designer, or combining pre-made patterns discovered online.

The most important points I keep in mind during this process are to avoid overlapping cloth as much as possible—Houdini Vellum does not handle overlapping cloth well—and to skip details such as the hem of a dress or a sleeve, which simplifies the workflow later.

Additionally, I always keep the particle distance at 10 or above in Marvelous Designer. While particle distances below 10 capture more detail, and many tutorials recommend values as low as 2, I stick to 10 as my minimum to keep simulation times reasonable.

7_MD_particle_distance
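The cost behind that choice comes down to simple arithmetic. As a back-of-the-envelope sketch (assuming Marvelous Designer's particle distance is in millimeters, and ignoring the solver's actual internals), the point count of a cloth panel grows with the inverse square of the particle distance:

```python
# Illustrative only: particle count on a flat panel scales with 1/distance^2.

def approx_point_count(area_cm2, particle_distance_mm):
    """Approximate particle count for a flat cloth panel at a given spacing."""
    spacing_cm = particle_distance_mm / 10.0  # assuming MD distance is in mm
    return area_cm2 / (spacing_cm ** 2)

panel = 50 * 70                        # a hypothetical 50 x 70 cm skirt panel
at_10 = approx_point_count(panel, 10)  # the minimum I stick to
at_2 = approx_point_count(panel, 2)    # the "high detail" tutorial value
ratio = at_2 / at_10                   # roughly 25x the points to simulate
```

Dropping the particle distance from 10 to 2 multiplies the point count by about 25, which is why those "more detail" settings cost so much simulation time.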

Once I have all the garments fitted to my character’s A-pose, I convert them into quad meshes and export them as welded OBJ files into Houdini.

8_MD_Quadrangulate

OBJ is the only file format that exports material data from Marvelous Designer. Unless I am exporting animations, I always use OBJ.

Marvelous Designer Tips (TLDR):

  1. Keep garments simple; add details like cloth thickness, stitching, and buttons in Houdini after the Vellum simulation.
  2. Always export in quad mesh.
  3. Maintain particle distance above 10.
  4. Always export as 'welded' and 'thin'.

9_MD_export_settings

For the character's hair, I used iClone’s Hair Mesh as a foundation. Initially, I had a complex setup for converting hair mesh into curves, but later in the project I discovered 'Cards to Curves' on GitHub, a more elegant method for creating hair from hair mesh that I highly recommend everyone explore.

Cards to Curves: https://github.com/jamesrobinsonvfx/cards_to_curves

Vellum Simulation

Once all character animations in Alembic and garment OBJ files are imported into Houdini, my first step is always a Vellum simulation on the character itself. During animation, body parts like the armpits and the backs of the knees may intersect. In reality, when an armpit folds over a T-shirt, the cloth gets caught and released by friction. In a Vellum sim, however, once meshes intersect, resolving them becomes nearly impossible. To address this, I paint a mask around the character’s armpits, defining the masked area as the only region to be simulated and leaving the rest unaffected. This simulation resolves mesh intersections and prevents issues later on.

11_Skin_intersection_fix_nodegraph
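The masking idea can be sketched in plain Python (illustrative only, not Houdini's actual Vellum machinery): a per-point mask weight decides whether a point keeps the original character animation or takes the simulated result.

```python
# Sketch of mask-limited simulation: mask = 1 -> point takes the simulated
# position, mask = 0 -> point keeps the original animation; values in
# between blend linearly. Data below is made up for illustration.

def blend_by_mask(animated, simulated, mask):
    """Blend per-point positions by a painted mask weight."""
    return [
        tuple(a + w * (s - a) for a, s in zip(p_anim, p_sim))
        for p_anim, p_sim, w in zip(animated, simulated, mask)
    ]

animated = [(0.0, 1.0, 0.0), (0.2, 1.1, 0.0)]
simulated = [(0.0, 1.0, 0.0), (0.25, 1.05, 0.02)]  # sim pushed a point clear
mask = [0.0, 1.0]  # only the second point (the armpit area) is masked

result = blend_by_mask(animated, simulated, mask)
# unmasked point keeps its animated position; masked point takes the sim
```

The unmasked region stays locked to the incoming animation, so only the painted armpit area pays the simulation cost.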

This workflow is also outlined in detail on SideFX’s site: 'Vellum Cloth: Tips & Tricks'

It’s important to note that Houdini defaults to meters, while iClone, Marvelous Designer, and most other DCCs use centimeters. Therefore, scaling down imported files in Houdini is crucial, as all simulations depend on units.
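Concretely, the fix is a uniform 0.01 scale on import. A minimal sketch of the conversion (not tied to any DCC's API):

```python
# iClone and Marvelous Designer export at centimeter scale; Houdini assumes
# meters, and simulation forces depend on world units, so imported geometry
# gets a uniform 0.01 scale before anything is simulated.

CM_TO_M = 0.01

def to_meters(points_cm):
    """Scale point positions from centimeters to meters."""
    return [tuple(c * CM_TO_M for c in p) for p in points_cm]

# a 170 cm tall character scales to 1.7 m
scaled = to_meters([(0.0, 170.0, 0.0)])
```

Skipping this step leaves a 170-meter giant in the scene, and Vellum behaves accordingly.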

My Vellum setup involves separating the imported OBJ by material, applying different Vellum cloth properties, and painting masks to specify simulation areas. Typically, I mask out the entire back and waist, using 'Point Deform' to adhere the OBJ to character animations. This ensures painted areas stick and deform with character movements, while the remainder undergoes simulation.

13_vellum_sim_nodegraph

 

According to SideFX’s official documentation, it’s recommended to use a 'Remesh’ node to convert garments into triangular meshes before Vellum simulation, as triangular meshes capture more details with fewer points. Personally, I also test Vellum simulations with quad meshes and various mesh resolutions. I find that quad meshes yield more realistic results for thicker fabrics like leather, where excessive detail in wrinkles may not be desirable.

While Vellum simulation is straightforward in theory, it’s challenging in practice because Vellum parameters don’t correspond to real-world metrics and vary with mesh density. Therefore, achieving the behavior of a specific fabric requires experimenting with different parameters rather than applying a one-size-fits-all approach across projects.

This may seem daunting compared to simulating in Marvelous Designer, but Houdini has one advantage: simulation duration is limited only by available hard drive space, whereas Marvelous Designer's simulations are constrained by RAM or VRAM.

In Marvelous Designer, simulations adhere strictly to physics, even for parts of a garment, like the neckline of a T-shirt or the waistline of a skirt, that should remain static in subtle animations. In contrast, Houdini allows me to creatively manipulate simulations by painting multiple masks, making them easier to art direct and faster to iterate on.

Ultimately, achieving the final look I was satisfied with took over a month and involved 40-50 iterations. Some Vellum simulations exhibited jitter on idle frames, which I resolved using a 'post-Vellum’ node or occasionally a 'Temporal Smooth’ node from AELib.

16_post-vellum_clean_up_nodegraph
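The jitter clean-up can be illustrated with a simple temporal average. This is a sketch of the idea only, not AELib's actual implementation: smoothing each point's position over a small window of frames damps frame-to-frame noise on otherwise idle geometry.

```python
# Temporal smoothing sketch: average each point's position over a window of
# neighboring frames to suppress simulation jitter on idle frames.

def temporal_smooth(frames, radius=1):
    """frames: per-frame positions of ONE point (1D here for simplicity)."""
    smoothed = []
    for i in range(len(frames)):
        lo, hi = max(0, i - radius), min(len(frames), i + radius + 1)
        window = frames[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

# a point that should sit still at 1.0 but jitters between frames
noisy = [1.0, 1.02, 0.98, 1.01, 0.99]
smoothed = temporal_smooth(noisy)
# the smoothed values deviate less from 1.0 than the noisy input
```

The trade-off is that genuine fast motion is also softened, which is why this kind of node is best applied selectively to idle frames.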

From this point, it's a matter of using 'Point Deform’ to attach buttons, sewing lines, and stitches onto the simulated clothes.

17_post-vellum_point_deform

Here are some Vellum tips that I always keep in mind (TLDR):

  1. Increase Cache memory to around 50GB to allow pausing and checking the simulation without restarting from frame 1.
  2. Always use a separate 'Vellum Constraint' node for 'Pin to Target'.
  3. Experiment with both quad and triangular meshes for Vellum simulations; results can vary significantly.
  4. Higher mesh density isn't always better; lower mesh density can surprisingly work well for thick fabrics like leather.
  5. Use 'post-Vellum' node or AELib's 'Temporal Smooth’ node to eliminate jitter.

FX

With all simulations completed, I moved on to the Cloth Transformation, a blend of uniform disintegration into petals and the emergence of gothic attire from fire. This look has been in development since my Cinema 4D days. Back then, disintegration was a nightmare; in Houdini, it's essentially a 'Solver’ node and a Pyro sim.

Applied Houdini’s tutorial 'Particles IV' covers the disintegration and petal-spawning effect in great detail: https://www.appliedhoudini.com/blog/2018/9/5/particles-iv-physical-instances

In retrospect, I would create the disintegration effect with MOPs instead, as it is simpler to use and quicker to set up.

19_MOPS

MOPS: https://github.com/toadstorm/MOPS

I used Axiom Solver for the Pyro sims, although Houdini’s native Pyro Solver would have performed equally well.

I used all the cached Pyro sims as volume advection sources for a POP sim to generate 'embers,' then instanced butterflies with a looped flying animation onto the ember particles to add an enchanting touch to the scene.

A great intro to Axiom Solver, as well as how to generate embers: https://www.youtube.com/watch?v=nJa5RbQK9zU&t=2s

The dragon fire, the largest and most time-consuming simulation in the entire project, was the last FX I worked on. The main challenge lay in the simulation's duration and iteration time. To expedite the process, I uploaded multiple files with varying pyro simulation parameters to Fox Renderfarm. This allowed me to utilize their resources while freeing up my local workstation, significantly improving efficiency. I highly recommend leveraging a powerful render farm service, such as Fox Renderfarm, for simulations if your local setup proves inadequate.

The simulated dragon fire was then utilized to ignite the tree fires through Attribute Transfer, followed by triggering the two pyro simulations.

Voxyde VFX has a free tutorial that goes through setting up a dragon fire-breath simulation in Houdini:

SideFX Content Library has a 'Village Fire’ setup that I used as a foundation for the tree fire simulations:

Environment

The environment is modeled with reference to the altar space of Albi Cathedral in France.

During the design phase of my environment, I was uncertain about how the characters would interact within the space, especially since I was initially working with a single set of character animations. Therefore, I aimed to keep the environment procedural and flexible, allowing me to adjust proportions, size, and extensions to fit my camera animation later on.

To begin, I traced the altar screen in Rhino—while Houdini can perform similar tasks, it lacks the precision of a CAD program. Next, I created several profile curves and exported them as NURBS curves in IGES format.

In Houdini, I imported these curves and used them as backbone curves for a 'Sweep’ operation, incorporating the profile curves I had drawn. This formed the foundation of my model.

25_procedural_modeling_nodegraph

Learning Houdini is never straightforward, but I've found procedural modeling to be particularly challenging, especially in the beginning. While many simulations offer shelf tools and presets that can be dissected and tweaked, procedural modeling lacks a standard recipe. However, the immediate feedback in the SOPs environment allows me to quickly iterate and test different methods and workflows for modeling the same object.

My workflow revolves around curve manipulation, relying heavily on a few key nodes.

One invaluable node is the 'Match Size’ node, which I frequently use to center objects to the origin or align them with each other. It's also handy for quickly scaling objects to specific sizes, a task much faster than using 'Transform’.

27_Match_Size

The second node I frequently rely on is 'Labs Extract Silhouette’. This node generates a 2D outline of the input geometry, which I often use to extract a backbone curve later used in 'Sweep’ operations. I find this node indispensable because it closely resembles duplicating curves or extracting border curves in Rhino, a workflow I've employed for many years.

Moreover, I often apply 'Polyextrude’ to the extracted silhouette. This allows me to use it as a 'Boolean’ shape for creating window openings, doors, and similar architectural features.

28_Labs_Extract_Silhouette

Another essential node in my workflow is 'PolyExpand2D’, which offsets curves by a specified distance. This is invaluable for adding procedural architectural details such as tile borders, skirtings, and trim.

29_PolyExpand2D

I also frequently utilize nodes like 'Sweep’, 'Polyextrude’, and 'Boolean’, which I've mentioned previously.

Assembly, Look-dev & Shot-framing

With the primary environment elements modeled, I began experimenting with different camera angles and animations.

Framing the shot proved to be quite challenging. Initially, I considered wrapping up the project with the two characters transforming against a black backdrop. However, after incorporating the environment, adjusting lighting, and experimenting with camera movements, I realized the scene lacked excitement and felt too brief.

To enhance the visual impact, I explored duplicating the characters using Instance objects. I positioned them symmetrically and asymmetrically, aiming to maximize the use of the two characters and their animations.

After numerous flip books and camera tests, I settled on the final animation. This version introduces two additional sets of character animations and features a dragon descending into the frame from off-screen.

Simulating additional characters did take time, but since the characters and their clothing were mostly identical, I could reuse the entire setup with minimal manual effort.

Once I had fine-tuned all the simulations, lighting became a significant challenge. Typically, cathedrals feature pendant lights and floor chandeliers as primary light sources. However, floor chandeliers, with their bulbs at approximately eye level, can be distracting. I strategically used them only during the first dance sequence to draw attention, but they weren’t suitable for the rest of the animation.

I explored the use of floor lanterns as an alternative. Traditional floor lanterns, however, appeared too minimalistic and did not match the atmosphere I aimed to convey. After conducting further research, I discovered fashion photo shoots online where set designers placed chandeliers on the ground. Inspired by this approach, I adopted a similar technique. By positioning chandeliers on the ground, the light attenuates from bottom to top, emphasizing the characters’ visual presence in the camera frame.

To enrich the scene further, I incorporated iClone’s French Style BuildingGen Content Pack as floor lanterns. I separated out the window panes and utilized them as Redshift Mesh lights. This addition provided visual interest without overly affecting global illumination.

iClone’s French Style BuildingGen Content Pack: https://www.reallusion.com/ContentStore/iClone/pack/building-gen/3d-building-french-style/default.html

Initially, the floor featured a checkerboard marble pattern. As the animation progressed, I noticed the white squares became overly prominent and distracting. To mitigate this, I applied a color gradient across the entire floor and blended it with the marble materials.

For added visual interest, I introduced kintsugi crack lines on the floor. These cracks suggest a stage that has weathered many rehearsals, perhaps under the weight of the dragon.

To accommodate the dragon, I used forced perspective: I pushed the cathedral's rear wall farther back but scaled it up so that it appears closer and smaller than it actually is.

37_forced_perspective

38_forced_perspective_plan_view
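The forced-perspective trick rests on simple geometry: scaling an object by the same factor as its distance from the camera leaves its angular size on screen unchanged. A quick check, with made-up dimensions:

```python
import math

# If the rear wall is pushed k times farther away and scaled k times larger,
# its apparent (angular) size from the camera is unchanged, while the space
# between camera and wall grows to fit the dragon.

def apparent_size(height, distance):
    """Angular size (radians) of an object of given height at given distance."""
    return 2 * math.atan(height / (2 * distance))

original = apparent_size(20.0, 30.0)          # 20 m wall, 30 m away
forced = apparent_size(20.0 * 3, 30.0 * 3)    # 3x farther, 3x larger
# the two angular sizes match: the wall reads identically on camera
```

On camera the enlarged, distant wall is indistinguishable from the original, which is exactly what makes the cheat invisible in the final frames.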

In terms of depth and atmosphere, I experimented with Redshift scattering, fog attenuation, VDB uniform fog and VDB clouds. Notably, Redshift's fog attenuation did not interact well with VDBs; activating Redshift fog significantly altered the appearance of other VDB elements. Therefore, I opted to use a uniform fog VDB with subtle noise.

Redshift Mesh Lights received a significant upgrade in version 3.6.0.1. Previously, I often needed additional fake area lights to reinforce practical light sources; now, the chandelier bulbs alone suffice. I used 'Object Merge' to bring all light bulbs into a single geo container, then used 'MOPs linear falloff' and 'clip by attribute' to animate all the lights collectively, streamlining the process.

40_lights

For additional realism, I incorporated subtle random camera shakes through the parent Null of the camera. Additionally, I introduced extra noise, synchronized with the dragon's fire-breathing sequences. I kept the camera movements minimal, mimicking the stability of heavy cinema cameras and avoiding the disorienting effect of chaotic, handheld filming.
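The shake itself can be sketched as a tiny procedural offset on the camera's parent transform. This is only an illustration of the idea; the frame ranges and amplitudes below are made-up values, not the ones used in the project:

```python
import math

# Camera-shake sketch: a low-amplitude wobble on the camera's parent null,
# with extra amplitude during the (hypothetical) fire-breath frame range.

def shake_offset(frame, base_amp=0.002, fire_frames=range(300, 360)):
    """Return a small (x, y) offset for the camera's parent null at a frame."""
    amp = base_amp * (4.0 if frame in fire_frames else 1.0)
    # two detuned sines give a cheap, smooth pseudo-random wobble
    x = amp * math.sin(frame * 0.37)
    y = amp * 0.6 * math.sin(frame * 0.51 + 1.3)
    return x, y

calm = shake_offset(100)   # idle shot: barely perceptible drift
fire = shake_offset(320)   # fire-breath shot: noticeably stronger shake
```

Keeping the base amplitude tiny is what preserves the heavy-cinema-camera feel; only the event-driven boost reads as impact.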

Rendering

From this point onward, my days were filled with overnight renders and fine-tuning based on the results. This phase was both enjoyable and incredibly time-consuming, as my workflow could only progress as quickly as my renders.

One recurring challenge I faced was with motion blur—specifically, my character's skin would sometimes penetrate through their clothing when I increased the Deformation Blur step beyond one. While I'm unsure of the official recommendations from Redshift or Houdini, I resolved this issue by transferring velocity data from the character to all garments. This simple adjustment effectively mitigated the problem.

43_enabling_Redshift_motion_blur

44_velocity_transfer
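The velocity transfer can be illustrated in plain Python (a sketch of the concept, not the actual Houdini node setup): each garment point borrows the velocity of its nearest character point, so the renderer's deformation blur moves skin and cloth together instead of letting them smear apart.

```python
# Nearest-neighbor velocity transfer from character points to garment points.
# Coordinates and velocities below are made-up illustration data.

def transfer_velocity(garment_pts, body_pts, body_vel):
    """For each garment point, copy the velocity of the closest body point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    out = []
    for g in garment_pts:
        nearest = min(range(len(body_pts)), key=lambda i: dist2(g, body_pts[i]))
        out.append(body_vel[nearest])
    return out

body = [(0.0, 1.0, 0.0), (0.0, 1.5, 0.0)]
vel = [(0.1, 0.0, 0.0), (0.3, 0.0, 0.0)]
garment = [(0.05, 1.48, 0.0)]  # a cloth point hugging the upper body point

result = transfer_velocity(garment, body, vel)  # [(0.3, 0.0, 0.0)]
```

With matching velocities, the blur vectors of cloth and skin agree at render time, which is what suppresses the skin-through-cloth artifact.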

Another persistent issue towards the project's end was light splotches and flickering caused by the irradiance cache. Redshift advises blending multiple frames or using Brute Force for both primary and secondary GI on specific objects to address this. However, these solutions added considerable rendering time atop an already demanding project that took up to 30 minutes per frame. Instead, I applied a technique I've used in architectural visualization projects—a single-frame irradiance cache with motion blur set to twice the total frame count. While not flawless, this method saved me significant trouble and rendering time.

45_irradiance_cache_settings_I

46_irradiance_cache_settings_II

Here are some key rendering tips I always keep in mind (TLDR):

  1. Always enable 'Motion Blur' settings when exporting animated OBJs as Redshift Proxies.
  2. Transfer velocity from characters to garments to prevent skin piercing through clothes when Deformation Blur is enabled.
  3. Keep Redshift's 'Sample Filtering' as low as possible to minimize fireflies; higher intensities increase render times.
  4. Use a single-frame irradiance cache when feasible to prevent light splotches or flickering.

47_Sample_Filtering

48_breakdown_clay_render_updated

The final animation was rendered on Fox Renderfarm using their cloud rendering service. I had assembled a fairly complex scene incorporating numerous custom HDAs, and Fox Renderfarm generously accommodated me by installing all of them to keep their renders consistent with my local setup. One thing to note: Fox Renderfarm's Houdini 20 is configured with an OCIO that differs from Redshift's official OCIO included with Houdini 20, resulting in minor but noticeable color variations. I had to reach out to Fox Renderfarm to customize my rendering settings to align with Redshift’s OCIO.

Post

After rendering the entire sequence, I used Neat Video in Premiere Pro for noise reduction and applied Red Giant Looks for subtle color correction. To enhance the final render with a softer, more cinematic feel, I introduced lens artifacts like halation and chromatic aberration.

49_Red_Giant_Looks

Afterthoughts

Final_Still_2

Final_Still_3

Transitioning to Houdini has unlocked numerous possibilities for both my personal and professional work. The level of iteration this project required to reach its final form would have been impossible without a procedural setup. Had I attempted it in Cinema 4D, the software's constraints would have made even two characters a struggle, likely with frequent crashes.

However, Houdini's potential can also present artistic challenges. While working in Cinema 4D imposed strict limitations on project scope and time, Houdini allowed me to continually expand this project—from one character animation to three sets, and from 7 seconds to 25 seconds. I even incorporated a fire-breathing dragon and delved into how the fire interacts with its environment. What started as a weekend experiment evolved into a four-month endeavor.

In my early days with Houdini, its procedural nature sometimes transformed what should have been an artistic experiment into a technical puzzle. I found myself frequently Googling solutions, troubleshooting issues, and optimizing topology and performance—tasks that were once foreign to me and diverted my focus from achieving the final render I envisioned.

Despite these technical hurdles, project "Henshin!" represents a significant milestone for me. Technically, it marks my transition from a destructive workflow to nearly full proceduralism. Artistically, it is my first exploration of modern, fast-paced dance movements juxtaposed against the backdrop of gothic fashion and architecture—a thematic and technical "transformation" that has profoundly shaped my approach to future projects.
