Animation Trends of 2024: The Rise of 3D and Hybrid Techniques
As we navigate through 2024, the art of animation continues to undergo transformative changes, driven by technological advancements and shifting audience preferences. We're observing significant shifts in the animation trends of 2024, with creativity being pushed to new heights.

One of the key animation trends of 2024 is the increasingly widespread use of 3D animation. Previously a novel concept, 3D animation has now become a mainstream technique across various media. This growth is attributed to its prowess in creating detailed, immersive environments, transcending traditional boundaries and enhancing viewer engagement with its depth and detail.

Image via Marvel

Moreover, the blending of 2D and 3D animation is one of the standout hybrid techniques gaining momentum. This innovative approach merges the nostalgic allure of 2D animation with the dynamic realism of 3D. These hybrid techniques offer a distinct aesthetic, giving creators new opportunities to experiment with textures and perspectives. Movies such as "Spider-Man: Into the Spider-Verse" and "The Mitchells vs. The Machines" are excellent examples of this trend, showcasing the possibilities of hybrid animation.

Image via Netflix

'Oni: Thunder God's Tale' - A Netflix Series Showcase

The Netflix series "Oni: Thunder God's Tale" exemplifies these emerging trends, blending various styles and techniques in a contemporary setting. The series is a visual celebration of hybrid animation, merging traditional methods with modern digital tools.

Image via Netflix

The creators of "Oni," in collaboration with Megalis VFX and other studios, have pushed the envelope in combining stop motion and CG environments. Their advanced hybrid approach, which involved deep images and compositing tricks, has set a new benchmark in animation, blurring the lines between traditional techniques and modern digital tools.
The result is a visually captivating series that respects traditional animation while embracing the advancements of modern technology.

Fox Renderfarm: Powering the Future of Animation

The seamless integration of different animation styles in "Oni: Thunder God's Tale" highlights the importance of powerful rendering solutions, and that's where Fox Renderfarm comes into play. As a leading render farm in the CG industry, Fox Renderfarm offers cloud rendering services that are essential for realizing projects like "Oni," which epitomize the animation trends of 2024. Its infrastructure delivers the precision and efficiency needed to render complex hybrid work, making it an invaluable asset in the animation industry's journey towards a more technologically advanced future.

The success of "Oni: Thunder God's Tale" and the integral role of Fox Renderfarm's cloud rendering services underscore the evolution of animation trends in 2024.
As we continue to witness groundbreaking developments in hybrid techniques and other innovations, Fox Renderfarm stands as a key enabler, powering the creative visions of animators across the globe.

Fox Renderfarm's Customers' Work

Some of Fox Renderfarm's other key benefits include:

- Rapid Rendering: Minimize delays in production with faster rendering capabilities.
- Batch Rendering: Handle large volumes of rendering tasks simultaneously.
- Cost-Effectiveness: Reduce expenses compared to maintaining local render farms.
- 24/7 Service: Constant availability to keep projects on schedule.
- Expert Technical Support: Professional assistance to ensure smooth operations.
- TPN-Accredited Vendor: Top-tier security and data protection.

In conclusion, as we delve into the fascinating world of animation in 2024, it is clear that the industry is moving towards a more integrated and technologically advanced future. The success of series like "Oni: Thunder God's Tale" and the pivotal role of cloud rendering services like those offered by Fox Renderfarm are testaments to this evolution. As we continue to explore new horizons in animation, Fox Renderfarm stands as a beacon, powering the dreams and aspirations of animators worldwide.
3D Animation Impact Mints Limited Edition Cricket Made by CGKatha Studios
CGKatha Studios has created a stunning 3D animation for Impact Mints to show cricket enthusiasts the limited edition cricket-themed mints. According to CGKatha Studios, the animation was created entirely in Blender, and they chose Fox Renderfarm as their high-speed cloud rendering solution. Without the excellent work of each team, it would have been impossible to turn this concept into reality.

Check out the Impact Mints limited edition cricket animation:

As a leading cloud rendering services provider and render farm, Fox Renderfarm has always provided fast and seamless rendering services to our users, including individual artists, small studios, large animation companies, and clients across the film, TV, VFX, advertising, and architecture industries.

Thanks again to CGKatha Studios for choosing Fox Renderfarm to deliver this impressive 3D animation!

CGKatha Studios: https://www.instagram.com/cgkatha_studios/
Peeling Back Layers: Exclusive Interview with the Young Minds Behind 3D Animation Short "Veggie Go Round"
Fox Renderfarm Interview
Veggie Go Round is a collaborative animated short film created by an Illustration and Digital Animation student team, Salad Bowl Studios, for their final-year graduation project at The One Academy. It's an action-packed comedy about a young boy who imagines the veggies on his plate as live monsters, merrily inspired by the universal childhood experience of growing up as a picky eater.

Watch Veggie Go Round

As a leading render farm and partner of The One Academy, Fox Renderfarm provided the cloud rendering service to the team and helped make this animated short film a reality. We're honored to have Violet Lee, the Director of Salad Bowl Studios, here for an exclusive interview. Let's look behind the scenes of Veggie Go Round.

Fox Renderfarm: Hi Violet, thank you so much for accepting our interview. First of all, please introduce yourself and your team to us.

Violet: Hello, thank you for the precious opportunity! I'm Violet Lee, the Director of Salad Bowl Studios. We are a team of twenty-eight 2D and 3D artists who created the short film Veggie Go Round, released earlier this year in June 2023.

Fox Renderfarm: Why did you call yourself Salad Bowl Studios? Is it a combination of all kinds of CG talent?

Violet: That's right! Salad Bowl Studios consists of various students majoring in Illustration, 3D Animation and 3D Modeling at The One Academy, Malaysia. The concept of a salad bowl is indeed our way of describing the bright mixture of CG talents that contributed to this short film about vegetables.

Fox Renderfarm: Could you tell us more about this short? And what was the inspiration?

Violet: Veggie Go Round is a cute and light-hearted comedy portraying the adventures of a young picky eater as he imagines himself being chased by vegetable monsters.
When creating this story, we took inspiration from our own childhood memories of disliking veggies, as we've found that many people around the world can relate to this experience.

Our head writer, Metta Soegiharto, deserves the main credit for this pitch, as she worked tirelessly to bring us the full development and visual flow of the script alongside co-writer Chi Hang and line producer Ly-Ann. To add a fun fact: we had originally imagined our film to have an eerie and sinister style like Cuphead! However, we ultimately decided on a cute short that's safe and friendly for young audiences.

Our initial idea had a darker art style. © Studio MDHR

The final execution ended up cute and funny. Meet the vegetable monsters! © Salad Bowl Studios

Fox Renderfarm: How did you design the characters of the little boy and the vegetables? Were there any references?

Violet: To approach each of our character designs, we start by describing a prompt for the character. For example, our boy is a "small mischievous boy, ginger hair with a hat". Then our illustrators begin compiling a variety of references to develop design concepts around this prompt.

Prompt: small mischievous boy, ginger hair with a hat.

A collection of sketches by our character artist team that captures the boy best. © Salad Bowl Studios

Once the most suitable idea was established, our lead character artist Erina Sufian proceeded with the final version of the boy's design.

Boy character sheet by Erina Sufian. © Salad Bowl Studios

Up next is the adaptation of the 2D design into a 3D model, executed and rigged by our lead modeler Hsin Wei.

Boy sculpt by Hsin Wei. © Salad Bowl Studios

Completed 3D model, rigged and textured by Hsin Wei. © Salad Bowl Studios

Demo of our boy's personality, animated by lead animator Ardelia Tiffany Yoga.
© Salad Bowl Studios

As for the rest of the cast, it was a slightly different process: we narrowed down our selection of iconic vegetables to make the prompt, before creating cute and charming designs from their unique shapes. Under the guidance of art director Ting Wei, here are some key points of our veggie development.

Studying the silhouettes of existing food characters for inspiration. © Salad Bowl Studios

Originally, we were sketching out eccentric and creepy veggie monsters for the thrilling chase scene. Later on, however, it became clear that cute, rounded designs were more visually appealing for general audiences, especially when viewed and animated from different angles. And that's how our adorable and energetic cast was born!

Three different render styles for our character cast, explored by Hui Ching, Joel Chong and Natasha Ng. © Salad Bowl Studios

Discovering the "personalities" of each veggie, illustrated by Ainsley Phua, Hui Ching, Natasha and Ting Wei. © Salad Bowl Studios

Demo of our veggie character lineup in 3D with distinct silhouettes and unique behaviors. The execution was a collaborative effort: modeled by Hsin Wei, Donovan and Nicholas, rigged by Alex and Vincent, and finally animated by Ardelia Tiffany Yoga, Ammar Syahiran, Violet Lee, Chloe Loh and JoJo Chan.

Fox Renderfarm: Could you introduce the pipeline/workflow of this project? What software or plug-ins did you use? And how long did it take you to finish the short?

Violet: To keep track of our progress and scheduling, we referenced the classic animation production pipeline to divide our workload across pre-production, production and post-production stages. Below is an illustrated example of how each task may be split.

Image © "3D Animation Essentials" by Andy Beane.

In the case of our film, we referred to this timeline with some minor deviations depending on the completion status of each phase.
Pre-production is largely focused on developing the story, character and environment designs in 2D so that the 3D team can begin sculpting and building the previsualization scene. An efficient workflow ensures that all the look development and 3D blocking is completed before moving on to the production stage, which focuses on polishing the animation, finalizing the model textures, and fixing any rigging issues that subsequently arise. Lastly, post-production is dedicated to lighting, rendering and compositing, as this process can take longer than expected to tweak and troubleshoot. This is also the stage where we prepare promotional materials for official release on various platforms.

Once the film's individual scenes have been stitched together with music, sound effects, visual adjustments and special effects, that's when we know our film is complete and ready.

Example of environment blocking in the pre-production stage, modeled by lead environment artist Nicholas. © Salad Bowl Studios

Render of a scene in the production stage with refined textures and lighting. © Salad Bowl Studios

Final result upon release, after compositing and VFX. © Salad Bowl Studios

The main software of choice for our production was Autodesk Maya, and relevant supplementary work was done in ZBrush, Adobe Photoshop, After Effects and Premiere Pro. Our line producers Ly-Ann and Chloe Loh collaborated to manage the timeline and arrange the task distribution for each team member.

It took us exactly a year, from May 2022 to May 2023, to complete our 4-minute short film, and we managed to speed things up by running processes concurrently, such as the 2D team finalizing the visual design while the 3D team constructed the previs.
The benefit of having a team of diverse talents is that you don't have to wait for one task to be complete before starting another; you can have multiple sectors simultaneously working on separate fragments to be combined later on.

Fox Renderfarm: The little boy has rich facial expressions and varied movements. How did you achieve this cute style through animation? Did the animators observe the behavior of children or take reference videos beforehand?

Violet: I'm very happy to hear that! Yes, to capture the cute vibe of the character, we began by compiling funny clips of children reacting to eating vegetables. However, as we proceeded to the 3D animation stage, we found that many references needed further refinement before they could be translated into our workflow. That was when we started recording our own facial expressions and movements to match the emotional progression of our boy, under the direction of our lead animator, Ardelia Tiffany Yoga.

Here is an example of using our own recordings to refine body language and timing, acted by Vincent Yeoh and translated to animation by Filbert Yohanes Sulivan. © Salad Bowl Studios

Another example, acted and animated by Ammar Syahiran, with more emphasis on pose silhouette and emotions. © Salad Bowl Studios

A difficulty we faced at this point was realizing how hard it is to transfer movement information from a properly-proportioned reference to a small character with tiny limbs; the boy needed many little steps for each of our big steps. Therefore, we referenced the stylized animation of 'Coin Operated' and the TinyTAN series to better match the small body proportions of our boy.

An inspiring short film with a bouncy character. (Two Ghosts Films "Coin Operated", Nicholas Arioli 2018.)

Chibi animation of BTS members. (TinyTAN "Dream ON", Big Hit Ent 2021.)

Fox Renderfarm: Did you guys run into any obstacles?
How were they resolved?

Violet: Working in 3D, technical troubles come up pretty frequently. Most of the obstacles we met were related to rigging and the occasional bug in the software (Maya), but with the guidance of our lecturers, we managed to learn the solutions needed to resolve them.

Unwanted mesh deformations can appear haunted in nature.

Eye constraints that bugged out due to a conflicting namespace.

Whenever we encounter an issue with a rig, we first consult our lead rigger Hsin Wei to see if the problem lies with the rigging process, the model topology, or the animator's settings. If the rig's structure looks fine in the Outliner, we test the rig to gauge how the mesh deforms with skin weights and blendshapes.

We also check the animator's rig settings to see if they may have used an older version of the rig or accidentally tweaked an attribute without knowing. However, sometimes the solution is as simple as making sure everyone is using the same version across the board. When we began production we started with Maya 2022.3, but when we upgraded to 2022.4 a few months later, some parts of the rig needed to be redone to accommodate the transition. Our veggie rigs in particular required a lot of manual re-rigging from scratch due to stylistic proportions that can't be replicated with automated rigging tools. Although necessary, it definitely caused delays when our animators needed to re-animate parts of the rig affected by the update.

Fox Renderfarm: This work is rich in color, and everything goes well together. How did you choose the color palette for different scenes? Please give one or two examples.

Violet: I'm glad you think so. While refining the story beats, we considered how the color progression could match the emotional state of the boy throughout his journey.
For the kitchen scenes, we wanted to give a safe, homely vibe like 'Wallace & Gromit' and 'Bao'.

Semi-realistic warm tone of a comfy kitchen, designed by Revvathy and modeled by Nicholas and Donovan. © Salad Bowl Studios

The Veggie Land, however, has a more stylized approach influenced by the films 'Trolls' and 'Blue Curry'. To capture the childlike disgust for vegetables, our illustrators took the initiative to explore crafts with an emphasis on color and textures.

Mood and texture experiments created by our illustrators. © Salad Bowl Studios

Color-focused explorations painted by Patrick Ch'ng, Shun Cheng, Revvathy, Ker Er and Ting Wei. © Salad Bowl Studios

After these explorations, we distilled our research into a cohesive color script that the lighters and compositors could refer to. Starting from a bright, imaginative wonder, the colors slowly turn more eerie and dark as the boy delves deeper into his adventure.

Color script as the story progresses, compiled with the guidance of Metta Soegiharto. © Salad Bowl Studios

Comparison of 2D key art and 3D execution. © Salad Bowl Studios

Fox Renderfarm: What are the future plans for Salad Bowl Studios? Are there any ongoing projects?

Violet: Thank you for your interest! For the time being, we have no further projects scheduled under Salad Bowl Studios, as our members are either focused on personal projects or have begun working in local CG production industries (Malaysia and Indonesia). As we are a student-run group, there is still much to learn when it comes to creating art in a professional environment. I'm proud to say that many of my teammates are working hard to gain creative experience and/or progress in their careers within their preferred specializations.

Salad Bowl Studios: an array of talents covering illustration, animation, modeling and rigging.

In addition, I'd like to add that many of us are individually open to contract work and freelance.
If you are interested in hiring our members for your vision or project, please email an enquiry to email@example.com or connect with us through Instagram via @veggiegoround.

Fox Renderfarm: On top of that, we'd like to know more about you, the director. How did you encounter CG?

Violet: As a kid, I was always a big fan of the cartoons that aired regularly on TV. Animating stories was my dream, and learning about how technology has advanced to enable 3D productions was very inspiring and humbling at the same time.

For example, when you're young, you think that greenscreens are magical enchanted backdrops. Only when you're older do you start to realize that the special effects contributing to a film's commercial success were manually created and layered by artists over months of hard work. For a perfect result, creative people are willing to go to extreme lengths to tell stories and break conventional limits. Although I'm most familiar with CG through Disney and Pixar storytelling, it's amazing to realize just how much the work of CG artists across animation, modeling and VFX actually makes up the backbone of most films we see today.

Fox Renderfarm: Any artist or artwork that inspires you the most?

Violet: Admittedly, it's not easy for me to name specific people, as many of the films I enjoy are created by large teams of highly skilled artists and creators who combine their unique strengths to produce a fantastic result. However, some examples of CG work I've been awed by lately are the immersive character-centric animation shorts produced by Blizzard and Riot Games, as well as the revolutionary experimental style of the Spider-Verse movie series.

A screenshot from one of the first game-animated shorts I watched a few years ago. The prospect of expanding in-game character lore through CG animation really took my breath away.
(Overwatch "Alive", Blizzard Ent 2016.)

A music video with fantastic execution, animated by French studio Fortiche Prod. Although these are in-game characters, the production team did a wonderful job bringing them to life. (K/DA "POP/STARS", Riot Games 2018.)

A film with beautifully stylized art direction, produced by Columbia Pictures and Sony Pictures Animation. The dedication to wit and detail in the film left an unforgettable impression. ("Spider-Man: Into the Spider-Verse", Marvel Ent 2018.)

Fox Renderfarm: Veggie Go Round was rendered with Fox Renderfarm. How do you feel about the cloud rendering services of Fox Renderfarm?

Violet: I recommend it for both personal and team-based projects! Fox Renderfarm was a very helpful asset in the making of 'Veggie Go Round', especially when our release date was just around the corner. Although I would consider our scenes less resource-intensive than many other 3D projects, we did have some shots with dynamic lighting and shadows that required at least a day or two to render on our best computers. However, Fox Renderfarm's service managed to render these shots within a matter of hours; it was so satisfying. Implementing it into our workflow sped up production and gave our compositors more opportunity to polish the film's final look.

Once your settings are established and ready to go, I definitely suggest trying out Fox Renderfarm to complete your rendering. A scene that takes an hour to render on a laptop can be completed within seconds, and the extra time you save can be dedicated to making adjustments for refinement.
I think all CG artists should get familiar with cloud-based rendering, and Fox Renderfarm is a good introductory service thanks to their helpful and proactive staff.

The render settings for our project file on Fox Renderfarm.

Fox Renderfarm: What would you like to say or share with the CG enthusiasts who support your work?

Violet: The message I would most like to leave for fellow CG enthusiasts is to extend more appreciation to your own creations and those of others. If we take a moment to truly appreciate the incredible privilege we have in being able to create and immerse ourselves in the world of CG, we will realize that behind every project lie countless hours of dedication and creativity.

It's essential to remember that we're not just crafting images; we're giving life to our ideas and passions. In this ever-evolving field, flexibility and adaptability are becoming increasingly valuable. Embrace the opportunity to diversify your skill set, but never forget the profound joy of creating something uniquely your own! Let's continue to support and inspire each other, nurturing our love for CG and pushing the boundaries of what's possible. Our shared journey is a testament to the beauty of human creativity.
C’Atelier, Stunning 3D Animation Produced by Leonstudio
Leonstudio has shared their latest animation project, C'Atelier, produced entirely in-house. In the animation, they mixed their two favorite styles: character animation with a bit of a stop-motion look, and photoreal jewelry.

C'Atelier was created by Leonstudio from look dev to production, including production design, technical direction and simulation, modeling, rigging, animation, lighting and rendering, compositing, and sound design.

C'Atelier © Leonstudio
C'Atelier © Leonstudio
C'Atelier © Leonstudio

Check out Leonstudio's new project:

Leonstudio is a CG and animation studio established in Spain in 2006. They specialize in animation, character development, and set design, and have a passion for the stop-motion, handmade, and miniature look. Learn more about Leonstudio's C'Atelier.

We are honored that Leonstudio used Fox Renderfarm's cloud rendering service to render this amazing animation!

One of the best solutions for accelerating CGI animation rendering is cloud rendering. Fox Renderfarm has been providing secure and fast cloud rendering services to our customers since 2011.

What is cloud rendering?

Cloud rendering is the process of uploading the files you need to render to the cloud over the Internet, then taking advantage of the huge clusters of computers built on the cloud platform to handle the rendering jobs for you. By using cloud rendering, you can free up local resources and get your work done faster.

As a leading render farm in the CG industry, Fox Renderfarm will continue to provide rapid and reliable cloud rendering services to our users.
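The cloud rendering process described above boils down to splitting a large frame range across many machines that render in parallel. Here is a minimal, purely illustrative Python sketch of that idea; the function name and all numbers are invented for demonstration, and a real service like Fox Renderfarm handles this scheduling automatically after you upload your scene.

```python
def split_frames(first, last, workers):
    """Divide an inclusive frame range as evenly as possible among workers."""
    total = last - first + 1
    base, extra = divmod(total, workers)
    chunks, start = [], first
    for i in range(workers):
        # The first `extra` workers each take one additional frame.
        size = base + (1 if i < extra else 0)
        if size == 0:
            break
        chunks.append((start, start + size - 1))
        start += size
    return chunks

if __name__ == "__main__":
    # A hypothetical 4-minute short at 24 fps is 5760 frames; spread over 100 nodes,
    # each node renders only 57 or 58 frames instead of all 5760.
    for chunk in split_frames(1, 5760, 100)[:3]:
        print(chunk)
```

Each worker renders its own chunk independently, which is why render time shrinks roughly in proportion to the number of machines.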
Fiery, a CGI Fantasy Made by Kay John Yim
Kay John Yim, an architect by day and CGI artist by night, has again created an amazing CGI animation, Fiery, which is the September featured entry in our FGT Art Campaign.

In John's Fiery, adorned in a fiery kimono, Kitsune and her feline companion journey through the ethereal realm beyond the torii gate, where they discover a radiant landscape of vibrant ginkgo trees, their path softly illuminated by a carpet of fallen leaves.

Final Still
Final Still
Final Still

In creating this beautiful animation, he used Houdini, Axiom Solver, Redshift, Marvelous Designer, Character Creator, iClone, Premiere and Rhino, and rendered with Fox Renderfarm's cloud rendering service to speed up the animation rendering and get the final images.

Breakdown
Breakdown
WIP (Rhino)
WIP (Houdini Vellum + Axiom Solver)
WIP (Axiom Solver)
WIP (Redshift)
WIP (Houdini Axiom Solver)
WIP (Houdini)
Final Still Clay Render
Final Still Clay Render
Final Still Clay Render

John has posted a YouTube video showing the animation and how it was made in detail.

In 3D animation production, complex scenes like John's may take several weeks to render on a local computer. With the resources of a render farm, however, where hundreds of computers render the animation simultaneously, the final render results can be obtained in a few hours. This is why many famous CGI animation studios choose cloud rendering to accelerate their rendering process.

A fast and secure cloud rendering service shortens the animation production cycle, accelerating the creation of works and providing opportunities to modify the final images. Since 2011, Fox Renderfarm has been committed to providing the best cloud rendering service, working together with our users to create excellent works.

Learn more about John: John's personal site, Instagram, ArtStation.
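The "weeks locally, hours on a farm" claim above is simple arithmetic: with N machines rendering frames in parallel, wall-clock time divides by roughly N. A small sketch with hypothetical numbers (the frame count, per-frame render time, and machine count are all invented for illustration, not taken from John's project):

```python
import math

def render_hours(frames, minutes_per_frame, machines=1):
    """Wall-clock hours, assuming frames are distributed evenly across machines."""
    frames_per_machine = math.ceil(frames / machines)
    return frames_per_machine * minutes_per_frame / 60

# Hypothetical heavy animation: 2000 frames at 15 minutes per frame.
local = render_hours(2000, 15)        # one workstation: 500 hours (about 3 weeks)
farm = render_hours(2000, 15, 200)    # 200 render nodes: 2.5 hours
print(f"local: {local / 24:.1f} days, farm: {farm:.1f} hours")
```

In practice the speedup is not perfectly linear (upload, scene preparation, and scheduling add overhead), but the order-of-magnitude change from weeks to hours is exactly what this scaling predicts.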
The Making of “Metamorphosis”, A 40-Second Slow Motion CGI Animation Created by Kay John Yim
The making of "Metamorphosis" was written by Kay John Yim, an architect by day and CGI artist by night. Although he is busy with his daily work as an architect, he always carries his passion for CG and insists on self-learning in his free time, which has led to the creation of many wonderful CG artworks.

Kay John Yim
Chartered Architect & CGI Artist
Kay John Yim's personal site: https://johnyim.com/
ArtStation: https://www.artstation.com/johnyim

As the leading cloud rendering service provider and render farm in the CG industry, Fox Renderfarm is committed to supporting the development of CG artists. Now let's take a look at how John created this masterpiece, "Metamorphosis".

OPENING

Project "Metamorphosis" came into being while I was exploring more efficient ways to model existing architecture as well as faster workflows to create heavy motion graphics.

It is a 40-second slow-motion CGI animation that depicts a solitary figure in the grandiose setting of a Neo-Gothic rotunda; as she waves her arms and twists her legs, golden branching patterns gradually reveal themselves on her dress and on the ceiling of the rotunda, creating a harmonic spectacle of light and motion.

At its core, the golden patterns that gradually appear on the dress and the ceiling symbolize the interconnectedness and interdependence of all things. "Metamorphosis" was inspired by my visit to the "Tribune" at Strawberry Hill House, a Gothic Revival villa in London constructed three centuries ago. It highlights my appreciation for Neo-Gothic architecture, as well as the passage of time and the fleeting nature of existence.

This article is a making-of journal that details everything I discovered and learned since "Ballerina" & "Kagura". While this is not meant to be a beginner-friendly tutorial, I will provide links to external tutorials or resources wherever possible.

WORKFLOW OVERVIEW

The software and workflow I used throughout every stage of the project can be summarized as follows:
1. MODELING: Polycam > Rhino > Moment of Inspiration (MOI) > Cinema 4D (C4D) > Redshift
2. CHARACTER: Character Creator 4.1 (CC) > iClone 8.1 (iC) > Houdini (for retime) > C4D > Redshift
3. CLOTH SIMULATION: Marvelous Designer (MD) > Houdini / C4D
4. CLOAK GROWTH ANIMATION: C4D > Redshift
5. DRESS GROWTH ANIMATION: Houdini > C4D > Redshift
6. CEILING ANIMATION: Rhino > MOI > Houdini > C4D > Redshift
7. RENDERING: Redshift > Neat Video (for denoising) > Premiere Pro (Magic Bullet Looks)

1. MODELING

The scene was modeled on the interior of Strawberry Hill House, a Gothic Revival villa in Twickenham, London, built in the 18th century. I was inspired by the "Tribune" rotunda, with its elaborate decorations. Since there were limited references for modeling the space, I visited the house in person and used Polycam's Lidar mode to photoscan the rotunda. The scanned model served as a reference for my modeling in Houdini and Rhino.

Processing the photoscan in Polycam

Once I exported the .obj model from Polycam, I opened it up in Houdini and used the "Clip" node to extract the parts I wanted to focus on at a time while modeling, which were then brought into Rhino.

My modeling process can be summarized as follows:

- using Polycam's "Lidar" mode to photoscan the room;
- clipping 1/8 of the model in Houdini with the "Clip" node;
- extracting the 3D silhouette of the model by manually selecting edge groups and using "Edge Group to Lines";
- using Rhino to trace and resample curves, creating the base geometry of the room with "Loft";
- flattening the ceiling into a plane using "Flow Along Surface", tracing the ornaments, and using "Sweep" to create a 3D model;
- conforming the plane back to the ceiling's 3D geometry with "Flow Along Surface";
- exporting the model as .3dm, importing it into MOI, and saving as .FBX;
- importing the .FBX into C4D;
- duplicating and mirroring the model 8 times to form the entirety of the rotunda.

Houdini model clipping

Rhino modeling

From Rhino to C4D

The modeling process took around 4 days, but without the Polycam model as a
reference, it could have taken twice as long.

2. CHARACTER

While my female character was originally created for "Ballerina" using CC3, I have since adopted CC4 and Redshift's Randomwalk Subsurface Scattering (SSS) to speed up my CG character rendering workflow. CC4's integrated "Skin-gen" allows for faster generation of realistic skin textures without using additional displacement maps from Texturing XYZ. My character texturing workflow has been simplified to a drag-and-drop process, with textures automatically exported when exporting to .FBX.

While the character creation process has become a lot easier in CC4, creating a convincing character purely from imagination is still very much impossible for me. I use the two websites below as my go-to references while adjusting character bone structures, face silhouettes and skin textures:

Stability AI
Daniel Boschung

The first website generates a non-existent person's face with AI every time the page is refreshed, while the latter has a collection of high-resolution (16k+) face photographs with good lighting.

The character was first posed in CC4 with a PureRef reference overlay, then brought over to iC, where I used "Edit Motion Layer" to add subtle animation to the arms and legs: essentially two keyed poses automatically blended together.

Posing with reference in CC4

iClone Edit Motion Layer

I then brought the character and the textures into C4D and Redshift.

Redshift's Randomwalk SSS (since v3.5.06) offers a more realistic model without sacrificing rendering speed. Randomwalk SSS simplifies skin material setup and produces better results under various lighting conditions. Previously, Redshift's ray-traced SSS required multiple texture layers and manual tweaking to achieve a decent skin material, which made character preparation time-consuming and required constant adjustments for animation sequences with drastic light changes.
Randomwalk SSS has been available in Arnold Renderer for years, but the Redshift update makes Randomwalk SSS much more efficient and therefore more feasible for animation.

I use Redshift's "Standard" material for the skin, applying the CC textures straightforwardly - the diffuse map goes to the diffuse slot, the roughness map to the reflection roughness slot, and so on. I set "Reflection Weight" to 0.5 and "Reflection IOR" to 1.38; for the "Coat", I set the "Weight" to 0.3 and the "Roughness" to 0.6. Under "Subsurface", I use the Skin Diffuse map from CC as the color and set "Radius" to a salmon color (similar to the color of one's underskin when viewed under direct illumination). I set "Scale" to 0.1 to represent the thickness of the skin, and use "Random Walk" mode with "Include Mode" set to "All Objects."

Redshift material graph

3. CLOTH SIMULATION
I used Marvelous Designer (MD) for garment creation and simulation. Despite improvements in C4D's and Houdini's cloth simulations, MD still offers the most control and the best quality, with the downside of being prone to crashing. To prevent accidental progress loss, I follow these rules when using MD:
- always save the scene without the avatar (heavier scenes are more likely to crash);
- delete unwanted simulations before saving or exporting;
- avoid GPU simulation, which frequently fails (likely due to VRAM allocation) and requires relaunching MD;
- import only the character's body for cloth simulation, using Houdini or C4D to delete the hair, eyes, limbs, etc. beforehand.

My garment is based on fishtail wedding dresses. I gathered existing sewing patterns, mostly from fashion magazines and Etsy, and placed them on the 2D plane in MD.
I then traced them out, sewed them in 3D space, and altered the patterns to fit my character. Throughout the garment creation process, I kept my character in either an "A" or "T" pose, which would later be replaced with the animated character exported from iC for cloth simulation.

I kept most of the default simulation settings, but changed a few for better performance: I set the garment's "Particle Distance" to 15, the material's "Internal Damping" to 100, and the material's "Friction" to 99. This eliminated jittering and improved simulation speed.

MD settings
MD sim

The simulated cloth was then exported as Alembic and brought into Houdini.

4. CLOAK GROWTH ANIMATION
The cloak growth animation is an improved version of the "Hakama Transformation" using "PolyFX" in C4D. I added noise to the polygon movement using "Target" and "Random Field" for a more organic aesthetic. My Mograph setup is as follows:
- making "PolyFX" a child of the cloak;
- applying a "Plain" effector to the "PolyFX" to affect the scale of individual polygons;
- adding a "Random Field (1)" to the Falloff of the "Plain" effector;
- applying another "Plain" effector to the "PolyFX" to affect the position of individual polygons, using the same "Random Field (1)" for its Falloff;
- adding a "Target" effector and creating a new "Random Field (2)" as the target;
- disconnecting all polygons of the cloak and adding a Vertex Weight tag to the cloak set to use Fields, with "Random Field (1)" as input;
- exporting as Alembic and applying a Redshift material, using an "RS Vertex Attribute" node with the cloak's Vertex Weight tag to drive Emission.

C4D mograph setup
Cloak growth iterations

Cineversity has a great tutorial, "Dissolve Effect Using PolyFX and Mograph Tools", which goes more in-depth and explains every step of the effect in great detail.
5. DRESS GROWTH ANIMATION
The dress pattern is generated within Houdini with "findshortestpath" and animated with "carve". The process can be broken down as follows:
- bringing the MD-simulated dress into Houdini using an "Alembic" node;
- using "timeshift" to freeze the dress at the first frame;
- creating a point group named "start" and selecting the points where the growth paths will start;
- creating another point group named "end" and selecting the points where the growth paths will end;
- using a "findshortestpath" node and assigning the groups created above to "Start Points" and "End Points" - the dress pattern is generated almost instantly at this point;
- assigning a "smooth" node after "findshortestpath" to make the pattern look more organic;
- assigning a "carve" node and animating "Second U" - this animates the dress patterns;
- assigning "ray" and "pointdeform" to project the animated dress pattern from the frozen dress back onto the animated dress;
- using a "sweep" node to create geometry out of the dress pattern curves;
- using a "Transform" node to scale the final geometry 10X and export it as Alembic.

Houdini node graph
Houdini

I highly recommend checking out Junichiro Horikawa's tutorial video on "Shortest Path Growth", where he covers everything about shortest path growth from the basics to advanced techniques:

Note: Houdini's default unit is meters while C4D's default is centimeters, so geometry exchanged between the two DCCs has to be scaled accordingly every time. I do not recommend changing the default units, since Redshift lighting and many other parameters only work correctly with the defaults.

6. CEILING ANIMATION
The ceiling animation - in theory - could be achieved using C4D's "PolyFX", but C4D's inability to handle high-poly meshes made it impractical.
Instead, I used Houdini and MOPs to replicate "PolyFX". MOPs (Motion Operators for Houdini) is an open-source Houdini toolkit that introduces a set of versatile nodes for quickly iterating on motion graphics, much like Cinema4D's Mograph tools. For anyone who has not used MOPs, I recommend watching the following introductory video:

There are many ways to optimize the geometry before animating it with MOPs; the primary steps I took are as follows:
- converting the meshes to VDBs and back to meshes to get evenly distributed polygons, with "Polygon to VDB" and "Convert VDB" respectively;
- using a "Divide" node to further divide the polygons into triangular or hexagonal patterns;
- using a "Facet" node after "Divide", checking "Unique Points" and turning on "Rotate to Template";
- assigning an "Attribute Wrangle" node with the following VEX, which adds a point at the position of each primitive (polygon), then removes the primitive and all points connected to it:
addpoint(0, @P);
removeprim(0, @primnum, 1);
- assigning a "MOPs_Shape_Falloff", which adds a falloff to control the scale and position of the polygons, similar to the "Plain" effector in C4D;
- animating the falloff;
- assigning a "MOPs_Noise_Modifier", which adds an additional layer of noise to the animation;
- assigning a "Primitive" node, checking "Do Transformation", then plugging the geometry from the "Facet" node into the first slot and the "MOPs_Noise_Modifier" into the second slot.

The animation is essentially complete at this point, subject to plenty of additional tweaking. I also added an "attributetransfer" to transfer "Cd" from the "MOPs_Transform_Modifier" to the animated geometry, which I later used in Redshift with "RS Vertex Attribute" to add emission.

Houdini node graph
Houdini timelapse

I highly recommend checking out Liam's "Professionally Incorrect" YouTube channel, where I learned how to create this particular effect, along with many other effects that replicate C4D's Mograph tools more efficiently:
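Outside Houdini, the effect of that wrangle - collapsing each polygon to a single point at its centroid so it can later be scattered or regrown - can be sketched in plain Python. The mesh representation below is a hypothetical stand-in for Houdini geometry, not the actual Houdini API:

```python
def prims_to_points(points, prims):
    """Collapse each primitive (polygon) to one point at its centroid,
    mimicking the addpoint()/removeprim() wrangle run over primitives."""
    centroids = []
    for prim in prims:  # each prim is a list of indices into `points`
        verts = [points[i] for i in prim]
        n = len(verts)
        # average the vertex positions component-wise
        centroids.append(tuple(sum(v[k] for v in verts) / n for k in range(3)))
    return centroids  # the original prims and their points are discarded

# A unit quad becomes a single point at its center.
quad_points = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(prims_to_points(quad_points, [[0, 1, 2, 3]]))  # [(0.5, 0.5, 0.0)]
```

In a Houdini primitive wrangle, @P evaluates to the primitive's centroid, which is why the one-line addpoint() call achieves the same result.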
7. RENDERING
"Metamorphosis" had the heaviest scenes of all my animation projects, with the architecture model totaling 4GB and the Alembic files over 1TB. To reduce geometry loading times while rendering, I exported each Alembic frame into RS Proxies, cutting loading time to seconds. I also used a single GI Irradiance Cache file for each sequence to further optimize rendering time. This "GI trick" only applies to sequences with slow animation and camera movement; it is a common technique I use for architectural visualization:
- set the render frame to the first frame of the sequence;
- under "Motion Blur", set "Transformation Steps" to 16;
- set "Frame Duration" to the total number of frames in the sequence times 2 - for instance, I would input 240 for a 120-frame sequence;
- set "Start" to 0;
- under "Global Illumination", set Mode to "Rebuild (Prepass Only)", saving to a single file;
- render.

Motion Blur settings for preparing the IPC

Once the IPC is rendered following the steps above, I "Load" the IPC file in my "Irradiance Point Cloud" settings and render out the sequence with just that single IPC file. For the final renders I use all the default settings, except for setting the Automatic Sampling Threshold to 0.03. This resulted in relatively noisy renders, which I then imported as sequences into Premiere Pro and denoised with Neat Video 5.

I kept most Neat Video settings at their default, "automatic" values; the only thing one has to pay attention to is to right-click Premiere Pro's viewport and make sure that "Playback Resolution" is set to "Full" - otherwise Neat Video will not sample the final renders at full resolution.
Neat Video's UI is fairly straightforward, but I recommend checking out the official tutorials to take full advantage of it.

Premiere Pro "Playback Resolution"
Neat Video

With the render sequences denoised, I used Red Giant Magic Bullet Looks in Premiere Pro to add imperfections that one would find in video footage and film - primarily chromatic aberration, halation and digital noise/film grain - to the final animation. This helped make the final animation appear more "cinematic" and nostalgic.

Magic Bullet Looks

Note: at the time of writing, a persistent NVIDIA driver issue caused Redshift to crash on long sequence renders. One of the long discussion threads around the issue is linked below:
https://www.facebook.com/groups/RedshiftRender/posts/1504427980006407/?mibextid=6NoCDW
While some artists found that downgrading to NVIDIA driver 462.59 worked, the only fix for me was disabling half of my GPUs.

CONCLUSION
"Metamorphosis" was created amid the rise of text-to-image AI, a time when many artists, including myself, were anxious about the future of visual arts. With AI able to produce beautiful images quickly and efficiently, I questioned the value of creating art manually. This project allowed me to clear my mind and explore new techniques. It took over three months and thousands of iterations to complete, during which I developed a deeper appreciation for Neo-Gothic architecture and fashion design. Perhaps more importantly, I came to realize the value of the creative journey and my intention behind creating CG art. Unlike AI-generated images, every pixel in my work - including "Metamorphosis" - was carefully considered and deliberately designed to express my passion for architecture, fashion and fantasy.

I hope this project will inspire others to pursue their own artistic endeavors and express themselves with CG, despite the rise of AI.
I hope it will remind them that the process of creating art is a uniquely human experience, and that the journey is often more important than the end result.
Corridor Digital Drops VFX Reveal of Anime Rock, Paper, Scissors
"Rock, Paper, Scissors" is a fascinating AI-assisted anime produced by the Corridor team. It tells the story of twin brothers fighting for control of a kingdom through a game of Rock, Paper, Scissors, and the brothers' antics and inner monologues are sure to make you laugh.

Watch "Anime Rock, Paper, Scissors":

The Corridor team has shared the VFX reveal of this short animation, and after watching it you may well be fascinated by their excellent technique.

© Corridor Digital

Check out the VFX reveal of "Anime Rock, Paper, Scissors":

Tips: Fox Renderfarm is a leading cloud rendering service provider and render farm in the CG industry. We offer a free $25 trial for every new user. Thanks for reading!

Source: Corridor Digital
The First Episode of 3D CG Series 'Let's Play with Monica and Friends' Available on YouTube
Monica and Friends (Turma da Mônica), a series beloved by the Brazilian people, is already a part of Brazilian culture - everyone from children to adults knows the lovely Monica and her friends. Let's Play with Monica and Friends (Vamos Brincar com a Turma da Mônica) is the first 3D cartoon series of Monica and Friends, produced by Mauricio de Sousa Produções (MSP), with animation by Hype Animation and sound by Ultrassom Music Ideas.

© Mauricio de Sousa Produções

The series premiered on Giga Gloob on October 12, 2022, with 52 episodes in total, and the first episode has been available on YouTube since February 6, 2023. The series focuses on stories for children, but who can say no to the lovely Monica?

Watch the first episode:

Source: Mauricio de Sousa Produções
Watch This Short to Celebrate Diwali Festival with Mickey!
Disney Junior released a "Me & Mickey" short to celebrate the Indian festival of Diwali.

Image via Disney Junior

"Me & Mickey" is a series of vlog-style shorts launched by Disney Junior, starring Mickey Mouse, who invites preschoolers to laugh and play along with familiar everyday topics from their lives, as well as take part in small games and challenges. In this short, Mickey shares with the kids what Diwali is and how to celebrate it, and wishes everyone a happy Diwali!

Image via Disney Junior

Watch the video and celebrate Diwali with Mickey:

Tips: As an industry-leading cloud rendering service provider and render farm, Fox Renderfarm is also offering a Diwali Special Offer for Indian users! दिवाली की हार्दिक शुभकामनाएं (Happy Diwali)!

Source: Disney Junior
The Making of “Kagura”, A Photorealistic CG Animation Created by Kay John Yim
The making-of tutorial article for "Kagura" was written by Kay John Yim for Fox Renderfarm, a leading cloud rendering service provider and GPU & CPU render farm in the CG industry. This June, John brought us a tutorial on the making of "Ballerina". This time, he shares another well-written article about how he made his new personal project "Kagura", which was also rendered with Fox Renderfarm, a powerful 3D animation render farm.

Kay John Yim
Chartered Architect & CGI Artist
Kay John Yim's personal site: https://johnyim.com/
ArtStation: https://www.artstation.com/johnyim

Project "Kagura" is a one-minute full-CG animation, my second personal project to feature an animated photorealistic CG character. The project concept is a fantasized version of Kagura (神楽) - a type of Shinto ritual ceremonial dance in Japan. Traditionally, the dancer herself turns into a god during the performance - here depicted as the dancer's ballerina tutu dress transforming into a hakama as she dances on the floating stage, purifying the spirits of nature.

© Kay John Yim

The article focuses primarily on shots 3 & 4 of project "Kagura", where I detail my design and technical process across the 4 main aspects below:
- The Architecture
- The Animation
- The Transformation
- Rendering

The software I used includes:
- Rhino
- Moment of Inspiration (MOI)
- Cinema 4D (C4D)
- Redshift (RS)
- Character Creator (CC)
- iClone
- Marvelous Designer 11 (MD)
- Houdini

© Kay John Yim

1. THE ARCHITECTURE
The architecture is loosely based on Ookawaso Hotel's lobby in Fukushima Prefecture, Japan.

PureRef board for the project

It was probably one of the most challenging interior spaces I have ever modeled, for the following reasons:
a. Most photographs available online focus on the floating stage and are thus quite limited in showing the actual space;
b. With no access to architectural drawings, I had to eyeball all the measurements from photographs;
c.
The space does not conform to a single orthogonal grid - for instance, the stairs and the 1F walkway do not align with the columns.

I first gauged the size of the space from the balustrade height - as a rule of thumb, balustrades are usually 1.1 meters tall (varying slightly with exterior vs. interior spaces and each country's building regulations). By estimation, the distance between columns is about 7.7 meters.

Estimating measurements from a photo

Looking at the orientation of the floating stage and the columns, I assumed that the space was designed on 2 sets of grids - a construction grid aligned with the columns (which structurally hold up the space) and a secondary grid diagonal to the construction grid (which serves only as a design grid). I drew up the construction grid uniformly (7.7 x 7.7 meters) and placed columns accordingly. Then I drew diagonal lines on top of the construction grid to get the secondary grid. This gave me a starting point for the floating stage as well as the 1F walkway.

Drawing up the grids, stairs and 1F walkway

A large portion of the architectural elements then instantly fell into place according to the grids I drew up. Having said that, the modeling process was not exactly straightforward. Given the lack of references (especially for the corner details), I spent most of the time re-designing and tweaking wall panel sizes and wall stud positions to reach proportions that were aesthetically pleasing.

Most elements fell into place according to the grids
Modeling by grid

I then exported the Rhino model to .3dm, opened it up in MOI and exported it again to FBX. Doing so gave me clean quad meshes that I could easily edit and UV-map in C4D.

Model turntable

While the majority of the space took less than a week to model, I spent an additional month solely on fine-tuning the details, tweaking the lighting, and framing a composition that I was satisfied with.

Render iterations
The final composition
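The proportional reasoning behind that kind of estimate can be sketched in a few lines of Python. The pixel measurements here are made-up numbers for illustration, not values taken from the actual reference photos:

```python
def estimate_length(known_real, known_px, target_px):
    """Estimate a real-world length from photo measurements by simple
    proportion, assuming both features sit at a similar distance from
    the camera (a rough but useful on-site heuristic)."""
    return known_real * (target_px / known_px)

# Hypothetical example: a 1.1 m balustrade measures 100 px in the photo,
# and the column spacing measures 700 px in the same image.
spacing = estimate_length(1.1, 100, 700)
print(round(spacing, 2))  # 7.7
```

The assumption of similar camera distance is what makes this only an estimate; perspective distortion is why the author still spent time tweaking proportions by eye.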
2. THE ANIMATION
1) Character Animation
The character animation was based on mocap animation found on the Reallusion Marketplace. I kept my animation workflow as simple as possible - in fact, I used exclusively "Set Speed" and "Edit Motion Layer" in iClone to reach the final character animation. First, I imported my CC character into iClone, applied the mocap animation to the character (by drag & drop), and altered the speed with "Set Speed" to create a slow-motion look.

Slowing down mocap animation in iClone with "Set Speed"

Note: please see my previous article for CG character creation: Ballerina: A CGI Fantasy Created by Kay John Yim

Altering the speed, however, exaggerated a lot of movement and looked distracting, so I played the character animation on loop and deleted the keyframes I found unnecessary. I then used "Edit Motion Layer" to lift the arms and modify the finger positions.

Edit Motion Layer

2) Garment Prep
Once I had a decent character animation, I moved on to MD and CC to prepare the garments for animation and simulation. Cloth simulation in Marvelous Designer is extremely finicky - multiple layers of clothing too close together cause a lot of jittering, which can take countless re-simulations to resolve. For this reason, I separated the two sets of Marvelous Designer garments (ballet tutu & hakama) into 2 categories: skintight vs. loose garments.

The skintight garments would be animated in CC & iClone, a technique most commonly used in game production. This technique excels in speed but falls short of MD in simulating loose garment details. The skintight garments in this project included:
a. ballet tutu leotard
b. hakama inner layer

Skintight garments

The remaining loose garments would be simulated in MD:
a. ballet tutu skirt
b.
hakama excluding the inner layer

Loose garments

Note: garment creation is beyond the scope of this article; I highly recommend Marvelous Designer's official YouTube channel for anyone getting started in MD:

3) Skintight Garment Animation
My preparation of the garments in CC was as follows:
a. export the garment from MD to FBX in a T-pose
b. import the FBX into CC via "Create Accessories"
c. assign "Skin Weight"
d. export to iClone
The skintight garment would then be automatically applied to the animated character in iClone.

Ballet tutu leotard animation in iClone

4) Loose Garment Simulation
In general, MD simulates garments better on the CPU (over the GPU) when there are multiple layers of clothing. Having separated the tutu leotard from the tutu skirt in this particular case, I found GPU simulation actually gave a cleaner and faster result than the CPU alone.

Ballet tutu skirt simulation

For the hakama, I wanted a calm but otherworldly aesthetic, so I reduced "Gravity" under "Simulation Settings" to 0 and raised "Air Damping" to 5. This resulted in constantly floating sleeves and a clear silhouette throughout the animation.

Hakama simulation

With all the garments animated and simulated, I exported them as separate Alembic files. The character was exported from iClone as an animated FBX.

5) Post-simulation Clean-up in Houdini
Garments simulated in MD can sometimes end up with too much detail or polygons with messy connectivity. The former I personally find distracting, and the latter causes problems down the line in C4D when used in combination with "Cloth Surface". I imported the Alembic files into Houdini and used "Attribute Blur" to smooth out the garments, eliminating extra wrinkles.

3. THE TRANSFORMATION
1) Setting up the Camera
Having imported the character FBX and all the Alembic files into C4D, I then moved on to setting up my camera based on the character animation.
This prevented me from spending extra time working on details that would not be visible in the final shot. I used "PSR" under "Constraint" to bind the camera's height to the character's "neck" position - doing so stabilized the camera and avoided distracting movements.

2) Tutu Dress to Hakama
The transformation of the tutu dress into the hakama was driven by a combination of "PolyFX" and animated Fields within C4D.

Working with "PolyFX"

C4D's "PolyFX" breaks objects down into their polygons - any Mograph effectors assigned thereafter will affect the object on a per-polygon basis rather than affecting the object as a whole. I assigned a "PolyFX", a "Random Effector", a "Plain Effector" and a "Spherical Field" to each of the following parts:
- tutu leotard
- tutu skirt
- hakama sleeve
- hakama top (outer layer)
- hakama top (inner layer)
- hakama bottom
Each "Spherical Field" was then bound to the character's "pelvis" bone, with the specific steps as follows:
a. create a "PolyFX";
b. group the "PolyFX" with the garment;
c. assign a "Plain Effector" to the "PolyFX" and set its scale parameter to -1;
d. assign a "Spherical Field" to the "Plain Effector";
e. assign a "Random Effector" to the "PolyFX", drag the same "Spherical Field" created above under the "Random Effector", and set all rotation parameters to 360;
f. create a Null;
g. assign a "Constraint" to the Null;
h. under the Constraint, check "PSR";
i. search for "pelvis" in the Object Manager and drag it under the "PSR";
j. place a "Spherical Field" under the Null;
k.
now the "Spherical Field" will move along with the character.

With the "Spherical Field" bound to the character, I animated the sizes of the "Spherical Fields" and tweaked the timing so that the different garment parts gradually scale down or up by their polygon divisions.

A breakdown of the animated ballet tutu dress in the Object Manager

Note: when in doubt, press "SHIFT-C" and type in the Mograph tool or function you are looking for - I use "SHIFT-C" all the time in C4D.

Garment animation driven by "PolyFX"

3) Tutu Skirt to Butterfly
In addition to the garment transformation driven by "PolyFX", I added an extra layer of animation with a "Cloner" of animated butterflies - this created the illusion of the tutu skirt disintegrating into a swarm of butterflies and flying away. I used an animated butterfly created by Travis David (download linked below), cloned onto the simulated tutu skirt and driven in scale by a "Plain Effector" to make the butterflies appear and disappear in flow with the "PolyFX" animation.

Garment transformation with butterfly "Cloner"

For the final rendering, I added "Cloth Surface" and "Subdivision" to each garment part to break the polygons into even smaller pieces - this resulted in the illusion of the tutu dress disintegrating and subsequently reintegrating into the hakama. Technically speaking, it was a relatively simple animation; the most challenging parts were the timing and developing an aesthetic that flowed naturally with the character's movement. The 10 seconds of transformation alone took me more than 2 months to reach the final version; I was constantly adjusting the Spherical Fields' animation through the plugin "Signal", rendering the viewport sequence, tweaking and re-rendering over and over again. "Cloth Surface" and "Subdivision" are computationally expensive - each viewport frame took at least 2 minutes to process, totalling about 10 minutes per viewport sequence render.

Iterations
Final shot 3 breakdown
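Conceptually, a "Plain Effector" with a "Spherical Field" falloff just maps each polygon's distance from the field's center to an effect strength. The sketch below is my own simplification of that idea (the function names and the linear falloff are illustrative assumptions, not C4D's exact implementation):

```python
import math

def spherical_falloff(poly_center, field_center, radius):
    """Effect strength in [0, 1]: 1 at the field's center,
    fading linearly to 0 at the sphere's radius."""
    d = math.dist(poly_center, field_center)
    return max(0.0, 1.0 - d / radius)

def polygon_scale(poly_center, field_center, radius, effector_scale=-1.0):
    """Per-polygon scale factor: an effector scale of -1 shrinks polygons
    to zero size where the falloff strength is 1, leaving polygons
    outside the field untouched."""
    return 1.0 + effector_scale * spherical_falloff(poly_center, field_center, radius)

# A polygon at the field's center vanishes; one outside is unaffected.
print(polygon_scale((0, 0, 0), (0, 0, 0), 2.0))  # 0.0
print(polygon_scale((5, 0, 0), (0, 0, 0), 2.0))  # 1.0
```

Animating the field's radius over time, as the author does by binding the field to the pelvis and keyframing its size, sweeps this zero-scale region across the garment.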
4. RENDERING
1) Texturing
I kept my texturing workflow fairly simple - apart from the characters, I used Megascans materials and foliage in the final renders.

2) Redshift Limitations & Workarounds
Though Redshift is my favorite offline renderer for its unmatched rendering speed, there were a few limitations regarding Motion Blur and Cloner/Matrix that I had to work around in preparation for the final rendering.

"Motion Blur" - "Deformation Blur" to be specific - contributes to the realism of CG animation. However, there is a known limitation whereby Redshift automatically disables "Deformation Blur" on "PolyFX" objects, which causes glitches (objects look as if they pass through each other) in the final render if "Deformation Blur" is turned on globally. While keeping global "Deformation Blur" on, I added a Redshift Object tag to every character and garment object and unchecked "Deformation Blur" on those RS Object tags.

On the other hand, while "Cloner" and "Matrix" both serve the purpose of cloning objects, they differ in viewport feedback and rendering speed. "Cloner" has the advantage of what-you-see-is-what-you-get viewport feedback, as opposed to "Matrix", where you have to render out the frame to see the final result. Rendering-wise, "Matrix" is rendered by Redshift much more efficiently than "Cloner" - taking shot 4 for instance, the final render duration per frame was 3 hours using exclusively "Cloner", as opposed to 2.5 hours using exclusively "Matrix". Hence, I used "Cloner" while working on the shot composition and used "Swap Cloner/Matrix" to replace all "Cloner" objects with "Matrix" for the final render.

"Cloner" viewport feedback
"Matrix" viewport feedback

3) Redshift Environment
I used a Redshift Environment to give all the shots an atmospheric and mysterious look; it also helped convey the depth of the scene, especially in a busy composition like shot 4. The Redshift Environment's Volume Material was driven in height by two "Nulls"; a fake
"Spot Light" directly above the dancing character and two "Area Lights" from below the stage also contributed to the Redshift Environment.

Note: please see Redshift's official tutorial on driving a Redshift Environment with "Nulls":

4) Redshift Proxies
Having finalized the look of the shots, I exported as many objects as possible into Redshift Proxies for rendering efficiency. I used "Redshift Proxy Exporter" to batch-export objects - this saved me a lot of time, especially when exporting foliage.

Note: Redshift Proxy Exporter link: https://realaturtur.gumroad.com/l/rsproxyexporterc4d

With everything replaced by Redshift Proxies, my final render time per frame went from 2.5 hours down to 2 hours.

5. CONCLUSION
"Kagura" is by far the most challenging personal project I have ever done, since I had little to no experience in motion graphics or character animation half a year ago. I learned along the way as I worked on "Kagura" and "Ballerina", all through trial and error, rendering out iteration after iteration over the past 6 months. With Reallusion and Fox Renderfarm's support, I eventually brought "Kagura" to life, and it has been the most rewarding project since I began my CGI journey.

For any self-taught CG artist out there like myself, constantly struggling to level up their quality and skill set, I would like to share a quote by American novelist Anne Lamott - it originally refers to writing, but it deeply resonated with me as an artist: creating art is like driving a car at night. "You can see only as far as your headlights, but you can make the whole trip that way." You don't have to see where you're going, you don't have to see your destination or everything you will pass along the way. You just have to see two or three feet ahead of you.
Ballerina: A CGI Fantasy Created by Kay John Yim
As an architect, Kay John Yim is busy with his daily work, but driven by his passion for CG art, he has taught himself more than 30 CG software packages and plug-ins in 2-3 years and created many fantastic CG works in his spare time. His artworks are rich in detail, magnificent, delicate, and full of romantic imagination.

John's recent works © Kay John Yim

Kay John Yim
Chartered Architect & CGI Artist

John grew up in Hong Kong and graduated from the University of Bath (UK) with a degree in Science/Architectural Studies, and was an exchange student in Architecture at Delft University of Technology (Netherlands). After graduation, he studied architecture at the Architectural Association School of Architecture. He is currently an architect at Spink Partners, a well-known British architectural design firm.

Kay John Yim's personal site: https://johnyim.com/
ArtStation: https://www.artstation.com/johnyim

The making-of tutorial article for "Ballerina" was written by Kay John Yim for Fox Renderfarm, a leading cloud rendering service provider and GPU & CPU render farm:

Project "Ballerina" is a 30-second full-CG animation, my first personal project to feature an animated photorealistic CG character, staged within a grand Baroque rotunda lounge.

Ballerina © Kay John Yim

The animation is a representation of my inner struggles in all artistic pursuits, both metaphorically and literally. Ballet, an art form widely known for its stringent standards of beauty and high susceptibility to public and self-criticism, is a metaphor for my daily professional and artistic practice. As an architect by day, I work on architectural visualizations, where every detail is scrutinized by my colleagues, senior architects and clients. As an artist by night, I work on personal CG projects, for which I do hundreds and up to thousands of iterations to get the perfect compositions and color schemes.
No matter how proficient I become in my professional and artistic skills, the inner struggle never fades away.

Ballerina © Kay John Yim

The project was also literally a technical struggle - every step of the CG character creation process was alien to me. When I started working on the project, I struggled to find a comprehensive guide to creating photorealistic character animation - almost every article or tutorial I came across was either too specialized or too impractical for an indie CG artist to follow. Through months of trial and error, I have since learned a lot about efficient character animation and rendering. This article is an intermediate guide for any indie artist like myself who wants to take their CG art to the next level. As much as I would love to cater to everyone, it is practically impossible to cover the nuts and bolts of every piece of software I use, so I have included links to tutorials and resources wherever possible for beginners to follow along.

Ballerina © Kay John Yim

The guide is divided into 4 main parts:
- The Architecture
- The Character
- The Animation
- Rendering

The software I used includes:
- Rhino
- Moment of Inspiration 4 (MOI)
- Cinema4D (C4D)
- Redshift (RS)
- Character Creator 3 (CC3)
- iClone
- ZBrush & ZWrap
- XNormal
- Marvelous Designer 11 (MD)
- Houdini

1. THE ARCHITECTURE
My primary software for architectural modeling is Rhino. There are many ways to approach architectural modeling, but having used dozens of CAD and DCC packages as an architect, I find Rhino arguably the best architectural modeling software for its accuracy and versatility. Rhino's main advantage over more popular DCCs like Cinema4D (C4D) or Houdini is its capability to handle very detailed curves in large quantities. As an architect, every model I build starts with a curve - usually in the shape of a wall, cornice or skirting section - swept along another curve of a plan.
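As a rough illustration of what a sweep does, the sketch below translates a 2D profile along a path polyline to generate the vertex rings of a swept solid. It is a deliberately naive version under stated assumptions - it ignores the profile's rotation along the path, which Rhino's "Sweep" handles properly - and the profile values are hypothetical:

```python
def naive_sweep(profile, path):
    """Place a copy of a 2D profile (list of (x, z) section points)
    at each (x, y, z) point of a path, producing rings of vertices
    that could then be skinned into a swept surface."""
    rings = []
    for px, py, pz in path:
        # offset each profile point by the current path position
        rings.append([(px + u, py, pz + v) for u, v in profile])
    return rings

# Hypothetical skirting profile swept along a straight wall base.
profile = [(0.0, 0.0), (0.02, 0.0), (0.02, 0.1), (0.0, 0.12)]
path = [(0, 0, 0), (0, 1, 0), (0, 2, 0)]
rings = naive_sweep(profile, path)
print(len(rings), len(rings[0]))  # 3 4
```

A real sweep additionally orients each profile copy perpendicular to the path's tangent, which is what makes curved rails like cornice plans work.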
Rhino's command list might seem overwhelming at first, but I almost exclusively used a dozen of them to turn curves into 3D geometry:
- Rebuild
- Trim
- Blend
- Sweep
- Extrude
- Sweep 2 Rails
- Flow Along Surface
- Surface from Network of Curves

The key to architectural modeling is to always use references wherever possible. I always have PureRef open in the bottom right corner of my screen to make sure I model in the correct proportions and scale. My references usually include actual photos and architectural drawings.

For this particular project I used the Amalienburg hunting lodge in Munich as my primary reference for the architecture.

PureRef board for the project

I downloaded as many high-res references as possible, including photos from different camera angles and in different lighting and weather conditions. This gave me a wide range of details to work with, as well as a general idea of the space relative to human scale.

While the architecture consisted of 3 parts - the rotunda, the hallway and the end wall - they were essentially the same module. Hence I initially modeled one wall module consisting of a mirror and a window, then duplicated and bent it along a circle to get the walls of the rotunda.

Rhino modeling always begins with curves

wall module duplicated and bent along a curve

The module was reused for both the hallway and the end wall to save time and (rendering) memory.

Having built up a library of architectural profiles and ornaments over the past year, I was able to reuse and recycle profiles and ornaments for the modeling of the architecture.

Ornament modeling can be a daunting task, but with a couple of ornaments modeled I simply duplicated and rearranged them geometrically to get unique shapes.

Rhino ornament placement

The ceiling ornament, for instance, was basically a single ornament that covered 1/8 of the dome surface, radially duplicated 8 times to cover the entire ceiling.
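The radial duplication step boils down to a simple list of rotation transforms - here is a sketch (plain Python, with a hypothetical ornament reference point) of rotating one 1/8 dome segment around the vertical axis at 45-degree increments:

```python
import math

# Radially duplicate one ornament segment around the dome's center axis.
# One segment covers 1/8 of the dome, so 8 copies at 360/8 = 45 degree steps.

def radial_copies(point, copies=8):
    """Rotate (x, y) about the origin, returning one copy per step."""
    step = 2 * math.pi / copies
    return [
        (point[0] * math.cos(i * step) - point[1] * math.sin(i * step),
         point[0] * math.sin(i * step) + point[1] * math.cos(i * step))
        for i in range(copies)
    ]

# Hypothetical ornament reference point, 150 cm from the dome center
positions = radial_copies((150.0, 0.0))
print(len(positions))  # 8 copies, the first at the original position
```

In Rhino this is a single ArrayPolar command; the point is simply that one modeled segment is enough to cover the whole ceiling.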
The same technique also applied to the modeling of the chandelier.

All the objects within Rhino were then assigned to different layers by material; this made material assignment a lot easier later on in C4D.

assigning objects to layers by material

Notes:
- The best way to get familiar with Rhino navigation is to model small-scale objects. Simply Rhino has a great beginner's series on modeling a teapot in Rhino.
- I have posted a few WIP montages on my Youtube channel; while not meant to be tutorials, they should give an overview of my modeling process: https://www.youtube.com/c/jyjohnyim
- A detailed Rhino tutorial for modeling ornaments is also available.
- For anyone in a pinch, there are pre-built ornaments for purchase on 3D model stores like Textures.com; some ornament manufacturers have free models available for download on Sketchfab and 3dsky.

Exporting from Rhino to C4D

After 4 days of architectural modeling, the Rhino model eventually consisted of 50% NURBS and 50% mesh. I used NURBS mostly for the primary architectural elements (walls, cornices, skirtings) and mesh for the ornaments.

Rhino is primarily a NURBS (Non-Uniform Rational B-Splines) package; and although NURBS models are very accurate in representing curve and surface data, most render engines and DCCs do not support NURBS.

For this reason I exported the NURBS and meshes to .3dm and .FBX respectively, and used Moment of Inspiration (MOI) to convert the NURBS model to a mesh. MOI has the best NURBS-to-quad-mesh conversion (over Rhino or any other DCC) - it always gives a clean mesh that can then be easily edited or UV-mapped for rendering.

exporting from MOI

Importing into C4D

Importing the FBX file into C4D was relatively straightforward, but there were a couple of things I paid attention to, notably the import settings, the model orientation and the file unit, listed below in order of operation:
- open up a new project in C4D (project unit in cm);
- merge FBX;
- check "Geometry" and "Material" in the merge panel;
- change the imported geometry's orientation (P) by -90 degrees in the Y-axis;
- use the script "AT Group All Materials" to automatically organize Rhino materials into different groups.

importing FBX exported from MOI

importing FBX exported directly from Rhino

I modeled half of the architecture in Rhino and then mirrored it as an instance in C4D, since everything is symmetrical.

C4D instance & mirroring

The floor (Versailles parquet tiles) was modeled using the photo-texturing method most widely touted by CG artist Ian Hubert. I applied a Versailles parquet tile photo as a texture on a plane, then sliced up the plane with the "Knife" tool to get reflection roughness variations along the tile grouts. This allowed me to add subtle color and dirt variations with Curvature in Redshift.

The floor tile was then placed under a Cloner to be duplicated and spanned over the entire floor.

Cloning floor tiles

Notes:
- C4D and Rhino use different Y and Z orientations, hence FBX exported directly from Rhino has to be rotated in C4D.
- Download link for the "AT Group All Materials" script: http://www.architwister.com/portfolio/c4d-script-group-materials/
- Ian Hubert's Youtube channel has a lot of useful and efficient CG techniques, photo-texturing being one of the most popular: https://www.youtube.com/c/mrdodobird/videos

Architectural Shading (Cinema4D + Redshift)

Since I had grouped all the meshes by material in advance, assigning materials was as simple as dragging and dropping onto the material groups as cubic or tri-planar maps.

I used Textures.com, Greyscalegorilla's EMC material pack and Quixel Megascans as base materials for all my shaders.

For ACES to work correctly within Redshift, every texture has to be manually assigned to the correct color space in the RS Texture node; generally, diffuse/albedo maps belong to "sRGB", and the rest (roughness, displacement, normal maps) belong to "Raw".

My architectural shaders were mostly a 50/50 mix of photo texture and "dirt" texture to give an extra hint of realism.

RS Shader Graph of the wall material

2. THE CHARACTER

The base character was created in Character Creator 3 (CC3) with the Ultimate Morphs and SkinGen plugins - both of which are very artist-friendly, with self-explanatory parameters. Ultimate Morphs provided precise slider controls over every bone and muscle size of the character, while SkinGen gave a wide range of presets for skin color, skin texture detail and makeup.

I also used CC3's Hair Builder to apply a game-ready hair mesh to my character.

CC3 morphing & Hair Builder

Face Texturing

The face was one of the most important parts of the CG character and required extra attention. The best workflow I found for adding photorealistic detail was the "Killer workflow" using Texturing XYZ's VFACE model and ZWrap.

VFACE is a collection of state-of-the-art photogrammetry human head models produced by Texturing XYZ; every VFACE comes with 16K photoscanned skin textures, displacement and utility maps. ZWrap is a ZBrush plugin that allows one to automatically fit a pre-existing topology to a custom model.

The "Killer workflow" essentially matches the VFACE mesh shape to the CC3 head model; once the 2 mesh shapes were matched up, I was able to bake all the VFACE details down to the CC3 head model.

My adaptation of the "Killer workflow" can be broken down as follows:
- export the T-posed character from CC3 to C4D;
- delete all polygons except the head of the CC3 character;
- export both the CC3 head model and the VFACE model to ZBrush;
- use the Move/Smooth brushes to maneuver the VFACE model to fit as closely as possible to the CC3 head model;
- launch ZWrap, then click and match as many points as possible, notably around the nose, eyes, mouth and ears;
- let ZWrap process the matched-up points;
- ZWrap should then be able to output a VFACE model that matches the CC3 head model perfectly;
- feed both models into XNormal and bake the VFACE textures onto the CC3 head model.

matching points of VFACE (left) & CC3 head (right) in ZWrap

Notes:
- The full "Killer Workflow" tutorial is on Texturing XYZ's official Youtube channel.
- I recommend saving the matching points in ZWrap before processing.
- I also recommend baking the VFACE maps individually in XNormal, as they are very high-res and could crash XNormal when baked in batch.

Skin Shading (Cinema4D + Redshift)

Once I had the XYZ texture maps ready, I exported the rest of the character texture maps from CC3. After that, I imported the character into C4D and converted all the materials to Redshift materials.

At the time of writing, Redshift unfortunately did not yet support Randomwalk SSS (a very realistic and physically accurate subsurface scattering model found in other renderers like Arnold), and hence required a lot more tweaking when it came to rendering skin. The 3 levels of subsurface scattering were driven by a single diffuse map with different "Color Correct" settings.

RS Shader Graph of the "Leg" material

The head shader was a mix of both the CC3 textures and the VFACE textures; the VFACE multichannel displacement was blended with the "microskin" CC3 displacement map.

RS Shader Graph of the "Head" material

Character look-dev

close-up render of the character

A "Redshift Object" was applied to the character to enable displacement - only then would the VFACE displacements show up in renders.

Note: Skin shading is one of the most advanced aspects of rendering.
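The "one diffuse map, three SSS layers" idea can be sketched as deriving shallow, mid and deep scatter colors from a single base skin tone via per-layer color adjustments. Below is a plain-Python stand-in for those "Color Correct" nodes; the gain values are made up for illustration and are not the actual shader settings.

```python
# Sketch of driving 3 SSS layers from one diffuse sample: each layer gets
# its own color correction of the same base tone. Gains are hypothetical.

def color_correct(rgb, gain):
    """Multiply each channel by a per-layer gain, clamped to 0..1."""
    return tuple(min(1.0, c * g) for c, g in zip(rgb, gain))

base_skin = (0.80, 0.55, 0.45)  # hypothetical albedo sample

sss_layers = {
    # shallow scattering stays close to the surface tone
    "shallow": color_correct(base_skin, (1.00, 0.95, 0.90)),
    # mid layer shifts warmer as light penetrates
    "mid":     color_correct(base_skin, (1.05, 0.75, 0.60)),
    # deep layer is dominated by blood, i.e. strongly red
    "deep":    color_correct(base_skin, (1.10, 0.40, 0.30)),
}
```

The same reasoning applies inside the Redshift shader graph: one texture node fans out into several Color Correct nodes, one per scattering depth.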
Linked below is one of the most helpful tutorials for Redshift skin shading.

Hair Shading

Having experimented with grooming in C4D Ornatrix, Maya XGen and Houdini, I decided that using the baked hair mesh from CC3 for project "Ballerina" was leaps and bounds more efficient down the line.

I used a Redshift "glass" material with the CC3 hair texture maps fed into the "Reflection" and "Refraction" color slots, as hair (in real life) reacts to light like tiny glass tubes.

Note: For anyone interested in taking the CC3 hair to the next level of realism, CGcircuit has a great Vellum tutorial dedicated to hair generation and simulation.

early test of CC3 mesh hair to hair geometry conversion in Houdini

3. THE ANIMATION

Character Animation (iClone)

I then exported the CC3 character to iClone for animation.

I considered a few ways to approach realistic character animation:
- using off-the-shelf mocap data (Mixamo, Reallusion ActorCore);
- commissioning a mocap studio to do bespoke mocap animation;
- using a mocap suit (e.g. Rokoko or Xsens) for custom mocap animation;
- old-school keyframing.

Having experimented with various off-the-shelf mocap data, I found Mixamo mocaps way too generic, most of them looking very robotic; Reallusion ActorCore had some very realistic motions, but I could not find exactly what I needed for the project. With no budget and (my) very specific character motion requirements, options 2 and 3 were out of the picture. This led me to old-school keyframing.

First I screen-captured videos of ballet performances and laid them out frame by frame in PureRef.
I then overlaid the PureRef reference (at half opacity) over iClone, and adjusted every character joint to match my reference using "Edit Motion Layer".

Pose 1

Pose 2

final character animation

The animated characters were then exported to Alembic files.

NOTE: While my final project concept depicted ballerinas in slow motion, my original idea was actually to keyframe a 20-second ballet dance, which I very quickly realized was a bad idea for a number of reasons:
- in slow motion a lot of frames can be interpolated, but real-time motion involves a lot of unique frames and hence requires a lot more tweaking;
- more unique frames in turn meant more rendering problems (flickering, tessellation issues, etc.).

early test render of my original idea

Considering this was my first character animation project, I settled on a slow-motion style sequence instead - 2 unique poses with 160 frames of motion each.

Garment Simulation

Cloth simulation was by far the most challenging part of the project.

The two major cloth solvers I considered were Marvelous Designer (MD) and Houdini Vellum. While Houdini Vellum was much more versatile and reliable than Marvelous Designer, I personally found it way too slow and therefore impractical without a farm (one frame of cloth simulation could take up to 3 minutes in Houdini Vellum vs. 30 seconds in Marvelous Designer on a Threadripper PRO 3955WX with 128GB of RAM).

Cloth simulation in MD, while generally a lot quicker to set up than Houdini Vellum, was not as straightforward as I had imagined. Simulated garments in MD always came with some form of glitch, including cloth jittering, piercing through the character, or complete dislocations.
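Glitched frames like these can often be caught before rendering by checking how far each vertex travels between consecutive frames of the cached simulation. Below is a small sketch of that idea (plain Python with made-up vertex data; this is not part of the MD/Houdini toolchain): frames whose maximum per-vertex displacement spikes above a threshold get flagged for cleanup.

```python
# Flag cloth frames whose vertices jump suspiciously far between frames -
# a cheap way to spot jitter/dislocation glitches in a cached simulation.

def flag_glitch_frames(frames, max_step=2.0):
    """frames: list of per-frame vertex lists [(x, y, z), ...].
    Returns indices of frames whose largest vertex jump exceeds max_step."""
    flagged = []
    for i in range(1, len(frames)):
        worst = max(
            sum((a - b) ** 2 for a, b in zip(v0, v1)) ** 0.5
            for v0, v1 in zip(frames[i - 1], frames[i])
        )
        if worst > max_step:
            flagged.append(i)
    return flagged

# Toy cache: 4 frames of 2 vertices; frame 2 "explodes" by ~50 units
cache = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0)],
    [(50.0, 0.0, 0.0), (1.2, 0.0, 0.0)],  # dislocation glitch
    [(0.3, 0.0, 0.0), (1.3, 0.0, 0.0)],
]
print(flag_glitch_frames(cache))  # [2, 3]
```

The frame after the spike is flagged too, since the cloth snaps back just as violently - both ends of a dislocation need attention.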
Below are some of the settings I tweaked to minimize glitches:
- using "Tack" to attach parts of the garment to the character;
- increasing the cloth's "Density" and "Air Damping" to prevent the garment from moving too fast and subsequently out of place;
- simulating parts of the garment in isolation - though not physically accurate, this allowed me to iterate and debug a lot quicker.

I also reduced "Gravity" in addition to the above tweaks to achieve a slow-motion look.

MD Simulation Settings

MD simulation

Note: Due to the license agreement of a sewing pattern I used, I am not able to share screenshots of my garment creation process. However, the official Marvelous Designer Youtube channel has a lot of garment modeling livestreams, which I find to be the most helpful resource for learning MD. Alternatively, there are a lot of ready-made 3D garments available online (notably on Marvelous Designer's official site and the ArtStation Marketplace), which I have used as a basis for many of my projects.

MD is extremely prone to crashing, and there is a bug in both MD10 and MD11 that prevents the saving of simulated garments 90% of the time, so always export simulated garments as Alembic files rather than relying on MD to save the simulation.

Simulation Clean-up

After dozens of simulations, I imported the MD-exported Alembic files into Houdini, where I did a lot of manual clean-up, including:
- manually fixing cloth-character collisions with "Soft Transform";
- reducing simulation glitches with "Attribute Blur";
- blending the preferable simulations from different Alembic files together with "Time Blend".

cleaning up simulated cloth in Houdini with "Soft Transform"

There are two tutorials that explain the Houdini cloth clean-up process in great detail, which I watched on a loop while working on the project:
- Cloth Production in Houdini: https://www.cgcircuit.com/tutorial/houdini-cloth-in-production
- Houdini Vellum Creature Setup: https://www.cgcircuit.com/tutorial/houdini-vellum-creature-setup

The cleaned-up cloth simulation was then exported as Alembic to C4D.

Alternative to Garment Simulation

For anyone frustrated by impractical Houdini Vellum simulation times and MD glitches, an alternative is to literally attach the garment to the character's skin in CC3 - a technique most commonly found in game production.

attaching garment to character in CC3

While this is a great time-saving alternative, garments created in CC3 lack realistic cloth movement and wrinkles; I recommend using this method only for objects tightly attached to the character (shoes), or as a last resort for garments if MD cloth simulation keeps failing.

Note: Linked below is Reallusion's official guide for creating game-ready garments: https://manual.reallusion.com/Character_Creator_3/ENU/3/Content/Character_Creator_3/3/08_Cloth/Creating_Custom_Clothes_OBJ.htm

Garment Baking and Shading

Once I was done with the cloth simulation in MD and the clean-up in Houdini, I imported the Alembic file into C4D.

MD Alembic files always show up in C4D as a single Alembic object without any selection sets, which makes material assignment impossible. This is where C4D baking came into play - a process I used to convert the Alembic file into a C4D object with PLA (Point Level Animation):
- drag the Alembic object into the C4D timeline;
- go to "Functions";
- "Bake Objects";
- check "PLA";
- then bake.

Following the steps above, I got a baked-down C4D object on which I could easily select polygons and assign multiple materials using selection sets. I then exported an OBJ file from MD with materials, imported it into C4D, and dragged the selection sets directly onto the baked-down garment object.
This eliminated the need to manually reassign materials in C4D.

I used a blend of linen texture maps (from Quixel Megascans Bridge) and the Redshift Car Shader to emulate the sequin fabric (think "bling") found in a lot of professional ballet tutu dresses.

close-up render of the fabric material

Note: Youtuber Travis Davis has a tutorial demonstrating the exact procedure.

WARNING: do not use AO or Curvature nodes in materials for simulated garments (or any animated object), as they can produce glitches in final renders.

4. RENDERING

Lighting & Environment

Although I tried to keep my lighting as minimal as possible, project "Ballerina" inevitably required a lot of tinkering due to its nighttime setting.

The nighttime HDRI did not provide sufficient ambient light for the interior space, and the chandelier bulbs were way too dim to act as the primary light source. Ultimately I placed an invisible spot light under the center chandelier and added a fake light that affected only the architectural ornaments. The fake light provided an extra level of bounce light that gave just the right amount of illumination without ruining the moody atmosphere.

I also added a "Redshift Environment", controlled in the Z axis and multiplied with "Maxon Noise", to give more depth to the scene.

Exterior-wise, I scattered 2 variations of dogwood trees with C4D's "Matrix" object in the surrounding area.
They were lit from the ground up to give the scene extra depth.

In summary, the lighting of the scene included:
- Dome Light (nighttime HDRI) x 1
- chandelier (mesh lights) x 3
- Spot Light (center) x 1
- exterior Area Lights x 4
- fake Area Light positioned under the chandelier (includes architectural ornaments only) x 1

RS lights

Notes:
- Redshift has a very good tutorial on Youtube on controlling the Redshift Environment.
- The trees were generated with SpeedTree.
- Lighting takes a lot of consistent practice to master; apart from my daily CG practice, I spent a lot of time watching b-rolls and breakdowns of movies - for instance, I took a lot of inspiration from Roger Deakins's lighting and cinematography, as well as Wes Anderson's frame compositions and color combinations.

Camera Movements

All my camera movements were very subtle. They included dolly, camera roll and panning shots, all driven with Greyscalegorilla's C4D plugin Signal. I personally prefer using Signal for its non-destructive nature, but old-school keyframing would work just fine for similar camera movements.

Signal Graph

Draft Renders

Once I had the character animations, cloth simulations and camera movements ready, I began doing low-res test renders to make sure there would be no surprises during the final renders. This included:
- flipbook (OpenGL) renders to ensure the timing of the animations was optimal;
- low-res, low-sample full-sequence renders to ensure there were no glitches;
- full-res (2K), high-sample still renders with AOVs (diffuse, reflection, refraction, volume) to check what contributed to any prevalent noise;
- submitting test renders to Fox Renderfarm to ensure the final renders matched up with my local renders.

This process lasted over 2 months, with iteration after iteration of renders and corrections.

close-up shot I

close-up shot II

final shot

Final Renders & Denoising

I used a relatively high-sample render setting for the final renders, as interior scenes in Redshift are generally prone to noise.

RS final render settings

I also had motion blur and bokeh turned on for the final renders - in general, motion blur and bokeh look better (more physically accurate) rendered in-engine than added in compositing.

Half of the final 2K sequence was rendered on a local workstation, while the rest was rendered on Fox Renderfarm, totalling about 6,840 hours of render time on dual RTX 3090 machines.

I used Neat Video to denoise the final shot, whereas the close-up shots were denoised with Altus Single (in Redshift).

Note: Always turn "Random Noise Pattern" off under Redshift's "Unified Sampling" when using "Altus Single" for denoising.

Redshift Rendering GI Trick

Redshift's GI Irradiance Cache calculation can be quite costly; my final renders, for instance, averaged 5 minutes of GI Irradiance Caching time per frame.

V-Ray has an option in its IR/LC settings named "use camera path", designed specifically for scenes where the camera moves through a still scene. With "use camera path" enabled, V-Ray calculates only one frame of GI cache for an entire sequence. There is a Redshift forum post written by Andrian that explains how he was able to replicate the same function in Redshift.

Borrowing a page from V-Ray, I used the following motion blur settings to calculate the first frame of the Irradiance Cache:

RS rendering GI trick motion blur settings

That one Irradiance Cache was then used to render the entire sequence.
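The payoff of the single-cache trick is easy to estimate from the 5-minutes-per-frame figure above. A quick back-of-the-envelope sketch in Python, with a hypothetical 320-frame sequence (the actual frame counts per shot aren't stated here):

```python
# Estimate time saved by computing the GI Irradiance Cache once instead of
# per frame. The 5 min/frame figure is from the project; the frame count
# below is hypothetical.

gi_minutes_per_frame = 5
frames = 320  # e.g. two 160-frame shots

per_frame_cost = gi_minutes_per_frame * frames  # caching every frame
single_cache_cost = gi_minutes_per_frame * 1    # caching frame 1 only

saved_hours = (per_frame_cost - single_cache_cost) / 60
print(round(saved_hours, 1))  # ~26.6 hours of GI caching avoided
```

The saving scales linearly with sequence length, which is why the trick matters most for long shots with a slowly moving camera.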
Two shots of the project were rendered using one single GI cache, resulting in a roughly 10% faster render time overall.

NOTE: The GI trick only applies to shots with very little motion; when I applied it to the 2 close-up shots of project "Ballerina", for example, I got light patches and ghosting on the character's skin.

Conclusion

Having spent months working on the project, I have gained an appreciation for traditional character animators - I never realized the amount of effort involved in crafting character animations, nor the subtlety of detail required to bring convincing CG characters to life.

Though I would not consider myself a character artist, I think character animation is really powerful in making CG environments relatable, and it will therefore remain an essential part of my personal CG pursuits moving forward.