As an architect, Kay John Yim is busy with his daily work. But with his passion for CG art, he has taught himself more than 30 pieces of CG software and plug-ins in 2-3 years, and has created many fantastic CG works in his spare time. His artworks are rich in detail: magnificent, delicate, and full of romantic imagination.

[Image: Collection of works - John's recent works © Kay John Yim]

Kay John Yim, Chartered Architect & CGI Artist

John grew up in Hong Kong and graduated from the University of Bath (UK) with a degree in Science/Architectural Studies. He was an exchange student in Architecture at Delft University of Technology (Netherlands). After graduation, he studied architecture at the Architectural Association School of Architecture. He is currently an architect at Spink Partners, a well-known British architectural design firm.

Kay John Yim's personal site: https://johnyim.com/
ArtStation: https://www.artstation.com/johnyim

The making-of tutorial article for "Ballerina" was written by Kay John Yim for Fox Renderfarm, a leading cloud rendering service provider and GPU & CPU render farm.

Project "Ballerina" is a 30-second full-CG animation, my first personal project to feature an animated photorealistic CG character staged within a grand Baroque rotunda lounge.

[Image: Ballerina © Kay John Yim]

The animation is a representation of my inner struggles in all artistic pursuits, both metaphorically and literally. Ballet, an art form widely known for its stringent standards of beauty and high susceptibility to public and self-criticism, is the metaphor for my daily professional and artistic practice. As an architect by day, I work on architectural visualizations, where every detail is scrutinized by my colleagues, senior architects and clients. As an artist by night, I work on personal CG projects, for which I would do hundreds and up to thousands of iterations to get the perfect compositions and color schemes.
No matter how proficient I become in my professional and artistic skills, the inner struggle never fades away.

[Final render still image 1: Ballerina © Kay John Yim]

The project was also literally a technical struggle - every step of the CG character creation process was alien to me. When I started working on the project, I struggled to find a comprehensive guide for creating photorealistic character animation - almost every article or tutorial I came across was either too specialized or too impractical for an indie CG artist to follow. Through months of trial and error, I have since learned a lot about efficient character animation and rendering.

This article is an intermediate guide for any indie artists like myself who want to take their CG art to the next level. As much as I would love to cater the guide to everyone, it is practically impossible to cover the nuts and bolts of every piece of software I use, hence I have included links to tutorials or resources wherever possible for beginners to follow along.

[Final render still image 2: Ballerina © Kay John Yim]

The guide is divided into 4 main parts:
- The Architecture
- The Character
- The Animation
- Rendering

Software I used includes:
- Rhino
- Moment of Inspiration 4 (MOI)
- Cinema4D (C4D)
- Redshift (RS)
- Character Creator 3 (CC3)
- iClone
- ZBrush & ZWrap
- XNormal
- Marvelous Designer 11 (MD)
- Houdini

1. THE ARCHITECTURE

My primary software for architectural modeling is Rhino. There are many different ways to approach architectural modeling, but having used dozens of CAD and DCC software as an architect, I find Rhino arguably the best architectural modeling software for its accuracy and versatility. Rhino's main advantage over more popular DCCs like Cinema4D (C4D) or Houdini is its capability in handling very detailed curves in large quantities. As an architect, every model I build starts with a curve - usually a wall, cornice or skirting section - swept along another curve of a plan.
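This profile-along-rail idea can be illustrated outside Rhino as well. Below is a minimal Python sketch - pure point math, not Rhino's actual `Sweep` command - that places copies of a 2D profile at every point of a straight rail, which is conceptually how a sweep or extrusion builds a surface:

```python
# Illustrative only: a simplified "sweep" of a 2D profile along a straight rail.
# Real Rhino sweeps operate on NURBS curves and arbitrary rails; this is plain point math.

def sweep_profile(profile, rail):
    """Place a copy of a 2D profile (x, z pairs) at every rail point (x, y)."""
    surface = []
    for rx, ry in rail:
        # Each rail point carries one full copy of the profile.
        row = [(rx + px, ry, pz) for px, pz in profile]
        surface.append(row)
    return surface

# A tiny skirting-like profile swept along a 3-point rail:
profile = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.2)]
rail = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)]
surface = sweep_profile(profile, rail)  # 3 rows of 3 points each
```

The resulting grid of points is the skeleton of a swept surface; a real sweep additionally interpolates smoothly between the rows.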
Rhino's command list might seem overwhelming at first, but I almost exclusively used a dozen of them to turn curves into 3D geometry:
- Rebuild
- Trim
- Blend
- Sweep
- Extrude
- Sweep 2 Rails
- Flow Along Surface
- Surface from Network of Curves

The key to architectural modeling is to always use reference wherever possible. I always have PureRef open at the bottom-right corner of my screen to make sure I model in correct proportions and scale. My references usually include actual photos and architectural drawings; for this particular project I used the Amalienburg hunting lodge in Munich as my primary reference for the architecture.

[Screenshot: PureRef board for the project]

I downloaded as many high-res references as possible, including photos from different camera angles and under different lighting and weather conditions. This gave me a wide range of details to work with, as well as a general idea of the space relative to human scale.

While the architecture consisted of 3 parts - the rotunda, the hallway and the end wall - they were essentially the same module. Hence I initially modeled one wall module consisting of a mirror and a window, then duplicated and bent it along a circle to get the walls of the rotunda.

[GIF: Rhino modeling always begins with curves]
[GIF: wall module duplicated and bent along a curve]

The module was reused for both the hallway and the end wall to save time and (rendering) memory. Having built up a library of architectural profiles and ornaments over the past year, I was able to reuse and recycle them for the modeling of the architecture.

Ornament modeling can be a daunting task, but with a couple of ornaments modeled I simply duplicated and rearranged them geometrically to get unique shapes.

[GIF: Rhino ornament placement]

The ceiling ornament, for instance, was basically a single ornament that covered 1/8 of the dome surface, radially duplicated 8 times to cover the entire ceiling.
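This duplicate-and-rotate step is plain geometry. A minimal Python sketch - illustrative only, not Rhino's actual polar array command - of rotating one ornament's anchor point around the dome centre in eight 45° steps:

```python
import math

def radial_copies(point, copies=8, center=(0.0, 0.0)):
    """Rotate a 2D anchor point around a centre, yielding `copies` evenly spaced duplicates."""
    cx, cy = center
    px, py = point[0] - cx, point[1] - cy
    result = []
    for k in range(copies):
        a = 2.0 * math.pi * k / copies  # 45-degree steps for 8 copies
        x = px * math.cos(a) - py * math.sin(a)
        y = px * math.sin(a) + py * math.cos(a)
        # Round to hide floating-point dust at the cardinal angles.
        result.append((round(cx + x, 6), round(cy + y, 6)))
    return result

# One ornament anchored at (1, 0) becomes 8 anchors around the dome centre:
anchors = radial_copies((1.0, 0.0))
```

In Rhino the same result comes from a single polar array of the ornament geometry; the point here is only that one modeled ornament covers the whole dome.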
The same technique also applied to the modeling of the chandelier.

All the objects within Rhino were then assigned to different layers by material; this made material assignment a lot easier later on in C4D.

[GIF: assigning objects to layers by material]

Notes:
- The best way to get familiar with Rhino navigation is to model small-scale objects. Simply Rhino has a great beginner's series on modeling a teapot in Rhino.
- I have posted a few WIP montages on my YouTube channel; while not meant to be tutorials, they should give an overview of my modeling process: https://www.youtube.com/c/jyjohnyim
- A detailed Rhino tutorial for modeling ornaments is also available.
- For anyone in a pinch, there are pre-built ornaments for purchase on 3D model stores like Textures.com; some ornament manufacturers have free models available for download on Sketchfab and 3dsky.

Exporting from Rhino to C4D

After 4 days of architectural modeling, the Rhino model eventually consisted of 50% NURBS and 50% mesh. I used NURBS mostly for the primary architectural elements (walls, cornices, skirtings) and mesh for the ornaments. Rhino is primarily a NURBS (Non-Uniform Rational B-Splines) software, and although NURBS models are very accurate in representing curve and surface data, most render engines and DCCs do not support NURBS. For this reason I exported the NURBS and meshes to .3dm and .FBX respectively, and used Moment of Inspiration (MOI) to convert the NURBS model to a mesh. MOI has the best NURBS-to-quad-mesh conversion (over Rhino or any other DCC) - it always gives a clean mesh that can then be easily edited or UV-mapped for rendering.

[GIF: exporting from MOI]

Importing into C4D

Importing the FBX file into C4D was relatively straightforward, but there were a couple of things I paid attention to, notably the import settings, the model orientation and the file unit, listed below in order of operation:
1) open up a new project in C4D (project unit in cm);
2) merge FBX;
3) check "Geometry" and "Material" in the merge panel;
4) change the imported geometry's orientation (P) by -90 degrees in the Y-axis;
5) use the script "AT Group All Materials" to automatically organize Rhino materials into different groups.

[GIF: importing FBX exported from MOI]
[GIF: importing FBX exported directly from Rhino]

Since everything is symmetrical, I modeled half of the architecture in Rhino and then mirrored it as an instance in C4D.

[GIF: C4D instance & mirroring]

The floor (Versailles parquet tiles) was modeled using the photo-texturing method most widely touted by CG artist Ian Hubert. I applied a Versailles parquet tile photo as a texture on a plane, then sliced up the plane with the "Knife" tool to get reflection roughness variations along the tile grouts. This allowed me to add subtle color and dirt variations with Curvature in Redshift. The floor tile was then placed under a Cloner to be duplicated and spanned over the entire floor.

[GIF: cloning floor tiles]

Notes:
- C4D and Rhino use different Y and Z orientations, hence FBX exported directly from Rhino has to be rotated in C4D.
- Download link for the "AT Group All Materials" script: http://www.architwister.com/portfolio/c4d-script-group-materials/
- Ian Hubert's YouTube channel has a lot of useful and efficient CG techniques, photo-texturing being one of the most popular: https://www.youtube.com/c/mrdodobird/videos

Architectural Shading (Cinema4D + Redshift)

Since I had grouped all the meshes by material in advance, assigning materials was as simple as dragging and dropping onto the material groups as cubic or tri-planar maps. I used Textures.com, Greyscalegorilla's EMC material pack and Quixel Megascans as base materials for all my shaders.

For ACES to work correctly within Redshift, every texture has to be manually assigned to the correct color space in the RS Texture node; generally diffuse/albedo maps belong to "sRGB", and the rest (roughness, displacement, normal maps) belong to "Raw".

My architectural shaders were mostly a 50/50 mix of photo texture and "dirt" texture to give an extra hint of realism.

[Screenshot: RS Shader Graph of the wall material]

2. THE CHARACTER

The base character was created in Character Creator 3 (CC3) with the Ultimate Morphs and SkinGen plugins - both of which are very artist-friendly, with self-explanatory parameters. Ultimate Morphs provided precise slider controls over every bone and muscle size of the character, while SkinGen gave a wide range of presets for skin color, skin texture detail and makeup. I also used CC3's Hair Builder to apply a game-ready hair mesh to my character.

[GIF: CC3 morphing & Hair Builder]

Face Texturing

The face was one of the most important parts of the CG character and required extra attention. The best workflow I found for adding photorealistic detail was the "Killer workflow" using Texturing XYZ's VFACE model and ZWrap. VFACE is a collection of state-of-the-art photogrammetry human head models produced by Texturing XYZ; every VFACE comes with 16K photoscanned skin textures, displacement and utility maps. ZWrap is a ZBrush plugin that allows one to automatically fit a pre-existing topology to a custom model. The "Killer workflow" essentially matches the VFACE mesh shape to the CC3 head model; once the 2 mesh shapes are matched up, all the VFACE details can be baked down to the CC3 head model.

My adaptation of the "Killer workflow" can be broken down as follows:
1) export the T-posed character from CC3 to C4D;
2) delete all polygons except the head of the CC3 character;
3) export both the CC3 head model and the VFACE model to ZBrush;
4) use the Move/Smooth brushes to maneuver the VFACE model to fit as closely as possible to the CC3 head model;
5) launch ZWrap, then click and match as many points as possible, notably around the nose, eyes, mouth and ears;
6) let ZWrap process the matched-up points;
7) ZWrap should then output a VFACE model that matches the CC3 head model perfectly;
8) feed both models into XNormal and bake the VFACE textures to the CC3 head model.

[GIF: matching points of VFACE (left) & CC3 head (right) in ZWrap]

Notes:
- A full "Killer Workflow" tutorial is available on Texturing XYZ's official YouTube channel.
- I recommend saving the matching points in ZWrap before processing.
- I also recommend baking the VFACE maps individually in XNormal, as they are very high-res and could crash XNormal when baked in batch.

Skin Shading (Cinema4D + Redshift)

Once I had the XYZ texture maps ready, I exported the rest of the character texture maps from CC3. After that, I imported the character into C4D and converted all the materials to Redshift materials. At the time of writing, Redshift unfortunately did not yet support Random Walk SSS (a very realistic and physically accurate subsurface scattering model found in other renderers like Arnold), and hence required a lot more tweaking when it came to rendering skin. The 3 levels of subsurface scattering were driven by a single diffuse material with different "Color Correct" settings.

[Screenshot: RS Shader Graph of the "Leg" material]

The head shader was a mix of both the CC3 textures and the VFACE textures; the VFACE multichannel displacement was blended with the "microskin" CC3 displacement map.

[Screenshot: RS Shader Graph of the "Head" material]
[GIF: character look-dev]
[Screenshot: close-up render of the character]

A "Redshift Object" was applied to the character to enable displacement - only then would the VFACE displacements show up in render.

Note: Skin shading is one of the most advanced aspects of rendering.
Linked below is one of the most helpful tutorials for Redshift skin shading.

Hair Shading

Having experimented with grooming in C4D Ornatrix, Maya XGen and Houdini, I decided that using the baked hair mesh from CC3 for project "Ballerina" was leaps and bounds more efficient down the line. I used a Redshift "glass" material with the CC3 hair texture maps fed into the "Reflection" and "Refraction" color slots, as hair (in real life) reacts to light like tiny glass tubes.

Note: For anyone interested in taking the CC3 hair to the next level of realism, CGcircuit has a great Vellum tutorial dedicated to hair generation and simulation.

[GIF: early test of CC3 mesh hair to hair geometry conversion in Houdini]

3. THE ANIMATION

Character Animation (iClone)

I then exported the CC3 character to iClone for animation. I considered a couple of ways to approach realistic character animation:
1) using off-the-shelf mocap data (Mixamo, Reallusion ActorCore);
2) commissioning a mocap studio to do bespoke mocap animation;
3) using a mocap suit (e.g. Rokoko or Xsens) for custom mocap animation;
4) old-school keyframing.

Having experimented with various off-the-shelf mocap data, I found Mixamo mocaps to be way too generic - most of them look very robotic. Reallusion ActorCore had some very realistic motions, but I could not find exactly what I needed for the project. With no budget and (my) very specific character motion requirements, options 2 and 3 were out of the picture. This led me to old-school keyframing.

First I screen-captured videos of ballet performances and laid them out frame by frame in PureRef.
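Keyframing by hand does not mean posing every frame: the software interpolates the in-between frames from a handful of keyed poses. A toy Python sketch of linear in-betweening for a single joint angle (iClone's actual animation curves are splines, so this is illustrative only):

```python
# Illustrative only: linear in-betweening of one joint angle between two keyframes.
# Real animation software uses spline interpolation with ease-in/ease-out.

def inbetween(key_a, key_b, frames):
    """Linearly interpolate a joint angle across `frames` frames, endpoints included."""
    return [key_a + (key_b - key_a) * f / (frames - 1) for f in range(frames)]

# A knee rotating from 10 to 50 degrees over 5 frames:
angles = inbetween(10.0, 50.0, 5)  # [10.0, 20.0, 30.0, 40.0, 50.0]
```

The practical upshot, which comes up again below, is that slow motion leans heavily on interpolated frames, while real-time motion needs many unique hand-set poses.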
I then overlaid the PureRef reference (at half opacity) over iClone, and adjusted every character joint to match my reference using "Edit Motion Layer".

[GIF: Pose 1]
[GIF: Pose 2]
[GIF: final character animation]

The animated characters were then exported to Alembic files.

Note: While my final project concept depicted ballerinas in slow motion, my original idea was actually to keyframe a 20-second ballet dance, which I very quickly realized to be a bad idea for a number of reasons:
1) in slow motion a lot of frames can be interpolated, but real-time motion involves a lot of unique frames and hence requires a lot more tweaking;
2) more unique frames subsequently meant more rendering problems (flickering, tessellation issues, etc.).

[GIF: early test render of my original idea]

Considering this was my first character animation project, I settled on a slow-motion style sequence instead - 2 unique poses with 160 frames of motion each.

Garment Simulation

Cloth simulation was by far the most challenging part of the project. The two major cloth solvers I considered were Marvelous Designer (MD) and Houdini Vellum. While Houdini Vellum was much more versatile and reliable than Marvelous Designer, I personally found it way too slow and therefore impractical without a farm (one frame of cloth simulation could take up to 3 minutes in Houdini Vellum vs. 30 seconds in Marvelous Designer on a Threadripper PRO 3955WX with 128 GB of RAM). Cloth simulation in MD, while generally a lot quicker to set up than Houdini Vellum, was not as straightforward as I imagined. Simulated garments in MD always came with some form of glitches, including cloth jittering, piercing through the character, or complete dislocations.
Below are some of the settings I tweaked to minimize glitches:
1) using "Tack" to attach parts of the garment to the character;
2) increasing cloth "Density" and "Air Damping" to prevent the garment from moving too fast and subsequently out of place;
3) simulating parts of the garment in isolation - though not physically accurate, this allowed me to iterate and debug a lot quicker.

In addition to the above tweaks, I also reduced "Gravity" to achieve a slow-motion look.

[Screenshot: MD simulation settings]
[GIF: MD simulation]

Notes:
- Due to the license agreement of a sewing pattern I used, I am not able to share screenshots of my garment creation process. However, the official Marvelous Designer YouTube channel has a lot of garment modeling livestreams, which I find to be the most helpful resource for learning MD.
- Alternatively, there are a lot of readily available 3D garments online (notably on Marvelous Designer's official site and the ArtStation Marketplace), which I have used as a basis for a lot of my projects.
- MD is extremely prone to crashing, and there is a bug in both MD10 and MD11 that prevents saving of simulated garments 90% of the time, so always export the simulated garment as Alembic files rather than relying on MD to save the simulation.
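Reducing "Gravity" for a slow-motion look has a physical rationale: free-fall distance is ½gt², so stretching time by a factor s while keeping the solver's clock unchanged is equivalent to dividing gravity by s². A hedged sketch, assuming a gravity value on the order of MD's default (about -9800, in mm/s²):

```python
# Illustrative only: how much to reduce solver gravity for an s-times slow-motion look.
# Rationale: free fall covers d = 0.5 * g * t^2, so scaling time t -> s*t
# while keeping d unchanged requires g -> g / s^2.

def slow_motion_gravity(g, slowdown):
    """Gravity that makes cloth motion look `slowdown` times slower in real-time playback."""
    return g / slowdown ** 2

# For a 4x slow-motion feel, starting from a default-like gravity of -9800 mm/s^2:
g_slow = slow_motion_gravity(-9800.0, 4.0)  # -612.5
```

In practice I tuned the value by eye rather than by formula, but the square law explains why the gravity reduction needed for convincing slow motion is much larger than the slowdown factor itself.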
Simulation Clean-up

After dozens of simulations, I imported the MD-exported Alembic files into Houdini, where I did a lot of manual clean-up, including:
1) manually fixing cloth-character collisions with "Soft Transform";
2) reducing simulation glitches with "Attribute Blur";
3) blending together preferable simulations from different Alembic files with "Time Blend".

[GIF: cleaning up simulated cloth in Houdini with "Soft Transform"]

There are two tutorials that explain the Houdini cloth clean-up process in great detail, which I watched on a loop while working on the project:
- Cloth Production in Houdini: https://www.cgcircuit.com/tutorial/houdini-cloth-in-production
- Houdini Vellum Creature Setup: https://www.cgcircuit.com/tutorial/houdini-vellum-creature-setup

The cleaned-up cloth simulation was then exported as Alembic to C4D.

Alternative to Garment Simulation

For anyone frustrated by the impractical Houdini Vellum cloth simulation times and the MD glitches, an alternative is to literally attach the garment to the character's skin in CC3 - a technique most commonly found in game production.

[GIF: attaching garment to character in CC3]

While this is a great time-saving alternative, garments created in CC3 lack realistic cloth movements and wrinkles; I recommend using this method only for objects tightly attached to the character (shoes), or as a last resort if MD cloth simulation keeps failing.

Note: Linked below is Reallusion's official guide for creating game-ready garments:
https://manual.reallusion.com/Character_Creator_3/ENU/3/Content/Character_Creator_3/3/08_Cloth/Creating_Custom_Clothes_OBJ.htm

Garment Baking and Shading

Once I was done with the cloth simulation in MD and the clean-up in Houdini, I imported the Alembic file into C4D. MD Alembic files always show up in C4D as one Alembic object without any selection sets, which makes material assignment impossible. This is where C4D baking came into play - a process I used for converting the Alembic file into a C4D object with PLA (Point Level Animation):
1) drag the Alembic object into the C4D timeline;
2) go to "Functions";
3) "Bake Objects";
4) check "PLA";
5) then bake.

Going through the steps above, I got a baked-down C4D object on which I could easily select polygons and assign multiple materials using selection sets. I then exported an OBJ file from MD with materials, imported it into C4D and dragged the selection sets directly onto the baked-down garment object. This eliminated the need to manually reassign materials in C4D.

I used a blend of linen texture maps (from Quixel Megascans Bridge) and the Redshift Car Shader to emulate the sequin fabric (think "blink") found in a lot of professional ballet tutu dresses.

[Screenshot: close-up render of the fabric material]

Note: YouTuber Travis Davis has a tutorial demonstrating the exact procedure.

WARNING: do not use AO or Curvature nodes in the simulated garment materials (or for any animated object), as they could potentially produce glitches in the final renders.

4. RENDERING

Lighting & Environment

Although I tried to keep my lighting as minimal as possible, project "Ballerina" inevitably required a lot of tinkering due to its nighttime setting. The nighttime HDRI did not provide sufficient ambient light for the interior space, and the chandelier bulbs were way too dim as the primary light source. Ultimately I placed an invisible spot light under the center chandelier and used a fake light that affected only the architectural ornaments. The fake light provided an extra level of bounce light that gave just the right amount of illumination without ruining the moody atmosphere. I also added a "Redshift Environment", controlled in the Z axis and multiplied with "Maxon Noise", to give more depth to the scene.

Exterior-wise, I scattered 2 variations of dogwood trees with the C4D "Matrix" object in the surrounding area.
They were lit from the ground up to give the scene extra depth.

In summary, the lighting of the scene included:
1) Dome Light (nighttime HDRI) x 1
2) chandelier (Mesh Lights) x 3
3) Spot Light (center) x 1
4) exterior Area Lights x 4
5) fake Area Light positioned under the chandelier (includes architectural ornaments only)

[Screenshot: RS lights]

Notes:
- Redshift has a very good tutorial on YouTube on controlling the Redshift Environment.
- The trees were generated with SpeedTree.
- Lighting takes a lot of consistent practice to master; apart from my daily CG practice, I spent a lot of time watching b-rolls/breakdowns of movies - for instance, I took a lot of inspiration from Roger Deakins' lighting and cinematography, as well as Wes Anderson's frame composition and color combinations.

Camera Movements

All my camera movements were very subtle. They included dolly, camera roll and panning shots, all driven with Greyscalegorilla's C4D plugin Signal. I personally prefer using Signal for its non-destructive nature, but old-school keyframing would work just fine for similar camera movements.

[Screenshot: Signal graph]

Draft Renders

Once I had the character animations, cloth simulations and camera movements ready, I began doing low-res test renders to make sure I would not get any surprises during the final renders. This included:
1) flipbook (OpenGL) renders to ensure the timing of the animations was optimal;
2) low-res, low-sample full-sequence renders to ensure there were no glitches;
3) full-res (2K), high-sample still renders with AOVs (diffuse, reflection, refraction, volume) to check what contributed to any prevalent noise;
4) submitting test renders to Fox Renderfarm to ensure the final renders would match up with my local renders.

This process lasted over 2 months, with iteration upon iteration of renders and corrections.

[GIF: close-up shot I]
[GIF: close-up shot II]
[GIF: final shot]

Final Renders & Denoising

I used a relatively high-sample render setting for the final renders, as interior scenes in Redshift are generally prone to noise.

[Screenshots: RS final render settings]

I also had motion blur and bokeh turned on for the final renders - in general, motion blur and bokeh look better (more physically accurate) in-render than when added in compositing. Half of the final 2K sequence was rendered on a local workstation, while the rest was rendered on Fox Renderfarm, totalling about 6840 hours of render time on dual RTX 3090 machines. I used Neat Video to denoise the final shot, whereas the close-up shots were denoised with Altus Single (in Redshift).

Note: Always turn "Random Noise Pattern" off under Redshift's "Unified Sampling" when using "Altus Single" for denoising.

Redshift Rendering GI Trick

Redshift's GI Irradiance Cache calculation can be quite costly; my final renders, for instance, averaged 5 minutes of Irradiance Caching time per frame. In V-Ray there is an option in the IR/LC settings named "use camera path", designed specifically for scenes where the camera moves through a still scene; once enabled, V-Ray calculates only one frame of GI cache for the entire sequence. There is a Redshift forum post written by Andrian that explains how he was able to replicate the same function in Redshift. Borrowing a page from V-Ray, I used the following motion blur settings to calculate the first frame of the Irradiance Cache:

[Screenshot: RS rendering GI trick motion blur settings]

That one Irradiance Cache was then used to render the entire sequence. Two shots of the project were rendered using a single GI cache, resulting in a 10% faster render time overall.

NOTE: The GI trick only applies to shots with very little motion; when applied to the 2 close-up shots of project "Ballerina", for example, I got light patches and ghosting on the character's skin.
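The saving from reusing one Irradiance Cache is easy to estimate with back-of-envelope arithmetic. The 5 min/frame caching figure is from the text above; the 160-frame shot length matches the poses mentioned earlier, and is otherwise an assumption:

```python
# Back-of-envelope estimate of the GI-cache trick's saving (assumed numbers).

def gi_saving_minutes(frames, gi_minutes_per_frame=5.0):
    """Minutes saved by computing the Irradiance Cache once instead of once per frame."""
    return (frames - 1) * gi_minutes_per_frame

# A 160-frame shot at ~5 minutes of GI caching per frame:
saved = gi_saving_minutes(160)  # 795.0 minutes, i.e. roughly 13 hours per shot
```

Whether that works out to the quoted ~10% overall speedup depends on the per-frame beauty render time, which dominates the total.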
Conclusion

Having spent months working on the project, I have gained an appreciation for traditional character animators - I never realized the amount of effort involved in crafting character animations, and the subtlety of detail required to bring convincing CG characters to life. Though I would not consider myself a character artist, I personally think character animation is really powerful in making CG environments relatable, and it will therefore remain an essential part of my personal CG pursuits moving forward.
As the leading cloud rendering services provider, Fox Renderfarm is here to bring you news about the SIGGRAPH Asia 2022 Computer Animation Festival.

The SIGGRAPH Asia 2022 Computer Animation Festival (SACAF 2022) is now open for submissions. The festival invites creators from around the world to submit their projects and help showcase the world's most innovative and exciting computer animation. This coming December, SACAF 2022 will convene in Daegu, South Korea to celebrate the vibrant, diverse, and inspiring world of computer animation. From short films to scientific visualizations to AI-enhanced deepfakes, this year's festival promises its most expansive and compelling program ever.

An international jury of top computer animation experts will judge the best works entered in each category; from that pool of top picks, they will hand out three prestigious 2022 awards: Best Student Project, Jury Special, and Best in Show. In addition, the judges will select 30-40 (or more) "best of the best" works from across the submission pool, to be curated into the two "stars" of the festival: the always spectacular Electronic Theater and Animation Theater screenings. Each year, the Electronic Theater and Animation Theater never fail to dazzle and delight conference audiences eager to sit back and enjoy an entertaining and thought-provoking sample of the world's best CG animation.

If it's animated, and a computer was used at some point in its production, SACAF 2022 wants to see it! Join the festivities and help the festival honor the best in CG animation - submit your latest projects in any of the following categories:
1. Computer Animated Shorts: Includes character animation, narrative works, experimental works, opening sequences, game cinematics, selections and/or montages of animated television series, and new-media formats.
2. Animated Feature Films: Selections and/or montages of computer animation created for animated feature films.
3. Music Videos: Commissioned and/or independent works that use any combination of computer animation, digital effects, and live action to illustrate, enhance, and/or complement a musical creation.
4. TV and Web Commercials: Advertisements created entirely or partially with computer animation and/or digital effects. This category also includes promotional spots, broadcast bumpers and graphics, and public service announcements.
5. Visualizations and Simulations: Computer animations created to explain, analyze, or visualize information for applications including scientific research, architecture, engineering, systems simulations, education, and documentary projects.
6. Visual Effects for Films and TV Programs: Selections and/or montages of visual effects created for live-action films and/or television programs.
7. Real-Time Animation: Game, web, and mobile animations that are rendered in the same amount of time that it takes to play them back. Real-time technology demos are also encouraged, and should be submitted to Real-Time Live!
8. Others: Computer animations that do not fit in any of the above categories.

The submission deadline is 31 July 2022. Visit the SACAF 2022 webpage for more information, including how to submit, submission rules and requirements, and an FAQ. Or you can submit your work directly by logging into the SIGGRAPH Asia Submission System. Please share the news with classmates, team members, and colleagues - SACAF 2022 wants to see their work too!

"Using new creative tools and collaboration platforms - from virtual production and game engine-based real-time workflows to visualizations powered by cloud-based, high-performance virtual workstations - more CG-enabled animation is being produced than ever before," says SACAF 2022 Chair and AWN publisher and editor-in-chief Dan Sarto.
“People are back in theaters, streamers are stuffing their platforms with animated entertainment content for all audiences in all genres, the metaverse is on everyone’s tongues, and technology companies big and small continue to expand how big data can more easily and quickly be turned into useful imagery. We can’t wait to see the latest, most compelling, and inspiring computer animated works produced around the world. Be well, stay safe, and hopefully, we’ll see you this coming December in Daegu!”

BY SIGGRAPH Asia
In the season of rebirth, renewal and awakening, Fox Renderfarm has launched the Evergreen Program, a time-limited offer to benefit more individual artists and SMEs, and in turn to prosper hand in hand with the CG community as a whole. If you're a newly-registered user in 2022, start your rendering journey with a special offer from Fox Renderfarm now!

Evergreen Program SPECIAL OFFER:
If you meet the conditions, you'll get a $50 render coupon as well as Gold Membership for 90 days!
Valid from April 27th to July 31st (UTC+8).

Details:
- All the coupons offered will be valid until December 31, 2022.

As the leading cloud rendering service provider, Fox Renderfarm provides safe and fast cloud render farm services for our customers, and we always focus on the continuous progress and development of the CG community.