Interview with SIGGRAPH 2021 Conference Chair, Pol Jeremias-Vila, A Man of Many Responsibilities
Pol Jeremias-Vila is the Co-Chair of the SIGGRAPH Asia 2019 Computer Animation Festival (CAF). He has been a consistent force in helping to elevate the conference over the years.
Originally from Spain, Pol is the Lead Graphics Engineer at Pixar Animation Studios, where he develops algorithms to help artists make movies. He is credited on multiple films, including Toy Story 4, Incredibles 2, Coco, and Finding Dory. In addition to his film credits, he is also the co-founder of a website that enables graphics enthusiasts to create and share rendering knowledge.
Since 2012, Pol has been actively involved with SIGGRAPH, holding multiple roles on past conference committees, including as Computer Animation Festival Director, Real-Time Live! Chair, and Virtual, Augmented and Mixed Reality Chair, as well as serving as a content contributor and juror.
He will also chair the SIGGRAPH 2021 conference in Los Angeles. Let's look forward to another memorable CG ride.
Here's the interview between Pol Jeremias-Vila and Fox Renderfarm, in which Pol shared his SIGGRAPH experience and his unforgettable memories of SIGGRAPH Asia 2019.
Fox Renderfarm: Why are you so passionate about Computer Animation Festival (CAF)?
Pol Jeremias-Vila: One of the things I like about the Computer Animation Festival is that something as technical as how we render a polygon can be used to tell meaningful stories. It can inform people about how the lights went out in Puerto Rico and how that affected the rest of the country, or it can tell a story about Mascot. It can help with the development of feature films through visual effects. This simple piece of technology can help tell all these different stories, and it can help create this medium. I think that's a very interesting field, and I personally really like it, of course.
Fox Renderfarm: What’s your goal for the CAF in SIGGRAPH Asia 2019?
Pol Jeremias-Vila: We wanted to create a show this year that had a lot of variety, so that you could see the different ways in which computer graphics are used. For us, it was important to showcase scientific visualization. We believe this is a field that uses computer graphics in a very important way, and we wanted to support that; we actively did, and you can see it in the show, and similarly with visual effects. One of our goals was to make a show that really tells the story that you can use this technology in different ways. It doesn't need to be just short films; it could be advertising as well.
CAF in SIGGRAPH Asia 2019
Fox Renderfarm: Any unforgettable memory about this CAF?
Pol Jeremias-Vila: This year I'm co-chairing the SIGGRAPH Asia Computer Animation Festival along with Jinny Choo. So, great memories. Seeing the number of submissions coming in was really satisfying. We did a lot of outreach work in areas we hadn't previously reached as intensively, and seeing all those numbers and all the submissions from schools all over the world was really rewarding.
Fox Renderfarm: There is an 'inter-see show' in this CAF. What efforts went into that?
Pol Jeremias-Vila: For us, it was a way to break the rhythm of the show and make sure there was a little bit of surprise as well, something that wasn't really expected. We have inter-see shows after each piece, so you can expect them, and then we put in something that may be surprising for some people, hopefully funny, trying to make it all a more coherent experience. Even though they're disconnected stories, we try to create a flow that lasts an hour and 40 minutes and doesn't feel too disconnected. It needs to flow.
Fox Renderfarm: Could you share some rendering technology development trends with us?
Pol Jeremias-Vila: One of the things we are seeing in the Computer Animation Festival is that there are more submissions using real-time engines to produce short films. This is always interesting, and we try to support every technology that is used for filmmaking. So this year we showcased some works that were using real-time engines, as well as works using established techniques: offline renderers, path tracing, or ray tracing. Again, we don't necessarily look at the technology per se so much as the artistic composition and the story. We do try to showcase the different ways in which you can use computer graphics to tell stories. For example, this year we have scientific visualization, advertising, visual effects breakdowns, and short films. All of them use computer graphics, whether through a real-time renderer or an offline renderer. They all use this medium to tell stories, and that's what really matters; it's at the core of the festival.
Some works of CAF Electronic Theater
Fox Renderfarm: Did you meet any difficulties while working on the CAF? And how did you solve them?
Pol Jeremias-Vila: There is an obvious physical difficulty when you are on site. You have to deal with screens, projectors, and light that might be coming in from other rooms. We try to create a perfect environment in which to enjoy films, and we try to be as respectful as we can with the works that are submitted to our conference. We care a lot about how each video is played back. I'm not sure that counts as a difficulty, but it's definitely one of the parts we take really good care of. Another interesting part is always how to deal with such a big number of submissions, and how to make sure they are all properly reviewed, with enough opinions on each piece, so that our jurors have enough information to make good decisions. What we see, though, is that a lot of content passes through our hands, and we would love to have more spaces in which to show it. So this year we also have an Animation Theater that runs all day long.
CAF in SIGGRAPH Asia 2019
Fox Renderfarm: SIGGRAPH is closely associated with the emerging technologies, how do you integrate them better?
Pol Jeremias-Vila: Personally, one of the things we did in North America in 2017 was to invest in a new way of seeing 360 and VR films. For us, it was about creating a new physical space that people could go into, in the same way that you go into the Electronic Theater to see the best of the 2D films. Can we create a physical space where people can go and enjoy VR? I think it was a great success, and it's happening here at SIGGRAPH Asia as well. I'm sure that at some point we will see forms of stories that are grounded in the real world through AR or something like that. I don't know exactly what that will be, and I think that's why SIGGRAPH always needs to be aware of what's happening in those spaces: what are those stories going to look like, and how are we going to support those creators? As SIGGRAPH members, we need to be thinking about those things, talking about them, and talking to the people who are creating those stories, to make sure they have a place here and can show their work.
Fox Renderfarm: SIGGRAPH 2021 will be in Los Angeles! As Conference Chair for SIGGRAPH 2021, anything you want to share with us?
Pol Jeremias-Vila: Really excited! We will soon start preparing SIGGRAPH 2021 and bringing the team together. It's going to be in Los Angeles. I'm not sure what technologies will be around in 2021; we might have some surprises. The team we are working with is spectacular, and I'm really confident that we're going to have an awesome show. We're actually going to have our first on-site meeting in February of next year; a project like this takes time to prepare. So, really excited about it!
Fox Renderfarm: Any other things you want to share with CG enthusiasts?
Pol Jeremias-Vila: Yes! I have it (the brochure) right here!
If you have an opportunity to see the Computer Animation Festival 2019, we hope you really enjoy it! Please request a showing in your local area; we'll be more than happy to try to arrange that. We hope you enjoy the show!
Special thanks to Rajeev Dwivedi from Live Pixel Technologies.
Interview with Mike Seymour, an Outstanding Digital Humans Researcher
What happens when technology has a human face? How will digital humans affect our lives? These are the questions that Mike Seymour is exploring. Mike is a digital humans researcher who investigates new forms of effective communication and education using photoreal, real-time computer-generated faces.
Mike Seymour @ SIGGRAPH Asia 2019
Mike was Chair of Real-Time Live! at SIGGRAPH Asia 2019, organizing the program that showcased cutting-edge real-time technologies from around the world, from mobile games to console games to virtual and augmented reality. He is also the co-founder of the MOTUS Lab at The University of Sydney.
Mike Seymour at TEDxSydney 2019
As the lead researcher in the MOTUS Lab, Mike is exploring the use of interactive photoreal faces in new forms of Human Computer Interfaces (HCI) and looking at deploying realistic digital companions and embodied conversational agents. This work has special relevance for aged care and related medical applications, such as supporting stroke victims and those with memory issues.
He suggests that we need to find new ways to provide interaction for people, beyond typing or simply talking to our devices, and that face-to-face communication is central to the human experience. At the same time, he examines some of the many ethical implications these new forms of HCI present.
He is well known for his work as a writer, consultant and educator with the websites fxguide.com and fxphd.com which explore technologies in the film industry. These websites now have huge followings, as they provide an important link between the film and VFX community and the researchers and innovators who constantly push the limits of technology.
Some films and TV series Mike has worked on
In addition to fxguide.com and fxphd.com, Mike has worked as a VFX Supervisor, Second Unit Director, or Producer on a number of TV series and films, winning the AFI Award for Best Visual Effects for the movie Hunt Angels in 2007 and being nominated for a Primetime Emmy Award for the TV mini-series Farscape: The Peacekeeper Wars in 2005.
Fox Renderfarm was honored to interview Mike Seymour at SIGGRAPH Asia 2019. Here's the interview between Mike Seymour and Fox Renderfarm.
Fox Renderfarm: Would you give a brief introduction to Human Computer Interfaces (HCI)?
Mike: So I research Human Computer Interfaces, or HCI, which is the study of how we deal with computers. If you think about it, most computers just get input from a mouse or a keyboard. But what if we could talk to our computers? What if computers could respond to us emotionally? The work that I do with digital humans, or virtual humans, is putting a face on technology, so that we can interact with it. After all, we work really well with faces; we respond to faces; we travel great distances to see someone face to face. So we think it would be really interesting to take that idea of having a face, put it on a computer, and allow us to work with it in a much more natural and human way.
Fox Renderfarm: What are your biggest achievements of HCI so far?
Mike: One of the interesting things that's happened just in the last couple of years has been an amazing nexus of technology and approaches. We've got a combination of things that is really blowing the doors off what's possible, because we can start to produce very photorealistic digital humans, in other words, people that really look like us. Now, this is super important, because if we produce something that doesn't look very good, we actually have a negative reaction to it. It's not like audio, where you have good quality, better quality, and then great quality. With people, we have either cartoons, or we need very, very high quality; if we have something that's not quite good enough, people reject it out of hand. We call it a non-linear response: as the quality improves, your reaction varies up and down a lot. Only recently have we been able to produce these incredibly realistic faces. And most importantly for HCI, those faces can run in real time, so they can smile at you, talk to you, nod and gesture in real time. That's very different from a video or something you might see in a feature film, where they might have hours and hours to produce a clip. We sometimes need to produce these things in as little as about 9 to 12 milliseconds.
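For context, that 9 to 12 millisecond window matches the per-frame budget at common real-time display rates. A quick back-of-the-envelope sketch (illustrative only, not code from Mike's work; the refresh rates chosen here are just typical examples):

```python
# Per-frame time budget (ms) at common real-time refresh rates.
# At VR-style rates (90-120 Hz), the whole face must be rendered
# within roughly the 9-12 ms window mentioned above.

def frame_budget_ms(refresh_hz: float) -> float:
    """Milliseconds available to render one frame at a given refresh rate."""
    return 1000.0 / refresh_hz

for hz in (60, 90, 120):
    print(f"{hz:>3} Hz -> {frame_budget_ms(hz):.1f} ms per frame")
```

At 90 Hz the budget is about 11.1 ms and at 120 Hz about 8.3 ms, which is why real-time digital humans cannot borrow the hours-per-frame workflow of feature-film rendering.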
MEET MIKE @ SIGGRAPH 2017
Fox Renderfarm: Have you met any challenges in the HCI development process?
Mike: One of the big challenges is that, while we've done a lot of really great work on faces and on being able to produce digital humans, that work isn't done, but it has certainly advanced tremendously in the last three or four years. We now have to grapple with how to solve some of the issues around voices. Say I'm in Sydney on a conference call with a colleague in China who speaks a language I don't; we have a language barrier to solve. Now, if I've got an avatar, something that I'm puppeteering, then I could speak in English and have a version of me speak in Mandarin, and we could understand each other across that barrier. That's great. But what if I'm not puppeteering it? What if I actually want the computer to talk to me? I now need a synthetic voice, and the challenge right now is to see if we can do for voices what we've done for faces. It's the kind of thing you may not expect, but of course, what we want is for the computer to speak in a really natural way, with the right cadence, the right kind of tone, the right kind of attitude. Getting that natural-sounding audio isn't necessarily harder than doing the vision, but we are actually a lot less tolerant of problems with audio. If you're watching a movie and the vision isn't quite right but you can hear everything, you'll still be reasonably happy. But if the vision looked great and you couldn't hear what the actors were saying, you'd switch the channel or go do something else. So what we're trying to do now is get the audio to be impeccably good, so that it can go along with what we've been doing in vision.
MEET MIKE @ SIGGRAPH 2017
Fox Renderfarm: How do you think our life will be changed by HCI, with deep learning algorithms, GPU graphics cards rendering, and 5G?
Mike: The astounding thing is that we now actually have more compute power than we need for some of the functions we want the computer to do, so we can afford to spend some of that compute power producing these amazingly interactive user interfaces. That's part one, and it's obviously been influenced enormously by GPUs and much faster graphics. On top of that, we have a new approach to how to use the graphics, which is AI, or deep learning. So now we have the second part of the jigsaw puzzle, which allows us to do incredibly clever things by letting the machine learn my face and then synthesize a plausible version of it, again in real time, because of that GPU. And the third part of the jigsaw puzzle is that we're increasingly able to do that with 5G. Now, 5G is obviously very new, but what it offers us is not just bandwidth; being able to transfer more data is part of it, but one of the real secrets of 5G is low latency. So we can have interactivity: things come to life when they are realistic and rendered quickly, because we've used actual faces to construct them, and then we have this very low latency, so we can interact. All of that is going to change how we do communication and education, even in areas you might not imagine, such as health.
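The interactivity point can be made concrete with a rough latency budget. This is an illustrative sketch only; the 100 ms threshold and the round-trip figures are assumed numbers for the sake of the example, not measurements from Mike's lab:

```python
# A cloud-assisted avatar only feels conversational if local render time
# plus the network round trip stays under a perceptual budget.
# All numbers here are illustrative assumptions.

PERCEPTUAL_BUDGET_MS = 100  # assumed rough threshold for "feels interactive"

def feels_interactive(render_ms: float, network_rtt_ms: float,
                      budget_ms: float = PERCEPTUAL_BUDGET_MS) -> bool:
    """True if render time plus network round trip fits the budget."""
    return render_ms + network_rtt_ms <= budget_ms

# An 11 ms render over an assumed ~10 ms 5G round trip fits easily;
# the same render over an assumed ~150 ms high-latency link does not.
print(feels_interactive(11, 10))    # True
print(feels_interactive(11, 150))   # False
```

The sketch shows why low latency matters as much as raw bandwidth: doubling the data rate does nothing for interactivity if every round trip alone eats the whole budget.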
Fox Renderfarm: Fox Renderfarm is going to provide online real-time rendering services, is that possible to cooperate with you on the HCI research?
Mike: We are really keen to work with people all over the world, and it's the mantra of our lab that we don't own the IP on the research we do; we give away all the data. We work with companies around the world so that we can give back to the community. Our interest is in seeing this field move forward. One of the great things about rendering in the cloud, and about having really good infrastructure on a global basis, is that with high-speed communications and 5G, this is increasingly something that ordinary people can adopt. Historically, that kind of infrastructure was only within reach if I was a really big company. But what we're seeing now is a move toward democratizing these things, and I think we're going to see a vast explosion where we have quite a lot of power on our personal devices while tapping into a broader deep learning, AI kind of environment to provide this great interactivity. As that happens, with low latency and the kind of infrastructure we're seeing, the ability to scale up is just going to produce sensational results.
Fox Renderfarm: As the Chair of Real-Time Live! in SIGGRAPH Asia 2019, what’s your biggest surprise?
Mike: There were a lot of submissions to Real-Time Live! this year. But Real-Time Live! is a little different from other programs, because you need to actually mount a performance. It's a bit like volunteering for a stage show. If I'm coming here to give a talk, I bring my PowerPoint on my laptop. But if I'm coming here to do Real-Time Live!, like the Matt AI project and a number of the other projects that were shown, I have to bring a whole lot of computers and gear and actually mount a live presentation. You have nine minutes to wow the audience, and of course it's very unforgiving, because in nine minutes you can't afford to switch the computer off and start again. So we've been really impressed by the variety of the projects and the variety of applications they're addressing. We have teams addressing making digital characters talk, which is one of my favorites; I love that one. But we've also got teams looking at how to use VR and real-time graphics for science research and for communication, as well as artistic pieces that simply produce a really amazing show in their own right.
Real-Time Live! in SIGGRAPH Asia 2019
Fox Renderfarm: You were doing VFX before, and now you are a researcher and also a co-founder of fxguide.com. What has been the biggest influence along your multi-dimensional career path? What do you do to keep yourself inspired and motivated?
Mike: I was in the visual effects industry for many years and got nominated for Emmys and AFIs, and that was all great; I enjoyed it, and it was terrific work. A little while ago, having done quite a lot of research and teaching and increasingly doing consulting work for companies around the world, which we still do, I decided it would be really interesting to step up that research component and get more involved with hardcore research. So I still consult; I do work for major Hollywood studios, and I enjoy that work tremendously. But what I'm interested in is whether, in addition to that work in the entertainment industry, we can take that tech and apply it to other areas. For example, my research area at the moment is seeing if we can take some of this digital human technology and use it for stroke victims. People who have had a stroke and have trouble forming short-term memories are often still very good with long-term memories, but they literally find everything going on around them today a little unfamiliar and disconcerting. There is an extraordinarily high rate of stroke in the world; a lot of people have strokes, and quite a high percentage are actually under the age of 65 and want to continue to contribute and work. Of course we want everybody to benefit from this, but particularly for those people still trying to work, because if you have problems with short-term memory, all technology starts to become a challenge, and these days we expect everyone to use a computer or a phone. Well, if we could put a familiar face on the technology, a face from their past, not necessarily a real person, but familiar and reassuring, then this new technology, whatever it is, suddenly no longer seems quite so harsh, so unfamiliar, so disconcerting. And we think that's a really good way to help with rehabilitation.
So this is just one of the areas we are looking at: taking this terrific tech from the entertainment industry, which I love to death, and seeing if we can help people who are less fortunate and have been through really hard circumstances.
Fox Renderfarm: Who or what projects inspire you most in VFX and Interactive Technology respectively?
Mike: There's been really great work done in technology around the world. Obviously, some of the big film companies like Weta Digital and ILM have been doing terrific work. In the research that I've been doing, we've managed to partner with companies around the world. When we were making a digital version of me, for example, we partnered with Epic Games, but also with Tencent, which was terrific, and with companies in Serbia and England, so it's an international kind of collective. One of the things that really inspires me is how openly these companies work together and share what's going on, because there's a lot more to be gained by expanding what we can all do than by worrying about individual pieces. The community doing this work has been really generous and really open.
Fox Renderfarm: What’s your comment on Gemini Man?
Mike: Gemini Man is one of the most startling and groundbreaking pieces of production that I've certainly seen, and I was really impressed by a number of things. Firstly, in the work Weta Digital was doing, we really knew the character very well at both ages. We know Will Smith as he is today, and Will Smith earlier in his career. We know from our own research that the more familiar you are with a face, the harsher you are on it. If you saw a younger version of someone you didn't know, it might look great to your eyes, but their brothers or sisters could be very upset by it; it wouldn't feel right to them. So what we're trying to see is whether companies like Weta can produce very familiar faces in a way that we find acceptable, reassuring, and entertaining, and I think they've really done that with Gemini Man. The second thing that really impressed me is that, while it's an action film, there are a lot of slower emotional scenes where there is really no way to hide: the young Will Smith is on screen and the camera isn't flying around. Sure, there are bike chases, but there are other scenes where he is really acting, so that the audience can buy into that performance. I think it's terrific, and I really applaud the work the team at Weta Digital has done; it's absolutely groundbreaking.
images source: fxguide.com
Fox Renderfarm: Any other things you want to share with CG enthusiasts?
Mike: I think one of the things I've been really happy about is how the community has come together internationally. There are now pockets of excellence around the world. There are a couple of teams in China that are just spectacularly good, and obviously, from what we've seen of the work in China, where I've actually lectured and visited many times, there's a real depth of both technical expertise and creativity. It's really great to see the infrastructure being built up so that it can provide the technical support to match the creativity. Now there are two teams in China I can think of, a team in Europe, a team in New Zealand, a team in Serbia, one in London, and of course America. What's great is to see that this is a very balanced international effort, and I love the fact that here at SIGGRAPH Asia we've got all of these teams coming, presenting their work, and sharing things. Because, as I said earlier, so much can be gained by people cooperating and working collaboratively. From all my years in the film industry, I know it takes a thousand people to do the visual effects on a film, so you need this great collaboration of artists and this great infrastructure from companies supporting it. And then, of course, you need people willing to be open and share their ideas, as they're doing here at SIGGRAPH Asia. So, it's really great.
Interview with Ernest Petti, Revealing the Production Secrets of Frozen 2
Fox Renderfarm Interview
During the visual and information feast that was SIGGRAPH Asia 2019, Fox Renderfarm was delighted to have the chance to talk with Mr. Ernest Petti, Studio CG Supervisor at Walt Disney Animation Studios, who was also deeply involved in the production of Frozen 2, the biggest-worldwide-opening animated film of all time.
Ernest Petti has been with Walt Disney Animation Studios for over 19 years and is now Studio CG Supervisor. In this role he acts as a bridge between Production and Technology for long-term strategic initiatives, orchestrating the initiatives and projects of the Workflow team and uniting them to fit within the studio's vision for workflow. Prior to this, he served as Technical Supervisor on Ralph Breaks the Internet (2018) and the 2016 Oscar-winning feature Zootopia. Ernest joined Disney in 2000 as a software engineer in the Technology group and has served as a supervisor in Lighting, Look Development, and Tactics. His credits include the 2014 Oscar-winning feature Big Hero 6, as well as Wreck-It Ralph (2012), Tangled (2010), and Bolt (2008).
In the Featured Sessions of SIGGRAPH Asia 2019, Ernest delivered a presentation titled "Frozen 2" and the Past, Present, and Future of Tech at Disney Animation. He also took part in the panel discussion Proactive Large-Scale Pipeline Efficiency Management, in which panelists from large-scale animation and VFX studios shared insights into the challenge of balancing the creation of amazing visuals with tight production time frames.
During our interview, Ernest expressed his excitement about this year's SIGGRAPH and how interested he was to connect with other people, companies, and technologies. Among all the cutting-edge technologies on show, machine learning sparked his curiosity about its applications in his work. The development of rendering technology also made him wonder how rendering could become more interactive and directly manipulable, especially with the GPU advances that come along with it.
More insights into the production of Frozen 2 are definitely something Fox Renderfarm would not miss, and something we can't wait to share with you. Let's check out the interview video and article, and see how Walt Disney Animation Studios combines timeless storytelling with innovative technology.
(F=Fox Renderfarm, EP=Ernest Petti)
F: Could you tell us your main responsibilities in Frozen 2? How did you cooperate with the VFX departments along the production?
EP: My role is Studio CG Supervisor; it's a studio-level position that oversees long-term technical development and artistic workflows over the course of shows.
I work closely with the technology group and with the productions, and try to build the bridge between them over time. I was the Technical Supervisor on Ralph Breaks the Internet. On that show, we took the first steps into nested proceduralism for some of the buildings on the internet; that paved the way and was then built on further for Frozen 2. So there is that sort of continuity between shows. In my current role, workflow is a big thing we are focused on: concurrent collaboration, and making it as smooth as possible between different departments. That means talking to the groups on Frozen 2, like the Visual Effects Supervisor, Steve Goldberg, and the Technical Supervisor, Mark Hammel, working with them, understanding what they're doing on their show, and making sure it's in line with the shows before and with where we're moving in the future, so that we can really build toward what will come next.
Basically, when the new leadership team starts on a show, we try to connect with them, start understanding the show's specific needs, and identify the things we want to advance in the studio that make sense to tackle on that show, so that we can have some continuity.
F: How did you cooperate with the Production Director and the Production Designer to actualize the creativity through the technologies?
EP: When the story starts forming and the show leadership is working with the Production Designer and the Director to understand the story and the look of the film, achieving that comes first. So we really want to partner closely on what technology might be needed to make that happen; it's very important that we're able to achieve it. Then, in partnership with that, we ask: can it build off things that were already in the plan; should it accelerate things we may have been thinking about but that weren't necessarily going to line up with that timing; and are there things that aren't tied to show needs, but that we want to advance, for which this would be the right timing? For instance, the work in USD: of course, we're hearing about it from lots of studios, and we're trying to make significant advances with USD in our pipeline for Raya and the Last Dragon, which is our movie coming out next November. That's not a show need; nothing in the artistic vision of that movie said we need USD, but it will help advance a lot of future tools and workflows, and we need to find the right place to start feathering it in.
F: Which part do you like the most in the production of Frozen 2? Why?
EP: It's a movie that has a lot of scope and scale to it. I like that it takes you in more surprising directions, outside of what you've seen in the first one. It's not staying in the same zone; it's leaving Arendelle, going out into the wild, into a different environment and world, and it has unique Spirits and settings that we haven't necessarily done before.
F: How did you achieve the scale of autumnal trees and foliage through technical changes?
EP: In a lot of our films, we have been trying to strike a balance between artistic stylization and procedural simulation, to make sure we have the complexity and richness that we want, and yet still the stylization that we need. Over time we've built the tools to give that stylization for, say, a single tree, which you can then place well to get a cluster of trees that looks nice. But now, when you have a whole forest that has a certain level of stylization to it, with a lot of depth to the ground cover, the pebbles, and everything else around it, we needed to improve our toolset so that we would not only have that balance of stylization and complexity at the single-tree level, but when we make a whole forest of them, we could stylize the appearance of the forest as well. So we had nested proceduralism, which allows us to build up: here's a pebble, here's a cluster of pebbles, now here's a ground cover that includes some leaves and a cluster of pebbles; then it includes a tree, then there's a grove of trees, and then the grove of trees expands into the forest. You can stylize, but also build up and populate, at each of those levels. Then we created a tool called Droplet, essentially a procedural painting tool that lets you paint down the trees in a more painterly fashion, so that you have more direct control over the style and flow of the forest as a whole, and all the trees throughout it. So it definitely led to expanding our Bonsai tree tools and our Aurora instancer, as well as developing new tools like Droplet.
Bonsai Instancing Zootopia Test
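The nested proceduralism Petti describes, building a forest out of instanced levels (pebbles into clusters, clusters into ground cover, ground cover and trees into groves, groves into a forest), can be sketched in a few lines. The following is a minimal, hypothetical Python illustration of the idea, not Disney's actual Bonsai, Aurora or Droplet tooling; all function and asset names here are invented:

```python
import random

# Each level scatters instances of the level below it, so stylization
# can be applied at any scale (pebble, cluster, ground cover, grove,
# forest). Purely illustrative; not any studio's real tool.

def scatter(asset, count, spread, rng):
    """Place `count` instances of `asset` at jittered 2D offsets."""
    return [{"asset": asset,
             "offset": (rng.uniform(-spread, spread),
                        rng.uniform(-spread, spread))}
            for _ in range(count)]

def flatten(instances, origin=(0.0, 0.0)):
    """Expand nested instances into flat world-space placements."""
    placed = []
    for inst in instances:
        pos = (origin[0] + inst["offset"][0], origin[1] + inst["offset"][1])
        child = inst["asset"]
        if isinstance(child, list):   # a nested level: recurse into it
            placed.extend(flatten(child, pos))
        else:                         # a leaf asset: place it directly
            placed.append((child, pos))
    return placed

rng = random.Random(42)
pebble_cluster = scatter("pebble", 5, 0.2, rng)           # pebbles -> cluster
ground_cover = (scatter(pebble_cluster, 3, 1.0, rng)
                + scatter("leaf", 8, 1.0, rng))           # clusters + leaves
grove = (scatter("tree", 4, 5.0, rng)
         + scatter(ground_cover, 2, 5.0, rng))            # trees + cover -> grove
forest = scatter(grove, 3, 50.0, rng)                     # groves -> forest

placements = flatten(forest)
print(len(placements), "leaf instances placed")           # prints: 150 leaf instances placed
```

Because each level is just a list of instances, a stylization pass could edit the layout at any scale (nudge one pebble, or re-space whole groves) without rebuilding the levels below, which is the point of nesting.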
F: What was the most challenging part of the production? How did you solve it?
EP: I think there were a couple of areas. On the environment side, we had a very lush, rich environment that includes a very colorful, diverse autumn forest; and because it's fall, the leaves on the ground also had to be very rich. On top of that, when you start adding in the elemental spirits and you have something like Gale, the Wind Spirit, you're treating the environment as a character, and you have to make sure there's a lot of coordination between how the environment is built, how the character of the wind plays through it and then interacts with the rest of the environment, and with any characters in the scenes, with Anna or Elsa or any of the others. So this film presented a lot of challenges around collaboration. A lot of things, like the Water Spirit and Gale, didn't fit neatly into one department, one group of people, or a linear pipeline. So the challenge was finding ways to iterate smoothly when you need a very tight connection between people across departments.
I think we always start with research, trying to ground the challenge we're looking at in its closest connection to the physical world. When you have the Water Spirit taking the form of a horse, you study water, you study horses, and then you bring all the people across departments together: everyone from art, trying to understand the stylization and how far you want to go toward wateriness versus solidity; to the effects department, for the spray and the foam of the mane and the tail; to the animators. So you really have everyone working together to look at the challenges, forming more of a team around the problems you're trying to solve.
F: What did you do to make these characters realistic?
EP: There is the realism you want: the realism of a horse's movement, or the realism of water movement. Where do those conflict, and how do you find the right balance between them? The choices you might make for a beautiful horse animation may not work when the mane and tail are refractive water that you can see through. Say the mane goes in front of the face: it's not actually completely covering the face, you're kind of seeing through it. So that's again where a decision made in animation may need to be iterated on when you see a render, because of the effect of the water on the character. It's definitely a challenge to find just the right balance for that character.
F: In this process, what kinds of tests did you do to give the designers the idea?
EP: I think with all of the tests, and with the Nokk as well, we did start with some hand-drawn tests. Even the example of the legs, and how much the legs should splash away into water versus how much they could stay fairly solidified, was something we tested with hand-drawn tests first. Then you take that into animation, and you run various types of little character tests, like a still test of the Nokk with just some head animation. That informed us that we needed to reduce the water distortion on the face, because with subtle movements that distortion was making the rig harder to work with, so we kept it just on the body. Then you would do a test of how much spray and spindrift should be in there. And you do a running test. So you really work closely as a group, run these tests to explore different aspects, and keep the Directors in the loop the whole time.
F: Could you explain more about the unified rendering?
EP: I think when we talk about unified rendering and looking forward: in a lot of places at Disney Animation, we have a GL viewport that we use for viewing things in our various departments and getting previews as we work, and then you do a final frame render that takes a significantly longer chunk of time. Sometimes those technical requirements lead to different paths and different pipelines. We would love to find paths where, almost, what you see is what you get, so there's more of a continuum from the preview that you see to the final frame. It's almost more of a transition from speed to quality over time, and less of a dichotomy.
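The preview-to-final continuum Petti describes is essentially progressive refinement: a single estimator whose answer improves as samples accumulate, rather than two separate pipelines. Below is a minimal, hypothetical Python sketch of that idea, using a Monte Carlo estimate of pi as a stand-in for a renderer; none of this represents Disney's actual pipeline:

```python
import random

# One "render sample" per call: the same estimator serves as both the
# fast preview and the final frame, with quality growing over time.

def sample(rng):
    """Is a random point in the unit square inside the quarter circle?"""
    x, y = rng.random(), rng.random()
    return 4.0 if x * x + y * y <= 1.0 else 0.0

def progressive_estimate(total_samples, checkpoints, seed=1):
    """Accumulate samples, reporting the running average at checkpoints."""
    rng = random.Random(seed)
    acc, previews = 0.0, []
    for i in range(1, total_samples + 1):
        acc += sample(rng)
        if i in checkpoints:
            previews.append((i, acc / i))  # same estimator, more samples
    return previews

# Early checkpoints play the role of the interactive preview; the last
# one plays the role of the final frame.
for n, estimate in progressive_estimate(100_000, {100, 10_000, 100_000}):
    print(f"{n:>7} samples -> {estimate:.4f}")
```

The design point is that the preview is not a different code path: stopping early gives a noisy image, and letting it run gives the final one, which is the "speed to quality over time" continuum rather than a dichotomy.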
F: Any suggestions for the audience when watching Frozen 2?
EP: The movie takes place three years after the original story, but it was made six years after the original came out, so there have been a lot of technology advancements, and I hope people can see that in all the beautiful images on the screen. At the same time, we want to bring you back to the same characters that you love from the first film. And you'll see some nice additions. Olaf now has a permafrost covering so that he won't melt as it gets into autumn. He's learned to read now, and all the characters have progressed, because a period of time has passed in the film as well.
F: You have worked on so many great animated features, which one are you most proud of? Why?
EP: I love different aspects of all of them. I have a special connection to Zootopia to a certain degree, because XGen was one of the first tools I was developing when I first started at the company, way back. It was a big, fur-based show, and there was a lot in there that connected with me. Returning to Wreck-It Ralph with Ralph Breaks the Internet, it's always fun to revisit a place you've been before. And even going all the way back to Bolt, which had a certain painterly style to it; that was exploring a looser look that was very different at the time.
Thanks again to Mr. Ernest Petti for accepting our interview. Keep up with Fox Renderfarm and follow us on social media platforms; more interesting and insightful content is waiting for you!
Special thanks to Dan Sarto from Animation World Network, Ian Failes from VFXVoice and Chang Wei-Chung from InCG Media.
Interview with Jinny Choo, SIGGRAPH Asia 2020 Conference Chair
Fox Renderfarm was honored to interview some big cheeses in SIGGRAPH Asia 2019. The next one we want to introduce is Jinny Choo, the Computer Animation Festival (CAF) Co-Chair for SIGGRAPH Asia 2019 and the Conference Chair for SIGGRAPH Asia 2020.
Jinny has contributed to SIGGRAPH Asia for many years. What’s more, she has successfully organized or chaired several international events including Indie-AniFest (the Korean Independent Animation Film Festival), SICAF (Seoul International Cartoon and Animation Festival), BIAF (Bucheon International Animation Festival), the GISF SF festival and many others since 2000.
After making her first short animated film in 1999, Jinny started her career as a freelance artist in animation and media arts. She majored in Animation and Illustration and received an MFA in Art and Film and a Ph.D. in Animation Studies and Content Producing from Chung-Ang University in Korea, and Jinny is currently serving as a guest professor and researcher in Korea National University of Arts (K'ARTS).
As a researcher, Jinny’s major area is the theories & artistic practices of animation and interactive media through a combination of traditional media and digital tools, and she has carried out various research and projects in integrated art and technology and animation therapy as co-researcher or lead researcher at K'ARTS since 2009.
Here’s the interview between Jinny Choo and Fox Renderfarm.
Fox Renderfarm: How did you start your CG journey?
Jinny Choo: Well, it’s a long story. But I really love everything in CG and animation, and in movies. So I think I naturally fell in love with CG.
Fox Renderfarm: Which people inspire you most in the industry? Why?
Jinny Choo: There are so many. Actually, filmmakers inspire me most, especially independent filmmakers, because of their awesome ideas and unique perspectives. I really love their masterpieces, especially those of Michael Dudok de Wit from Holland, who created the animated feature The Red Turtle. He is an amazing filmmaker who really shows how poetic animation itself can be. I really like his works. There are so many other filmmakers I can’t even count.
Fox Renderfarm: Any research or projects you want to share with us?
Jinny Choo: I think that animation, as an art and as a medium, has great potential to collaborate with other media and other art forms. So I actually urge my students to think about how to use animation as a medium for their expanding projects, for instance as a medium in art, for games and for the stage. I once had a project with the industry using animation characters; we created animation characters with our students, for animation-related games. We learned a lot around that time, because animation can play more key roles in the future.
Fox Renderfarm: You have organized many festivals and conferences, any unforgettable memories you want to share with us?
Jinny Choo: I have actually been organizing animation festivals since around 2000, so it’s been about 20 years. Some of them are international, some of them are really small ones. Every moment with the festivals is memorable. Especially, a festival is a place not only for watching the newest animations; you can actually meet creators and directors and share ideas and perspectives, which is really, really great. If you don’t go to the festival, you never know what’s behind the animation. That’s really interesting for me, and it has been a great journey to organize animation festivals. Of course, the Computer Animation Festival is one of them. That’s why I’ve been doing it for many, many years.
Fox Renderfarm: Would you give some highlights of this year’s CAF? Did any submissions surprise you? Why?
Jinny Choo: Well, amazingly, the number of submissions rose a lot this year. We got over 520 submissions from all around the world, which is great. Every year, the quality of the animation itself, the visuals and the techniques, is really improving. That’s why we have high expectations every year. And this year, some of the students’ works are amazing; you can’t really tell the difference between a student’s work and a professional’s. It’s a really blurred line. Most of the students’ works are really great. So you’ve got to see our Electronic Theater show, as well as the Animation Theater, for a whole new experience.
Fox Renderfarm: You have been a researcher, an educator and even a festival/conference organizer for many years. What challenges have you met in balancing these responsibilities? How do you solve them?
Jinny Choo: It’s time. I have to divide my time exactly between education, teaching, research and organizing the festivals. Sometimes it’s a struggle to adjust, to get into the right process. Well, I’ve been doing this kind of multitasking for a really long time, so I’m getting used to it. Teaching is one of my favorites, and research of course, and organizing festivals is also my favorite; I can’t really choose. That’s why I’m just strict about my schedule, and I try to keep everything on time and on track. Hopefully I’m doing well.
Fox Renderfarm: Would you give us a brief introduction of the CG industry in Korea?
Jinny Choo: The CG industry, especially in movies, is huge. We have tremendous CG companies in Korea, and they are doing really well. The mega-hit movies actually collaborate with really famous CG companies in Korea; around 10 CG companies are outstanding, and they do most of the CG work for Korean movies. The level of the Korean CG industry is almost the same as in the States and other countries. More and more talented experts and professionals are working on huge projects in collaboration with other countries, of course with the big studios in the States as well as in China; there is so much collaboration going on with China these days. So I think the CG industry in Korea is pretty bright and still growing.
Fox Renderfarm: SIGGRAPH Asia 2020 will be in Korea! How is everything going? Any highlights you want to share with the audience?
Jinny Choo: I’m really thrilled that SIGGRAPH Asia is coming back to South Korea after 10 years. We hosted the 3rd edition of SIGGRAPH Asia in Seoul, but this time the city of Daegu is hosting the 13th edition of the conference. As you know, SIGGRAPH Asia is the key place for presenting the newest technologies in CG, animation and visual effects. We are going to maintain the SIGGRAPH Asia spirit and programs, but there will be some prospective sessions on novel technologies, and there will be games, so it will be another inspiring conference and a visual feast for everyone. We are really looking forward to it.
Fox Renderfarm: Have you heard of Fox Renderfarm?
Jinny Choo: Yes, I have! I heard about it from one of my international students from China; he introduced me to one of the mega-hit animated features in China, Ne Zha. And I heard that Fox Renderfarm was used for this movie; the visuals are really amazing.
Fox Renderfarm: Any other things you want to share with the CG enthusiasts?
Jinny Choo: In CG, for animations and movies, the story comes first, but without the technologies, I mean the visual effects and other visual technologies, it would be difficult to show and to share it. I always think that animation, or a movie, is art and technology brought together, and SIGGRAPH Asia is the place where you can actually share and experience both. So next year, SIGGRAPH Asia 2020 is in Daegu. We are looking forward to you being part of SIGGRAPH Asia 2020. Please come join us!
SIGGRAPH Asia 2019, Exploring the CG Dream
SIGGRAPH Asia 2019, the 12th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, was successfully held in Brisbane, Australia from November 17th to 20th.
The 4-day event included a diverse range of juried programs, such as the Art Gallery / Art Papers, Computer Animation Festival, Courses, Doctoral Consortium, Emerging Technologies, Posters, Technical Briefs, Technical Papers and XR (Extended Reality).
This year, the conference comprised 250 sessions and featured over 800 speakers. As a sponsor of SIGGRAPH Asia 2019, Fox Renderfarm was honored to interview some of the speakers, including Jinny H.J. Choo (SIGGRAPH Asia 2020 Conference Chair), Pol Jeremias-Vila (Computer Animation Festival Chair), Sidney Kombo-Kintombo (Animation Supervisor of Weta Digital), Mike Seymour (Real-Time Live! Chair), Alyn Rockwood (Doctoral Consortium Chair), Ernest Petti (Studio CG Supervisor of Walt Disney Animation Studios), and Guy Williams (VFX Supervisor of Weta Digital). Please stay tuned with us; exclusive interviews will be brought to you soon!
The annual event, which rotates around the Asian region, attracts the most respected technical and creative people from all over the world, people excited by research, science, art, animation, gaming, interactivity, education and emerging technologies. Now, let’s review the highlights of this fantastic conference.
Opening Ceremony and Keynote Session
Tomasz Bednarz, SIGGRAPH Asia 2019 Conference Chair, gave an overview of how SIGGRAPH Asia 2019 came to be in Australia. Keynote Speaker Donna J. Cox presented an extraordinarily insightful presentation on 'Revolutions in Mapping the Digital Universe: Stories of Satellites, Supercomputers, and the Art of Data Visualization'.
SIGGRAPH Asia 2019 Experiences
68 companies and brands, representing 17 countries and regions, participated in the SIGGRAPH Asia 2019 Exhibition, some of which also organized Exhibitor Talks. The event showcased the latest cutting-edge hardware and software applications in the computer graphics and interactive techniques space.
The SIGGRAPH Asia Doctoral Consortium was a forum for Ph.D. students to meet and discuss their work with each other and a panel of experienced SIGGRAPH Asia researchers in an informal and interactive setting.
The Featured Sessions program cast a spotlight on major breakthroughs, techniques and art in the field of computer graphics and interactive techniques, such as Childish Gambino's Pharos - Real-Time Dome Projection for Live Concert, Making of Pixar's Onward, How Weta Digital Created Junior for Gemini Man, Star Wars: Over Four Decades of Storytelling with Innovation, and so on.
Computer Animation Festival
Asia's premier computer animation festival showcased a worldwide collection of the year's best works. Over four exciting days, presenters showcased their most innovative explorations and transitions in computer-generated animation and visual effects. This year’s winners are:
BEST IN SHOW: Kids
Co-Created by: Michael Frei & Mario von Rickenbach, Playables, Switzerland
Distributor: Wouter Jansen, Some Shorts, The Netherlands
JURY PRIZE: Spring
Director: Andy Goralczyk, Blender Foundation, The Netherlands
Producer: Francesco Siddi, Blender Foundation, The Netherlands
BEST STUDENT PROJECT: The Ostrich Politic
Director: Mohammad Houhou, Miyu Distribution, France
Producer: Moira Marguin, Miyu Distribution, France
Moreover, its panels and talks included presentations by experts on a variety of topics related to the creation of computer animation and visual effects, as well as behind-the-scenes presentations by creators from the studios and schools whose works were screened at the festival.
Real-Time Live! made the future of interactive techniques live on stage. Participants could watch the most innovative interactive techniques as they were presented and deconstructed live by their creators.
For artists and scientists, SIGGRAPH Asia is where enthusiasts of computer graphics and techniques gather. It is also a unique interactive platform for exhibitors, fostering connections between exhibitors and the SIGGRAPH Asia community, bringing together new friends, and creating new business opportunities.
Fox Renderfarm will continue to support CG learning and communication platforms like SIGGRAPH Asia. We are also looking forward to seeing you at SIGGRAPH Asia 2020 in Daegu, South Korea!
Global computer-generated animation and visual effects brands to gather at SIGGRAPH Asia 2019
Brisbane, Australia, 20 September 2019 – Some of the world’s leading brands, academic institutions and start-ups have secured their presence at the region’s foremost event for computer-generated animation and visual effects, SIGGRAPH Asia 2019.
“We are delighted that some of the industry’s most respected brands are supporting SIGGRAPH Asia for its debut in Australia,” says Conference Chair, Tomasz Bednarz. “The Australian market for animation, games, CGI/VFX, interactive media and VR/AR, as well as research in this space, is rapidly growing, and as hosts of SIGGRAPH Asia’s 12th edition, we are excited to show the rest of the world the opportunities Australia presents.”
Sponsors and exhibitors from over 20 countries at SIGGRAPH Asia 2019 include Adobe Research, AWS, Carpe Diem Solutions, Computational Visual Media, Forum8 Co., Ltd, Foundry, HTC Corporation, Industrial Light & Magic, PIXAR, Qualisys, SideFX HOUDINI Software, Tracklab, Tsinghua University – Tencent Joint Laboratory, UBISOFT, Unity Technologies, VICON Motion Systems, Weta Digital, Xsens and ZQ Racing, to mention a few. They will showcase hardware and services in the categories of computer graphics, interactive and innovative technologies and high-performance computing, as well as education, training and research. The support of these companies underscores the importance of SIGGRAPH Asia 2019 as a learning hub and showcase for emerging technologies in the industry.
"As a long-term sponsor and exhibitor, Fox Renderfarm believes that people contributing to, and attending, SIGGRAPH Asia truly thrive together. They do this by sharing the latest research and techniques, exploring advanced technologies, communicating industry insights, and envisaging a blueprint hand in hand,” commented Rachel Chen, Marketing Director, Fox Renderfarm. “SIGGRAPH Asia is the place for Fox Renderfarm to showcase our fast and secure CPU & GPU rendering solutions and connect to customers from around the world."
SIGGRAPH Asia 2019’s Gold Sponsor, the NVIDIA Deep Learning Institute (DLI), will also be conducting hands-on training workshops over two days on Fundamentals of Deep Learning for Computer Vision and Multiple Data Types. Aimed at developers, data scientists, and researchers, NVIDIA will help participants get started with training, optimizing, and deploying neural networks to solve real-world problems across diverse industries such as self-driving cars, healthcare, online services and robotics. These workshops are open to Platinum (PP), Full Conference (FC) and Full Conference 1-Day (1D) pass holders, who will be notified on how to register for the workshops.
An additional component within the SIGGRAPH Asia 2019 exhibition is an initiative that enables small and boutique enterprises to connect with, showcase to, and scale with brands, investors and senior creatives from around the world. Brisbane, the host city of SIGGRAPH Asia 2019, has a burgeoning startup community in creative tech, with companies at the forefront of technological innovation in the fields of animation, games and visual effects.
Participating international academic institutions and universities include Lost Boys – School of VFX (Canada); Ritsumeikan University, College of Image Arts and Sciences (Japan); the Visual Computing Center at KAUST (Saudi Arabia); and Victoria University of Wellington and the Wellington ICT Graduate School (New Zealand). Participating Australian institutions include the University of New South Wales’ School of Art & Design and the UQ Centre for Energy Data Innovation.
SIGGRAPH Asia 2016 | Big Names Gathered at FoxRenderfarm’s Booth
The 9th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH Asia 2016) took place in The Venetian Macao from 5th to 8th December 2016. The annual event, which rotates around the Asian region, attracts more than 6,000 computer graphics and interactive techniques industry leaders, experts and scholars.

Paul Debevec, Senior Engineer of Google Daydream; Brian Cabral, Director of Engineering at Facebook; Hongbo Fu, Chair of SIGGRAPH Asia 2016; Hsin-Yao Liang, President of Rayvision.

The SIGGRAPH Asia 2016 committee consisted of world-famous enterprise experts and authoritative scholars from universities. Hongbo Fu from City University of Hong Kong served as Conference Chair. As an experienced entrepreneur in the computer graphics industry, Hsin-Yao Liang, the president of Rayvision (FoxRenderfarm), served as Featured Sessions Chair of SIGGRAPH Asia 2016.

The theme of SIGGRAPH Asia 2016 was “Key to the Future”. As the only Chinese enterprise representative on the committee, Rayvision (FoxRenderfarm) attended SIGGRAPH Asia 2016 as an exhibitor (Booth B-02) and displayed its splendid rendering projects, which drew the attention of experts worldwide in the field of computer graphics and interactive techniques, including Kurt Akeley, CTO of Lytro Inc; Brian Cabral, Director of Engineering at Facebook; Dan Sarto, Co-Founder and Publisher of AWN; and James Cunningham and Oliver Hilber, the director and producer of Accidents, Blunders And Calamities.
Accidents, Blunders And Calamities, directed by James Cunningham, received the Jury Special Award at the SIGGRAPH Asia 2016 Computer Animation Festival.

Victor Wong and Shuzo Shiota, the chair and co-chair of the SIGGRAPH Asia 2016 Computer Animation Festival, presented the Jury Special Award to James Cunningham and Oliver Hilber.

James Cunningham, the director of Accidents, Blunders And Calamities; Oliver Hilber, the producer of Accidents, Blunders And Calamities.

Kurt Akeley, CTO of Lytro Inc; Hsin-Yao Liang, President of Rayvision; Brian Cabral, Director of Engineering at Facebook.

Dan Sarto, Co-Founder and Publisher at Animation World Network - AWN.com
SIGGRAPH Asia Review | Featured Sessions Planned by Rayvision
SIGGRAPH Asia 2016 officially ended on 8th December 2016 at The Venetian Macao Resort Hotel. The 4-day exhibition consisted of the Art Gallery, Computer Animation Festival, Keynotes, Featured Sessions, Emerging Technologies, Technical Briefs, Technical Papers, the Symposium on Mobile Graphics and Interactive Applications, the Symposium on Education and the Symposium on Visualization.

The most popular program of SIGGRAPH Asia 2016 was Featured Sessions, hosted by Hsin-Yao Liang, the president of Rayvision, and planned by Rayvision’s marketing team. Featured Sessions had two panels: He’s Back! T2 25 Years Later (Panel 1) and The Future of Imaging (Panel 2). Many authoritative experts and scholars from the computer graphics industry all over the world came together to attend this grand meeting.

Panel 1: He’s Back! T2 25 Years Later (Scott Ross, GM of ILM and Co-Founder of Digital Domain with James Cameron; Mark Dippe, Assistant Visual Effects Supervisor for T2; Steve “Spaz” Williams, Computer Animation Supervisor for T2.)

25 years ago, movie history was made with Terminator 2: Judgment Day. The use of computer-generated images in cinema had finally come of age. James Cameron and ILM utilized state-of-the-art computer technology to help create some of the most memorable characters in the film. Scott Ross, the General Manager of ILM in 1991, hosted an in-depth discussion with Mark Dippe and Steve "Spaz" Williams, the team that created the breakthrough images. Attendees were given a behind-the-scenes look at the challenges they faced, the technology they used, the process that was developed, and the moment when they realized they had forever changed the landscape of feature films. In addition, the newly remastered 3D version of Terminator 2: Judgment Day will be released in 2017. Let’s look forward to the fantastic visual effects of T2!

Panel 2: The Future of Imaging (Brian Cabral, Director of Engineering at Facebook; Kurt Akeley, CTO of Lytro Inc; Radu B. Rusu, CEO and Co-Founder of Fyusion, Inc; Shen Shaojie, Chief Roboticist, DJI.)

Computational photography and artificial intelligence are not just massively redefining human visual experiences and the way people create content, but are also equipping robots everywhere with vision and cognitive intelligence. We were honored to have visionary executive scientists from the revolutionary firms Lytro, Fyusion and DJI, who have been successfully building critical stepping-stones for the global society and keep leading the revolutions for tomorrow. In this panel, they shared their past and current adventures as well as their visions for the future of imaging.
SIGGRAPH Asia 2016 | Featured Session Panel 2: The Future of Imaging
SIGGRAPH Asia 2016 | Featured Session Panel 1: He’s Back! T2 25 years later.