    Interview with Mike Seymour, an Outstanding Digital Humans Researcher

    What happens when technology has a human face? How will digital humans affect our lives? These are the questions Mike Seymour is exploring. Mike is a digital humans researcher who studies new forms of effective communication and education using photoreal, real-time computer-generated faces.

    Mike Seymour @ SIGGRAPH Asia 2019

    Mike was Chair of Real-Time Live! at SIGGRAPH Asia 2019, organizing a program that showcased cutting-edge real-time technologies from around the world, from mobile and console games to virtual and augmented reality. He is also the co-founder of the MOTUS Lab at The University of Sydney.

    Mike Seymour at TEDxSydney 2019

    As the lead researcher in the MOTUS Lab, Mike is exploring the use of interactive photoreal faces in new forms of Human Computer Interfaces (HCI) and looking at deploying realistic digital companions and embodied conversational agents. This work has special relevance for aged care and related medical applications, such as supporting stroke victims and people with memory issues.

    He suggests that we need to find new ways to provide interaction for people, beyond typing or simply talking to our devices, and that face-to-face communication is central to the human experience. At the same time, he examines some of the many ethical implications these new forms of HCI present.

    FXGUIDE

    He is well known for his work as a writer, consultant and educator with the websites fxguide.com and fxphd.com, which explore technologies in the film industry. These websites now have huge followings, as they provide an important link between the film and VFX community and the researchers and innovators who constantly push the limits of technology.

    Some films and TV series Mike has worked on

    In addition to fxguide.com and fxphd.com, Mike has worked as VFX Supervisor, Second Unit Director or Producer on several TV series and films, winning the AFI Award for Best Visual Effects for the movie Hunt Angels in 2007 and being nominated for a Primetime Emmy Award for the TV mini-series Farscape: The Peacekeeper Wars in 2005.

    Fox Renderfarm was honored to interview Mike Seymour at SIGGRAPH Asia 2019. Here’s the interview between Mike Seymour and Fox Renderfarm.

    Fox Renderfarm: Would you give a brief introduction to Human Computer Interfaces (HCI)?

    Mike: So I research Human Computer Interfaces, or HCI, which is the idea of how we deal with computers. And if you think about it, most computers are just getting input from a mouse or a keyboard, but what if we could talk to our computers? What if the computers could respond to us emotionally? So the work that I do with digital humans or virtual humans is putting a face on technology, so that we can interact with it. Because after all, we work really well with faces, we respond to faces, we travel great distances to see someone face to face. So we think it'd be really interesting if we could take that idea of having a face, put it on a computer, and allow us to work with it in a much more natural and human way.

    Fox Renderfarm: What are your biggest achievements of HCI so far?

    Mike: So one of the interesting things that's happened just in the last couple of years has been this amazing nexus of technology and approaches. We've got this combination of things that are really blowing the doors off what's possible, because we can start to produce very photorealistic digital humans, in other words, people that really look like us. Now, this is super important because if we produce something that doesn't look very good, we actually have a negative reaction to it. It's not like audio, where you have sort of good quality, better quality, and then great quality. With people, we have either cartoons, or we need very, very high quality. If we have something that's not so good, people actually reject it out of hand. So we call it a non-linear response; in other words, as it gets better in quality, your reaction varies up and down a lot. So only recently have we been able to produce these incredibly realistic faces. And most importantly for HCI, those faces can run in real time, so they can smile at you in real time, talk to you in real time, nod and gesture. That's very different from a video or something you might see in a feature film, where they might have hours and hours to produce a clip. We need to produce these things in sometimes as short as about 9 to 12 milliseconds.
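
    For context, the 9 to 12 millisecond budget Mike mentions corresponds to roughly 83 to 111 frames per second. Below is a minimal sketch of that frame-budget arithmetic in Python; the helper function and the extra comparison budgets are illustrative assumptions, not something from the interview.

    ```python
    # Convert a per-frame time budget into frames per second.
    # The 9-12 ms figures come from the interview above; 16.7 ms and 33.3 ms
    # (roughly 60 fps and 30 fps) are added purely for comparison.

    def frames_per_second(frame_time_ms: float) -> float:
        """Return how many frames fit into one second at the given per-frame budget."""
        return 1000.0 / frame_time_ms

    for budget_ms in (9.0, 12.0, 16.7, 33.3):
        print(f"{budget_ms:5.1f} ms per frame -> {frames_per_second(budget_ms):6.1f} fps")
    ```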

    MEET MIKE @ SIGGRAPH 2017

    Fox Renderfarm: Have you met any challenges in the HCI development process?

    Mike: One of the big challenges we have is that we've actually done a lot of really great work on faces and on being able to produce digital humans. That work's not done, but it's certainly advanced tremendously in the last three or four years. We're now having to grapple with how we solve some of the issues around voices. Say I'm talking to someone in China and I'm in Sydney, and my colleague is from China and of course speaks a language that I don't. If we're on a conference call, and somebody at the other end doesn't speak Chinese, like I don't speak Chinese, we have this problem that I have to solve the language. Now, if I've got an avatar, something that I'm puppeteering, then I would be able to speak in English, have a version of me speak in Mandarin, and be able to understand across barriers. That's good, and that's great. But what if I'm not puppeteering it? What if I actually want the computer to talk to me? I now need to make a synthetic voice. And the challenge right now is to see if we can do for voices, for audio, what we've done for faces. It's kind of a thing you may not expect. But of course, what we want is the computer to speak in a really natural way, to have the right cadence, the right kind of tone, the right kind of attitude. Getting that natural-sounding audio is not harder than doing the vision, but we actually are a lot less tolerant of problems with audio. If you're watching a movie and the vision isn't quite right, but you can hear everything, you'll still be reasonably happy. But if you were in a situation where the vision looks great but you couldn't hear what the actors were saying, you'd switch the channel or go do something else. So what we're trying to do now is get the audio to be impeccably good, so that it can go along with what we've been doing in vision.

    MEET MIKE @ SIGGRAPH 2017

    Fox Renderfarm: How do you think our life will be changed by HCI, with deep learning algorithms, GPU graphics cards rendering, and 5G?

    Mike: The astounding thing is that now, we actually have more compute power than we need for some of the functions we want to do with the computer. We can afford to spend some of that compute power producing these amazingly interactive user interfaces. That's part one, and that's obviously been influenced enormously by GPUs and much faster graphics. On top of that, we've had a new approach to how to use the graphics, which is AI or deep learning. So now we have the second part of the jigsaw puzzle, which allows us to do incredibly clever things by letting the machine learn my face, and then synthesize a plausible version of my face, again in real time, because of that GPU. And then the third part of that jigsaw puzzle is that we're able to do that now, increasingly, with 5G. Now, 5G is obviously very new, but what it offers us is not just bandwidth, which we imagined would let us transfer more data; that's part of it. One of the real secrets of 5G is low latency. So, in fact, we can have interactivity: things come to life when they are realistic and rendered quickly, because we've used actual faces to construct them, and then we have this very low latency so we can interact. All of that is just going to change how we do communication and education, even in areas you might not imagine, such as health.

    Fox Renderfarm: Fox Renderfarm is going to provide online real-time rendering services, is that possible to cooperate with you on the HCI research?

    Mike: We are really keen to work with people all over the world, and it's the mantra of our lab that for the research we do, we actually don't own the IP, so we give away all the data. We work with companies around the world so that we can give back to the community. Our interest is in seeing that this moves forward. And one of the great things about rendering on the cloud, and the idea of having a really good infrastructure on a global basis, is that with high-speed communications and with 5G, we are increasingly seeing this become something we can bring into things that ordinary people can use. At the moment we've got a history where I might be using a render farm if I'm a really big company. But what we're seeing now is a move towards being able to do things that can be democratized, and I think we're going to see this vast explosion where we want quite a lot of power on our personal device, but actually tap into a broader deep learning, AI kind of environment to provide this great interactivity. And as that happens with low latency and the kind of infrastructure we're seeing, the ability to scale up is just going to produce sensational results.

    Fox Renderfarm: As the Chair of Real-Time Live! at SIGGRAPH Asia 2019, what was your biggest surprise?

    Mike: There were a lot of submissions to Real-Time Live! this year. But Real-Time Live! is a little different from other things, because you need to actually mount a performance. It's a bit like volunteering for a stage show. If I'm coming here to do a talk, I'll bring my PowerPoint on my laptop. But if I'm coming here to do Real-Time Live!, like the Matt AI project and a number of the other projects being shown, you actually have to bring a whole lot of computers, a whole lot of gear, and mount a live presentation. You have nine minutes to wow the audience, and of course, it's very unforgiving because, in nine minutes, you can't afford to switch the computer off and start again. So we've been really impressed by the variety of the projects, and the variety of applications that they're addressing. We have teams that are addressing making digital characters talk, which is one of my favorites, I love that one. But we've also got ones where people are looking at how to use VR and real-time graphics for science research and for communication, as well as artistic pieces that are very much just producing a really amazing show in their own right.

    Real-Time Live! in SIGGRAPH Asia 2019

    Fox Renderfarm: You were doing VFX before, and you are a researcher and also co-founder of fxguide.com. What has been the biggest influence along your multi-dimensional career path? What do you do to keep yourself inspired and motivated?

    Mike: I was in the visual effects industry for many years and got nominated for Emmys and AFIs, and that was all great. I enjoyed that and it was terrific work. What I decided a little while ago, having done quite a lot of research and teaching and increasingly doing consulting work for companies around the world, which we still do, is that it would be really interesting to up that research component and get more involved with hardcore research. So I still do consulting, I do work for major Hollywood studios, and I enjoy that work tremendously. But what I'm interested in is whether we can, in addition to that work in the entertainment industry, take that tech and apply it to these other areas. So, for example, my research area at the moment is seeing if we can take some of this digital human technology and use it for stroke victims. People who have had a stroke and have trouble forming short-term memories are often still very good with long-term memories, but they literally find everything that's going on around them today a little unfamiliar and disconcerting. There is an extraordinarily high level of stroke in the world, a lot of people have strokes, and quite a high percentage are actually under the age of 65 and want to continue to contribute and work, because they are of that younger age. Now, of course, we want everybody to benefit from this, but particularly for those people that are still trying to work in the world, if you have problems with short-term memory, all technology starts to become a challenge. And we expect someone to use a computer just as we expect them to use a phone these days. Well, if we could put a familiar face on the technology, a face from their past, not necessarily a real person, but one that is familiar and reassuring, then this new thing, this new technology, whatever it is, suddenly no longer seems quite so harsh, so unfamiliar, so disconcerting. And we think that's a really good way of helping with rehabilitation. So this is just one of the areas we are looking at: taking this terrific tech from the entertainment industry, which I love to death, and seeing if we can help people that are less fortunate, that have been through really hard circumstances.

    Fox Renderfarm: Who or what projects inspire you most in VFX and Interactive Technology respectively?

    Mike: There's been really great work done in technology around the world. Obviously, some of the big film companies like Weta Digital and ILM have been doing terrific work. In the research that I've been doing, we've managed to partner with companies around the world. So when we were doing a digital version of me, for example, we partnered with Epic Games, but also with Tencent, which is terrific, and with companies in Serbia and in England, so it's an international kind of collective. And one of the things that really inspires me is how openly these companies are working together and sharing what's going on, because there's a lot more to be gained by expanding what we can do than by worrying about individual bits. So the community that's doing this work has been really generous and really open with their work.

    Fox Renderfarm: What’s your comment on Gemini Man?

    Mike: Gemini Man is one of the most startling and groundbreaking pieces of production that I've certainly seen, and I was really impressed by a number of things. Firstly, they were doing work at Weta Digital where we really knew the character very well at both ages: we know Will Smith as he is today, and Will Smith earlier in his career. We know from our own research that the more familiar you are with a face, the harsher you are. So if you make a younger version of someone you didn't know, it may look great to your eyes, but to their brothers or sisters it would be very upsetting, because it wouldn't feel right to them. So what we're trying to see is whether companies like Weta can produce very familiar faces in a way that we find acceptable, reassuring and entertaining, and I think they've really done that with Gemini Man. The second thing that really impressed me is that in that film, while it's an action film, there are a lot of slower emotional scenes where there is really no way to hide. The young Will Smith is on screen and the camera isn't flying around. Sure, there are bike chases, but there are other scenes where he is really acting so that the audience can buy into that performance, and I think it's terrific. I really applaud the work that the team at Weta Digital have done; it's absolutely groundbreaking.

    images source: fxguide.com

    Fox Renderfarm: Any other things you want to share with CG enthusiasts?

    Mike: I think one of the things that I've been really happy about is how the community has come together internationally. There are teams now that have built up pockets of excellence. There are a couple of teams in China that are just spectacularly good. And obviously, what we've seen with the work in China, and I've actually lectured in China and visited many times, is that there's a real depth of both technical expertise and creativity. So it's really great to see the infrastructure being built up, things like the render farm and so on, so that it can provide the technical support to match the creativity; I think that's been really good. Now there are two teams in China I can think of, there's a team in Europe, a team in New Zealand, a team in Serbia, and in London, and of course America. And so what's great is to see that this is a very balanced international effort, and I love the fact that here at SIGGRAPH Asia we've got all of the teams coming and presenting their work and sharing things. Because, as I said earlier, there's so much that can be gained by people cooperating and working collaboratively together. And from all my years in the film industry, it takes a thousand people to do the visual effects on a film. So you need this great collaboration of artists, this great infrastructure from companies supporting that, and then, of course, you need people willing to be open and share their ideas, as they're doing here at SIGGRAPH Asia. So, it's really great.
