Real-Time Rendering And The Future of Animation

Real-Time Rendering

All the elements of a scene, including models, environments, and lights, rendered and shown in a fraction of a second.

Technological advancements in animation have centered on shortening the time it takes to view animation output and on enhancing the artist’s tools. Most of the time, these advancements go hand in hand.

Let’s face it: animation is a waiting game. You model and rig characters, build sets, position your lights, animate your action, and send your scene off to render. And then … you wait. Get the footage back, make some quick adjustments, and then … you wait. That’s the nature of animation. From the days of waiting for film to develop to the current practice of sending frames off to render, you have always had to wait to see the fruits of your labor.

Real-time rendering practically eliminates the waiting process by removing the rendering phase from the animation production pipeline. The closer you can get to the look of the final film early on and throughout the production process, the less time spent on guesswork, which means fewer revisions.

What Is Real-Time Rendering?

If you’ve played a video game, you’ve experienced real-time rendering. Instead of rendering single CG frames and then compiling those frames together for later playback, real-time rendering can display a CG environment complete with lighting, effects, textures, and simulations at about 60 frames per second. Taking the frame rendering down to milliseconds means it’s fast enough to be perceived as “real time.” Rendering all of those elements, as well as computing how they interact with each other, takes a lot of processing power. For this reason, video games have typically been designed to be simpler than film and TV, but the lines are starting to blur.
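
To put “real time” in perspective, the arithmetic is simple: a 60 fps target leaves roughly 16.7 milliseconds to draw everything on screen. Here is a back-of-the-envelope sketch in C++ comparing that budget to the oft-cited 20-hour offline film frame; the numbers are illustrative, not benchmarks.

```cpp
// Back-of-the-envelope math: the per-frame time budget at 60 fps versus a
// 20-hour offline film frame. Numbers are illustrative, not benchmarks.
#include <iostream>

int main() {
    const double targetFps    = 60.0;
    const double frameBudget  = 1000.0 / targetFps;           // ~16.7 ms per frame
    const double offlineFrame = 20.0 * 60.0 * 60.0 * 1000.0;  // 20 hours in ms

    std::cout << "Real-time budget per frame: " << frameBudget << " ms\n";
    std::cout << "A 20-hour offline frame is "
              << offlineFrame / frameBudget << "x over that budget\n";
}
```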

To achieve the quality and detail of film and TV, hours and hours of computing time are needed to create a single image. Most people have heard of Pixar’s single frames taking 20 hours or more to compute. With advances in technology and processing power, incredibly realistic (or stylized) environments with characters, props, sets, lights, and the entire game world can now be created and displayed in fractions of a second. Real-time rendering does all of this computing on the fly, and studios are already seeing time savings of 30% to 50%.

 

Real-Time Rendering Workflow

Peter Monga, an independent animator from New Zealand, is creating his own children’s television series using the Unreal engine. With proper planning, stylized characters, and simple environments, he has set himself up with a streamlined workflow for creating episodes of Morgan Lives in a Rocket House. Using software like Maya and Photoshop for animating and texturing, he can skip sending his work to a render farm and bypass the offline lighting, rendering, and compositing stages entirely. He can adjust things like depth of field, color grading, ambient occlusion, reflections, and a host of other elements that would otherwise overload the budget.

 


 

For the creation of Mr. Carton, Michaël Bolufer harnessed the power of the Unity engine for his 13-episode series. Using Unity allowed the team to bypass the time-consuming CG rendering and compositing process and do the sequencing for each episode directly in the editor using Flux. Being able to go straight to sequencing with the animations, models, cameras, lighting, and scene elements is where the time and cost benefits of real-time rendering shine.

 

Real-Time Rendering and FX

Big-budget films are using real-time technology to cut production time and costs. High-fidelity previews mean shorter iteration cycles in pre-production.

Halon Entertainment pushed the previsualization envelope with their work on War for the Planet of the Apes. By making real-time rendering a core part of the early previs process, the team could quickly review high-quality shots built from large datasets and complex imagery, and adjust them on the fly. With the ability to move cameras and scene elements immediately, the director could see a shot in action and make decisions about camera placement and composition. That immediate, well-vetted feedback was a huge time saver down the line, and those early decisions meant the team could focus on crafting the film elements with the most visual impact.

Real-time rendering isn’t limited to traditional character animation. Just about any element in a film that needs to be rendered can be produced with a real-time engine, such as the dynamic user-interface elements we see on heads-up displays or futuristic touch screens.

FX artist Vincent Parker created live interface displays for the film Passengers that could be installed on set for the actors to see and touch. Using the Unity engine meant he could update the UI/UX on the screens in real time, without pulling the set piece apart to load new graphics. It also let the actors touch actual screens with a working interface, instead of miming hand motions that FX artists would have to match to rendered digital effects later.

Limitations of Real-Time Rendering

The most significant bottleneck of real-time rendering is the processing power of the host machine. Those milliseconds can start getting longer and longer the more information the screen has to display. Most animation can be done beforehand and imported, but dynamic elements like shadows, particles (smoke, grass, water), and gravity require more robust processors to deal with all the additional information. Without enough processing power, the scene could start stuttering or become a virtual slideshow as the computer struggles to display everything in the scene.
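
To make that bottleneck concrete, here is a minimal sketch of a render loop that watches its own frame budget. The `renderScene()` function is a hypothetical stand-in for whatever the engine draws, and the 60 fps target is an assumption; no real engine’s API is being modeled here.

```cpp
// A minimal sketch of a render loop that watches its own frame budget.
// renderScene() is a hypothetical stand-in for whatever the engine draws.
#include <chrono>
#include <iostream>

void renderScene() {
    // Imagine drawing models, lights, shadows, and particles here.
}

int main() {
    using clock = std::chrono::steady_clock;
    const double budgetMs = 1000.0 / 60.0;  // ~16.7 ms at a 60 fps target

    for (int frame = 0; frame < 600; ++frame) {
        auto start = clock::now();
        renderScene();
        const double elapsedMs =
            std::chrono::duration<double, std::milli>(clock::now() - start).count();

        if (elapsedMs > budgetMs) {
            // Too much work for the hardware: this late frame is the
            // stutter described above.
            std::cout << "Frame " << frame << " blew its budget: "
                      << elapsedMs << " ms\n";
        }
    }
}
```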

Hair and clothing also pose a particular problem as these elements typically interact directly with dynamic elements, like a character model. Since these are non-scripted elements, they can cause unforeseen issues like clipping or displacement. This limitation is why in many games (like Fortnite and Firewatch) and video productions (like Morgan and Mr. Carton) the clothing and hair are modeled into the character.
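
To see why simulated cloth is harder than scripted animation, consider a toy example: a single cloth particle stepped with Verlet integration every frame, with a naive collision clamp standing in for the character’s surface. This is a deliberately simplified sketch, not how Unity or Unreal actually handle cloth.

```cpp
// A toy Verlet step for a single cloth particle, to show why cloth is
// simulated every frame rather than authored in advance.
#include <iostream>

struct Particle {
    double y;      // height this frame (meters)
    double prevY;  // height last frame
};

int main() {
    const double gravity = -9.8;       // m/s^2
    const double dt = 1.0 / 60.0;      // one 60 fps frame
    Particle p{2.0, 2.0};              // starts at rest, 2 m above the surface

    for (int frame = 0; frame < 60; ++frame) {
        // Verlet integration: the next position comes from the last two.
        const double nextY = 2.0 * p.y - p.prevY + gravity * dt * dt;
        p.prevY = p.y;
        p.y = nextY;

        // Naive collision clamp at y = 0, standing in for the character's
        // surface. Skip this check and the particle clips right through.
        if (p.y < 0.0) p.y = 0.0;
    }
    std::cout << "Particle height after one second: " << p.y << " m\n";
}
```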

One way to add more elements to your scene is to lower the level of detail displayed, so it becomes a balancing act between quantity and quality. It’s a trade-off that professionals have learned to balance deftly, and over time the rendering engines themselves will get better at making those adjustments on the fly.
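
Level-of-detail selection is often as simple as swapping meshes based on camera distance. The sketch below illustrates the idea; the distance thresholds and three-tier scheme are made-up illustrative values, not engine defaults.

```cpp
// Distance-based level-of-detail selection: the quantity-versus-quality
// trade-off in its simplest form.
#include <iostream>

enum class LevelOfDetail { High, Medium, Low };

LevelOfDetail pickLod(double distanceToCamera) {
    if (distanceToCamera < 10.0) return LevelOfDetail::High;    // full mesh
    if (distanceToCamera < 50.0) return LevelOfDetail::Medium;  // reduced mesh
    return LevelOfDetail::Low;                                  // billboard stand-in
}

int main() {
    const double distances[] = {5.0, 25.0, 120.0};
    for (double d : distances) {
        std::cout << "Object " << d << " m from camera -> detail tier "
                  << static_cast<int>(pickLod(d)) << "\n";
    }
}
```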

The Future of Real-Time Rendering

In just the past five years, real-time rendering has evolved by leaps and bounds. Real-time engines are climbing the popularity charts among current rendering software choices. The ability to render a single still image quickly is a driving force for real-time render users, especially in the CAD and architectural fields. More and more people are looking to offset production costs and get more done in less time, and that means more time for creativity and innovation.

While real-time rendering engines have evolved to the point that beautiful films and games are being created within the current limitations, advances in A.I. mean that steps like denoising will eventually be handled automatically, so that even highly detailed scenes will ultimately render in real time. OTOY has already made advances in A.I. denoising that produce near-perfect results from renders of only 200 samples, in real time.
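
For a sense of what “denoising” means at its most basic, the sketch below averages away per-pixel noise with a crude 1-D box filter. Real A.I. denoisers like OTOY’s are trained models and vastly more sophisticated; this only illustrates the underlying idea of trading a little detail for a lot less noise.

```cpp
// The idea behind denoising at its most basic: average away per-pixel noise.
#include <array>
#include <iostream>

int main() {
    // Noisy brightness samples that should all be ~1.0.
    std::array<double, 7> noisy{0.9, 1.2, 0.8, 1.1, 1.0, 0.7, 1.3};
    std::array<double, 7> smooth{};

    for (std::size_t i = 1; i + 1 < noisy.size(); ++i) {
        // Replace each value with the average of itself and its neighbors.
        smooth[i] = (noisy[i - 1] + noisy[i] + noisy[i + 1]) / 3.0;
        std::cout << noisy[i] << " -> " << smooth[i] << "\n";
    }
}
```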

As processing power increases, the ability to simulate elements like cloth and hair will also likely go real-time. Take a character out for a spin and put them through their paces to see how things like particle effects interact with the scene. As long as the processing power is there, more and more subtle physics can be computed in each scene.

Real-time rendering will also creep into our daily lives as we interact more and more with photo-realistic avatars that can mimic the facial expressions and actions of off-screen actors. Someday you may be talking with someone online who isn’t even a real person, and you’d never be able to tell. Player characters and A.I.-driven NPCs in games would be nearly indistinguishable. Your favorite actor may not even be a real person at some point.

As real-time rendering bleeds into real life, the creative possibilities are limitless.

Barry T. Smith
As Nimble Collective's Content Marketing Ninja, Barry is responsible for bringing Nimble news and goodness to all the animating boys and girls of the world. And he does it through the power of the Internet, and not a rickety ol' sleigh and tiny reindeer.
