Scott Meadows was a recent Los Angeles transplant in the early 2000s, having moved to the west coast to chase down work after studying architecture and spending time in the employ of a start-up in Austin. He had some photography skills and had done work in digital design, but he was still a bit stumped when he got a call from an old colleague who had an offer for him.
"Out of the blue, they called me up and said, 'Hey, do you wanna do previs on Bad Boys II with Michael Bay?' My first question was, what's previs?"
Meadows wound up taking the job, and over the last 15 years he has helped expand what was back then a small, specialized process intended to help prep what was still largely practical movie production. He is now the head of both previs and virtual production — an even newer technique — at Digital Domain, where he helps build some of the biggest blockbuster films in the world.
Previs — short for pre-visualization — is absolutely crucial on all movies, and even more essential on massive effects-driven films that often have thousands of artists working on different shots at any given time. It allows filmmakers to figure out what they want their movies to look like, how they're going to make that happen, and what to do when those initial plans inevitably fall through. And increasingly, it's becoming part of the movie itself.
It was just a two-man team on Bad Boys II, but the process grew larger with every project. Meadows moved on to films like Van Helsing, The Lion, the Witch and the Wardrobe, and Mr. and Mrs. Smith before getting the call to work on the movie that would change his career — and, in hindsight, CGI movies altogether: TRON: Legacy. Digital Domain was the effects house on Disney's long-awaited sequel/reboot, and Meadows has been there ever since. He spoke to SYFY WIRE about his career, recent projects like Black Panther, and the future of his burgeoning field.
So, what did you find out previs actually does?
Initially, we start with a script or at least a beat sheet. Information from the director about the sequences that they want to do. You assess what assets are needed, what kind of characters, stunts, environments, vehicles, all that stuff. Then you start building that stuff, building your playset. Then you jump into shot production.
Other times we've been put in where they didn't have anything. Then you just dive into it as if you're making the film. You're animating shots, putting together sequences, reviewing that with the director on a weekly basis. There are reviews with the studio, with the producers, but once it's locked down or in a stable state then they use it for all kinds of things.
They'll take that and use it to figure out how much the visual effects will cost. They use it on the technical side for how to actually do the shots. It just gives all the departments something to refer to as a blueprint. In the beginning, some directors would be very engaged with it and would stick to it, but others would work through it, then when it ultimately came down to principal photography they would just kind of throw it out and do whatever they wanted.
But over the years, as it's gotten more and more a part of the process, studios and directors are following it much more closely. And that's the whole point of it — if you spend the time in the beginning planning and you stick to that plan, then you know what you're gonna end up with after the shooting has happened. Obviously there are things that change along the way and something that always seems to happen is that you'll design a sequence. For example, on Pixels, we did the Pac-Man sequence and initially I think the scene was supposed to take place in Hong Kong and then that changed to New York, and then ultimately we shot it in Toronto as New York.
You're initially guessing at the physical location and as you get further down the road, they decide "oh this is the actual location that we're gonna use." So a lot of the times you have to re-engineer the sequence to work with the new location, and we try to make it more accurate. We'll go and survey the location, so we use Google Earth or whatever we can get our hands on for dimensions and then try to make it as realistic as possible, so that on the day they understand what kind of equipment they can get in there.
You work with the DP. Work with the visual effects supervisor. It starts out as a design tool and then once it gets more honed in it becomes a little bit more of a technical exercise. Then once you make it through that phase, once they've actually shot everything, then you start doing a similar process, but with plates. That's post.
Which is like a second round of production for you.
They're cutting together the scenes, but for example on TRON or one of those movies that has a lot of CG, there's not a lot [of film] for the editors to know what they're looking at and to cut. So then you come in and start temping out those backgrounds. You basically do a dumbed down version of that actual visual effects shot. So you would actually get the plate from editorial. You would track it. You would bring it into the system and then use our assets a lot of the time, or if you have the time and the quality needs to be higher, then you improve those assets. So then you would do animation and composite, and kick that back to editorial.
It saves them money because as soon as the plate lands at a facility that's when it starts to cost money, just because of the amount of people involved that are looking at it. The idea is that you do as much design and tweaking as you can in previs and post-vis so that you're not spending all your visual effects budget trying to figure out what the shot is.
How good does the previs look? How accurate are you trying to make it? Or is it more of a sketch, visually speaking?
I'd say it's like a video game from the late '90s. It just depends. We do certain shows that really push the look of it. It really depends on the show and how important that is, because it definitely takes time, and the tricky thing, compared to visual effects, is that we're not rendering anything. We don't have time to render it.
We do what's called a Playblast, where what you see is what you get if you were to open up Maya. You can preview textures and lighting and all of that stuff. It has limitations. You can only have 16 lights cast shadows, and they're not that great. Your texture resolution can only be so high. There are a lot of hardware limitations, but the idea is that you're moving really quickly, because if we were to render each shot, that would take a lot more time.
So the idea is that we're working more and more in real time with everything. That's where virtual production and the game engine side of things has started to creep into the arena. What we're working on now is, instead of using Maya's real-time visualization, we're pumping that into a game engine so that you have even more fidelity and more control. Basically a better image at the end of the day.
Do you work a lot with production once the shoot gets going?
We work with the art department as things change. They feed us a lot of models, and that's another important workflow where the production designers are working with concept artists. If there's a lot of design to be done, like, say, on TRON with the vehicles, or on Black Panther with the suit or the different environments, they'll feed us those models and then we rebuild them to work better in our system; it's not as simple as just dropping in a model from SketchUp.
On Black Panther, for example, the production designer worked on the main design and kicked us off with those models, but then once we got into shot production, things or ideas would come up and we would design a lot of it. Several of those things ended up in the film. There's a shot where we're with the door and we drop down underwater, and we watch the water drain into these pipes, and then we pull out through it and you see the waterfall start to dissipate.
So it's way more significant now, in terms of its importance.
I would say the biggest change overall is just how it's used. It's more of a standard now. In the beginning it was only a handful of films; we were a small crew hired by the production. Everyone brought their own gear. And now there are several previs companies in town. That, and I think the game engine technology, some of these real-time renderers like Redshift, and the software itself: it's all evolved. The stuff that we use now compared to when I first started, it's gotten a lot better.
And now you're doing virtual production — where does that fit in?
Think of it as everything being real-time. You're basically creating a sandbox for the director to come in, and we have a really large stage here. We've got two: a smaller stage and a large stage. So it's all about making these decisions in real time. In previs, by contrast, you would meet with the director and he'd give you some notes. He might sit over your shoulder and you might frame up some shots, but then ultimately he would walk away and you would work on that and then review it with him at a later time.
Virtual production is all about making those decisions in real time, so basically we have a virtual camera and we would have actors in suits. We would load in the environment models. We would rig everything up so that the actors can be acting. We're capturing their motions.
The director has a virtual camera where he's filming this action. We've got it feeding from MotionBuilder into a game engine, which allows greater fidelity with your lighting. You can make lighting decisions. You're just getting a better sense of what the final product will look like. It's all happening in real time. We've even taken the virtual cam stream and piped that into an Avid bay, where we're basically recording shots and then moving on to the next one and then seeing how they actually cut together.
What I'm really interested in is developing these kinds of new workflows where we're making a hybrid show: instead of starting out in animation at the desktop, like you would in previs, we could go out on the stage and start to block stuff out and capture motion more quickly and make master scenes.
So how did this help with Black Panther?
What we did for a lot of the fight sequences that saved us a lot of time was having a stunt choreographer put together cut scenes of the stunt action. He used the mo-cap actors to act it out. They had a stunt gym where they would act everything out with props, and then they would cover it with a handheld camera, cut it all together, and review it with Ryan [Coogler]. Once he was happy with the overall action, we came in with 3X and mo-cap suits, because we were in Atlanta at the time and didn't have the mo-cap volume for motion capture there. We did have one for virtual cam, but this was easier.
So basically we went to the same stunt gym, put the stuntmen in the suits, and captured the whole thing. That allowed us to take that motion, put it onto our previs rigs, and then start recreating the sequence in the computer so it was full CG. Once that's there, you can start changing stuff: exploring different angles, putting them in different environments.
In that situation, it saved a ton of time, because if you were to hand-keyframe that stunt choreography, it would take a long time and you'd have to have a team of really talented animators. And if they're making changes all the time, then obviously that stuff could get thrown out. That's where motion capture is super useful, just for bashing out different ideas.
What's the future of all this?
I feel like on the game engine side of things, if we stay on course, the tools are just gonna get better and better. We might even be able to get to the point where previs becomes the shot. You're not even doing stuff in post, or maybe the amount of stuff you do in post gets less and less, especially if it's a full CG movie, where it's all a virtual world anyway.