Jonathan Fawkner from Framestore and Pete Dionne from MPC discuss the innovative visual effects techniques that brought the smart-aleck yellow cartoon sleuth to life on screen, under the direction of Rob Letterman.

Despite the cartoon origins of the Pokémon characters, filmmaker Rob Letterman (Monsters vs Aliens) and production VFX supervisor Erik Nordby (Passengers) were determined to place Detective Pikachu and his fellow characters into real-life locations and sets for Warner Bros. and Legendary Pictures’ action-comedy, Pokémon Detective Pikachu. “Erik was aware that he was creating inherently artificial fantasy characters, so he wanted to ground them in a sense of reality,” describes Jonathan Fawkner, Framestore creative director of film. “The vast majority of our footage was plate-based, shot on film.”
The Third Floor was embedded in the production to provide the postvis needed to help filmmakers make editorial sense of the mostly empty live-action footage. “Postvis was interesting for us because we had lots of second unit shots for the big sequences,” explained Fawkner. “You end up with a lot of empty plates, which makes it hard for an editor to sort through the footage and know what to look for. What was quite new for me on this one was that I convinced Erik, Rob, and the editor Mark Sanger to hand over all the rushes for our big action sequences. This allowed us to come up with animation ideas and create plates that would support them. We would recut the sequence with different plates, put the postvis on top, and send them back to the editor, who would either say, ‘That’s a good idea. How about putting it here?’ or ‘That doesn’t work.’ It was an organic process editorially.”
Pipeline R&D for the production focused on character development. “The biggest innovation was our fur solver called Fibre, which we used for the first time on Christopher Robin,” noted Fawkner. “We could also use our jet system, which lets us set up creature effects pipelines that often work right out of the box. We can jet any one shot without having to set it up, and the artists get a great first look at the creature effect. That stopped creature effects from being a bottleneck as a manual per-shot activity, letting us be much more bespoke with the shots that needed it.”
“Part of the story involves a poison cloud that effectively fills up a large space,” Fawkner revealed. “That was probably our biggest simulation challenge. It wasn’t just about making it big or making it read correctly. Every shot had to tell part of the story, so art direction was crucial in designing where the poison gas cloud would fit. The intricacies of the story meant some characters needed to be in the poison gas, while other characters had to avoid it. Every shot had geographic and editorial requirements, so the pictures became very art directed, which is difficult when dealing with big simulations.”
MPC digitally recreated an entire Scottish valley for the giant Torterra to occupy. “Our heroes find themselves in a forest that begins to move and shift under their feet as they flee for safety – at the end of the scene, they are actually on the back of a mountain-sized Torterra as it rustles around with its pals in what we previously thought was a natural valley of mountains,” shared MPC VFX supervisor Pete Dionne. “To achieve this, we shot as much plate-based material as possible in the mountains and valleys of the Scottish Highlands. This established the look of our immediate digital forest and inspired how to scale a normal Torterra Pokémon up to the size of a mountain.”
A five-person team traveled across northern Scotland, capturing location data through a combination of ground, drone, and helicopter photography, along with fine- and large-scale LiDAR scanning. “This data capture was crucial because we had hundreds of assets of all scales and frequencies to create, from high-res clumps of grass to entire CG mountain landscapes,” explained Dionne. “Creating this CG forest with a convincing amount of detail and variety required scattering millions of instances of our assets throughout the terrain. We started by composing our terrain from bare sculpted grounds, derived from LiDAR and photoscans, then scattered all our grass, foliage, tree, and debris assets on top, using a complex Houdini workflow to procedurally replicate the natural logic of this Scottish landscape.”
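As a way to picture the rule-based scattering Dionne describes, here is a minimal Python sketch. It is purely illustrative rather than MPC’s actual Houdini setup: the heightfield interface (sample_height, sample_slope, size_x, size_y), the asset names, and all thresholds are hypothetical.

```python
# Toy, rule-based instance scattering over a terrain heightfield.
# Everything here is hypothetical; it only illustrates the general idea
# of procedurally replicating a landscape's "natural logic".
import random

ASSET_RULES = {
    # asset: (max slope in degrees, min elevation, max elevation, weight)
    "grass_clump": (35.0,   0.0, 600.0, 0.70),
    "heather":     (30.0, 100.0, 700.0, 0.20),
    "pine_tree":   (20.0,   0.0, 450.0, 0.08),
    "rock_debris": (60.0,   0.0, 900.0, 0.02),
}

def scatter_instances(heightfield, count, seed=0):
    """Return (asset, x, y, z, rotation, scale) records for each scattered instance."""
    rng = random.Random(seed)
    instances = []
    for _ in range(count):
        x = rng.uniform(0.0, heightfield.size_x)
        y = rng.uniform(0.0, heightfield.size_y)
        z = heightfield.sample_height(x, y)     # assumed terrain elevation query
        slope = heightfield.sample_slope(x, y)  # assumed slope query, in degrees
        # Keep only assets whose ecological rules allow this spot.
        candidates = [(name, w) for name, (max_slope, lo, hi, w) in ASSET_RULES.items()
                      if slope <= max_slope and lo <= z <= hi]
        if not candidates:
            continue
        names, weights = zip(*candidates)
        asset = rng.choices(names, weights=weights, k=1)[0]
        instances.append((asset, x, y, z, rng.uniform(0.0, 360.0), rng.uniform(0.8, 1.2)))
    return instances
```

In a production workflow, rules of this kind would be evaluated per scattered point and handed to the renderer as lightweight instances rather than full geometry copies.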

“Every single foliage and tree asset was simulated and cached out in a variety of movements, building a vast library of clips triggered by the level and frequency of terrain movement on any given shot,” Dionne continued. “All of this, including the simulations, was designed with an intricate instancing workflow, which then allowed us to render around 100 shots of this dense shaking forest. This was probably the most complex CG build and sequence I’ve ever been a part of, and the team did an amazing job executing it.”
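The clip library Dionne mentions can be sketched in a similarly hedged way: per shot, the level and frequency of the terrain motion select one of the pre-simulated foliage caches. The clip names, ranges, and data structure below are invented for illustration only, not MPC’s actual tool.

```python
# Hypothetical clip library keyed on how strongly and how fast the terrain moves.
from dataclasses import dataclass

@dataclass
class FoliageClip:
    name: str
    min_amp: float    # terrain displacement (m) the clip was simulated for
    max_amp: float
    min_freq: float   # shake frequency (Hz)
    max_freq: float

CLIP_LIBRARY = [
    FoliageClip("idle_breeze",   0.00, 0.05, 0.0, 0.5),
    FoliageClip("gentle_rustle", 0.05, 0.30, 0.2, 1.0),
    FoliageClip("heavy_shake",   0.30, 1.50, 0.5, 2.0),
    FoliageClip("violent_heave", 1.50, 10.0, 0.2, 1.5),
]

def pick_clip(amplitude, frequency):
    """Select the cached simulation whose range covers the shot's terrain motion."""
    for clip in CLIP_LIBRARY:
        if clip.min_amp <= amplitude <= clip.max_amp and clip.min_freq <= frequency <= clip.max_freq:
            return clip
    return CLIP_LIBRARY[-1]  # fall back to the most energetic cache

print(pick_clip(amplitude=0.4, frequency=0.8).name)  # -> "heavy_shake"
```

The appeal of a lookup like this is that the expensive vegetation simulations run once, offline, and each of the roughly 100 forest shots only has to choose and instance the appropriate caches.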
A climactic third-act battle takes place in Ryme City, with principal photography in London. “Most street-level scene and set enhancement was limited to skyline embellishments and adding Pokémon-themed CG signage throughout the city,” stated Dionne. “The aerial establishing shots and the battle above the street became really challenging, since this work required a full CG city. Our goal was to create a modern, bustling city with a much denser and taller skyline than London had to offer.”
A five-block radius of Leadenhall Street was rebuilt for the film’s Pokémon parade scenes. “This required a crazy coordinated effort of street-level photography, street-level and rooftop LiDAR acquisition, as well as extensive aerial drone and helicopter photography,” added Dionne. “From this data, we could recreate a high-resolution CG version of a downtown core to use as a base. The next stage was to scatter around 50 additional custom-designed skyscrapers across downtown, which gave us a more congested area for Mewtwo and Pikachu to navigate during their aerial battle.”
To create CG establishing shots of Ryme City from the outside, the environment needed extending. “We accessed a large-scale, open-source LiDAR dataset of Vancouver and its surroundings, then dropped our embellished London core into the middle,” Dionne remarked. “As Vancouver lacks the tower density Rob Letterman wanted, Manhattan was used as inspiration for laying out hundreds of additional skyscrapers into our digital city. It took over eight months to complete. However, because this build was so robust and fully digital, we could turn around 200 massive shots in about two months, which was thrilling to see come together.”
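The layout step Dionne describes, adding hundreds of towers around a captured core, can be approximated with a simple falloff rule. The sketch below is a toy example under invented assumptions (uniform placement and a Manhattan-style height falloff from the centre), not the studio’s actual city-building setup.

```python
# Hypothetical procedural tower layout around a city core.
import math
import random

def layout_towers(count, city_radius, core_height=300.0, seed=1):
    """Return (x, y, height) tuples for procedurally placed skyscrapers."""
    rng = random.Random(seed)
    towers = []
    while len(towers) < count:
        x = rng.uniform(-city_radius, city_radius)
        y = rng.uniform(-city_radius, city_radius)
        d = math.hypot(x, y)
        if d > city_radius:
            continue  # stay inside the circular downtown footprint
        # Taller, denser towers near the core; shorter ones toward the edge.
        falloff = 1.0 - d / city_radius
        height = core_height * (0.2 + 0.8 * falloff) * rng.uniform(0.6, 1.4)
        towers.append((x, y, height))
    return towers

downtown = layout_towers(count=200, city_radius=2500.0)  # radius in metres, for example
```

In practice each returned point would drive the placement of one of the custom-designed skyscraper assets, with checks against the LiDAR-derived buildings already in the scene.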