
Building the World of 'Blade Runner 2049'


'Blade Runner 2049' continues the story of the original 'Blade Runner' film from 1982. As it explores the relationship between human beings and Replicants, it tells the story of one human and the Replicant he loves, and follows the downfall of our social structure. The visual rendering of 'Blade Runner' deeply impressed a generation of audiences and has influenced science fiction movies ever since.

Consequently, for today's VFX artists working on the new 'Blade Runner 2049', combining a complex sci-fi story with romance and social conscience was never going to be easy. Fairly spare on dialogue, the film required creative, subtle visualisation to support the new characters and story, and the effects frequently contribute to the performances to help fill in some of the unspoken story points. The artists worked with Director Denis Villeneuve and VFX Supervisor John Nelson.

Framestore’s Montréal studio created some of the concept artwork used in pre-production, and delivered almost 300 VFX shots in post. Their VFX team of 175 people worked on nine sequences, which included the creation of several CG spinner cars, while the Art Department created concepts of key environments and assets used by all of the VFX studios.

LA to San Diego

The production and VFX teams worked together to help the audience imagine the chaotic 30-year interim between the events of the first film and this one. After the world's ecosystems collapsed, famine spread around the world, and the Replicants viewers met in the first film revolted. A new, more cooperative version has since been produced, some of which are used as Blade Runners, hunting down and retiring the few remaining older Replicants that are still active. The story's main character, K, is one of these Blade Runners.


One of Framestore's main tasks was to visualise Trash Mesa, an area extending roughly from Los Angeles to San Diego, stretching over many kilometres and covered with debris, wreckage and broken ships.

Principal photography for this film took place in Hungary, mainly at studio locations, with aerial and reference imagery shot in Spain, Bangladesh and Iceland. To shoot K's journey to Trash Mesa, the production built a backlot set in Hungary with a wood scaffold construction, covered with dirt, various pieces of trash and scrap metal, and green screen in the background. Framestore’s VFX Supervisor Richard Hoover visited the set during the shoot before the team's work in post production started.

Trash Library

The setting also called for a massive digital environment for which the team created an asset library of trash comprising approximately 670 assets. CG Supervisor Adrien Saint-Girons said, “One of the biggest challenges of Trash Mesa was the sheer number of assets we had to manage along with per shot changes to the environment required to meet the director's vision.” The artists used the library assets to populate each camera angle in the footage, only 'switching on' what was needed in a given shot, and could rotate through them as the camera panned through the scenes, resulting in the constantly varying views we see.
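The per-shot 'switching' can be pictured as a simple visibility filter over the library. Below is a minimal sketch in Python; the names (`TrashLibrary`, `visible_for_shot`) are invented for illustration and stand in for whatever the actual pipeline used:

```python
# Hypothetical sketch of per-shot asset switching over a large trash library.
# Not Framestore's pipeline -- just the idea of only 'switching on' what a
# given camera angle needs.

class TrashLibrary:
    def __init__(self, assets):
        self.assets = assets  # e.g. {"debris_001": mesh, ...}

    def visible_for_shot(self, wanted):
        # Return only the assets required for this shot's camera angle.
        return {name: a for name, a in self.assets.items() if name in wanted}

# A library on the scale described in the article (~670 assets):
library = TrashLibrary({f"debris_{i:03d}": object() for i in range(670)})

# One shot activates only a small subset:
shot_042 = library.visible_for_shot({"debris_001", "debris_120", "debris_669"})
print(len(shot_042))  # 3 of 670 assets active in this shot
```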


In Bangladesh, old abandoned ships, ship parts and their textures had been captured in a series of photogrammetric images. The images were then processed to generate 3D assets based on concept art, textured and animated with lights and effects simulations as the doors opened and trash tumbled out of the vehicles. Other photography and aerial footage of Bangladesh were used as reference to build a beach littered with the shipwrecked tanker ships.

Richard said, “As it would have been impossible to create one large, complete environment asset, Framestore also used layering to manage the huge environment, building layers representing the known California terrain base, photo reference from Iceland, and the CG debris. Otherwise, the volumes of junk and the detailed variations would have been too great. Some shots required further variations later in production, resulting in actual variant subsets by the end.”

Controlling the Atmosphere

The atmosphere in the 2049 world was another example of using layering to produce looks that could be tuned and adjusted from shot to shot. “The atmosphere is not only very dense, hazy and occluded, often cloudy and layered with rainfall, but gives viewers clues about the speed of vehicles, for example, and relative distance. As a story device it is also used to reveal portions of the environments at particular moments, and casts a heavy, depressed mood over some of the sequences,” Richard said.


“To be able to control and use the atmosphere to this extent and in this way, the team actually built two separate atmosphere assets with different qualities – one was even and consistent, and the other had much more variation, thinner in some places, thicker in others. By balancing them in the composite as required for the effect they were after, they could control the atmosphere without starting from scratch each time, and also make sure it kept a consistent look.”
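The balancing Richard describes amounts to a weighted mix of the two layers in the composite. As a rough illustration, assuming each atmosphere asset contributes a per-pixel density value (the function name and numbers here are invented for the example):

```python
# Illustrative sketch of balancing two pre-built atmosphere layers in the
# composite. Each layer is assumed to be a list of per-pixel densities.

def mix_atmosphere(even, varied, balance):
    """Blend a uniform haze layer with a patchy one.

    balance = 0.0 -> all even haze, 1.0 -> all varied haze.
    """
    return [(1.0 - balance) * e + balance * v for e, v in zip(even, varied)]

even_layer   = [0.5, 0.5, 0.5, 0.5]   # consistent density everywhere
varied_layer = [0.1, 0.9, 0.3, 0.7]   # thinner in places, thicker in others

# A shot wanting mostly even haze with a hint of variation:
mixed = mix_atmosphere(even_layer, varied_layer, 0.25)
print(mixed)
```

Tuning a single `balance` value per shot is what lets the look change without rebuilding either atmosphere asset from scratch.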

Lighting Up

The look of the environments overall relied on three sets of factors, again mainly controlled through a layered approach – the atmosphere described above, lighting and environment maps. The environment maps were captured in Iceland, shot with very flat lighting as 360º scenes.

On the sets, the use of green screen was in fact minimal, even though nearly all of the physical sets were replaced in post. The director preferred to shoot on sets that resembled the story locations. Building the sets this way made the lighting on the live action elements more accurate, but demanded a lot of rotoscoping, of course. Nevertheless, LIDAR scans and texture photography of the sets were captured as reference for the 3D environments.


Lighting was controlled with two lighting schemes – one was quite flat and even, the other more backlit – that were manipulated in the composite depending on the scene. The lighting team would light the sequence consistently, giving the compositors a chance to create a natural progression from stormy early morning to the brighter overcast plates of the crash site. Volumetric renders, which were produced at a high cost in terms of render power, could also be added to the composite.

Las Vegas in Ruins

Once K encounters the story of Deckard and Rachael and begins looking for signs of them, the search takes him to Las Vegas, now in ruins. The film's depiction of abandoned, post-disaster Las Vegas was built from the ground up, based on concept art inspired by the work of artist Syd Mead, plus currently available information including USGS data of the Las Vegas Valley. On top of this were placed simple models of several modern day casinos that the audience can recognise. Finally, some newer buildings representing the time after the apocalypse were built and added on top of that infrastructure.

“The cinematographer Roger Deakins dictated the look of the Vegas sequence by how he shot the plates on stage,” said Richard. “He supplied a photo reference for how the sun should appear through the atmosphere, in this case resembling a sand storm. With both Denis and Roger, we often discussed how far we should see into the distance. There was a very fine line between creating depth in the shots and only allowing the audience to get a hint of what was out there.”


Roger Deakins had several essential ideas about the look of Las Vegas, in particular the lighting. The orange light, toxic but oddly beautiful, was an in-camera effect achieved with Kodak gel filters, a look based on references of the Sydney sandstorm of 2009. Framestore recreated it in their own 3D environments with a similar, photographic approach. After capturing HDRI data on the set, they could start with white lights matching those the production was working under and then apply filters with values similar to those Roger had used. Richard remarked that, owing to the filters, they found no information at all in the blue channel of the supplied plates, which was strange both to work with and for the audience to look at.
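The filter step can be pictured as a per-channel multiply on the light colour. A minimal sketch, with an assumed orange gel value rather than Roger's actual filter specification, shows why the blue channel of the plates carried no information:

```python
# Illustrative sketch of applying an orange 'gel' to a white light as a
# per-channel multiply. The gel values are assumptions for the example,
# not the production's actual Kodak filter specification.

def apply_gel(rgb, gel):
    # Each channel of the light is attenuated by the filter.
    return tuple(c * g for c, g in zip(rgb, gel))

white_light = (1.0, 1.0, 1.0)
orange_gel  = (1.0, 0.55, 0.0)   # hypothetical values; blue fully blocked

lit = apply_gel(white_light, orange_gel)
print(lit)  # the blue channel is zero, as in the supplied plates
```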

Roger also used various lenses during the shoot to keep the images very clear and give them a particular look. However, to work on their shots in post, Framestore first had to un-distort the images to align them all for the effects work, after which their compositing supervisor Luigi Santoro rendered the shots with the correct lens distortion to match the photography again.
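That round trip – un-distort for the effects work, then re-apply the lens distortion to match the photography – can be sketched with a simple one-parameter radial model. Real lens solves are far richer; the coefficient and fixed-point inversion below are purely illustrative:

```python
# Sketch of the undistort -> comp -> redistort round trip using a simple
# one-parameter radial distortion model. The coefficient is made up;
# production lens solves use much more complete models.

K1 = -0.05  # hypothetical radial distortion coefficient

def distort(x, y, k1=K1):
    # Forward model: push a point radially by 1 + k1 * r^2.
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return x * s, y * s

def undistort(x, y, k1=K1, iters=10):
    # Invert the radial model by fixed-point iteration.
    ux, uy = x, y
    for _ in range(iters):
        r2 = ux * ux + uy * uy
        s = 1.0 + k1 * r2
        ux, uy = x / s, y / s
    return ux, uy

# Round trip: a plate coordinate survives undistort + redistort.
px, py = 0.8, -0.4
ux, uy = undistort(px, py)
rx, ry = distort(ux, uy)
print(round(rx, 6), round(ry, 6))
```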

Lost in the Haze

The team were fortunate to get an early start on Las Vegas, and continued working on the look, build and detail from November 2016, adjusting it until the end of production. They carried out three phases for each building in the environment. The first was a blocking stage for efficiency, followed by the addition of more overall detail to determine the composition, and then finally a fine-tuning, beauty stage.


Like Trash Mesa, one of the many challenges was depicting the huge scale, but the options for doing this were completely different. Richard said, “The director favoured a spare, empty feeling that was hard to show in such a thick atmosphere. Buildings were not constructed in a way that showed specular highlights, for example, and the windows had no glass in them. Also, the out-sized statues the viewer encounters with K as he enters the city were shot on a stage and composited into scenes at varying scales.

“Use of levels of detail can help depict a large scale, but much of the detail here was lost in the haze. Instead, we decided to introduce lots of human-scale assets into scenes such as tables and chairs and street furniture, indicating how large the buildings nearby would be. It also contributed to Las Vegas' history as a place full of people and activities.”

In several scenes throughout the movie, a neat reminder of the first film is the flying spinner vehicles, which Framestore was assigned to resurrect and update. The team modelled K’s spinner from reference of the full-scale prop, and the production built two of them as physical models, one of which could actually be driven. Framestore also gave one of their spinners, which appears in a crash sequence, an original new design.

Expression of Joi

K's girlfriend Joi was built with an interesting 'shell' effect that aims to balance her body's opacity with the fact that she is a hologram. To do this, Framestore created an inverse camera effect that allows us to look through her body from the front and see the side of her facing away from the camera, from the inside. This required a digital version of Joi, used to render the inside of her coat. The team had to track her body in the plate as accurately as possible and sculpt her CG coat per shot to match the plate coat.

Thus, the finished look can either be used as a way to make the audience forget her true nature – as K sometimes does himself – or alternatively to remind us of what she is at certain times, as a story point.


Expressing emotion and evoking emotion in others is an important feature of Joi. Although she can't touch or help others physically, she of course depends on them – in her case, on K – for her existence. The on-going changes in her appearance reflect K's condition as well, and her dependence is touching in itself. Her software was similar to a Replicant's, that is, devoted mainly to information gathering.

“When K’s spinner is struck by lightning Joi is affected,” said Richard. “When they crash, she’s afraid that K is dead. It was important to depict her attachment to K as her systems were failing. She tries to save K, but she can’t touch him or pull him from the wreckage. It was an emotional scene.”

Her emotional state is visualised by a voxel effect, created by Framestore's FX lead, in which her geometry sometimes breaks down into a simplified version of herself, made up of pixelated voxels, or small cubes. These cubes could be altered to express different states – for example, their size would change according to the speed of her performance. The compositing artist working on Joi treated her look as a creative effort, dialling in the spots where the cubes would break up and choosing his own scale and colour.
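The speed-driven cube size can be sketched as snapping geometry points to a grid whose spacing grows with the speed of the performance. The function and constants below are invented for illustration, not Framestore's FX setup:

```python
# Illustrative sketch of a voxel breakdown effect: geometry points snap to
# cubes whose edge length scales with performance speed. Constants are
# hypothetical, chosen only to show the idea.

def voxelise(points, speed, base_size=0.05, gain=0.2):
    size = base_size + gain * speed   # faster movement -> larger cubes
    snap = lambda v: round(v / size) * size
    return size, [(snap(x), snap(y), snap(z)) for x, y, z in points]

# Two nearby surface points collapse into adjacent cubes at this speed:
points = [(0.12, 0.33, 0.57), (0.13, 0.31, 0.58)]
size, cubes = voxelise(points, speed=1.0)
print(size, cubes)
```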

Romance and Replicants

For most viewers of this movie, the most dramatic story within the bigger 'Blade Runner 2049' plot is the romance between Rick Deckard and Rachael, a Replicant. At a critical moment in the film, Deckard is confronted by an exact copy of Rachael as she appeared to him in the film 30 years before, which throws him into confusion.


Although the production had access to the actress Sean Young who played Rachael in the original film, the passing years meant her features would have changed noticeably. The production VFX supervisor John Nelson made up his mind early on that creating a completely digital face and head for Rachael was the best approach to recreating her youthful self. On set with the other actors, a body double performed Rachael's part in the scene. But her head would be replaced by this animated digital head in the new footage, perfectly matching Rachael's original appearance and movements.


MPC in Montreal was chosen for this task. The company's visual effects supervisor Richard Clegg started on the project about one month before principal photography, which took place in September 2016. He brought specific experience of re-building and animating photoreal faces from his work on MPC's team that built the young Arnold Schwarzenegger for 'Terminator: Genisys' in 2015. But this time, the challenges were different, for several diverse reasons.

Skull

To begin with, the CG sculpt of Rachael’s head was the major challenge. Not only had Sean Young's features changed over time; cinematic image quality had changed as well. The earlier 'Blade Runner' was shot on film, and the new one on the ARRI Alexa. Furthermore, in the original film, Rachael's scenes were shot on a fairly dark, shadowy set, slightly obscuring her original performance and looks. A shallow depth of field had been used for drama and expression, often leaving her face out of focus.

Richard said, “Fortunately, abundant photo and video reference was available of Sean Young. She had appeared in other films at about the same time, such as 'Dune', and she shared other images of herself.”

To form a static, stable base and reference for their model, linking the current Sean Young and the on-screen Rachael for the new movie, MPC started with her skull. Before production, a detailed scan of Sean Young’s head was captured on a Light Stage at the University of Southern California Institute for Creative Technologies (USC ICT).


MPC’s artists used this scan as a reference to build their own skull for their complete head sculpt and animation. Because skulls do not change much over time, the skull model provided anatomically accurate, real-life measurements for her identifying features and the proportions of her head, such as the bridge of the nose, cheekbones and jaw line.

Perfect Head

Richard commented, “Sean Young has a changeable face, capable of very subtle expression from moment to moment. Her jawline, for example, is delicate and changes from angle to angle as her head and the camera change relative positions. This was different to Schwarzenegger's head.”

Once they were confident of the accuracy of the CG skull, it was lined up against scenes from the original film. As mentioned above, the available footage of Rachael was sometimes compromised by dark, high-contrast lighting and a shallow depth of field that frequently put her in soft focus. Considerable guesswork was required to sculpt the rest of the head over these images. MPC’s 3D modelling artists spent many hours until they had created an identical match.
 
The team styled Rachael's hair using MPC’s proprietary groom software Furtility, matching the hair, eyebrows and eyelashes from her opening scene in 'Blade Runner'.


Animation and Roto-animation

However, the animation was the core of MPC's work and, according to Richard, also the hardest part. Although the body double acted out the performance on set for the camera, MPC’s main job was to replace her head with their animated, photoreal CG head. During the shoot, between two and six witness cameras were set up to capture each take from different positions, used to achieve precise roto-animation of the actress so that the CG head could then be composited and animated over the top of the real footage.


To make sure the animation of the face and head would be authentic, both the body double and Sean Young herself performed each shot, directed by Denis Villeneuve. A team from Dimensional Imaging came to the set in Budapest with their capture rig, used to record videogrammetry data during the shoot. Videogrammetric measurement determines the three-dimensional coordinates of points on an object by capturing measurements made in two or more video images, taken from different angles. Solving the data in specialised software tracks the object as an animated 3D mesh.
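At its core, this measurement reduces to triangulation: a point seen as rays from two cameras at known positions is recovered where the rays (nearly) meet. A minimal sketch using the midpoint of the rays' closest approach – far simpler than a production videogrammetry solve, and with invented camera positions:

```python
# Minimal triangulation sketch: recover a 3D point from two viewing rays
# captured from different angles, via the midpoint of their closest
# approach. Purely illustrative of the videogrammetry principle.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def mul(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    # Solve for t1, t2 minimising |(o1 + t1*d1) - (o2 + t2*d2)|.
    r = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = add(o1, mul(d1, t1))
    p2 = add(o2, mul(d2, t2))
    return mul(add(p1, p2), 0.5)

# Two witness cameras observing the point (1, 1, 5):
target = (1.0, 1.0, 5.0)
cam1, cam2 = (0.0, 0.0, 0.0), (4.0, 0.0, 0.0)
ray1, ray2 = sub(target, cam1), sub(target, cam2)
print(triangulate(cam1, ray1, cam2, ray2))
```

Tracking such solved points frame by frame is what turns the multi-camera video into an animated 3D mesh.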

Dialogue

Richard said, “To help with animating the dialogue, DI4D conducted a dedicated facial motion capture shoot, during which the audio was captured simultaneously, resulting in a recording that was perfectly synchronised. MPC’s FACS capture kit was used to capture a full spectrum of facial poses and expressions. Although all of the animation we see in the movie was keyframed, having this precise, real life reference was invaluable for the animation, revealing accurate face shapes to the animators for different sounds.

"Another challenge that we faced was the dynamic stage lighting used for the shoot. The DP, Roger Deakins, had erected a large ring light that constantly rotated and was never static. That and the moving caustic lighting coming from the water surounding the stage prevented us from capturing an HDRI light map which meant that all of our digital lights had to be animated to match."

Richard believes that to pursue this kind of animated facial re-creation work in the future, a breakthrough would be to somehow carry out the capture to the highest standard available, in anticipation of any projects, and then keep the data as a reference. “Videogrammetry and capture, rendering and shading are developing and improving all the time,” he said. “Animation remains our biggest challenge.”