The effects team at Iloura built a cavernous underground world for 'Sanctum', Producer Andrew Wight's story of danger and adventure. VFX Supervisors David Booth and Peter Webb, with First Assistant Editor Ben Joss, talk about their own adventures taking the movie, shot in stereo 3D, through post production.


The film was shot on the Gold Coast in Queensland until March 2010, when the footage was brought back to Melbourne for post. The first step for Editor Mark Warner and the editorial team was a two-week assembly for the director to look at, from which the producer's edit was cut. During assembly, First Assistant Editor Ben Joss was also working with the director to cut together the VFX sequences.

Cave Research
The VFX team at Iloura was keen to sort out their pipeline and make a start on the assets they had to build. Although at that stage only 69 VFX shots were estimated, once the artists' work got underway, the count rapidly rose. By the end, David Booth and the Iloura team had completed 380 shots, of which 330 appeared in the final film.

VFX shots were needed from the beginning of the film – a helicopter flight over the jungle, the aerial reveal of the cave's doline (the Esa-ala Caves' entrance, an enormous sink hole surrounded by forest), the hole itself and the characters' entry into it – and continued throughout, focussed mainly on extending the foreground sets, shot against green screen, with massive, detailed CG cave interiors.

Although fairly extensive practical sets had been built at the filming locations on the Gold Coast, a number of the underground caverns needed to be as large as football stadiums and reveal fantastic structural designs that would have proven monumental to construct. As the production's on-set VFX Supervisor, David was responsible for gathering all the information the FX team at Iloura would need to create these extensions from the ground up.

He and Producer Andrew Wight took a trip out to South Australia where they could study a real doline set in limestone although, at 3m across, it was much smaller than the Esa-ala Caves represented in the film. Even so, on their trip, David photographed vegetation, light angles and effects in and around the doline. He also collected photo references of rock formations and textures, which the team could use to fabricate their own rock textures and surfaces to project onto their 3D cave geometry.

Working On Set
David worked on set throughout production, recording camera data – angles, lenses, lights – and advising the DP during green screen shots, and the art department on what should be built physically and what could be achieved in CG against green screen.

David explained that when shooting scenes in stereo involving green screen replacement, it was important to keep the interocular distance locked during camera moves, to avoid having to repeatedly adjust the convergence point later in post. To begin work on such shots, the team's tracker would make sure the live action left and right eye images matched properly before the artists inserted the background extension in the right eye shot, which they used as their master. When everything was complete the same work was applied to the left eye. Because setting the convergence was critical to getting these shots to work, from the foreground straight through to the CG background, keeping the I/O locked during the shoot was helpful.
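
As a rough illustration of why this matters – a toy parallel-rig model, not the production's actual camera maths – screen parallax depends on both the interocular distance and the convergence distance, and falls to zero at the convergence point. With the I/O locked, a single convergence choice can therefore hold through a camera move:

def screen_parallax(depth_m, io_m, convergence_m, focal_mm=21.0, sensor_width_mm=24.9):
    """Parallax of a point at 'depth_m' as a fraction of image width, for a
    parallel stereo rig converged by horizontal image translation. Zero at the
    convergence distance; the sign flips either side of it. Focal length and
    sensor width are illustrative values, not the production's lens data."""
    return (focal_mm / sensor_width_mm) * io_m * (1.0 / convergence_m - 1.0 / depth_m)

# With the I/O locked at 65mm and convergence set at 3m, check a few depths:
for depth in (1.0, 3.0, 10.0, 100.0):
    print(f"{depth:6.1f} m -> parallax {screen_parallax(depth, 0.065, 3.0):+.4f}")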

Terrain Challenges
David shot the helicopter footage for the forest flyover shown as the characters approach the caves at the start of the story. It hadn't been easy to capture, not least because getting the large, heavy rig into the helicopter proved a challenge. Landing spots were so far from the shooting location that stopping between takes to adjust the rig or clean the lenses was impractical – any adjustment had to be done while flying.

David said, "One of the most challenging CG sequences illustrates what the director had underestimated – to tell this story, the audience had to see the environments in detail and comprehend their scope and size and consequently, many shots from various angles had to be created throughout the film. This scene was at the vast cave entrance, as the characters abseil and parachute down to the interior below. A ledge was built for the actors to jump from, over a 40ft deep space. The VFX team had to build the cave environment, the parachute, the parachutist, birds, falling water and light shining through it – all in CG, down to the bottom."

Image Correction
David, also a skilled compositor, had gained experience with stereo images from his work on the 3D 'Cane Toads' documentary the year before, and put considerable time into correcting shots that had stereo issues, ranging from adjusting convergence to coping with water splashing up from the flooded sets onto the lenses. David said, "The side by side rigs were more compact and manoeuvrable for working underwater, but these can only achieve an I/O of 60-65mm at the smallest. As a result, the close up to mid-range underwater shots frequently needed convergence adjustment." Convergence angle adjustment was not confined to underwater shots, and formed an important part of David's work on the film. It sometimes involved blowing up shots to compensate for the imagery lost after the left and right images had been shifted within the frame to change the angle, but David avoided this as much as possible by rebuilding the missing information from the available footage from both cameras, using Shake for the compositing. Further correction tasks included removing 'floaters' – objects that, only when viewed in 3D, sit uncomfortably in stereo space between the actor and the viewer, causing a distraction.
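
A minimal sketch of the convergence shift described above – moving the two eyes horizontally in opposite directions and cropping the columns lost at the edges, which is what then forces a blow-up unless the missing edges can be rebuilt. This is hypothetical NumPy-style code, not David's actual Shake setup:

def shift_convergence(left, right, shift_px):
    """Apply a horizontal image translation to a stereo pair (NumPy arrays of
    shape height x width x channels). A positive shift increases the relative
    parallax, a negative one decreases it. Both outputs come back narrower by
    'shift_px' columns; scaling them back up to full width is the enlargement
    the article mentions trying to avoid."""
    s = abs(int(shift_px))
    if s == 0:
        return left, right
    if shift_px > 0:
        return left[:, s:], right[:, :-s]    # push the eyes further apart
    return left[:, :-s], right[:, s:]        # pull the eyes closer together
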
Another issue was the huge amount of rushing water on set during the flooding scenes. If one lens was splashed with water, David could in most cases reconstruct the damaged image from the 'good' eye, rebuilding the frame with a blend of the two in relation to Z depth. Ben Joss remarked on this problem as well and said a drop of water appearing on one lens is quite hard to cope with in stereo and very obvious on the monitor. Very occasionally, he would need to create a 2D shot to insert into the rest of the footage.
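
A toy illustration of that repair – warping the clean eye across by a disparity estimated from the shot's depth and blending it in under a mask of the water drop. The pinhole-style disparity model and all the parameter values here are assumptions for illustration, not the actual method used:

import numpy as np

def repair_splashed_eye(bad_eye, good_eye, depth, splash_mask, io_px=40.0, convergence_m=3.0):
    """bad_eye/good_eye: (h, w, 3) float images; depth: (h, w) metres;
    splash_mask: (h, w) floats, 1.0 where the drop obscured the image."""
    h, w = depth.shape
    # Per-pixel horizontal disparity: zero at the convergence distance,
    # growing as points come closer to camera (toy model).
    disparity = io_px * (convergence_m / np.maximum(depth, 1e-3) - 1.0)
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + disparity).astype(int), 0, w - 1)
    warped = good_eye[ys, src_x]          # clean eye re-projected into the damaged eye
    m = splash_mask[..., None]
    return bad_eye * (1.0 - m) + warped * m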

The headlamps the actors wore also tended to dominate shots in which the convergence had been changed, so David would have to reduce the detail in the lamps and adjust the shot. Furthermore, while they read reasonably well in the live action footage, they weren't as successful against green screen and had to be removed and replaced with CG beams, made to shine in the correct direction so that they worked stereoscopically as well.

Alignment and Colour
Sanctum was Iloura's first stereoscopic production and consequently required quite a bit of R&D before starting on the actual shots. The effects team's VFX Supervisor Peter Webb explained that a sound understanding of how the brain perceives 3D and processes left and right images helped them decide how to best approach their shots. Moreover, because a viewer's experience looking at a 2D image in a cinema is slightly different to observing the 3D world, understanding the theory behind achieving a good stereoscopic image was important.

The original footage had to be prepared before it went into their VFX pipeline, checking for alignment and colour disparity problems. The success of all of their effects work would depend on correct stereo alignment and positioning of the convergence point relative to the interocular distance. The team helped David Booth with some of the image correction work, using Nuke with the OCULA plug-in. The right and left images of shots requiring set extensions and compositing in CG elements, in particular, had to be tracked before the team started work.
Their pipeline consisted of Nuke for compositing and SynthEyes for most of the stereo tracking, while 3ds Max and Maya were used for the CG component and for lighting set-ups.
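
As a simple example of the kind of alignment check that happens before stereo footage enters a pipeline – a rough sketch only, not Iloura's OCULA-based workflow – the vertical offset between the eyes can be estimated by comparing row-averaged luminance profiles:

import numpy as np

def vertical_misalignment(left, right, max_offset=20):
    """Estimate the vertical offset in pixels between two (h, w, 3) frames.
    A well-aligned pair should return something close to zero; anything
    larger needs correcting before tracking or compositing."""
    row_profile = lambda img: img.mean(axis=(1, 2))     # one luminance value per row
    lp, rp = row_profile(left), row_profile(right)
    best_dy, best_err = 0, np.inf
    for dy in range(-max_offset, max_offset + 1):
        a = lp[max(dy, 0): len(lp) + min(dy, 0)]
        b = rp[max(-dy, 0): len(rp) + min(-dy, 0)]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_dy, best_err = dy, err
    return best_dy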

Aerial Views
The editor and director especially liked certain views from the helicopter footage David had shot, but decided some takes didn't run long enough or show the precise angle they wanted to see. In such cases CG extensions often helped – extending the forest in certain directions and tracking in the imagery. The Esa-ala Caves' entrance appeared in many of the aerial shots, of course, and the CG had to accommodate every angle the photography adopted. This meant realistic vegetation and rock had to be completed in full detail to match the landscape.

But in fact, they ended up 'cheating' on the size of the hole in some shots in order to produce the necessary dramatic effect. In some instances it appears twice the size it does in others – after all, it was going to be the focus of the entire story. Nuke with OCULA, plus a few of Iloura's own techniques, helped blend the CG into the shots while keeping the stereoscopy consistent.

Sense of Scale
A basic challenge was communicating the correct sense of scale inside the caverns. In a stereoscopic film, achieving this partly depends on setting the interocular distance correctly on their CG camera rigs. Setting the two CG cameras too far apart will make a space appear small, so they had to ensure that the technical scale of their rigs produced the desired effect. Adjusting the relative sizes and positions of people, rocks and cavern walls within the 3D stereo space took lengthy trial and error.
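
A common stereography rule of thumb captures the effect: the audience judges scale roughly by the ratio of human eye spacing to the rig's interocular, so widening the CG camera separation miniaturises the scene. The sketch below is purely illustrative and the numbers are not Iloura's settings:

def apparent_scale(true_size_m, rig_io_m, human_ipd_m=0.065):
    """Rough hyperstereo approximation: doubling the camera separation makes
    the space read as roughly half its true size."""
    return true_size_m * human_ipd_m / rig_io_m

# A 120m-wide cavern rendered with a 13cm interocular reads as roughly 60m:
print(apparent_scale(120.0, 0.13))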

Working with light – how it fell on objects, calculating its fall-off and pinpointing distances when the camera faced out into the dark reaches of the set extensions – was another way of conveying their size. Up to a point, this could be based on how light from the actors' headlamps behaved in the built sets. Atmospherics such as drifting rain and water helped to occupy the huge spaces and catch stray shafts of light.
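
The distance cue from fall-off follows the inverse-square law – a wall twice as far from a headlamp receives a quarter of the light – which is simple to sketch (illustrative values only):

def headlamp_illuminance(intensity_cd, distance_m):
    """Inverse-square fall-off: illuminance (lux) on a surface from a point
    source of 'intensity_cd' candela at 'distance_m' metres."""
    return intensity_cd / distance_m ** 2

# The same lamp hitting walls 2m, 10m and 50m away:
for d in (2.0, 10.0, 50.0):
    print(f"{d:5.1f} m -> {headlamp_illuminance(1000.0, d):8.2f} lux")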

Creating set extensions as massive as the caverns tends to make the usual practice of lighting with HDR images of the set inadequate. Moreover, given the story's underground location, the absence of any natural light source meant the only available light would come from the divers' helmets, requiring the artists to creatively introduce enough additional light to make a worthwhile viewing experience while maintaining an authentic feeling.

"Our designs for the sets were just a start point," said Peter. "From there, working with the lighting and the stereo became a creative process." All of these elements were calculated into their complete light simulation, and although everything started with real-world calculations, once each sequence was in the review theatre, it had to be assessed for dramatic impact, storytelling and the director's vision. Sometimes the environments needed to be brighter, darker, or appear closer or further from the camera.

Reference
Reference material for building the CG sets came from quite a few sources. Production supplied stills and moving footage of the environments they wanted the caverns to resemble. An experienced caver himself, writer and producer Andrew Wight had incorporated some of his own experience of a caving disaster in the Nullarbor into this story, and was able to help guide the way the CG sets' look developed. He sat with the team, reviewed their work and commented on what looked realistic and feasible in terms of lighting and structure.

To create the rock surfaces for their set extensions, they used the hundreds of high-res reference stills David Booth had taken on set and on the research trip he and Andrew had made to South Australia. Helping to define the interior structures, in the absence of a real location to scan, were Andrew Wight's photos and reference material gathered during his caving trips, which gave them a real-life start point for the looks of the environments and lighting.

But each cavern had to have its own look and personality – St Jude's Cathedral, the Belfrey where Josh and Frank discover a Japanese tank, the huge desert-like set the team called the 'Sunless Sahara' due to the sand dunes covering its floor – and each had to be referenced separately. The team also did some of their own research. Nevertheless, no single source was able to give them exactly what they wanted or needed for each of the caverns, and so the caves seen in the film were created with digital matte paintings projected over CG-modelled terrain.


Setting the Stage
Elaborate CG work was required in the sequence early in the story that, from the audience's point of view, sets the stage for the environments in the rest of the film, when young Josh and his father's financier Carl step out of a helicopter at the Esa-ala Caves' entrance. A small section of the cliff face was built for the actors to stand on while they talked and prepared their gear, but the remainder is digital – plants, rocks, birds. Achieving a complete, coherent environment and communicating the vast space dropping away below the actors, without looking artificial, needed correct stereo alignment combined with very accurate interior detail on the walls. Once complete, its look also had to match the CG sink hole created for the aerial shots from the helicopter during the characters' approach.
A later sequence, in which Frank and Josh struggle out of the interior of the caves and find themselves at the bottom of a massive, deep hole open to the surface, required the 3D team to build an almost completely CG environment, except for a decaying, abandoned Japanese tank and a tiny set for the actors to stand on next to it. In some shots even the tank was CG. Again, the team's challenges were showing the hole's huge size and creating the rough, vertical, overgrown walls that communicate the impossibility of escape for the two characters at this point in the film.

Sunless Sahara
The 'Sunless Sahara' environment, as the VFX team called it, was possibly the largest and most challenging of the cave set extensions the team had to create. It involved unusual subterranean sand dune formations that may occur when such caves are repeatedly flooded, although the lack of exposure to wind makes them look somewhat different to dunes at the surface. The team used terrain-building software called WorldBuilder to help them envision how such eroded dunes might look, and then finished the modelling from there to achieve the composition necessary for the shots.

A number of CG doubles were needed to replace characters at certain moments. For example, although the actor playing Carl did leap out into the space over the doline, requiring wire removal, from that point a digital double replaced him in a shot showing him descending under a CG parachute. Digital doubles were also used in underground caving sequences, especially in shots looking down narrow shafts at climbers with their headlamps switched on, and for some of the divers underwater in scuba gear. Fortunately, none of these were at close range, so facial detail wasn't required, and the performances were fairly generic and didn't need motion capture. The artists started with standard human models and then worked from photos of the cast to customise their proportions and colouring.

Props and Creatures
They also built all kinds of props – scuba tanks and rebreathers, cavers' helmets – all exactly matching the gear used on set. Some of the actual gear was brought into the studio to augment the photos taken on location. When one of the actresses abseils disastrously down a cliff face, not only did her body have to be modelled, but the rope she hung from had to be animated convincingly to replicate the way it would react to her movements.

A few environments nearer the surface needed the addition of CG creatures. CG bats were added to the Belfrey set, for example, and swallows flew through the doline catching insects. These animals helped complete the authenticity of the upper cave system environments. Flocking algorithms helped them distribute large populations of birds or bats, combined with some hand animated individuals to add interest and variety as they took off or landed. "Our cave swallow references included their small mud nests, which we built clinging to the walls and rocks. These creatures, I think, really help the shots to work and feel alive for the viewer."
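
The flocking referred to here is typically a boids-style simulation, in which each bird steers by separation, alignment and cohesion against its neighbours. The sketch below is a generic illustration of the idea, not Iloura's tool, and the weights are arbitrary:

import numpy as np

def flock_step(pos, vel, dt=0.04, neighbour_r=5.0, separation_r=1.5,
               w_align=0.05, w_cohere=0.01, w_separate=0.15, max_speed=8.0):
    """One update of a simple boids flock. pos and vel are (n, 3) arrays."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]                       # vectors to every other boid
        dist = np.linalg.norm(offsets, axis=1)
        neigh = (dist > 0) & (dist < neighbour_r)
        if neigh.any():
            new_vel[i] += w_align * (vel[neigh].mean(axis=0) - vel[i])   # alignment
            new_vel[i] += w_cohere * offsets[neigh].mean(axis=0)         # cohesion
        close = (dist > 0) & (dist < separation_r)
        if close.any():
            new_vel[i] -= w_separate * offsets[close].sum(axis=0)        # separation
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel *= np.minimum(1.0, max_speed / np.maximum(speed, 1e-9))      # clamp speed
    return pos + new_vel * dt, new_vel

# 200 swallows released near the doline rim, stepped forward for a few seconds:
rng = np.random.default_rng(1)
pos, vel = rng.uniform(-10, 10, (200, 3)), rng.uniform(-1, 1, (200, 3))
for _ in range(100):
    pos, vel = flock_step(pos, vel)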

Underwater World
The Iloura team were often called on to help enhance or provide continuity for the many scenes shot underwater. A reconfigurable set piece, similar to a labyrinth, was built and used submerged in the tank facilities at the Warner Bros Gold Coast studios, and a few shots were captured in actual underwater caves. All the divers seen were shot live action except for a few distant figures in very wide shots. The long, sustained sequence at the very end of the film, as Josh swims with his tiny air tank out of the cave to the surface and freedom, was also all live action, carefully edited together.
Particulate matter was often important in the underwater sequences to give the audience a constant feeling of the presence of water surrounding the action, but couldn't be uniformly applied. On Andrew's advice that water in undisturbed underground caves is typically crystal clear, the sequence shot at St Jude's Cathedral for example contains very few particulates, but they used them at other times to avoid the look of divers floating in space. These water effects, plus grading in Nuke to control the lighting, also helped provide continuity from shot to shot as sequences were edited.
Underwater green screen was used in some cases to take people from the set into a bigger environment. This can be tricky to work with because the water tints the characters and the screen equally blue, making them more similar – and what the artists rely on for keying is precisely the difference between the screen and the characters. As well as the lighting, the water had to be filtered and temperature controlled for consistency.
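
A bare-bones difference key makes the problem clear: the matte is derived from how far each pixel's colour sits from the sampled screen colour, so when the water pulls the foreground and the screen towards the same blue, that distance shrinks and the tolerance has to tighten. This is a hypothetical sketch, not the production's keyer:

import numpy as np

def difference_key(frame, screen_rgb, tolerance=0.08, softness=0.12):
    """Alpha matte from the colour distance between each pixel of a 0-1 float
    RGB frame and a sampled screen colour: 0 = screen, 1 = foreground."""
    dist = np.linalg.norm(frame - np.asarray(screen_rgb), axis=-1)
    return np.clip((dist - tolerance) / softness, 0.0, 1.0)

# Example: alpha = difference_key(frame, screen_rgb=(0.10, 0.35, 0.30))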

Digital Roller Coaster
At the start of the movie, Josh and Carl are shown a digital fly-through of the amazingly complex and convoluted Esa-ala Caves system on a computer screen before they embark on their fateful adventure. It was worked into the story to help the audience understand the nature of the environment they were entering. The camera passes through the chambers and tunnels as if on a roller coaster, revealing a simply rendered view of the exact cave system in which the story unfolds. "This fly-through was one of the first things we did on the film," Peter said. "It was fun, and formed the basis of the cave structures we used in our effects and environments work."

The stereo aspect, combined with the fact that this was the facility's first project creating and working with assets and CG environments at such a large scale and with such unusual, even fantastical designs, made 'Sanctum' a significant challenge for Iloura. Peter estimates he spent at least five to six months researching before any shots arrived at Iloura, followed by six to seven months of actual production. The knowledge and experience they have gained seems to be paying off, and they are now working on another stereo project.

Stereo Editing
Working with the stereoscopic aspect of this project was an ongoing issue for the editors of 'Sanctum'. First Assistant Editor Ben Joss explained that although they successfully managed to review their work on JVC passive stereo monitors using circular polarised glasses, as in the RealD cinema system, the footage had to be cut with the left and right images on the timeline using the right eye as their 'master' eye.

To make the monitoring work, Ben had to get the left and right eyes into their Avid system. The footage was digitized straight off the HDCAM SR deck as DNxHD 36 MXF media, one eye at a time. The recorder put both images onto the same HDCAM SR tape in separate streams, a format Sony calls 422x2. They edited the film first on the right eye, and assistants conformed the left eye to match it afterwards. Because the left eye had already been digitized, they didn't need to return to the tapes, but simply treated it as separate media.
It was Ben's naming convention for left and right media that allowed Editorial to establish the stereo set-up in the Avid after completing the right eye edit. He exported the EDL, used a text editor to find and swap the file names to denote the left eye, and re-imported it into the Avid. Once the EDL referenced the left eye media, the Avid would pick up the correct files and carry out the conform.
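
That find-and-swap can equally be expressed as a few lines of script. The '_RT' and '_LT' tokens below are hypothetical, standing in for whatever naming convention Editorial actually used:

from pathlib import Path

def make_left_eye_edl(right_edl, left_edl, right_tag="_RT", left_tag="_LT"):
    """Copy a right-eye EDL, swapping the media-name tags so that the same
    cut points to the left-eye files when re-imported into the Avid."""
    text = Path(right_edl).read_text()
    Path(left_edl).write_text(text.replace(right_tag, left_tag))

# make_left_eye_edl("sanctum_r01_RT.edl", "sanctum_r01_LT.edl")
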
In turn, the only way to show a 3D image on the JVC monitor was to adopt a side-by-side arrangement in the Avid. They took the right eye image, squeezed it to half width and moved it into the right half of the frame, then did the same with the left eye, moving it to the left. The monitor could then interpolate this into an interlaced image.
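
Conceptually, the packing step looks something like this – a rough sketch using simple column decimation where a proper resize filter would normally be used, and not the Avid's internal method:

import numpy as np

def pack_side_by_side(left, right):
    """Squeeze each (h, w, 3) eye to half width and butt them together so a
    passive 3D monitor can rebuild the stereo pair from one frame."""
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)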

Phasing
"We used the right eye as our master eye because it wouldn't have passed through the beamsplitter and was slightly better quality, and did the same with the side by side rig footage," Ben said. The left eye from the beamsplitter was also inverted, or flopped, back to front. To flip it into the correct position and give consistency between the two streams, they used a device called the Stereo Brain when digitizing the footage into the Avid. Digital Pictures, responsible for the digital intermediate, grade and deliverables for the movie, had sourced this equipment for them.

As the editors worked on the film through the various passes, they sent the EDL to the Technical Director at Digital Pictures, Nic Smith, who would update the timeline and use it to conform the footage for the DI on their Lustre suite. From the Lustre, the VFX team extracted the clips they needed to work on as DPX files in separate left and right passes.

Digital Pictures soon discovered a phasing problem in which the time code sometimes varied between Iloura's version of a shot and the editor's EDL. In a number of cases, this appeared to have been caused by not giving the deck enough pre-roll time when digitizing the footage into the Avid, although some other shots with plenty of pre-roll showed the same problem. The camera crew aimed to allow at least five seconds of pre-roll before the action, but in reality action on set often starts right after the camera rolls, so this could be difficult to manage.

QuickTime Check
It was critical that Digital Pictures was creating exactly the same film as the QuickTimes Ben exported for them. He exported both eyes as QuickTimes, and Digital Pictures matched their output to these, regardless of what the time code said. Fortunately, the discrepancy could be corrected visually, and because they were working on tape, not film, the error wasn't overly costly to fix.
Creatively, the main challenge was the linear nature of the story written for 'Sanctum'. As a result, their capacity as editors to shift story elements around or change the focus was very limited. To heighten the sense of claustrophobia in the tight spaces, for example, they simply had to rely on the DoP to supply the appropriate shots and work with those.

Although the Avid does have a stereoscopic menu, they didn't actually use any of its features. "For example, there's a setting for the type of stereo footage being dealt with – side by side, up and down, interlaced, anaglyph and so on – plus your preferred viewing format," Ben explained. "But while stereo footage involves two discrete images, the Avid wants these to have been combined in some format before it can interpret them for viewing. On the client monitor, it will always show both left and right images, but on the internal monitor you can select different displays. Also, if you want to use correction tools to adjust convergence or blow up a shot, none of the available effects tools work. You would need to have both images there and treat them separately.

Discrete Images
"So although we could review the footage in 3D on the JVC monitor, we always had to have both, discrete left and right images there on the timeline to do any editing, which would then happen simultaneously. Initially we did try editing with a combined stereo image, alongside the right eye on its own, with the side by side image the Stereo Brain allowed us to produce. But it wasn't of a high enough quality to output a 3D version from the Avid for their preview screenings.
The side by side image also had to be rejected because, if they wanted to review in 3D, it wouldn't maintain effects such as speed ups, slow downs, re-size, re-position or titles after splitting into left and right for monitoring. Nor could they fix the convergence on this side by side version. Thus, the left eye had to be digitized discretely, to keep both eyes on the timeline. Ben said, "James Cameron was impressed that we could review our work in 3D on our monitor, which is something he hadn't been able to do for 'Avatar'. He did warn us about the timeline problem, though. He visited the shoot and received dailies each day on DVD in 2D."

STEREOSCOPIC RIGS
Both beamsplitter and side-by-side rigs, two of each, were used in the production. The two types of rig had different advantages and shortcomings. The beamsplitter is larger and bulkier and produces an inverted left eye image, but it places the cameras one above the other, allowing an interocular distance as small as zero if required and making it useful for close up shots.

The side by side rigs are more compact and both left and right eye images are oriented correctly, but the I/O distance is larger and less controllable. Typical interocular distances during production were about 2.5cm on the beamsplitter and 6.5cm for the side by side.
Consequently, although the side by side rig was chosen for underwater work because it was smaller, and was given underwater housing for the diving sequences, close-up shots were a challenge and needed fixing in post.

To get around this, David developed a compact beamsplitter rig in underwater housing, employing two DSLR cameras, to overcome the side by side rig's large minimum interocular distance that prevented close up shots underwater. David also ensured the rig would run synchronously: he constructed a rig of the required size configured to hold two Nikon D3S cameras, and Nikon helped by modifying the cameras so they could be operated in sync. Simon Christidis, the underwater cameraman, made the housing and controllers. The rig was quite successful and David experimented with it on some shots, but unfortunately these were not used in the film, mainly because the MPEG-4 image quality was considerably different to the HDCAM footage shot on the Sony CineAlta for the rest of the movie. Still, it worked well, and when James Cameron visited the set and saw it, he was impressed by its potential for future productions.

Words: Adriene Hurst
Images: Courtesy of Iloura, Great Wight Productions and Universal Pictures