Luma Pictures Scales Up with AWS Cloud-based VFX Pipelines
The visual effects work of Luma Pictures contributes to feature films ranging from superhero blockbusters to children’s fantasy to sci-fi thrillers, including recent films like ‘Once Upon a Time in Hollywood’, ‘Captain Marvel’ and ‘Jojo Rabbit’. The studio delivers this work successfully with a team of only about 200 artists between its Santa Monica headquarters and Melbourne location.
To compete with larger VFX facilities, Luma makes technical R&D a priority. Its development efforts focus on automation and on adopting new tools that help the team work faster and more efficiently, leaving more time for what makes its work stand out – artistic skill and creativity.
Because rendering is so demanding in terms of hardware resources and time, Luma integrated Deadline render management software, developed by AWS Thinkbox, into its pipeline in 2013, shortly after the Melbourne studio opened. Deadline makes it possible for Luma to share render workloads between the two sites, ensuring that on-premises resources run at full capacity. To take advantage of cloud computing in VFX, Deadline now also supports rendering in the cloud, and its straightforward user interface visualises cloud-based and local resources to make compute management more accurate and timely.
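Submitting work to a farm like this is typically scripted rather than done by hand. The sketch below shows, in broad strokes, how a pipeline tool might build a Deadline job description and hand it to Deadline's command-line submitter; the plugin name, pool names, frame range and file paths here are hypothetical, and the exact job-info keys are defined by the Deadline documentation.

```python
# A minimal sketch of programmatic Deadline job submission.
# Assumptions: a standard Deadline install; the pool/plugin names below
# are placeholders, not Luma's actual configuration.

def build_job_info(name, frames, pool, priority=50):
    """Build the key=value lines of a Deadline job info file."""
    keys = {
        "Plugin": "Nuke",       # render application plugin (assumption)
        "Name": name,
        "Frames": frames,       # e.g. "1001-1100"
        "Pool": pool,           # e.g. a "cloud" vs "local" worker pool
        "Priority": str(priority),
    }
    return "\n".join(f"{k}={v}" for k, v in keys.items())

job_info = build_job_info("shot010_comp", "1001-1100", "cloud")
print(job_info)

# Submission would then shell out to Deadline's CLI, roughly:
#   deadlinecommand SubmitJob job_info.txt plugin_info.txt
# (paths and flags depend on the local Deadline install)
```

Routing a job to a "cloud" pool versus a "local" pool is the kind of mechanism that lets the two sites, and later AWS, share one queue.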
Luma developed an in-house proprietary tool called RILL to achieve the complex FX for this character.
Luma has used the cloud for burst rendering on many projects, but first used Amazon Elastic Compute Cloud (EC2) resources for VFX work on ‘Spider-Man: Far From Home’. Luma Head of Software Chad Dombrova and his development operations engineers quickly adapted Luma’s infrastructure to make use of AWS compute, using Amazon Route 53, a scalable Domain Name System (DNS) service.
Amazon Route 53 gives developers a way to route end users to Internet applications by translating website names like www.example.com into numeric IP addresses. It connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets – and can also route users to infrastructure outside of AWS.
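In practice, that translation is managed as DNS records in a Route 53 hosted zone. The sketch below builds the change payload for an A record using the structure boto3's `change_resource_record_sets` call expects; the zone ID, record name and IP address are hypothetical placeholders.

```python
# Sketch: pointing a DNS name at a render-farm endpoint via Route 53.
# The record name and IP below are documentation placeholders, not real
# infrastructure.

def a_record_change(record_name, ip_address, ttl=300):
    """Build the ChangeBatch payload for an UPSERT of an A record."""
    return {
        "Changes": [{
            "Action": "UPSERT",   # create the record, or update it if present
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",      # name -> IPv4 address mapping
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip_address}],
            },
        }]
    }

batch = a_record_change("render01.example.com", "203.0.113.10")

# With AWS credentials configured, the payload would be applied with:
#   import boto3
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="<hosted-zone-id>", ChangeBatch=batch)
```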
This service allowed them to integrate on-premises and AWS resources. They were also able to customise stock Amazon Machine Images (AMIs) to align with their required specifications using DHCP options sets or instance user data, among other features. Because the AMI holds the information the system requires to launch an instance, users must specify an AMI when launching one. DHCP (Dynamic Host Configuration Protocol) is the protocol servers use to dynamically assign an IP address and other network configuration parameters to each device on a network so it can communicate with other IP networks.
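The AMI-plus-user-data pattern can be sketched as follows: an instance is launched from a customised image, and a user-data script customises it further on first boot. This is a minimal illustration using the parameters boto3's `run_instances` accepts; the AMI ID, instance type and bootstrap script are hypothetical, not Luma's actual setup.

```python
# Sketch of launching a render instance from a customised AMI with
# instance user data. All identifiers below are placeholders.

BOOTSTRAP = """#!/bin/bash
# hypothetical first-boot script: mount shared storage, start the render worker
echo 'render worker bootstrap' >> /var/log/bootstrap.log
"""

def render_instance_params(ami_id, instance_type="c5.9xlarge", count=1):
    """Build the keyword arguments for ec2.run_instances()."""
    return {
        "ImageId": ami_id,            # customised AMI with the render stack baked in
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
        "UserData": BOOTSTRAP,        # boto3 base64-encodes this automatically
    }

params = render_instance_params("ami-0123456789abcdef0")
# With AWS credentials configured:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
```

Baking the render stack into the AMI keeps boot times short, while user data handles per-instance details that cannot be baked in.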
The FX team converted the character into a deforming cloud that could be controlled and rendered, and developed a new lighting system.
“No matter how much we optimise our local resources, we’ll experience peaks and valleys in farm utilisation depending on where we are in production. With cloud-based resources, you turn a static resource into a variable, then you just solve for time. Instead of trying to figure out how many render days we need to deliver a job, we can determine how many instances it will take to deliver that job by the deadline, then we can spin up those resources,” said Chad Dombrova. “One of the really interesting things we discovered about using Amazon EC2 resources is that the CPUs were extremely fast. We found the AWS C5 instances to be about 30 percent faster than the comparable resources we used in the past.”
Studio in the Cloud
Based on their experience with AWS for ‘Spider-Man: Far From Home’ and the upcoming relocation of the Santa Monica studio, Luma has been considering a full Studio in the Cloud workflow, with rendering, storage and virtual workstations all running on AWS.
Until now, the difficulty of moving camera-resolution content to the cloud quickly enough for creative teams early in the production process has been an obstacle. 5G may or may not change that situation, but if teams on production sets could begin the ingest process in the cloud as soon as footage is captured, it would create opportunities to use elastic resources for rendering environments and ingest workflows, and to apply machine learning to QC and metadata capture. Instead of moving master content between users, access and collaboration could take place from a secure content repository in AWS.
About Luma’s move, Chad said, “We’re just moving down the street, but it’s still a very complex process. We have a lot of legacy infrastructure, which made us question whether we want to move all those on-prem resources or if this is the moment we move into the cloud. So I put together a spreadsheet for a total cost of ownership calculation and determined that once the current machines reach end-of-life, it doesn’t make sense to buy any more.”
As Luma grows more accustomed to using AWS for VFX production, Chad looks forward to collaborating with AWS directly on integrating optimisations. “I’d love to see a globally distributed and federated file system as an option for Studio in the Cloud, and I think machine learning could be very useful for data tagging, determining ideal instance specifications for a job, and tiering storage based on when a file was last accessed. Working natively in the cloud is an exciting proposition – if we were there now, our move would be a walk across the street.” aws.amazon.com