Qvisten Animation Keeps Afloat with Qube! and Microsoft Azure
Qvisten Animation has recently completed ‘Anchors Up’, a new CG feature for children due for release across Scandinavia in October 2017. However, at the start of 2017, the Qvisten team's project was in danger of sinking. With just six months until the deadline, less than 1 percent of the film had been rendered. Even running day and night, its 40-node in-house farm wasn't large enough to complete the job on time. To avoid disaster and deliver the film on schedule, Qvisten augmented its farm with cloud-based resources on Microsoft Azure, using PipelineFX's Qube! render farm management software to route jobs to the cloud.
Qvisten Animation, based in Oslo, has grown accustomed to working to tight deadlines. Since its foundation in 1994, the studio estimates that it has averaged at least a commercial a month, initially specializing in traditional and stop-motion animations, and later moving into CG and motion graphics. Now employing nearly 80 people, the facility has since branched into feature animation, producing numerous films and shorts that have been released across Europe.
Anchors Up was Qvisten's biggest challenge to date. Although Elias, the brave rescue boat at the centre of the movie, and his friends had already appeared on television screens in over 100 countries around the world, his TV assets were in need of major upgrades, and Qvisten had only one year to complete its part of the CG feature.
“At the beginning, we didn't think we could do it,” said Alexander Rydberg, Qvisten's IT manager. “We always knew that finishing shots would be tough close to deadline, but by May, the number of render jobs started shooting up. We needed to double, and then triple, the size of our render farm.”
Because the studio operates on a tight budget – even at peak, the ‘Anchors Up’ team totalled only 22 people – buying new machines wasn't an option. Qvisten began considering cloud options and identified Azure as the best place to rent the extra capacity, mainly owing to Azure Batch, which can be scaled up quickly to hundreds of nodes, sparing artists avoidable downtime.
Alexander also remarked on the ease of use when they started using Azure Batch. “It gives us the ability to boot up however many machines we want with just a few lines of commands,” he said. “The GUI of the whole Azure package is intuitive and easy to set up. It’s also important to note that one core on Azure isn’t the same as a core at a different cloud provider. I achieved better performance on an Azure 16-core F-series VM than on a 20-core machine at a different vendor, which is worth knowing when comparing prices on the different cloud services. It doesn’t matter if another cloud service is cheaper if it is also slower.”
Complex Animation Pipeline
The main hurdle was Qvisten's complex custom animation pipeline. The studio uses a large number of software packages in production – primarily Maya and Houdini for 3D modelling and effects, the Arnold renderer, and Nuke for compositing, but also ZBrush, Photoshop, Procreate, Substance Painter, 3ds Max, Phoenix FD and Premiere Pro. The complete pipeline would need to be replicated in the cloud.
A recent update to version 6.10 of the Qube! render manager makes it possible for users to access and manage Azure Batch’s much greater scale and compute power, pushing jobs to newly created nodes inside Azure Batch pools. To help Qvisten, who needed this integration before development was finished, PipelineFX created a command-line script that their artists could use to add and remove Azure nodes to and from their Qube! farm.
When submitting a job, an artist doesn't have to think about or plan where it will go. If space is available in Qvisten's render farm, Qube! automatically routes it there; if not, it goes to the cloud. “You just launch Maya through the script, then press Submit as normal and Qube! does the rest of the routing work,” said Alexander.
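As a rough illustration of that spill-over behaviour – the real scheduling is internal to Qube!, and this toy function and its names are invented for the example – the routing decision can be thought of as:

```python
# Toy sketch of farm-first, cloud-overflow routing. This is NOT Qube!'s
# actual scheduler; it only illustrates the behaviour described above.

def route_job(frames_needed, local_free_slots):
    """Split a job's frames between the local farm and cloud capacity."""
    local = min(frames_needed, local_free_slots)  # fill local slots first
    cloud = frames_needed - local                 # remainder spills to Azure
    return {"local": local, "cloud": cloud}

# A 100-frame job against a 40-slot farm: 40 frames local, 60 to the cloud.
print(route_job(100, 40))
```

From the artist's point of view this split is invisible – the same Submit button covers both cases.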
Automated Routing and Processing
“This special command-line that John Burk at PipelineFX made for us directly in Qube! makes it possible to boot up a full range of specific batch nodes on Azure - for example, ‘20 nodes on F16 machines using image X’. Instead of doing this in the Azure GUI, having that facility in Qube! saved me so much time when I needed to start a lot of different pools in Azure Batch. When the batch nodes are booted, Qube! automatically sends jobs to those nodes if there are any jobs matching those image criteria.”
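The quote above describes booting “20 nodes on F16 machines using image X” from a command line. The PipelineFX script itself isn't public, but a wrapper in that spirit might simply assemble a standard Azure CLI call; the function name, pool id and image label below are illustrative placeholders:

```python
# Hypothetical sketch of a pool-bootstrapping helper in the spirit of the
# command-line script described above; the real PipelineFX tool is not public.

def make_pool_command(pool_id, vm_size, node_count, image):
    """Build an `az batch pool create` invocation for the given pool spec."""
    return [
        "az", "batch", "pool", "create",
        "--id", pool_id,
        "--vm-size", vm_size,
        "--target-dedicated-nodes", str(node_count),
        "--image", image,
    ]

# Example spec: '20 nodes on F16 machines using image X', as in the quote.
cmd = make_pool_command("anchorsup-f16", "Standard_F16", 20, "imageX")
print(" ".join(cmd))
```

Once a pool like this is up, Qube! sends matching jobs to its nodes automatically, as Alexander describes.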
Since some scenes in ‘Anchors Up’ contain over 100GB of textures, Qvisten keeps all resources pre-installed on each node, preparing and installing disk images of its own creation that contain the shared resources for the movie. Qube! can then handle much of the pipeline replication through Job pre-flight and/or Work pre-flight scripts. Rather than re-uploading the files with every render job, the image is replicated on each cloud node before rendering begins – a simpler, lighter operation that saves time and bandwidth.
The disk image works as a reference package that contains the textures – a lot of TX files, PNGs and EXRs – plus basic programs such as the Qube! worker, Houdini and Maya. Python 2.6 and 2.7 and the studio's Python scripts are also included. In other words, all the resources that might be needed to render files for a particular project are present in this image. For example, using XGen in a file would generate errors if Maya weren't installed.
Alexander said, “When Qube! starts an Arnold job, it checks the .ass (Arnold scene source) file for a list of textures it will need to render that frame. If a texture doesn't exist on the node in the right path or a newer version is required, a download starts. By replicating the image on every cloud node in advance, the required textures were available on each node’s own C drive.
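The texture check Alexander describes can be sketched as follows. This is only an illustration – real .ass parsing, version comparison and Qube!'s download logic are more involved – but it shows the basic idea of comparing a scene's texture list against what is already on the node:

```python
import os
import re

# Illustrative sketch: scan .ass text for `filename "..."` entries and report
# which referenced textures are absent from the node's local texture root.
# (Arnold .ass files declare texture paths this way on image nodes.)

FILENAME_RE = re.compile(r'filename\s+"([^"]+)"')

def missing_textures(ass_text, node_root):
    """Return texture paths referenced by the scene but absent under node_root."""
    needed = FILENAME_RE.findall(ass_text)
    return [t for t in needed
            if not os.path.exists(os.path.join(node_root, t.lstrip("/\\")))]
```

With the pre-baked disk image in place, this list would normally come back empty, so no per-job download is triggered.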
“This is how we run our on-premises farm and we didn't need to change that process in the cloud since we had already created the image with all the textures. It only needed a push of a button to deploy the package with all the needed programs. All the other scripts in our pipeline were read from the server.
“The disk image preparation proceeded as follows - boot up a virtual machine, deploy the program package, transfer all the textures, generalize the image and, finally, copy that image. Then we just use the image on our batch-nodes to supply all necessary resources.”
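Assuming standard Azure CLI tooling was used, the generalize-and-copy tail of that sequence might look like the commands below. The resource group, VM and image names are placeholders, and the earlier deploy/transfer steps (copying programs and textures onto the VM) happen while the VM is still running:

```python
# Hedged sketch of the final image-capture steps as Azure CLI commands.
# All names here are placeholders, not Qvisten's actual resource names.

def image_capture_steps(group, vm, image_name):
    """Return the ordered CLI steps to turn a prepared VM into a reusable image."""
    return [
        f"az vm deallocate --resource-group {group} --name {vm}",   # stop the VM
        f"az vm generalize --resource-group {group} --name {vm}",   # mark generalized
        f"az image create --resource-group {group} --name {image_name} --source {vm}",
    ]

for step in image_capture_steps("qvisten-rg", "prep-vm", "anchorsup-image"):
    print(step)
```

The resulting image is then referenced when creating Azure Batch pools, so every new node boots with the full texture and software package already in place.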
The studio also ran fluid simulations successfully in the cloud – essential, of course, for a movie whose cast consists of animated boats. With shots in a two-minute storm sequence requiring up to 400GB of RAM, transferring the work to Azure let Qvisten overcome the limitations of its in-house farm.
Licenses On Demand
“Being up against deadline forced us to try a new approach,” Alexander said. “But we always work best under pressure, and in this case it helped us find the easiest and best answer to our problems.” Using the Azure and Qube! combination, Qvisten was able to grow its render farm from 40 nodes to over 180, enabling it to process the 5,000 frames a day needed to deliver ‘Anchors Up’ on time.
Moreover, because Qube! developer PipelineFX operates a metered licensing policy allowing users to rent extra licenses of Qube! on demand, the studio was only charged for the time the extra virtual machines were actually in use, down to the minute. The same command-line support used by Qvisten was recently made available in Qube! 6.10, enabling other studios to try on-demand cloud rendering. Integrated support is expected to follow soon. www.pipelinefx.com