Object storage is currently recognised as one of the most promising types of storage for the massive amounts of video that content producers must store and secure after each production. The challenge is not capacity alone but also management, protection, accessibility and archiving. Object storage is highly scalable, handles large quantities of unstructured data simply and is well suited to cloud deployment, and storage software has been developing quickly to meet the challenge.
Digital Media World spoke to Eric Bassier, senior director of products at Quantum, about object storage, video and the media and entertainment industry. He said, “Unstructured video data is a good fit for object storage. Unstructured datasets have common requirements and go through similar stages, from acquisition to archiving. It is usually machine-generated, ingested from edge storage devices used on cameras, satellites, vehicles and research labs. The data needs fast ingest and cataloguing, through a fast file system, in preparation for processing – analysis, post production and distribution.
“Furthermore, the content needs to be managed, preserved and protected. The important factors at this stage are massive scalability, access, and safety and durability – these factors are what now make public and private clouds more attractive to more organisations as well. Quantum has been developing their own object storage system and software both to meet those demands and to work in the cloud.”
Inside Object Storage
Object storage breaks files into discrete units called objects and spreads them across the system hardware. The data is stored in a single-namespace repository, rather than as files in folders or as blocks on servers. Each object is a modular, self-contained unit that owns its data, carrying a unique identifier used to locate it across the distributed system, plus metadata that describes the data.
The metadata is important. It includes background information on age, security, access permissions and so on, and can include an extremely detailed description of the contents and origin of the data. To retrieve the data, the storage OS uses the metadata and the identifiers, which support the use of policies that perform robust searches.
Object storage is accessed through a simple API that most clients can understand, is cost efficient because users pay only for what they use, and scales readily to extremely large data quantities. Since each object carries enough information for an application to find the data quickly, object storage is well suited to storing unstructured data.
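The idea can be sketched in a few lines. This is an illustrative model only, not any vendor's implementation: a flat, single-namespace store in which every object bundles its data, a unique identifier and searchable metadata:

```python
import uuid

class ObjectStore:
    """Toy model of an object store: a flat, single-namespace repository
    keyed by unique identifiers rather than a folder hierarchy."""

    def __init__(self):
        self._objects = {}  # identifier -> (data, metadata)

    def put(self, data, metadata):
        # Every object bundles its payload with descriptive metadata
        # and gets a unique identifier for retrieval.
        obj_id = str(uuid.uuid4())
        self._objects[obj_id] = (data, dict(metadata))
        return obj_id

    def get(self, obj_id):
        return self._objects[obj_id]

    def search(self, **criteria):
        # Metadata supports policy-style queries across the whole namespace.
        return [oid for oid, (_, md) in self._objects.items()
                if all(md.get(k) == v for k, v in criteria.items())]

store = ObjectStore()
oid = store.put(b"...video frames...", {"project": "doc-series", "camera": "A"})
data, md = store.get(oid)
hits = store.search(project="doc-series")
```

The metadata fields here are invented for illustration; in practice they would carry the age, permissions and content descriptions the article mentions.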
When Quantum decided to offer their own object storage system, rather than building one from the ground up they purchased ActiveScale object storage from Western Digital in February 2020. Their development team was already familiar with the system, having integrated it with the StorNext file system and helped to market it as a suitable archiving approach for their customers.
“Object storage is online archiving, which is very practical for organisations with video assets. Quantum’s StorNext file system has been well integrated with ActiveScale, allowing inter-tier movement of massive amounts of data and making the system easier to scale and manage. Massive scale plus data durability also makes the storage cost-efficient. Configuration options are single-site on-premises, an AWS hybrid environment, two sites, or three geographically dispersed cloud sites, distributing the data. The data in all of these scenarios is stored under a single namespace for durability and availability.”
Object storage is most often used in support of an application, not only file systems but also asset management, file sharing and custom in-house tools, plus analytics, archival and backup and recovery software. An ActiveScale on-premises private cloud can serve as an interface to handle put and get requests for S3 buckets that have been set up for different uses, access groups and provisioning. ActiveScale can also be the front end to an archive.
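Provisioning buckets for different uses and access groups generally comes down to S3-style access policies. As a hedged illustration – the bucket name and the idea of an editorial group are hypothetical, not taken from the article – a policy attached to an access group might grant put and get rights on a single bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EditorialReadWrite",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::finished-masters/*"
    }
  ]
}
```

The same policy language applies whether the S3 endpoint is a public cloud or an on-premises private cloud acting as the front end.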
“Combinations of high-performance, shared StorNext File Storage with ActiveScale Object Storage are being used for editing, finishing and VFX workflows in media production,” said Eric. “Data is ingested into the StorNext File Storage, retrieved for processing and transformation, then moved into the object storage repository.
“From there, management is policy-based with one-to-one mapping between files and objects, moving them between storage pools but maintaining single namespace presentation. Quantum is now working to make the data repository more searchable. A further option is to use the repository for direct distribution to web viewers.”
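Policy-based tiering of this kind can be sketched as a simple rule applied over a catalogue. This is a toy model – the field names and the 30-day cutoff are assumptions, not Quantum's policy engine – demoting cold entries from a file pool to an object pool while leaving their keys, and so the namespace, unchanged:

```python
import time

AGE_LIMIT = 30 * 24 * 3600  # assumed cutoff: demote after 30 days untouched

def apply_policy(catalog, now=None):
    """Demote cold entries from the 'file' pool to the 'object' pool.
    Keys never change, so the application still sees one namespace."""
    now = time.time() if now is None else now
    for entry in catalog.values():
        if entry["pool"] == "file" and now - entry["last_access"] > AGE_LIMIT:
            entry["pool"] = "object"
    return catalog

catalog = {
    "shoot01/cam_a.mov": {"pool": "file", "last_access": 0},
    "shoot02/cam_b.mov": {"pool": "file", "last_access": 10_000_000},
}
apply_policy(catalog, now=10_000_001)
```

Only the cold entry changes pool; both entries keep their original keys, which is the one-to-one file-to-object mapping described above.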
Backup and Recovery
An application that is becoming more common is using object storage as the centre of an enterprise backup and recovery set-up. For example, data can be backed up directly to an ActiveScale private cloud, serving as a data lake. Using Quantum DXi backup appliances, the data can then be deduplicated and replicated to public or private cloud object stores, for handling by specialised backup and disaster recovery applications like those from Veeam, Commvault or Veritas, or to Scalar tape storage – or to combinations of those options.
The DXi appliances make backup and recovery less expensive and less time-consuming. They are based on Quantum’s variable-length deduplication algorithm for data reduction, keeping disk storage and bandwidth use as low as possible when replicating and helping capacity scale as data grows, more linearly and in finer increments. DXi’s fast ingest and restore functions work with most backup software.
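Variable-length deduplication generally relies on content-defined chunking: chunk boundaries follow the content itself, so duplicate data still lines up even when it shifts within a stream. Quantum's actual algorithm is proprietary; this is a generic sketch with a toy rolling hash:

```python
import hashlib

def chunk(data, mask=0x1F, min_size=8):
    """Cut data into variable-length chunks at content-defined boundaries:
    wherever a toy rolling hash of the bytes so far matches a bit mask."""
    out, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h * 31) + byte) & 0xFFFFFFFF
        if i - start + 1 >= min_size and (h & mask) == mask:
            out.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        out.append(data[start:])
    return out

def dedupe(streams):
    """Store each unique chunk once, keyed by its content fingerprint."""
    store = {}
    for stream in streams:
        for c in chunk(stream):
            store[hashlib.sha256(c).hexdigest()] = c
    return store

original = b"The quick brown fox jumps over the lazy dog. " * 20
shifted = b"PREFIX " + original      # same payload at a different offset
store = dedupe([original, shifted])
raw = len(original) + len(shifted)
kept = sum(len(c) for c in store.values())  # less than raw: duplicates stored once
```

Because boundaries are content-defined, the shifted copy resynchronises with the original after the first cut, and repeated chunks collapse to a single stored instance.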
Veeam develops backup, disaster recovery and intelligent data management software for virtual, physical and multi-cloud infrastructures. Commvault develops enterprise data protection software. Veritas specialises in data agility, instant access and recovery, cloud storage and optimisation. These companies, as well as Quantum, recommend a 3-2-1 strategy for enterprise backup.
Disasters and Data Durability
The strategy consists of three copies of the data, one for production and two as backup. These copies should be distributed between at least two types of storage – for example object plus tape, disk plus tape, object plus public cloud. Finally, one of the copies should be kept off-site as a protection against disaster and ransomware attack. Veeam has automation functionality supporting this strategy.
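The rule is simple enough to check mechanically. A minimal sketch, assuming a backup plan is described as a list of copies, each with a media type and an off-site flag:

```python
def satisfies_3_2_1(copies):
    """3-2-1 rule: at least 3 copies, on at least 2 media types,
    with at least 1 copy held off-site."""
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

plan = [
    {"media": "disk", "offsite": False},    # production copy
    {"media": "object", "offsite": False},  # on-premises private cloud backup
    {"media": "tape", "offsite": True},     # vaulted tape copy
]
ok = satisfies_3_2_1(plan)
```

The plan shown mirrors one combination from the article (disk, object store and off-site tape); the field names are illustrative.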
Due to its scalability, ease of management and its approach to data availability through durability, data centre customers have begun to use object storage for many different purposes and workloads. For data centres, availability, or uptime, is a critical measure because any downtime is costly. But availability only quantifies whether the physical storage system is up; it says nothing about the condition of the data once a fault is corrected.
Data durability is equally important. Striping data across multiple drives – the method used in a RAID system – can protect against drive failure, but an unrecoverable read error during a rebuild operation means an amount of data is permanently lost. As drive capacities have increased, so have rebuild times. Object storage achieves data durability with erasure coding, in which data is broken into fragments, expanded and encoded with redundant pieces, and stored across various locations or storage media.
Since only a subset of the redundant pieces is needed to restore the data, rebuild times and degradation are reduced. Widely distributed erasure-coded data, plus continuous validation of the data written on the media, can increase data durability.

www.quantum.com
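As a rough illustration of the recovery property described above: production systems use Reed-Solomon codes with several parity fragments spread across drives or sites, but the simplest case, a single XOR parity fragment that survives the loss of any one fragment, shows the idea:

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data, k=4):
    """Split data into k equal-size fragments plus one XOR parity fragment.
    (A single parity fragment tolerates the loss of any one fragment.)"""
    size = -(-len(data) // k)  # ceil(len / k)
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0")
             for i in range(k)]
    frags.append(reduce(xor, frags))  # parity = XOR of all data fragments
    return frags, len(data)

def decode(frags, length):
    """Rebuild the data even if any single fragment is missing (None)."""
    if None in frags:
        i = frags.index(None)
        # The missing fragment is the XOR of all surviving ones.
        frags[i] = reduce(xor, (f for f in frags if f is not None))
    return b"".join(frags[:-1])[:length]

frags, n = encode(b"eight seconds of camera A footage", k=4)
frags[2] = None               # simulate a lost drive or site
restored = decode(frags, n)   # equals the original bytes
```

Only k of the k+1 fragments are read to restore the data, which is why a widely distributed erasure-coded system can rebuild without degrading the whole pool.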