The world’s interest in viewing video content is massive and growing, creating huge opportunities for content producers. But to continue meeting that demand over time, Quantum argues, producers must plan their video storage infrastructure strategically.
Quantum recently undertook research with ESG, an analyst and research company specialising in market intelligence for IT organisations. They investigated the specific infrastructure challenges that come with managing data for video production. Their findings, concerned mainly with unstructured data like video and text, identified infrastructure needs that remain consistent whether users are working on feature films, YouTube content or corporate videos.
A large majority of Quantum’s survey respondents said that in order to take full advantage of data creation, and optimise data monetisation efforts, their organisations plan to increase the amount of data they retain. Moreover, most expect that retaining more data for longer periods will benefit their businesses by improving opportunities to monetise data, increasing insights and productivity, and mitigating the risk of ransomware.
Nevertheless, a similar number reported their organisations are under pressure to lower the costs associated with unstructured data. When those costs run too high, it becomes difficult to retain enough data to meet those expectations, or to invest in the tools needed to manage the data environment.
Data Flow – a Critical Resource
Skip Levens, marketing director of media and entertainment at Quantum, talked to Digital Media World about these opposing pressures and new ways to handle them. On the surface at least, keeping data is associated with storage costs, security risks, the need to train or hire new staff and purchasing new equipment.
“Staying on top of the rising tide of data in an organisation can feel like a scramble,” Skip said. “But you will reach a real ‘pivot point’ when you take your business from just staying afloat to actually swimming efficiently in all that data. Almost all data today is taking on the characteristics of rich media content, with valuable metadata. How it flows around your business as you apply your particular expertise becomes a critical resource that can have enormous payoffs, but it must be harnessed.”
Enterprises are already familiar with how this works in some contexts – for example, video surveillance footage may still be relevant or even vital for years. But as more kinds of organisations begin producing video – for corporate content, advertising or other purposes – focussing on data conservation gains importance. From a video production perspective, b-roll footage from one project could eventually make a later project easier to complete, so long as you can find it and access it easily.
Looking at the range of tools on the market now for mining and accessing video data, Skip believes the future of such developments is exciting. “In the production world, Quantum develops tools like CatDV and StorNext that give users the means to take advantage of inherent and added file metadata in video content in a straightforward manner – including accessing, cataloguing, navigating and transferring video into editing software for processing. Ultimately, teams can use such tools to work more efficiently and make sure raw footage will still be useful for future projects.”
He notes that StorNext has been making this work possible for some time, by linking applications, experts and various types of storage suited to each step – from fast NVMe storage to large, distributed storage like public cloud – into a single namespace view where all tools and applications work together. “By adding a suite of tools to manage and monitor the entire environment and shift content in and out of the workflow as needed, you avoid layers of management complexity and can focus on building a storage workflow that serves the needs of the project or processing underway.
“Most recently, we’ve improved the efficiency of how StorNext and CatDV work together. For instance, as StorNext tracks file system changes it can trigger CatDV services automatically rather than brute-forcing the file system – that is, repeatedly cycling through every file looking for changes. That automation makes building and customising data and content workflows much more efficient.”
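The difference between brute-forcing a file system and reacting to change events can be sketched in a few lines. Nothing below is a Quantum API – the `Catalogue` class and both functions are hypothetical stand-ins that only illustrate the pattern: an event-driven catalogue touches only the files that changed, while a polling scan must revisit every file.

```python
# Hypothetical sketch: event-driven cataloguing vs brute-force scanning.
# None of these names are Quantum/StorNext/CatDV APIs.

class Catalogue:
    """Toy asset catalogue keyed by file path."""
    def __init__(self):
        self.entries = {}

    def ingest(self, path, metadata):
        self.entries[path] = dict(metadata)

def brute_force_scan(file_system, catalogue):
    """Polling approach: visit every file and re-check it for changes."""
    visits = 0
    for path, metadata in file_system.items():
        visits += 1
        if catalogue.entries.get(path) != metadata:
            catalogue.ingest(path, metadata)
    return visits

def apply_change_events(events, catalogue):
    """Event-driven approach: only the files named in events are touched."""
    visits = 0
    for path, metadata in events:
        visits += 1
        catalogue.ingest(path, metadata)
    return visits

# A library of 10,000 clips in which only one subsequently changes.
library = {f"/media/clip_{i}.mov": {"version": 1} for i in range(10_000)}
cat = Catalogue()
brute_force_scan(library, cat)                 # initial ingest touches everything

library["/media/clip_7.mov"] = {"version": 2}  # one file changes
events = [("/media/clip_7.mov", {"version": 2})]

print(apply_change_events(events, cat))        # 1 file touched
print(brute_force_scan(library, cat))          # 10,000 files touched
```

The saving compounds as the library grows: event processing scales with the number of changes, while a rescan scales with the size of the whole store.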
Programmatic and Iterative
The StorNext / CatDV combination can also help organisations that find themselves facing large pools of unstructured data, gathered over time, and need to determine which portions can be used to generate business value. Skip believes that the economics, resilience and scale of object storage make it practical to keep almost every bit of content collected for later processing and discovery. “You may not have the ability in your organisation to sift through all that content now – but you will,” he said.
“Quantum intends to make these capabilities available to users based on familiar workflow tools like StorNext and CatDV, with NVIDIA GPUs and NVIDIA machine learning libraries to process audio and video files. This system can process a standing library of content as fast as the GPUs and the servers hosting the ML libraries can run, generating results quickly and over multiple iterations.”
For example, rather than processing clips one at a time in an editing package with an audio-to-text capability, or sending single clips to a cloud-based system, CatDV can process an entire library quickly. It immediately builds timecode markers for existing content, making the library far more searchable and able to locate precisely every instance matching a given word or pattern.
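The marker-and-search idea can be illustrated with a small sketch. This is not CatDV's implementation; it assumes a speech-to-text step has already produced per-clip word timings, and the `transcripts` data below is mock output standing in for that step.

```python
# Illustrative sketch only: builds word-level timecode markers for a whole
# library, then locates every instance of a search term across all clips.

def build_markers(transcripts):
    """Index every word occurrence as (clip, timecode) markers."""
    index = {}
    for clip, words in transcripts.items():
        for timecode, word in words:
            index.setdefault(word.lower(), []).append((clip, timecode))
    return index

def find_word(index, word):
    """Return every (clip, timecode) marker matching the word."""
    return index.get(word.lower(), [])

# Mock speech-to-text output: per-clip (timecode, word) pairs.
transcripts = {
    "interview_01.mov": [("00:00:04:12", "budget"), ("00:01:22:03", "launch")],
    "interview_02.mov": [("00:00:31:00", "launch"), ("00:02:10:15", "budget")],
}
index = build_markers(transcripts)
print(find_word(index, "launch"))
# → [('interview_01.mov', '00:01:22:03'), ('interview_02.mov', '00:00:31:00')]
```

Because the index is built once over the standing library, every subsequent search is a lookup rather than a fresh pass over the footage.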
ActiveScale is Quantum’s own object storage, architected for both active and cold data. Its software keeps performance consistent at all scales, so that growth does not compromise data availability, durability or security.
This programmatic, iterative approach is typical of working with unstructured data. It means that, rather than introducing a new suite of tools, specialised applications can be built on tools already in use to manage content as part of the creative workflow. Skip said that simply connecting an AI tool to an asset management system may help enrich a single piece of content. A better approach is to model a content expert’s behaviour, such as identifying a single interesting point in a video, and then train the engine to replicate that behaviour in multiple iterations over entire libraries of content.
He noted, “Finally, since we are already using familiar storage management and media handling automation, we can then automate actions like building up clips flagged by the ML tool that a human expert can review on demand, or even produce content ‘superclips’ automatically.
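One step such automation needs is turning the moments an ML tool flags into reviewable clips. The sketch below is not Quantum's implementation; it assumes the flags arrive as (start, end) offsets in seconds and simply merges overlapping or near-adjacent ranges into longer clips, the kind of list a ‘superclip’ could be cut from.

```python
# Hedged sketch: merge ML-flagged (start, end) ranges into review clips.
# Ranges closer together than `gap` seconds are joined into one clip.

def merge_flags(flags, gap=2.0):
    """Merge overlapping or near-adjacent flagged ranges."""
    merged = []
    for start, end in sorted(flags):
        if merged and start <= merged[-1][1] + gap:
            merged[-1][1] = max(merged[-1][1], end)  # extend the last clip
        else:
            merged.append([start, end])              # start a new clip
    return [tuple(clip) for clip in merged]

# Moments an ML tool flagged in one piece of footage, in seconds.
flags = [(12.0, 15.5), (14.0, 20.0), (45.0, 48.0), (49.5, 52.0)]
print(merge_flags(flags))
# → [(12.0, 20.0), (45.0, 52.0)]
```

A human expert then reviews two clips instead of four fragments; the same merge runs unchanged over flags for an entire library.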
“For a long time, video has been considered the most challenging form of unstructured data, leading to the development of extremely robust tools and techniques to go from a mass of raw video content to a finished piece of content that, as described above, can be programmatically iterated on and delivered in highly efficient and repeatable workflows.
“As video becomes the predominant and most effective means of communication and customer interaction, these same techniques can be readily adopted by organisations that aren’t traditional media and entertainment content producers. It’s something the media and entertainment field knows well, but the conversation needs to start outside this sector – more and more organisations are relying on video production to meet their business goals.”
As organisations expand their video production capacity to support corporate videos, advertising, social media content or even internal communications, dedicated systems will be essential to ensuring each video file is used to its full potential without overburdening video content teams. www.quantum.com