
Moving Live Production to the Cloud - from the Ground Up


Right now, the most common approach to configuring virtualisation for media functions is to use specialised hardware platforms optimised for performance. However, products that implement a larger proportion of their specialised functionality in software have recently become available, and systems based on commercial off-the-shelf (COTS) IT platforms are now also beginning to appear, allowing the same unit to be built and used for multiple purposes.

Furthermore, broadcasters and producers are already showing interest in moving live production to the Cloud. Such a move would bring with it several fundamental changes, such as shifting from a hardware foundation that relies on capital expenditure to a Cloud-software, pay-per-use model based on operating costs. Olivier Suard, Vice President of Marketing at Nevion, believes that while such a transition is still in the future, continuing development in public and private Clouds means that Cloud delivery could soon become more common.

“For many producers, it is inconceivable that Cloud systems could be applicable to live production,” he said. “The Cloud, as we know it now, is something that works reasonably well for non-time-critical asynchronous transactions, like file sharing or transcoding. If the processes of uploading, processing and downloading can take minutes or even hours, that’s fine.”

This attitude, combined with the Internet’s inherently variable speed, best-effort service and potentially loose security, means the Cloud doesn’t seem suitable for live production requirements at this point. Olivier explains here what the current limitations are, and the progress being made to overcome them.

Living with Latency

The broadcast industry traditionally places a great deal of importance on low latency, which is critical in a live or real-time environment because it allows people and equipment to be synchronised more easily and closely - switching camera angles at the right time, for example. However, this is one area in which the Cloud currently falls short. Broadcasters are still unable to achieve the same quality, in terms of both processing and transport latency, in the Cloud as they can through the specialised on-premises hardware and dedicated networks they use today.


Olivier said, “Although the variable latency of the Internet can't be escaped, for some real-time processing this may be acceptable as the industry has become more accommodating of delays in recent years for the sake of better production values – think for example of augmented reality in studios. True real-time delivery relies on low-latency computation environments, but may be sacrificed in favour of higher resolutions and realistic rendering.

“Another cause of latency is the processing in the Cloud itself. Software running on CPUs struggles to process in real-time the huge volume of data required for video processing. However, it is possible to overcome this latency challenge by using field-programmable gate array (FPGA) acceleration that some public Cloud service providers are adopting. FPGAs are designed to be configured by a customer after manufacture, and reach high I/O bandwidths.”

In particular, FPGAs can be programmed for many different types of workload by customising the parallel nature of their processing to carry out certain calculations or processes simultaneously. As long as the software has been written for compatibility with those FPGAs, the processing performance should reach satisfactory levels.

Network Reliability for QoS

Quality of service is also an ongoing issue for the use of public Clouds in live production. Because the transport of packets over the Internet is less reliable than over a private IP network, there is a risk that not every IP packet will reach its destination.


In 2017, the Video Services Forum (VSF) continued development of a new standard called RIST (Reliable Internet Stream Transport), an emerging negative-acknowledgement approach to professional video transport over the Internet. 'Negative' refers to the fact that the receiver only tells the sender when a packet has not arrived, as opposed to confirming receipt of every packet, which would be time-consuming and costly.

RIST assumes that the network will lose packets at some point along the delivery chain. In order to intelligently retransmit the packets that have gone missing, a sequence number is associated with each packet as it goes out. At the receiver end, specialised algorithms look for gaps in the sequence numbers. Only if packets become lost does the receiver send a message back to the transmitter, instructing it to resend the lost packets.
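The mechanism described above - sequence numbers on outgoing packets, gap detection at the receiver, and NACKs that trigger selective retransmission - can be sketched in a few lines. This is an illustration of the general negative-acknowledgement idea only, not the actual RIST wire protocol; the class names and message shapes here are invented for the example.

```python
class NackReceiver:
    """Tracks incoming sequence numbers and reports gaps (NACKs)
    instead of acknowledging every packet."""

    def __init__(self):
        self.received = set()   # sequence numbers seen so far
        self.highest = -1       # highest sequence number seen

    def on_packet(self, seq):
        """Record a packet; return the list of missing sequence
        numbers to NACK back to the sender (empty if no gap)."""
        self.received.add(seq)
        # Any number between the previous highest and this packet
        # that hasn't arrived is a gap to report.
        missing = [s for s in range(self.highest + 1, seq)
                   if s not in self.received]
        self.highest = max(self.highest, seq)
        return missing


class RetransmitSender:
    """Numbers outgoing packets and keeps a history so that
    NACKed packets can be resent."""

    def __init__(self):
        self.seq = 0
        self.history = {}       # seq -> payload, kept for retransmission

    def send(self, payload, network):
        self.history[self.seq] = payload
        network(self.seq, payload)
        self.seq += 1

    def on_nack(self, missing, network):
        # Resend only the packets the receiver reported as lost.
        for s in missing:
            if s in self.history:
                network(s, self.history[s])
```

In a real deployment the history buffer would be bounded (old packets eventually become useless for live video), and the receiver would also re-order buffered packets before handing them to the decoder.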

The first draft of this standard, which has now been completed and successfully demonstrated, may be able to ensure that a live video stream arrives over the Internet at the correct time, with its content intact.

Bandwidth and Security

Security and bandwidth have generally been the main concerns when organisations adopt Cloud-based systems. Olivier notes that Cloud vendors have consequently invested in security and now typically make secure infrastructures and encryption available, so security has become far less of a risk. Bandwidth, on the other hand, has been a different kind of challenge.

“Delivering professional quality video requires a lot of bandwidth and I/O processing capabilities,” said Olivier. “As yet, public Cloud providers are unable to run real-time uncompressed video in and out of their infrastructure. Even though it hasn't been a barrier to processing smaller streams, such as audio and OTT content, it continues to limit the media functions that can run successfully in real-time on a public Cloud.”


Media Function Orchestration

If public Cloud delivery is to become a normal procedure for broadcasters, Olivier commented, signal and media function orchestration will be fundamental to the proper functioning of the entire workflow, albeit in a new form.

“In a baseband world, workflow is largely determined by the physical location and connectivity of equipment and the core router,” he said. “That is, the hardware is manually configured and re-configured to accommodate specific media flows. In IP, that connectivity is logical – the physical connectivity of equipment is typically in place, while the control layer is used to determine how the media flows between the pieces of equipment.

“However, as media functions become virtualised, workflows involve connecting instances of software across or even within software-defined platforms - for example, linking the SDI-to-IP adaptation to the JPEG 2000 encoding. If the software for those media functions is running in the Cloud, it may even involve spinning up and tearing down instances of the media functions, based on the processing capacity required.”

This suggests a totally new role for orchestration systems - they need to become software- and virtualisation-aware, otherwise the benefits of virtualisation cannot be realised.
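The spin-up/tear-down behaviour described above can be sketched as a simple scaling loop: given the processing demand, a virtualisation-aware orchestrator decides how many instances of a media function to run. The function name "jpeg2000_encode" and the whole Orchestrator interface are hypothetical, chosen only to mirror the JPEG 2000 encoding example in the text, not any real Nevion or Cloud-provider API.

```python
class Orchestrator:
    """Minimal sketch of virtualisation-aware orchestration:
    scale media-function instances to match demand."""

    def __init__(self, capacity_per_instance):
        self.capacity = capacity_per_instance  # e.g. streams one instance can handle
        self.instances = {}                    # media function name -> instance count

    def scale(self, function, demand):
        """Ensure enough instances of `function` to serve `demand`
        streams; spin instances up or tear them down as needed."""
        needed = -(-demand // self.capacity)   # ceiling division
        current = self.instances.get(function, 0)
        for _ in range(needed - current):
            self.spin_up(function)             # demand grew
        for _ in range(current - needed):
            self.tear_down(function)           # demand shrank
        self.instances[function] = needed
        return needed

    def spin_up(self, function):
        # In practice: request a new VM/container running `function`.
        pass

    def tear_down(self, function):
        # In practice: drain the instance's streams, then release it.
        pass
```

For example, with a capacity of 4 streams per instance, `scale("jpeg2000_encode", 10)` would run 3 instances; dropping demand to 3 streams would tear two of them down again, which is exactly the pay-per-use behaviour a baseband, manually patched workflow cannot offer.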

Going Private

While the discussion about Clouds usually focuses on public Clouds, Olivier sees private Clouds as a viable alternative approach for live production. A private Cloud supplies functionality as a service over a private network infrastructure. In its simplest form, it is not much more than a data centre on the broadcaster’s premises, where signal processing and transport equipment like embedders, encoders and multiplexers is pooled.

“This could be called an On-Premises Cloud, which is already being used today,” he said. “The dedicated nature of this infrastructure ensures that a lot of the current limitations of public Clouds, such as performance, reliability and security, can be addressed. It’s worth noting that, in a private Cloud, the equipment itself need not be running on COTS hardware at all. It could also be bespoke hardware or software-defined platforms – which still deliver the best performance for broadcasters, especially for video processing.”


Currently, the equipment is usually owned and managed by the broadcaster – as is conventional, in fact. However, the ownership of the equipment and the management of the infrastructure can also be handed over to a service provider, who then supplies the equipment functionality as a service to the broadcaster.

“In fact, with the current performance of dedicated IP networks, it is entirely possible to locate a live-signal processing data centre quite some distance away from the broadcaster’s facilities, effectively in a 'Cloud of Real-Time',” said Olivier.

Cloudy Future

Olivier doesn't doubt that, for most real-time broadcast media transport, processing and monitoring, the best approach right now is still to use software-defined platforms, built on hardware optimised for performance.

Private Clouds are now also a viable option, both on-site and off-site, using specialised software-defined platforms or COTS hardware for some applications. However, technology is evolving fast, and it’s only a matter of time before widespread COTS and public Cloud usage becomes a reality for real-time broadcast production. www.nevion.com