MainConcept – Juggling bitrate, latency and quality in broadcast

Friday 19 April 2024

Frank Schönberger, Senior Product Manager, MainConcept

The broadcast industry is an incredibly exciting and dynamic place to be right now. Digital transformation is driving media companies to rethink their workflows and innovate, and as the industry continues its transition to IP infrastructure, many are adopting new technologies. Streaming has taken the broadcast industry by storm and has reportedly now overtaken traditional TV and cable viewing. But for a seamless viewing experience, content needs to be delivered with low latency, so viewers see the action on screen as it happens. Striking the right balance between bitrate, latency and quality can be challenging.

Latency matters

Latency, the time delay between the moment content is captured and the moment it is displayed to the viewer, shows up as buffering, interruptions and delayed video playback. Low latency is particularly important for live events such as sports, online gaming and betting, awards ceremonies, news, and concerts. Take sports, for example: fans want to experience the highs and the lows of the action as it happens – not after a delay, when the moment has passed.

Low latency streaming is an important part of delivering a seamless and engaging experience for viewers, but it’s challenging to achieve when delivering content to large audiences. There are many processes introduced between encoding and decoding. While real-time or near-real-time encoding and decoding may be employed, these processes can sometimes be computationally intensive, stacking up the possible delays.
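To make that stacking concrete, here is a minimal sketch of a glass-to-glass latency budget for a conventional segmented stream. The stage names and figures are illustrative assumptions, not measurements of any particular system.

```python
# Illustrative glass-to-glass latency budget for a segmented live stream.
# All figures are rough assumptions for a 4-second-segment HLS/DASH workflow,
# not measurements of any particular system.
stages_ms = {
    "capture_and_ingest": 300,
    "encode": 500,                  # real-time encoder buffering
    "package_segment": 4000,        # must wait for a full 4 s segment
    "cdn_propagation": 500,
    "player_buffer": 3 * 4000,      # players often buffer ~3 segments
    "decode_and_render": 100,
}

total_ms = sum(stages_ms.values())
print(f"Estimated glass-to-glass latency: {total_ms / 1000:.1f} s")
# With 4 s segments and a 3-segment player buffer, the total lands in the
# 15-20 s range that is typical of conventional HTTP streaming.
```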

Problem solving with CMAF (Common Media Application Format)

CMAF solves two problems that have been with us for almost 20 years: it decouples the packaging format from the manifest's signaling, and it reduces the matrix of objects that need to be delivered to clients. It is a standard that emerged from a collaboration between Apple (HLS, .ts) and the DASH community (MPEG-DASH, .mp4) on a common format. CMAF is not a new ABR format (it is not an alternative to HLS or DASH). HLS and DASH are descriptive presentation formats, while CMAF is a common container format that both of them can reference. Its main aim is to close the gap between HLS and MPEG-DASH, so that one segment type can be delivered to any platform and any device.
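To illustrate the "one segment type" idea, the following sketch writes out minimal HLS and DASH manifest excerpts that both reference the same CMAF (fragmented MP4) initialization and media segments. The file names and durations are assumptions chosen for illustration.

```python
# Hypothetical segment names shared by both manifests: one CMAF init segment
# plus fragmented-MP4 media segments (.m4s).
init_segment = "video_init.mp4"
media_segments = [f"video_{i}.m4s" for i in range(1, 4)]
segment_duration = 4.0  # seconds, illustrative

# HLS media playlist (fMP4/CMAF via EXT-X-MAP) referencing the segments.
hls = [
    "#EXTM3U",
    "#EXT-X-VERSION:7",
    f"#EXT-X-TARGETDURATION:{int(segment_duration)}",
    f'#EXT-X-MAP:URI="{init_segment}"',
]
for seg in media_segments:
    hls += [f"#EXTINF:{segment_duration:.1f},", seg]

# DASH SegmentTemplate referencing the very same files.
dash = (
    f'<SegmentTemplate initialization="{init_segment}" '
    f'media="video_$Number$.m4s" duration="{int(segment_duration)}" '
    'timescale="1" startNumber="1"/>'
)

print("\n".join(hls))
print(dash)
```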

CMAF alone does nothing to reduce latency. However, one of the useful side effects of the CMAF specification is that it allows for ‘chunking’. Low-Latency/Chunked CMAF (LL-CMAF) is a subset of the CMAF standard that specifically addresses the challenges of delivering low-latency video and audio content over the internet. Chunked CMAF reduces the latency of live streams when paired with supporting technologies across the video delivery ecosystem, such as HTTP chunked transfer encoding at the origin and CDN, and players that can start decoding a segment before it has been fully downloaded. If you have a use case where you need low-latency delivery at massive scale, CMAF is most likely one of the better approaches.
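A rough sketch of why chunking helps: with whole-segment delivery, the packager must wait for a complete segment before it can be published, whereas with LL-CMAF each chunk can be pushed over HTTP chunked transfer encoding as soon as it is encoded. The durations below are assumptions.

```python
# Illustrative comparison of publish delay: whole segments vs. CMAF chunks.
segment_duration_s = 4.0   # assumed segment length
chunk_duration_s = 0.5     # assumed chunk (moof/mdat fragment) length

# Whole-segment HLS/DASH: the newest frame can sit up to a full segment
# behind the live edge before the segment is even available on the origin.
whole_segment_publish_delay = segment_duration_s

# LL-CMAF: each chunk is published as soon as it is encoded, so the wait
# at the origin shrinks to roughly one chunk duration.
chunked_publish_delay = chunk_duration_s

print(f"Whole segment: up to {whole_segment_publish_delay:.1f} s before publish")
print(f"Chunked CMAF:  up to {chunked_publish_delay:.1f} s before publish")
# Players that can decode a partially delivered segment can then run with a
# buffer of a few chunks instead of a few segments.
```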

A difficult balancing act

Scaling the process to deliver live content to a large audience is where bitrate, the amount of data processed per unit of time, comes into play: by compressing the data, you reduce the amount of information being transmitted. While this has the desired effect of easing delivery and reducing latency, it also affects quality, because bitrate determines the level of detail and clarity in the content. A higher bitrate generally allows a higher-quality image to be transmitted, while a lower bitrate usually results in a loss of quality, leading to pixelation.
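To put rough numbers on the trade-off, the sketch below compares the uncompressed bitrate of a 1080p60, 8-bit, 4:2:0 signal with an assumed 5 Mbps delivery target.

```python
# Raw bitrate of 1920x1080, 60 fps, 8-bit 4:2:0 video (12 bits per pixel on
# average once chroma subsampling is taken into account).
width, height, fps = 1920, 1080, 60
bits_per_pixel = 12
raw_bps = width * height * bits_per_pixel * fps

target_bps = 5_000_000  # assumed delivery bitrate for a 1080p stream

print(f"Uncompressed: {raw_bps / 1e6:.0f} Mbps")
print(f"Delivered:    {target_bps / 1e6:.0f} Mbps "
      f"(~{raw_bps / target_bps:.0f}:1 compression)")
# The encoder has to discard or approximate detail to hit the target; push
# the target too low and the loss becomes visible as blocking and pixelation.
```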

However, striking the right balance is no easy task, because a bitrate that is too high will strain the network and cause buffering issues on playback. While compression can achieve a lower bitrate, over-compression may result in an unacceptable loss of quality as finer details are sacrificed. Maintaining quality when compressing data also depends on the efficiency of the video encoding and decoding processes.

Taking the strain without compromising quality

Video streaming codecs such as AVC/H.264 are instrumental in enabling broadcasters and content providers to optimize bitrate while preserving quality. More advanced codecs, such as HEVC/H.265 and its successor, VVC/H.266, employ increasingly sophisticated compression algorithms compared with older codecs like H.264. These algorithms identify redundancies and irrelevant information more effectively, resulting in higher compression efficiency. That allows broadcasters to transmit high-quality content at lower bitrates and reduces the strain on network infrastructure, though at the cost of more computing power.
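The generational savings are often quoted as roughly 40–50% lower bitrate for comparable quality. The sketch below applies a simple 50%-per-generation rule of thumb purely to show the order of magnitude; real savings depend heavily on content and encoder settings.

```python
# Rule-of-thumb bitrate needed for comparable quality, assuming each codec
# generation halves the bitrate of its predecessor. These factors are a
# commonly cited approximation, not measured results.
avc_bitrate_mbps = 8.0  # assumed AVC/H.264 bitrate for a 1080p stream

generation_factor = {"AVC/H.264": 1.0, "HEVC/H.265": 0.5, "VVC/H.266": 0.25}

for codec, factor in generation_factor.items():
    print(f"{codec}: ~{avc_bitrate_mbps * factor:.1f} Mbps for similar quality")
# The saving comes at a price: each generation needs noticeably more compute
# to encode, which matters for real-time and low-latency workflows.
```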

When delivering a video stream to users who consume content on a range of devices, screen sizes and resolutions, and under different network conditions, content must be encoded in multiple quality layers. Adaptive bitrate (ABR) encoding is used to adjust the quality of a video stream in real time based on the viewer’s network conditions. The goal of ABR encoding is to deliver the best possible viewing experience by dynamically adapting the bitrate of the video to match the user’s needs. This helps to minimize buffering, provide smoother playback, and ensure a consistent user experience across a variety of network conditions.
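A simplified sketch of the client side of ABR: given an encoding ladder (the renditions below are assumed, not a recommendation), the player picks the highest-bitrate rendition that fits comfortably within the measured throughput.

```python
# Hypothetical ABR ladder: (label, width, height, bitrate in bits per second),
# ordered from highest to lowest quality.
ladder = [
    ("1080p", 1920, 1080, 6_000_000),
    ("720p",  1280,  720, 3_000_000),
    ("540p",   960,  540, 1_800_000),
    ("360p",   640,  360,   800_000),
]

def pick_rendition(measured_throughput_bps: float, safety: float = 0.8):
    """Pick the highest-bitrate rendition that fits within a safety margin
    of the measured network throughput; fall back to the lowest rung."""
    budget = measured_throughput_bps * safety
    for rendition in ladder:
        if rendition[3] <= budget:
            return rendition
    return ladder[-1]

print(pick_rendition(4_500_000))  # -> ('720p', 1280, 720, 3000000)
```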

However, many of the encoding steps are repetitive, and this is inefficient. In a typical workflow, the encoder for each rendition goes through the same basic steps of motion estimation, image analysis and encoding, using the same input image. Much of this work is redundant, so if some of these tasks can be shared, processing efficiency improves and the encoding process for both live and VOD workflows can be simplified and streamlined, as the sketch below illustrates.
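Conceptually, the saving looks like this: the expensive, rendition-independent analysis runs once per source frame and its results are reused by every rendition encode. This is a hypothetical illustration of the idea, not a description of any particular encoder's implementation.

```python
# Hypothetical illustration of "analyse once, encode many" for an ABR ladder.

def analyse(frame):
    """Placeholder for the expensive, rendition-independent work:
    motion estimation, scene/complexity analysis, look-ahead decisions."""
    return {"motion_vectors": "...", "complexity": "..."}

def encode(frame, analysis, width, height, bitrate):
    """Placeholder for the per-rendition work: scaling, mode decisions,
    rate control and entropy coding, reusing the shared analysis."""
    return f"{width}x{height}@{bitrate}bps packet"

ladder = [(1920, 1080, 6_000_000), (1280, 720, 3_000_000), (640, 360, 800_000)]

def encode_frame(frame):
    analysis = analyse(frame)  # done once per source frame
    return [encode(frame, analysis, w, h, br) for (w, h, br) in ladder]

print(encode_frame("source frame 0"))
```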

Looking ahead

Broadcasters and video providers must continually adapt to meet the evolving expectations of viewers. And as users seek out ever more engaging and immersive viewing experiences, achieving low-latency, ultra-low-latency and near-real-time delivery of content is only going to become more important. Delivering the best possible viewing experience means balancing bitrate, latency and quality so viewers can access content on a variety of devices and platforms.

By embracing cutting-edge solutions and staying at the forefront of industry advancements, broadcasters can navigate these challenges and deliver broadcast-grade content that engages audiences worldwide with a premium viewing experience.

 
