• July 18, 2023
  • By Joel Daly, Vice President of Product Management at Telestream
  • Blog

Top Three Reasons Why Quality Monitoring is Critical for Cloud-Native OTT Streaming


The shift to the cloud represents a momentous leap for an industry previously teeming with skeptics. According to a 2022 Deloitte report, cloud migration has reached a tipping point, driven by technological advances that allow server and storage components to be consolidated into commodity appliances, and by user preferences accelerated by the shift to dispersed workforce operations imposed by COVID-19. Fast-forward to today, and viewers have far less tolerance for subpar quality: a 2022 Conviva study found that over 75% of online video viewers will stop watching a video within four minutes of encountering poor quality, and 33% will switch to another platform.

However, the migration from on-premises infrastructure to the public cloud introduces a new set of challenges for businesses. This transition brings increased deployment complexity and a need for higher-skilled engineers to provide the necessary support. Against this backdrop, video engineers and operations teams at service providers are contemplating the migration of live workflows from linear to adaptive bitrate (ABR) streaming, content owners are focused on expanding direct-to-consumer streaming capabilities, and managed service providers' main objective is helping customers deliver ABR content to their subscribers. Essentially, they are all exploring the extensive benefits of migrating to cloud resources while simultaneously grappling with the associated complexities and skill gaps within their teams.

To meet these increased customer expectations, Quality of Experience (QoE) becomes the key differentiator for building customer trust and retention. Comprehensive video quality monitoring is therefore critical across processing, playout, and distribution. As more content and video service providers look to stay competitive, cloud adoption is bolstering their ability to stream broadcast-grade content over the Internet and raising their confidence in realising the potential of IP technology without sacrificing QoE.

In essence, without adequate visibility through monitoring, valuable engineering time and resources would be wasted attempting to pinpoint the root causes of quality problems. Therefore, monitoring at each workflow stage is imperative to ensure optimal video and audio quality and streamline troubleshooting processes. Here are the top three reasons why quality monitoring is critical for cloud-native OTT streaming.

Quality Monitoring Always Starts with the Source Contribution Feeds

It is best practice to monitor compressed IP video and audio content both before and after processing stages such as encoding or transcoding, to verify that no impairments are introduced along the way. Monitoring the quality of the upstream source is especially important, because any degradation at this stage will impact everything downstream.
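A minimal sketch of this pre/post comparison, in Python, might look like the following. It assumes FFmpeg's ffprobe is installed; the stream URLs and the set of monitored parameters are placeholders rather than anything prescribed by a particular product:

```python
import json
import subprocess

def probe(url: str) -> dict:
    """Snapshot key parameters of the first video stream using ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_streams", "-select_streams", "v:0", url],
        capture_output=True, text=True, check=True,
    )
    stream = json.loads(result.stdout)["streams"][0]
    return {k: stream.get(k) for k in ("codec_name", "width", "height", "avg_frame_rate")}

# Hypothetical monitoring points: the contribution source and the transcoder output.
source = probe("udp://203.0.113.10:5000")
processed = probe("udp://203.0.113.20:5000")

for key, expected in source.items():
    if processed[key] != expected:
        # A transcode may legitimately change these values; flag anything
        # that does not match the configured output profile.
        print(f"{key} changed: {expected} -> {processed[key]}")
```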

Consider, for instance, the use case where content is transported from the ground to the cloud using SRT (Secure Reliable Transport) wrapped contribution feeds and then encoded and transcoded within a cloud-native Kubernetes service. Without proper monitoring visibility, it becomes challenging to determine whether the source content remains of good quality throughout its journey from the ground to the cloud, or whether quality is affected during the cloud processing phase. Furthermore, diagnosing quality issues within the Kubernetes service, which spans different clusters, nodes, and regions, would be difficult without monitoring visibility.
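To make this concrete, here is a simplified transport-level sketch in Python that checks MPEG-TS continuity counters, one of the basic signals that packets were dropped between ground and cloud. The UDP tap point is hypothetical, and a production probe would also terminate SRT, tolerate legitimate duplicate packets, and run many more TR 101 290-style checks:

```python
import socket

TS_PACKET = 188    # MPEG-TS packets are always 188 bytes
SYNC_BYTE = 0x47
NULL_PID = 0x1FFF  # continuity counters are undefined on null packets

# Hypothetical tap point where the feed is mirrored after SRT decapsulation.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5000))

last_cc = {}  # most recent continuity counter seen, per PID

while True:
    data, _ = sock.recvfrom(65536)
    for i in range(0, len(data) - TS_PACKET + 1, TS_PACKET):
        pkt = data[i:i + TS_PACKET]
        if pkt[0] != SYNC_BYTE:
            print("sync loss: packet does not start with 0x47")
            break
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        has_payload = bool(pkt[3] & 0x10)  # adaptation_field_control bit
        cc = pkt[3] & 0x0F
        if pid == NULL_PID or not has_payload:
            continue  # counter only increments on packets carrying payload
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            print(f"continuity error on PID {pid:#x}: {last_cc[pid]} -> {cc}")
        last_cc[pid] = cc
```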

Ensuring Quality of the Source Contribution Feeds Is Only ‘Half the Battle’

Monitoring solely the source and post-processed content is insufficient to ensure quality throughout the entire end-to-end video workflow. Once the content has been processed upstream, it must traverse cloud-native IP networks in the form of SRT-wrapped unicast transport streams. This distribution is orchestrated through a cloud-native Kubernetes service and ultimately pushed to external third-party Content Delivery Networks (CDNs).

With comprehensive monitoring, it becomes easier to ascertain whether the cloud IP networks possess the necessary robustness to transport content seamlessly without packet loss. Additionally, monitoring is crucial in verifying that once the content is pushed to third-party CDNs, it remains accessible and readily available for subscribers to watch, free from buffering issues.
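As an illustration, a lightweight availability check against a CDN edge might look like the following Python sketch. The playlist URL is hypothetical; the probe verifies that the HLS playlist is reachable, measures fetch latency, and confirms that the media sequence keeps advancing (a stalled sequence usually means the push to the CDN has stopped):

```python
import time
import urllib.request

# Hypothetical edge URL of the live HLS playlist published to a third-party CDN.
PLAYLIST_URL = "https://cdn.example.com/live/channel1/index.m3u8"

def media_sequence(playlist: str) -> int:
    """Extract #EXT-X-MEDIA-SEQUENCE, which advances as new segments are published."""
    for line in playlist.splitlines():
        if line.startswith("#EXT-X-MEDIA-SEQUENCE:"):
            return int(line.split(":", 1)[1])
    raise ValueError("no #EXT-X-MEDIA-SEQUENCE tag found")

previous = None
while True:
    start = time.monotonic()
    with urllib.request.urlopen(PLAYLIST_URL, timeout=5) as resp:
        status = resp.status
        body = resp.read().decode("utf-8")
    latency_ms = (time.monotonic() - start) * 1000
    seq = media_sequence(body)
    print(f"HTTP {status}, fetched in {latency_ms:.0f} ms, media sequence {seq}")
    if previous is not None and seq == previous:
        print("stale playlist: media sequence has not advanced since the last poll")
    previous = seq
    time.sleep(6)  # poll roughly once per target segment duration
```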

Once again, the absence of monitoring visibility leads to wasted engineering time and resources spent attempting to identify the root causes of quality problems. Therefore, monitoring should encompass the entire end-to-end workflow, including cloud IP networks and external CDNs, to ensure a smooth, high-quality video delivery experience for subscribers.

It’s Easier to Find and Fix Issues from a Single Integrated Dashboard

So far, we have discussed the significance of monitoring at the source and downstream. However, when there are numerous monitoring points throughout the entire chain, another challenge arises: how can an organisation identify real-time quality issues within their Kubernetes environment, encompassing multiple clusters and data centres? Furthermore, how can these issues be swiftly diagnosed and remedied?

Additionally, it is crucial to determine the time frame of the problem: is it a recent occurrence, or has it persisted for an extended period? It is impractical and time-consuming to access numerous individual monitoring points or manually aggregate the data. Instead, a streamlined approach is needed: management through a single, integrated interface that is simple and user-friendly.

The objective is to have a centralised and comprehensive monitoring system that provides real-time insights into quality issues within the Kubernetes environment, spanning various clusters and data centres. This system should offer quick and accurate diagnostics to resolve any identified problems promptly. Furthermore, it should provide historical data analysis, enabling the determination of when issues originated and how long they have persisted.
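The aggregation idea can be sketched in a few lines of Python. The probe names and alarm records below are hypothetical, but they show how a single store fed by every monitoring point can answer both "what is failing right now" and "when did it start":

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Alarm:
    probe: str                         # e.g. "eu-west/cluster-a/transcode"
    metric: str                        # e.g. "continuity_errors"
    raised_at: datetime
    cleared_at: Optional[datetime] = None

class AlarmStore:
    """One aggregation point fed by every monitoring probe in the chain."""

    def __init__(self):
        self.alarms = []

    def raise_alarm(self, probe: str, metric: str) -> None:
        self.alarms.append(Alarm(probe, metric, datetime.now(timezone.utc)))

    def active(self) -> list:
        return [a for a in self.alarms if a.cleared_at is None]

    def summary(self) -> None:
        """Report what is failing now and how long each issue has persisted."""
        now = datetime.now(timezone.utc)
        for alarm in self.active():
            print(f"{alarm.probe} | {alarm.metric} | ongoing for {now - alarm.raised_at}")

# Hypothetical probes across clusters and regions reporting into one store.
store = AlarmStore()
store.raise_alarm("eu-west/cluster-a/transcode", "continuity_errors")
store.raise_alarm("cdn-edge/hls", "stale_playlist")
store.summary()
```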

By adopting an integrated interface that simplifies monitoring and data management, the complexity of tracking quality problems across multiple points in the chain can be mitigated. This approach eliminates the need to navigate countless interfaces and facilitates efficient and effective quality management throughout the workflow.

[Editor's note: This is a contributed article from Telestream. Streaming Media accepts vendor bylines based solely on their value to our readers.]
