Latency vs. Quality for Live Streaming at Scale

How much streaming reliability and quality are worth trading for ultra-low latency, and when does one take priority over the others?

Eric Schumacher-Rasmussen, Chair, Streaming Media Conferences, and CMO, Norsk, begins the discussion by asking Brian Ring, Senior Director, News & Sports Solutions, Amagi, “What role does latency play in video quality in terms of audience demand? Are audiences really demanding the kinds of ultra-low and almost real-time latencies that we hear so many solutions providers talking about? Or is picture quality and audio quality still first and foremost, if you’ve got to lose a little on the latency side?”

“Latency is a massively complex topic,” Ring says. “You've got the latency on the ingest, you’ve got the latency on the delivery, which is what a lot of people talk about. Streaming takes 30, 40, and 50 seconds. Then you also have the latency in the middle, the different encoding. If you're doing mixing, there's that. And then from the master control, and I also think production control would have to grapple with this. You then have the latency of like the microseconds, if you will, between when I wanted to have the chyron up on the baseball player who's walking off the field. And so it's again complicated, but it's getting better.”
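Ring's breakdown suggests thinking of end-to-end ("glass-to-glass") latency as a sum of stage delays: ingest, production, encoding, packaging, delivery, and client buffering. A minimal sketch of that budget, using hypothetical figures not taken from the discussion:

```python
# Glass-to-glass latency as a sum of pipeline stages, following the stages
# Ring enumerates. All numbers below are illustrative placeholders.
STAGES_MS = {
    "ingest": 500,             # contribution feed into the cloud
    "production_mixing": 200,  # master control, graphics (chyron) insertion
    "encode_transcode": 1500,  # ABR ladder encoding
    "packaging": 1000,         # segmenting for HLS/DASH
    "cdn_delivery": 2000,      # propagation to the edge
    "player_buffer": 6000,     # client-side buffering, often the largest term
}

def glass_to_glass_seconds(stages: dict) -> float:
    """Total end-to-end latency in seconds."""
    return sum(stages.values()) / 1000.0

print(f"Total: {glass_to_glass_seconds(STAGES_MS):.1f} s")
```

The point of the exercise is that delivery latency, which "a lot of people talk about," is only one term; shaving the player buffer usually moves the total far more than tuning any single upstream stage.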

Ring emphasizes Free Ad Supported Television (FAST) as a good use case for the business effectiveness of low latency. “One of the things that is known in the business is getting the data back,” he says. “The viewership [data] back from all these different platforms is hard to do. So one of the things that I love to talk about is zero data, which is how do you get data when you don't have any? Well, QR code two-screen applications can get you some really high-integrity data in order for those to work and really be a delightful experience. You need to get the latency down to at least over-the-air broadcast. I think it's even better down to three or four seconds.”

Ring says Amagi is in beta with Amazon Web Services (AWS) for low-latency applications. He says that other organizations, such as the NBA, are also in beta with AWS, working to support ultra-low latency in their apps. “People care about it,” he says. “And those people that care are males 30 to 50 watching sports that want to bet. And if it's laggy, you're going to hear about it on Twitter. So I think it's really important. It's one of those things that we just need to do.”

David Hassoun, Chief Technologist, Dolby Cloud Media Solutions, Dolby.io, agrees with Ring that sports betting is currently the primary use case for ultra-low latency. In that area, he says, “Low latency is not an option. It's a key requirement. And that's when we're talking very, very low latency, preferably, closest to real-time as we get, sub three seconds…there's a whole concept of, I don't want to hear my neighbors cheering next door when I'm still waiting to see if they're going to take the shot.”

However, Hassoun also notes that the costs for reaching low latency may not always be worth the risks, at least at this point, since it is still not a priority for most end users. “I just don't think oftentimes it's worth the risk that it takes with the infrastructure, the challenges,” he says. “I think the key point [with] low latency in the media workflow coming in on your ingest and so forth, that's become critical. And now there are new technologies in different ways that are being applied, and that's incredibly valuable, but that's not what end users see.”

Peter Wharton, Chief Strategy & Cloud Officer, TAG Video Systems, asks Hassoun, “Does creating these low latency distributions add to the cost too? Does it mean more bandwidth or more expense?”

“It can,” Hassoun says. “Because it's something different and new, you have to retool even the client side work, the encoder packager work, those are elements that can come into play, let alone what happens over the wire and what transpires there, what the CDNs are going to be handling in different forms and technology. There are all different implications.”

Adam Miller, CEO, Nomad Technologies, highlights the trade-offs and compromises that must be faced when aiming for low latency. “The old joke that's out there, which is how fast do you have to be to outrun a bear?” he says. “You have to be faster than your friend, right? It's about all you have to do! I find it's the same analysis when you get to low latency because if somebody says, ‘I need low latency,’ what are [they] comparing it to? Broadcast, it's 12 seconds off. So you can be 11 seconds, and you've got low latency. It's all about the perception of what's the reality you're comparing it to. And I think for technologies that are true low latency, like WebRTC today…our customers will come to us with use cases and say, ‘I'm trying to fly drones, and I need to see the drone footage, and 300 milliseconds is all I can afford.’ And we can support that. Then you get, ‘But I'm only going to send that out to a hundred people because the cost is going to be significant.’ It’s just the way it works today with our technology. WebRTC pricing is going to be astronomical because it's individual streams today, as opposed to the CDN costs of non-low latency. So you have to do a tradeoff. Do you want to hit a million people, or do you want low latency? And today, unfortunately, you can’t get both. So compare it to what the other side is doing, how fast you need it to be, and that can help you make a good decision.”
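Miller's "individual streams" point can be made concrete with a back-of-the-envelope cost model: WebRTC delivery is unicast, billed roughly at premium per-GB egress rates for every viewer, while CDN delivery fans one stream out from cached copies at much cheaper per-GB rates. The prices and function names below are hypothetical placeholders, not real quotes or any vendor's actual pricing:

```python
# Illustrative cost comparison for unicast (WebRTC-style) vs. CDN delivery.
# Per-GB prices are made-up placeholders chosen only to show the gap in scale.

def webrtc_cost(viewers: int, gb_per_viewer: float, price_per_gb: float = 0.12) -> float:
    """Every viewer is an individual stream, billed at premium egress rates."""
    return viewers * gb_per_viewer * price_per_gb

def cdn_cost(viewers: int, gb_per_viewer: float, price_per_gb: float = 0.02) -> float:
    """Viewers are served from edge caches at commodity CDN rates."""
    return viewers * gb_per_viewer * price_per_gb

# A two-hour event at roughly 3 GB per viewer:
print(f"100 viewers, WebRTC:  ${webrtc_cost(100, 3.0):,.0f}")
print(f"1M viewers, WebRTC:   ${webrtc_cost(1_000_000, 3.0):,.0f}")
print(f"1M viewers, CDN:      ${cdn_cost(1_000_000, 3.0):,.0f}")
```

Even in this crude model, a million-viewer WebRTC event costs several times its CDN equivalent, which is exactly Miller's "a million people or low latency, not both" tradeoff.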

Hassoun also notes that it may take a significant industry player such as the NBA to shift the industry towards broader adoption of low latency. If something like the NBA sets a new low latency standard, “Now we have a new benchmark,” he says. “’Well they're doing it, crap, we’ve got to get to sub seven seconds’ or whatever it may be. Those are the things I see actually invoking these changes…that we all end up having to kind of rally a bit more around if it gets enough attention and it has enough value.”

“Well, that's a good point,” Miller says. “The technology's just got to be there. So far for [Nomad], the lowest non-WebRTC…we can get maybe three seconds off that and be mostly reliable, but you're really tuning things to get there. So in my world, sub-three seconds, it's a jump, right? You go three if you're good at it, and then you go down to 500 milliseconds, and you switch to entirely different technology stacks. So, in reality, you kind of have to pick your limit.”

Corey Smith, Senior Director, CBS Sports Digital, Paramount, says that while low latency is excellent, the problem is “all the different client streaming technologies from a transport perspective.” From a DRM perspective, he says, “You can't really protect your content to some degree without having to introduce some latency in the pipeline, and that's a real bummer because now, all of a sudden, you can't really compete with the traditional linear distribution providers because your content is going through this encode/decode on the rights management side.” There is a need, he says, for a focus on low-latency technology that is well-supported “across multiple device ecosystems and it can utilize the same encryption algorithm.” Until that is achieved, he says, “We're going to constantly be chasing our tails, playing whack-a-mole on what low latency really means.”

Learn more about low latency streaming at Streaming Media East 2023.
