How to Implement Low-Latency HLS (LL HLS)
The promise of Apple's Low-Latency HLS (LL HLS) is lower latency than standard HLS, with backward compatibility for players that aren't LL HLS-aware. The promise of the Mux video service is "video, in seconds." As you'll see in this tutorial, both companies hit their mark: Mux's LL HLS is exceptionally easy to implement and delivered latency of 4-7 seconds in my tests, a bit higher than expected but consistent with other companies providing the same service.
According to the company's website, "Mux Video is an API that enables developers to build unique live and on-demand video experiences." The company doesn't offer a GUI but makes it simple to experiment as you'll see below. Though Mux has been providing live transcoding services for a while now, their LL HLS service was still in beta at the time of this writing.
Technically, Mux is a cloud transcoding service; you create the live stream and deliver it to Mux, and the service transcodes the video and provides a URL you use to deliver the streams to your target viewers. Creating the streams is a two-step process; first, you create the encoding instance, then you deliver a single stream to the instance from your live encoder.
In this tutorial, I'll review this process, test the latency of the streams we produce, and introduce you to some valuable resources for familiarizing yourself with the current performance envelope for LL HLS. For the record, Mux charges $0.04/minute to encode the full encoding ladder and $0.0012/minute to deliver the streams.
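At those rates, a quick back-of-the-envelope calculation shows what a live event might cost. The sketch below simply applies the published per-minute rates; the event duration and viewer count are illustrative, and it assumes delivery is billed per minute per stream delivered.

```python
# Illustrative cost estimate at the published Mux rates
# (assumes $0.04/min encoding and $0.0012/min delivery per viewer).
ENCODE_RATE = 0.04      # $/minute to encode the full ladder
DELIVER_RATE = 0.0012   # $/minute of delivery, per stream delivered

def event_cost(duration_min, viewers):
    """Encoding runs once; delivery scales with the audience size."""
    encoding = duration_min * ENCODE_RATE
    delivery = duration_min * viewers * DELIVER_RATE
    return round(encoding + delivery, 2)

# A 90-minute concert with 500 viewers:
print(event_cost(90, 500))  # 3.6 encoding + 54.0 delivery = 57.6
```

As the example shows, delivery, not encoding, dominates the bill once the audience grows.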
Let's jump right in.
Getting Started with LL HLS and Mux Video
To create the live stream in Mux, you make the following POST request, which is available directly from the Mux documentation (Figure 1). Note that the reduced latency flag is set to true, which enables low-latency HLS.
Figure 1. Here's the code that enables LL HLS.
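The request body itself is minimal JSON. Here's a sketch of building the same call from Python, assuming the standard Mux REST endpoint and an access-token pair for Basic auth (MUX_TOKEN_ID and MUX_TOKEN_SECRET are placeholders you create in the Mux dashboard; field names follow the beta-era API shown in Figure 1, so check the current docs before relying on them).

```python
import json

# Sketch of the live-stream creation payload, assuming the Mux REST API
# at https://api.mux.com/video/v1/live-streams (beta-era field names).
payload = {
    "playback_policy": ["public"],
    "new_asset_settings": {"playback_policy": ["public"]},
    "reduced_latency": True,   # the flag that enables LL HLS
}

body = json.dumps(payload)
print(body)

# To actually issue the call (network code omitted here), POST `body`
# with Content-Type: application/json to /video/v1/live-streams, e.g.
# requests.post(url, auth=(MUX_TOKEN_ID, MUX_TOKEN_SECRET), data=body).
```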
To start the service directly from the Mux website, you paste the code into the Create a New Live Stream POST body editor and click Run Request, which produces the API call (Figure 2). Obviously, this only works when you're logged into your account, since the code is generic and doesn't identify the account in any way.
Figure 2. Initiating the API call
Once the live stream starts, you get several critical pieces of data from the Live Stream descriptor shown in Figure 3. First, it provides the RTMP options and stream key to enter into your live streaming encoder to deliver the stream to Mux (Figure 4). Second, it provides the playback IDs for playing back the content.
Figure 3. Information about how to deliver your video to Mux and how to play the transcoded files
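The descriptor fields map to working URLs in a predictable way. The sketch below assumes Mux's standard RTMP ingest host and HLS playback host; treat the descriptor itself as the authoritative source, and note that STREAM_KEY and PLAYBACK_ID are placeholders.

```python
# Sketch of how the live-stream descriptor fields become working URLs
# (assumes Mux's standard ingest/playback hosts; check the descriptor
# for the authoritative values; the key and ID below are placeholders).
STREAM_KEY = "abcd-1234-placeholder"
PLAYBACK_ID = "wxyz-5678-placeholder"

ingest_url = f"rtmp://global-live.mux.com:5222/app/{STREAM_KEY}"
playback_url = f"https://stream.mux.com/{PLAYBACK_ID}.m3u8"

print(ingest_url)    # what you paste into your encoder's server field
print(playback_url)  # what you hand to any HLS-capable player
```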
I used OBS Studio 27.1.3 for my tests, loading in a Josiah Weaver concert video from way back that had embedded time codes to measure latency. To connect OBS to Mux, I plugged the server address and stream key into the Stream Settings tab, as shown in Figure 4.
Figure 4. Inserting the Mux address and stream key into OBS
Mux provides high-level instructions for the encoding parameters for the input stream, recommending the H.264 main profile with 1080p 30 fps video configured at 5 Mbps with a keyframe interval of 2 seconds (Figure 5). OBS automatically selected the veryfast preset, which you can of course upgrade to fast, medium, or higher if you're encoding on a sufficiently fast computer, but I used it as is. I also left the tuning at zerolatency and the x264 options as shown.
Figure 5. Setting the encoding parameters.
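Those recommendations translate to a handful of concrete encoder parameters. The sketch below expresses them as a plain dictionary using x264-style option names (an illustration, not a Mux API); the one derived value worth noting is the keyframe interval in frames, which is fps times the interval in seconds.

```python
# The recommended input settings expressed as x264-style parameters
# (illustrative field names, not a Mux API).
FPS = 30
KEYFRAME_INTERVAL_SEC = 2

settings = {
    "codec": "h264",
    "profile": "main",
    "resolution": "1920x1080",
    "fps": FPS,
    "bitrate_kbps": 5000,
    "keyint": FPS * KEYFRAME_INTERVAL_SEC,  # 60 frames = one keyframe every 2 s
    "tune": "zerolatency",
    "preset": "veryfast",
}
print(settings["keyint"])  # 60
```

The 2-second keyframe interval matters for LL HLS because it bounds the segment length the packager can produce.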
Then I started the concert video playing in OBS, pressed the Start Streaming button, and I was up and running (Figure 6). You can see the video playing on the right in the live stream descriptor field shown in Figure 3, which was obviously captured after starting the live stream. If you study the bottom right of Figure 6, you'll note the CPU utilization was 14.4%, a sure sign that I could have selected a higher-quality preset, though that's not relevant for these tests.
Figure 6. OBS is sending the live stream to Mux.
Once you start the stream, Mux starts transcoding, automatically creating an encoding ladder optimized by Mux. By design, Mux doesn't let you adjust or even see the specific encoding controls for the ladder; a plus if you favor simplicity over complexity, but a minus if you're an encoding professional who likes to tinker.
Testing Latency and Playback
Getting up and running couldn't have been easier. Now it was time to measure latency. I started at the THEOplayer LL HLS test page, which has several valuable features. First, the site includes live streams from multiple vendors as you can see in Figure 7, so you can test the latency from multiple providers as discussed in more detail below.
Figure 7. The THEOplayer LL HLS test page lets you test latency and performance from multiple providers.
Second, the site measures the latency of the streams that you submit for playback and lets you explore the tradeoff between low latency and stream robustness. You see this in Figure 8. The current statistics to the right of the video window show the latency and buffer size, which for most services, including Mux, averaged between four and eight seconds. This is the latency in the player's default mode.
Figure 8. Exploring the relationship between buffer size and latency
If you click the Enabled checkbox below Managed Fixed Latency, you can adjust the parameters shown with sliders and explore their impact on latency and stream robustness. Most important for this discussion is the target latency, which the player attempts to achieve by reducing the video buffer. For the sake of completeness, the Window control sets the tolerance window for latency above the target, here 0.25 seconds. Seek sets the threshold beyond which the player will seek to achieve the target latency, while Rate sets the amount of speed adjustment the player will make to reach the target. These are the controls you can set in the player to tune to the desired latency and the response when that latency isn't achieved.
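The interaction of these controls can be sketched as a simple control loop. The code below is a generic illustration of how an LL HLS player chases a latency target, not THEOplayer's actual implementation; the threshold and rate values are assumptions chosen to mirror the sliders described above.

```python
def adjust_playback(current_latency, target=1.5, window=0.25,
                    seek_threshold=5.0, max_rate_delta=0.08):
    """Generic sketch of a fixed-latency control loop (illustrative,
    not THEOplayer's code): within the window, play normally; modestly
    beyond it, adjust playback rate; far beyond it, seek to catch up."""
    if current_latency > target + seek_threshold:
        return ("seek", 1.0)                    # jump toward the live edge
    if current_latency > target + window:
        return ("play", 1.0 + max_rate_delta)   # play faster to catch up
    if current_latency < target - window:
        return ("play", 1.0 - max_rate_delta)   # slow down, rebuild buffer
    return ("play", 1.0)                        # inside the tolerance window

print(adjust_playback(3.6))   # over the window but under the seek threshold
print(adjust_playback(10.0))  # far enough behind that the player seeks
```

This also makes the robustness tradeoff concrete: the harder the loop chases a low target, the smaller the buffer it leaves to absorb network hiccups.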
In Figure 8, target latency is set to 1.5 seconds and you see that actual latency was 3.6 seconds. However, by tracking the buffer and latency levels in the graph beneath the player, you see that the buffer hit rock bottom when latency was around 2 seconds, causing a brief playback stoppage. This illustrates the relationship between latency and robustness; that is, lower latency means lower robustness and vice versa.
For the sake of comparison, with Managed Fixed Latency disabled, Akamai's latency averaged around 7.2 seconds on my 280Mbps connection, with Wowza around 7 seconds, Synmedia around 6.9 seconds, Nimble Streamer around 5.5 seconds, the canned Mux stream around 6.0 seconds, and Flussonic around 7.5 seconds. The stream I produced using the Mux service was around 5.5 seconds without any adjustments. The only anomaly was Broadpeak, which showed a latency of 1.4 seconds in the top screen but over 4 seconds in the bottom graph. The numerical and graph scores of all other services roughly matched, so I don't know what to make of the Broadpeak results.
Other LL HLS Solutions
I tested latency in other players by taking a screenshot that included both OBS and the player and comparing the time codes. Players that have been optimized for LL HLS, like JW Player and HLS.js, averaged between five and six seconds, as shown in Figure 9 below.
Figure 9. Video in the program window on the left, in the player window on the right, showing latency of just under 6 seconds for the HLS.js player.
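The screenshot method reduces to simple arithmetic: latency is the difference between the encoder's burned-in timecode and the timecode visible in the player at the same wall-clock instant. The sketch below assumes HH:MM:SS:FF timecodes at 30 fps; the sample values are illustrative, not measurements from Figure 9.

```python
# Sketch of the screenshot method: subtract the player's timecode from
# the encoder's timecode captured at the same instant (values are
# illustrative; assumes HH:MM:SS:FF timecodes at 30 fps).
def timecode_to_seconds(tc, fps=30):
    """Convert an HH:MM:SS:FF timecode to seconds."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return h * 3600 + m * 60 + s + f / fps

encoder_tc = "00:12:41:15"   # what OBS shows in the screenshot
player_tc = "00:12:35:18"    # what the player shows at the same moment

latency = timecode_to_seconds(encoder_tc) - timecode_to_seconds(player_tc)
print(round(latency, 2))  # 5.9 seconds
```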
Interestingly, the HLS.js demo webpage, which provides a ton of useful information, showed latency at 3.634 seconds, as you can see five lines from the bottom in Figure 10, while the actual measured latency was closer to 6 seconds. It appears that to achieve accurate latency measurements, you need access to both the encoder and the player, as we did for this tutorial.
Figure 10. The HLS.js demo page provides lots of data, but its latency measure appeared incorrect.
On the other hand, players that had not been optimized for LL HLS, like the Native HLS Playback Chrome extension, showed latency as high as 26 seconds, which tends to prove that LL HLS is backward compatible with non-LL HLS players, though at normal latency. The Mux-produced stream played perfectly in Safari on an iPhone 13 Pro running iOS 15.1.1, with a latency of just over 6 seconds (Figure 11).
Figure 11. Latency on an iPhone running iOS 15 was just over six seconds.
So, what did we learn? The Mux LL HLS solution is competitive from a latency perspective, exceptionally easy to use, and quite inexpensive. While LL HLS doesn't appear capable of providing short enough latency for truly interactive applications, the latency is certainly low enough to match or beat most live sports productions shown on TV, and low enough for many other non-televised productions.
[This article first appeared in the 2022 Streaming Media Industry Sourcebook European Edition.]