Why the Edge Matters in Streaming

Two quotes are often repeated in our industry (and this author is probably as guilty of it as anyone). The first is a nod to the past becoming the present: “The more things change, the more they stay the same,” coined by Jean-Baptiste Alphonse Karr, a French writer and former teacher turned education reformer who also loved fishing. Those of us who have been in the industry for more than a decade have seen the same ideas (some brilliant, some mediocre, and some ahead of their time) touted under different names, as if these old ideas were capable of, to put it in PR speak, vaulting the company anew into the realm of “innovative and pioneering streaming leaders.”

The other quote is a bit more esoteric to our industry: “When the times are interesting, you can never have too many swords,” wrote novelist George R.R. Martin in A Feast for Crows, which, in streaming speak, means HBO’s Game of Thrones—aka “the streaming series that broke the internet” a few years ago. Of course, we in this industry don’t actually say, “You can never have too many swords,” but rather, “You can’t have too many points of presence” (or PoPs for short).

The thought process behind both of these quotes converged this year as the industry met in Boston for Streaming Media East 2022, the first time we’d all been together in person since Streaming Media West was held in Los Angeles just before Thanksgiving 2019. The term that summed up Streaming Media East this year was “edge computing,” a nascent idea that draws on streaming architecture concepts from the late 1990s but also has its roots firmly planted in the pandemic era we’ve just passed through.

Why Does Edge Computing Matter?

On the one hand, you have those who say edge computing is nothing more than a revisiting of the debate between core-focused CDNs and PoP-focused CDNs. On the other, you have a few key clarion calls, partly as a way to differentiate between CDN and edge computing and partly as a way to remind the industry that PoPs (aka edge computing) might just be an idea whose time has come.

A select few even argue that edge computing is a necessity, as streaming moves its global play (at least in the parts of the world that have the internet; more on that in a moment) from an on-demand, file-based workflow to a live delivery workflow that potentially lives up to the live-global-event-streams-at-scale promise.

Defining the Edge

Before jumping into differing edge computing approaches, let’s first see how the computing industry defines the edge. The first stop is the market intelligence firm IDC, which defines the edge as “the multiform space between physical endpoints … and the core” and the core as the “backend sitting in cloud locations or traditional datacenters” (which I’ll cover a bit more later in this article).

Another angle on edge computing comes from Hewlett Packard Enterprise, which sells big iron, including switches, routers, and robust workstations. From its perspective, edge computing is “a distributed, open IT architecture” that decentralises processing power. It says edge computing requires that “data is processed by the device itself or by a local computer or server, rather than being transmitted to a data centre.”

These two definitions say that edge computing isn’t meant to occur at the data centre. While this is a key part of some edge computing definitions, is it really practical?

Shrinking the Data Centre

With so much of today’s internet traffic flowing through large data centres (some estimates say that 20% of all the world’s data traffic is concentrated in a few counties in northern Virginia, with 80% of that being IP video traffic), how are we going to move from the centralised delivery model of single-floor, cavernous facilities of a million or more square feet to the edge, where processing can occur closer to the homes of even the remotest rural areas of the U.S.? That very challenge, borne of witnessing the digital divide between rural and urban America, especially during the pandemic’s Zoom-classroom phase, when many students could not get connectivity at home, is why some colleagues and I started a new nonprofit research institute. The 501(c)(3) is called Help Me Reconnect, and its purpose is to explore models for reconnecting communities that lost basic internet functionality when dial-up was discontinued.

As we’ve explored options for connectivity, working in parallel with the emerging-economies-focused Help Me Stream Research Foundation, we’ve come to realise that the data centre isn’t going away, but it will shrink considerably. A visit to the headquarters of Tulix in Atlanta to see its data centres solidified the model in my mind. In years past, Tulix had a sizable presence in the AT&T building in the heart of the city. But with the increase in fibre bandwidth and the advancement of processing within a single data server, the company opted to buy a much smaller historic building in which to house its dual data centres. Thanks to those advances in transport and processing technologies, Tulix is able to deliver more traffic from its smaller facility adjacent to Georgia Tech than it did a decade ago in the AT&T building.

To further test the model of what we’re calling “small town data centres” that fit the size of the Tulix facility (roughly 15,000–20,000 square feet, or a small fraction of the size of a traditional data centre), the Help Me Stream Research Foundation partnered with Tulix and Streaming Media to ask questions about edge computing and data centres in the State of Streaming Spring 2022 survey.

The survey and keynote presentation at Streaming Media East, along with the subsequent report, detailed a key point: More than 65% of respondents said their companies were looking to utilise smaller, regional data centres for their edge computing strategies instead of much larger, centralised facilities.

So, with both industry sentiment and OpEx savings trending toward smaller data centres, we’re embarking on our first nonprofit-built data centre, in the heart of a small town in rural Appalachia. By year’s end, we should be able to reveal more details. If we can crack the model in one small town (and, going forward, use existing buildings constructed in the early 1900s, many of which are rapidly deteriorating but sit at the heart of towns that draw work and shopping crowds at multiples of their nominal populations), it may prove that shrinking the data centre to fit near smaller population pockets is a viable financial and operational strategy.

How Small Can We Go?

The whole premise of shrinking the data centre, however, assumes that there’s a fairly sizable data pipe (or three or four) coming into the location of a small-town data centre. That necessity can’t be overstated, since live streaming delivery with a 20–30 second delay is rapidly giving way to near-real-time streaming in which there’s often less than 2 seconds of delay, at scale, from ingest to delivery.

Yet there’s an even smaller model being contemplated—and not just by Internet of Things (IoT) proponents touting drone video surveillance or intelligent washing machines—that shrinks the “data centre” to the size of a few suitcases.

Steve Miller-Jones, VP of product strategy at Netskrt, says his company is interested in “getting content to the far reaches of the internet, the hard-to-reach parts of the internet,” which includes not just remote or rural locations but also moving targets such as airplanes, buses, and trains. He notes that, on the whole, the size of the audience is often misunderstood. “Where there’s intermittent connectivity, the population is often just statistically small,” he says, adding, “In an individual instance, they are [small], but over time, they’re not. There’s also an imbalance between what’s available to the user in that environment and how that user gets upstream, this imbalance between fronthaul and backhaul.”

As a result, both in Miller-Jones’ case at Netskrt and our own at Help Me Reconnect, that disparity in connectivity has a clear economic impact. “We see that these kinds of markets are typically seen as being out of reach and not part of your typical CDN landscape, because it’s not really who they’re trying to reach,” says Miller-Jones. He notes that one of Netskrt’s customers runs a long-haul rail service in England, and the goal of delivering quality live streams to passengers on this service is to mentally shorten the journey.

After all, if you’re walking up and down the aisles of the train trying to get decent connectivity for your smartphone or tablet, you’re also more likely to be frustrated by the length of the journey. Conversely, if you have solid connectivity and can immerse yourself in watching a live cricket match or football game, or even bingeing a good on-demand series, the time feels like it slips away much more quickly.

“If you’re doing things like cache management at nighttime,” says Miller-Jones, “or using time when the train is in locations that are well-connected, like stations or depots, and you’re expecting the power to stay on, it turns out it doesn’t necessarily do that, because many times, the trains are shut off at night. So, you have to think about how you would ever manage a cache in these kinds of environments.”

Why focus on trains? They have sizable captive audiences. On U.K. domestic services alone, with 3,500 trains and an average journey lasting just under 2 hours, that’s access to nearly 2 billion passenger trips annually on domestic rail. So it makes sense that solving the challenge of edge computing in this scenario means access to a significant and previously untapped audience, one that’s probably more inclined to consume content while hurtling across the countryside.
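
To put those figures in perspective, here’s a quick back-of-envelope calculation. The journey count and trip figures are the ones cited above; the arithmetic (and the assumption that an average trip runs the full 2 hours) is purely illustrative.

```python
# Back-of-envelope: how much captive viewing time U.K. domestic rail
# represents. Figures from the article; arithmetic is illustrative.
trips_per_year = 2_000_000_000   # "nearly 2 billion passenger trips"
avg_journey_hours = 2            # "just under 2 hours" (rounded up)

captive_hours = trips_per_year * avg_journey_hours
print(f"~{captive_hours / 1e9:.0f} billion captive passenger-hours per year")
# -> roughly 4 billion passenger-hours annually that streaming could reach
```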

Why the Edge Computing Model Just Might Work

To better understand the edge-case issue of users who are often both remote and on intermittent networks, it’s helpful to know how typical CDN delivery works. As a content provider looks to deliver content to end users, especially those who might be on a mobile device using a cellular network (smartphone) or Wi-Fi (tablet), the content provider will typically use a CDN. CDNs often sit at interconnect points (peering points) located within an autonomous system, which is a group of networks operating under a unified routing policy. Each autonomous system is assigned a number (an ASN).
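
To make the ASN concept concrete, here’s a minimal sketch of how a service might map a client IP address to the autonomous system it originates from, using Team Cymru’s public IP-to-ASN DNS interface and the dnspython library. The helper function is my own illustration, not any CDN’s production code.

```python
# Minimal sketch: look up the origin ASN for an IPv4 address via
# Team Cymru's DNS interface (requires: pip install dnspython).
import dns.resolver

def asn_for_ip(ip: str) -> str:
    """Return the origin ASN for an IPv4 address, e.g. 'AS15169'."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    qname = f"{reversed_octets}.origin.asn.cymru.com"
    answer = dns.resolver.resolve(qname, "TXT")
    # TXT payload looks like: "15169 | 8.8.8.0/24 | US | arin | 2000-03-30"
    txt = answer[0].to_text().strip('"')
    return "AS" + txt.split("|")[0].strip()

print(asn_for_ip("8.8.8.8"))  # Google's public DNS resolves to AS15169
```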

Miller-Jones notes that content owners often have “a number of CDNs that they’re using to deliver content to that end user,” and there are instances in which “one CDN might be straddling the ASN boundary,” meaning that part of the deployment is at a peering point. Another CDN might sit within an ASN boundary, which, Miller-Jones says, “implies that there’s probably a general difference in deployment technique, capacity management, and the sort of general technology environment that the CDNs are working with.” That difference is what can cause significant latency issues, and even though there are moves afoot to address commonalities between CDNs (such as the CDN Alliance), there’s also a gap that widens between the CDN footprint and the last mile.

Going back to the example of train passengers, while they themselves might see that they’re connected to Wi-Fi and expect typical broadband speeds, the train itself is often connecting to a wireless cellular network. So the media players on the mobile devices on the train, connected via Wi-Fi, expect to be able to request media at Wi-Fi speeds. Meanwhile, because the connection is going out across a wireless cellular data provider, which itself sits in an ASN optimised for delivery across that telco’s network, the CDN sees the requests as coming from a cellular network and treats them as typical mobile phone connections. If it works, the train passenger at least gets some video delivery, although probably not at the speed they’d prefer. Yet, more often than not, it almost works, meaning that the passenger anticipates they’ll be able to watch something, but the connection is intermittent at best.
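
A simplified sketch shows why this mismatch matters for adaptive bitrate (ABR) playback. The bitrate ladder and throughput numbers below are hypothetical; the point is that a throughput estimate taken on the fast Wi-Fi hop commits the player to a rendition the shared cellular backhaul can’t actually sustain.

```python
# Sketch of a generic ABR rendition choice (illustrative numbers only).
RENDITIONS_KBPS = [400, 1200, 2500, 5000]  # hypothetical bitrate ladder

def pick_rendition(measured_kbps: float, safety: float = 0.8) -> int:
    """Pick the highest rendition that fits within a safety margin."""
    usable = measured_kbps * safety
    fitting = [r for r in RENDITIONS_KBPS if r <= usable]
    return max(fitting) if fitting else RENDITIONS_KBPS[0]

wifi_estimate_kbps = 40_000  # what the device measures on the Wi-Fi hop
backhaul_kbps = 1_500        # what the shared cellular uplink really allows

print(pick_rendition(wifi_estimate_kbps))  # 5000 -> the player over-commits
print(pick_rendition(backhaul_kbps))       # 1200 -> what would actually play
```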

Netskrt’s approach is to provide an edge CDN that sits between the traditional CDN and the OTT content providers. Essentially, the company places a core cache on the train and then finds ways to fill it intermittently. “It can be as simple as title management and a cache out in the train,” says Miller-Jones, “which we communicate with and prioritize content to sort of fill the cache on the train constantly with what is being promoted by the OTT providers and what is being consumed by their audience.” What’s left are the small-data-rate necessities that still need to move via the upstream link, such as DRM and authentication, while the actual content itself is already cached within the train.
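
Here’s a hedged sketch of that prefill idea: rank titles by a blend of what the provider is promoting and what the onboard audience actually watches, then greedily fill the onboard cache while connectivity allows. Every name, weight, and size below is hypothetical; this illustrates the concept, not Netskrt’s actual algorithm.

```python
# Illustrative cache-prefill prioritisation (not Netskrt's algorithm).
from dataclasses import dataclass

@dataclass
class Title:
    name: str
    promo_weight: float  # 0-1: how heavily the OTT provider promotes it
    watch_share: float   # 0-1: share of onboard viewing it receives
    size_gb: float

def fill_cache(catalog: list[Title], capacity_gb: float) -> list[str]:
    """Greedily fill the cache with the highest-priority titles."""
    ranked = sorted(catalog,
                    key=lambda t: 0.5 * t.promo_weight + 0.5 * t.watch_share,
                    reverse=True)
    cached, used = [], 0.0
    for title in ranked:
        if used + title.size_gb <= capacity_gb:
            cached.append(title.name)
            used += title.size_gb
    return cached

catalog = [Title("New Drama S1", 0.9, 0.4, 30.0),
           Title("Cricket Highlights", 0.6, 0.8, 8.0),
           Title("Back-Catalog Film", 0.1, 0.2, 4.0)]
print(fill_cache(catalog, capacity_gb=40.0))
```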

There’s technical magic behind the scenes, of course, because the algorithms have to anticipate what’s going to be played next. If the anticipation is wrong, the cache will be filled with content that’s not likely to be viewed. But, according to Miller-Jones, the use of “DNS delegation or 302 redirection or just hostname targeting in the trains to target the requests into the cache” is often effective.
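
As an illustration of the 302 technique (my sketch, not Netskrt’s implementation), the minimal redirector below answers a player’s request by pointing it at a hypothetical on-train cache host. DNS delegation and authentication would be handled separately, as noted above.

```python
# Minimal 302 redirector: steer media requests to an on-train cache.
# The cache hostname is hypothetical; uses only the Python stdlib.
from http.server import BaseHTTPRequestHandler, HTTPServer

ONBOARD_CACHE = "http://cache.train.local"  # hypothetical on-train host

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        # Redirect the player to the same path on the local cache.
        self.send_response(302)
        self.send_header("Location", ONBOARD_CACHE + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Redirector).serve_forever()
```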

Edging Closer to a Rural-Urban Balance

In conclusion, let’s go back to the rural problem, in which local homes might have plenty of Wi-Fi and maybe even decent downlink connectivity to allow viewing of cached on-demand content, but almost no ability to backhaul (meaning limited ability to participate in Zoom classes, business meetings, or FaceTime with relatives). How big of a market is this, and should major content providers care about the need to move the edge out into remote or rural areas?

According to World Bank estimates, on a global scale, approximately 44% of people live in remote or rural locations. While each of the individual pockets of connectivity is small, and the backhaul for live-event streaming from these rural areas may be expensive, in aggregate, this is not a small population that can be ignored from either a societal or economic standpoint.

“If we come back to how we think about capa­city planning for large events or capacity planning going out 5 or 10 years in the future,” says Miller-Jones, “the statistical relevance sphere is about large populations that are well-connected. But there’s 44% of the world that, from a single event standpoint, may not be significant, but it is relevant to our content provider and their access to audience, their increase of subscribers, churn rate, and even revenue.”

And that brings us back around to the idea of shrinking data centres. Like the central Appalachian region that Help Me Reconnect is first targeting, there are many rural or remote places tucked into the mountains, down in hollers, or out on remote deserts. None of these are ideal places to put a data centre of any size, and they are certainly incompatible with mammoth data centres that consume significant amounts of energy and water for cooling. Yet many of these places also have existing buildings that could be retrofitted to truly deliver edge computing where it’s needed most: to the vastly underserved 44% of the population who have yet to be able to robustly engage in the digital world most of us take for granted. It’s time to change that, one edge device at a time.
