OTT Moves Toward Microservices as Speed Becomes an Issue


Historically, investment in streaming-technology innovation was limited to the very largest software vendors in the space. Microsoft, Adobe, and Real were, for some time, the only game in town. Their technologies were often tied to specific chip architectures and operating systems. The only option operators had was to deploy these technologies onto dedicated machines. Once deployed, the only way to offer any redundancy was simply to double up the infrastructure. And, of course, all of that led to a complex balance of maintaining availability while not sinking vast sums into unused infrastructure.

According to the traditional broadcast mindset, this was “how things had always been done.” And for many years, the industry had the advantage of dictating what the end-user devices, and the networks that reached those devices, would be.

But today’s smartphone generation does not care about corporate strategy. It cares only about getting hold of the content it wants to consume, and if that content, ready for their device, is not easy to find through a well-known operator’s offering, users have no hesitation in looking for it on an alternative (sometimes less-than-legal) platform.

Operators that want to keep pace with those users and keep them engaged eventually face a choice: either build new infrastructure, leave the legacy infrastructure to decay, and rinse and repeat, or move to a virtualised model where the right infrastructure can be dialed up in a few moments and retired as soon as it becomes economically inefficient.

But not all virtualised models are the same.

Image/Virtual Machine

The most familiar virtualisation model is the replication of a machine image onto a virtual host (a machine on which multiple separate machine images may be running). Essentially, a full computer image, including all the software and the operating system (a virtual machine, or VM), is transferred to the virtual host, which launches that computer with its own isolated allocation of memory and CPU and, through the virtual host, access to the underlying hardware in a way that “shares” that hardware with the other VMs the virtual host operates.

This approach is simple to understand for a user familiar with his or her own desktop machine: “Take a snapshot of the desktop, transfer to the virtual host, boot and remotely log in, then carry on.”

The logical simplicity of this VM model is not without benefits. Each VM contains its own operating system (OS), and this gives the developer flexibility and freedom to define their own environment. For this reason, a virtual host can usually support a mix of machines, with one being a Linux machine, another a Windows Server, and yet another a different version of Linux or potentially something else altogether.

However, this approach is also very resource-inefficient. VMs are large (OS kernels are rarely small), and each VM must boot and run its own kernel independently. In practice, no matter how much you optimise that guest OS, it will still consume significant CPU cycles and gigabytes of available memory.

In addition, while there are unsupported open source options, virtual hosts (sometimes called “hypervisors”) are very expensive software when purchased with commercial support.

From a network operations perspective, moving the VM to the virtual host may take some time, and each boot-up may take further seconds, if not minutes. That is time likely to cause real stress during a critical live sports broadcast if the workflow is waiting on that unit to boot. So even today, over-the-top (OTT) operators delivering mission-critical live broadcasts on VM-based virtualisation still typically run everything with at least 1:1 redundancy, if not 1:1+n.
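To make the weight of the VM model concrete, here is a minimal sketch (in Python, wrapping the QEMU/KVM command line) of booting a guest from a disk image. The image name and resource sizes are hypothetical, and this is just one common open source way to run a VM, not a recommendation of any particular stack.

```python
import subprocess

# Hypothetical disk image containing a full guest OS plus the application.
VM_IMAGE = "encoder-vm.qcow2"

# Boot a full virtual machine under QEMU/KVM. The guest brings its own
# kernel and OS, so this call typically takes tens of seconds to minutes
# before the application inside is actually ready to do useful work.
subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",                              # hardware-assisted virtualisation (KVM)
    "-m", "4096",                               # 4 GB of RAM reserved for this one guest
    "-smp", "2",                                # 2 virtual CPUs
    "-drive", f"file={VM_IMAGE},format=qcow2",  # the transferred machine image
    "-nographic",                               # headless operation
], check=True)
```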

Linux LXC

In February 2007, the kernel-based virtual machine (KVM) was introduced to the mainline Linux kernel, meaning that any installation of that kernel release or later could be set up as a virtual host for VMs.

By 2008, a slightly different approach was emerging, and although there were numerous virtualisation efforts going on, Linux Containers (LXC) typified where thinking was moving and, to some extent, popularised the term “containers.”

LXC offers many of the benefits of virtualisation, but with some limits: the underlying OS is a specific “shared” Linux kernel. This single underlying OS replaces both the role of the virtual host in the VM paradigm and, importantly, that of the guest VM image’s OS.

All the guest image contains is the differences (deltas) from the LXC host distribution that it is launched on. That would include, for example, any session-specific information and the application itself.

These collections of deltas have become known as “containers.” In contrast to a VM, a container can be transferred over a network much more quickly to the host, or can even be locally cached for increased launch efficiency.
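As a rough illustration of how lightweight this is in practice, the sketch below drives the standard LXC command-line tools from Python. The container name, distribution release, and workload binary are hypothetical, and the exact template options can vary between LXC versions.

```python
import subprocess

NAME = "packager-01"  # hypothetical container name

def sh(*args):
    """Run an LXC command and fail loudly if it errors."""
    subprocess.run(args, check=True)

# Create a container from the generic "download" template. Only the
# userspace deltas are fetched; the kernel is the host's own.
sh("lxc-create", "-n", NAME, "-t", "download", "--",
   "-d", "ubuntu", "-r", "jammy", "-a", "amd64")

# Starting the container is closer to starting a process than booting
# an OS: there is no guest kernel to bring up.
sh("lxc-start", "-n", NAME)

# Run the actual workload inside the container's isolated namespaces
# (the binary path here is purely illustrative).
sh("lxc-attach", "-n", NAME, "--", "/usr/local/bin/package-stream")
```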

In a network environment where you can ubiquitously dictate the network fabric’s virtual host kernel (such as in an ISP, a cloud, or a telecoms network), this opens an architectural opportunity: you can create libraries/registries of containers, each set up in a specific way or each capable of fulfilling a very specific function, that collectively deliver the function of a native monolithic application. A container can launch about as quickly as a native application, since there is no OS or kernel to boot, and yet it retains the benefits of virtualisation such as process and memory isolation, scalability, and increased availability.

Many larger organisations were already using service-oriented architecture approaches in order to scale. Small functions could be launched and composed into workflows that delivered complete network applications equivalent to a traditional monolithic application. With the newfound granular ability to scale only those parts of the workflow that mattered, this offered a new approach, particularly for any operator focused on network and distributed computing. Individual functions could be scaled up to meet demand, and with a density of resource use that the traditional monolith could not offer.

Over time, and particularly as containers have popularised ideas about highly distributed function, people have started to talk a lot about microservices architectures (MSAs). While the vernacular term is commonly “microservices,” I will generally use the abbreviation MSA from here on to keep the big picture of architecture in our minds.

It’s important to note that while containers have become a popular new way to package and deploy microservices, they were not, in and of themselves, enabling anything particularly new in the space.

It is also worth noting that containers are very useful from the point of view of an infrastructure provider (such as a public cloud provider or telco), since a container approach allows third-party access to its infrastructure in a heterogeneous way, and in a way where the provider can provision containers and better share resources among multiple use cases.

However, MSA delivered without containers may be simpler to deploy for an operator that uses its own infrastructure, is not worried about multi-tenant isolation or container resource quotas, and may want direct access to specific hardware functions in the infrastructure, such as field-programmable gate arrays (FPGAs) or graphics processing units (GPUs). So in some situations containers can actually add complexity to an MSA, and while the two are almost synonymous in current marketing-speak, it is important to understand the distinction at a technical level.

Before we look at some MSA strategies and technologies that are being adopted by the OTT and streaming industry, let us quickly summarise our understanding so far:

Understanding Microservices Architecture

Each microservice implements some element of a wider application and communicates over a network (be it a fancy service bus or a nice, simple HTTP interface). The approach has been around a while, with the following goals:

  • Smaller, self-contained services become easier to test than a large monolith.
  • The isolation between services means that one going bad is far less likely to take down anything else.
  • Provided its interfaces stay backwards-compatible, each service can evolve and move forward relatively independently of the others. For example, Amazon’s recommendation engine doesn’t end up tied to the lifecycle of the checkout process.
  • Breaking up the monolith makes it simpler to scale the aspects of the overall application that are the bottlenecks “simply” by launching more instances of them on additional hardware.

Although “simply” in that last bullet point makes light of the fact that this can frequently be a very complex thing to do (largely due to state management; for example, scaling the database that tracks which resource is doing what for whom is not as simple as “just launching a second instance”), it is still without a doubt easier than it would be in the monolith world.
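As a minimal sketch of the “nice, simple HTTP interface” idea above, the following self-contained Python service exposes a single health endpoint and a single job endpoint. The service name, endpoint paths, and port are purely illustrative, not taken from any real product.

```python
# A deliberately tiny "microservice": one self-contained process,
# speaking plain HTTP, that could be scaled by starting more copies
# behind a load balancer.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PackagerService(BaseHTTPRequestHandler):
    def _send_json(self, status, payload):
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path == "/healthz":
            # Isolation pays off here: if this service dies, only
            # packaging is affected, not the whole application.
            self._send_json(200, {"status": "ok"})
        else:
            self._send_json(404, {"error": "unknown endpoint"})

    def do_POST(self):
        if self.path == "/package":
            length = int(self.headers.get("Content-Length", 0))
            job = json.loads(self.rfile.read(length) or b"{}")
            # Real packaging work would happen here.
            self._send_json(202, {"accepted": job.get("asset", "unknown")})
        else:
            self._send_json(404, {"error": "unknown endpoint"})

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PackagerService).serve_forever()
```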

Common Strategies

This architectural approach sounds like a simple adjustment of thinking, but in practice, it has taken a long time for the traditional broadcast culture to take an interest in the benefits above.

Recent rapid consumer adoption has given new urgency to content delivery optimisation at every level, and MSA has suddenly become not only relevant but central to every forward-thinking, scalable, and efficient strategy for meeting that explosion of demand.

However, given that software strategies can be implemented in an almost infinite number of ways, there are numerous “pools” of approaches emerging in the real world of MSA implementation.

This is a roundup of some of the key players that are active in the OTT space. It is far from exhaustive, but we have included one or two links to help you explore further.

ETSI NFV/SDN AND MEC

ETSI is the European Telecommunications Standards Institute, which has an initiative running called Network Functions Virtualisation/Software Defined Networking (NFV/SDN). The idea is that if carriers are increasingly going to deploy reprogrammable fabric within the telecoms network, then the fabric must run an increasingly diverse range of functions. To deploy network architectures as flexibly as possible, alignment between vendors and operators, especially in terms of the application programming interface (API) between all the components, is a useful standards track to follow. Likewise, the Mobile Edge Computing (MEC) initiative, renamed Multi-Access Edge Computing about 18 months ago, is looking at taking that standardisation out to the radio access networks of cellular and Wi-Fi operators, ensuring there is an ecosystem that telcos and vendors, and, subsequently, their clients, can use to build the networks of tomorrow using MSA as a first principle.

Adoption of and adherence to ETSI NFV/SDN comes in fits and starts on the show floor at events like NAB and IBC. Moving from demo or trial to production requires significant buy-in from the teams that need to provision fabric capable of running microservices deep out at the network edge.

However, natural churn as edge technology is replaced or updated means that ETSI’s forward thinking is a strong reference point for vendors, developers, and operators alike.

DOCKER

Developing against Linux LXC directly can be a reasonably involved effort. Docker gave LXC usability for the general user. When Docker was publicly released as open source in 2013, the timing was just right. Docker made it straightforward to set up an environment into which you could deploy a Linux container, and notably the Docker name, philosophy, and even logo are all about extending the paradigm of containers and the underlying logistics of distributing them. Docker moved on from LXC in 2014, introducing its own execution driver, libcontainer, so that it was no longer architecturally tied to LXC.
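For readers who have not used it, the everyday Docker workflow boils down to a handful of commands. The sketch below scripts them from Python purely for consistency with the other examples; the image tag and container name are hypothetical.

```python
import subprocess

IMAGE = "example.com/ott/packager:1.0"  # hypothetical registry/image tag

def docker(*args):
    """Invoke the Docker CLI and raise if the command fails."""
    subprocess.run(["docker", *args], check=True)

# Build the container image from a Dockerfile in the current directory.
docker("build", "-t", IMAGE, ".")

# Push it to a registry so any host in the fleet can pull it on demand.
docker("push", IMAGE)

# On a (possibly different) host: pull the image if it is not already
# cached locally, then start it. There is no OS to boot, so startup is
# close to native application launch time.
docker("pull", IMAGE)
docker("run", "-d", "--name", "packager-01", "-p", "8080:8080", IMAGE)
```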

Container execution is not the only part of the MSA model. Microservices on their own are easy to think of as atomically small functions wrapped in a container. I have heard it said that “containers on their own are useless, it is the orchestration and scheduling that make the strategy valuable”; hence “MSA” is more accurate than “microservices.” Without the ability to compose those microservices into a useful function for your operation, they are essentially unbuilt LEGOs!
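To underline that point, here is a sketch of the simplest possible composition layer: a plain Python script chaining two hypothetical HTTP microservices (a transcoder and a packager) into one workflow. The endpoint URLs and the source asset path are invented for illustration; real orchestration platforms add scheduling, scaling, and failure recovery on top of this basic idea.

```python
# The simplest possible composition layer: call one microservice,
# feed its result to the next.
import json
from urllib import request

TRANSCODER = "http://transcoder:8080/transcode"  # hypothetical endpoint
PACKAGER = "http://packager:8080/package"        # hypothetical endpoint

def call(url, payload):
    """POST a JSON payload to a service and return its JSON response."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

def publish(asset_url):
    # Step 1: transcode the mezzanine file into a set of renditions.
    renditions = call(TRANSCODER, {"source": asset_url})
    # Step 2: package the renditions for delivery.
    return call(PACKAGER, {"renditions": renditions})

if __name__ == "__main__":
    # Illustrative source path only.
    print(publish("s3://mezzanine/match-highlights.mxf"))
```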
