Service Velocity Taking Hold in the Online Video Space

While we’re just beginning to hear about service velocity in the online video space, the term appears as far back as 2007 in the context of networks and telecoms, and it is cited in academic papers referenced within cable industry patents from early 2008.

However, it is only in the past couple of years that I have become aware of the term in relation to service deployment for streaming media. There is a reason for this, and I will explain both the cause and the effect below; it is yet another string to the bow of my broader argument that we should anticipate significant macro-change in the industry over the next 2 to 5 years.

Let me first explain the service velocity concept by quoting a 2012 article in which Carl Weinschenk, senior editor of Broadband Technology Report, gave a good definition:

Service velocity, as the name aptly implies, is the set of skills and infrastructure that enables service providers to offer the spectrum of sales, deployment, repair, upgrading, and other requisite capabilities in a speedy manner.

The idea is fairly straightforward: Operators who anticipate where business will come from will be able to offer it more quickly.

The traditional approach seems reasonable: When a prospect materializes—either by contacting the operator or after being contacted by the MSO's sales staff—an assessment is done to determine if and how the business can be reached and if it makes sense for the operator to do so.

The problem with this approach grows as operators aim at bigger and more sophisticated potential customers. Since those prospects are bigger, they likely are being courted by other providers as well. If so, they most likely can provide services more quickly—i.e., with higher service velocity—than carriers that are starting from scratch and who also have to spend some time determining if they even want the business.

Gen1 vs. Gen2

Back in 2012 we were in the midst of discovering what I call “Gen2” virtualization. By this I mean that network service operators who had traditionally thought only of building their networks from dedicated hardware building blocks (Gen1) were starting to accept that many elements of service could be abstracted from a common commercial off-the-shelf (COTS) underlying hardware environment—essentially x86 computers. The deployment of hosted servers was no longer a combined hardware and software responsibility, and infrastructure as a service (IaaS) providers were inviting operators to deploy their software services within managed, hosted networks of compute resources. This meant that in the event of a hardware failure, rather than losing continuity, the service operator could replicate a copy of the software from the failed machine (or another source) to a new machine, and all functionality would quickly be restored. In the meantime, the hardware operator could send out an engineer to replace the physical unit and commission it back into the pool of resources that the service operator could draw from as required.
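
As a minimal sketch of that recovery pattern (none of this is from the article; the health probe, image name, and provisioning call are hypothetical stubs standing in for whatever the IaaS provider actually exposes), the idea looks something like this:

```python
import time

MACHINE_IMAGE = "service-image-v1"   # hypothetical image of the operator's software


def is_healthy(instance_id: str) -> bool:
    """Hypothetical health probe, e.g. an HTTP check against the hosted service."""
    ...


def launch_from_image(image: str) -> str:
    """Hypothetical IaaS call that boots a fresh COTS machine from the image
    and returns its identifier."""
    ...


def keep_service_alive(instance_id: str) -> None:
    # If the underlying hardware fails, replicate the software image onto a
    # new machine from the shared pool; the broken box is repaired offline
    # and later returned to that pool.
    while True:
        if not is_healthy(instance_id):
            instance_id = launch_from_image(MACHINE_IMAGE)
        time.sleep(30)
```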

In Gen1, physical appliances had their underlying compute resource tailored to the application running on that box; Gen2 decoupled the application from the hardware, and the resulting model has been called “cloud” ever since.

What is key to note, however, is that beyond the operational benefits of this type of virtualization (most obviously resilience and scaling), the time needed to implement functions on the network was significantly shortened. There was no longer a need to “roll trucks” to install specialized appliances in remote locations on the network: a “vanilla” cloud hardware resource was already available, and an application could be launched as quickly as the image of that computer could be uploaded to the hardware and booted up.

While it’s been somewhat scary for a large network operator to jump in and change all its dedicated routers for COTS servers running routing software, there have been several advantages for small businesses wanting to create distributed networks of database servers, storage services, and so on. First and foremost, they could reduce their trips to data centers: the overhead for a small business of travelling across a city or a continent to maintain hardware was effectively amortized into the service fees charged by the cloud operator. The upward scale available to even a one-man business is vast (bounded, essentially, by the operating costs of a scaling and hopefully profitable business), and that brings great opportunity to small businesses. But an even more important reason that “cloud” has been successful is that peaks of demand can be met without sinking huge capital into technology that is only used for peaks. In nearly all businesses, the daily ebb and flow of traffic through e-commerce platforms and the like means that the majority of fixed infrastructure sits unused most of the time. The larger the scale of the enterprise, the larger the efficiency that can be gained by moving infrastructure costs to a use-based model.
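
To put purely hypothetical numbers on that peak-versus-average point (the figures below are invented for illustration, not taken from the article), consider a platform that needs 20 servers for its daily peak but averages 4:

```python
# Invented figures, purely to illustrate peak-sized capex versus use-based costs.
peak_servers = 20              # capacity the daily traffic peak demands
avg_servers = 4                # average concurrent load across the month
hours_per_month = 730
owned_cost_per_server = 300    # all-in monthly cost of an always-on owned server
cloud_cost_per_hour = 0.50     # pay-as-you-go price per server-hour

owned_total = peak_servers * owned_cost_per_server                  # sized for the peak
cloud_total = avg_servers * hours_per_month * cloud_cost_per_hour   # pay only for use

print(f"Owned, peak-sized: {owned_total} per month")   # 6000
print(f"Cloud, use-based:  {cloud_total} per month")   # 1460.0
```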

Within the Gen2 environment, the complete application function of a “traditional” appliance has typically been imaged and optimized to run on higher-powered, abundantly available compute resources deployed in the cloud’s data centers. This means that where you traditionally had a video encoder farm to produce all the different source formats of a video needed to reach all your desired target devices, now you could generate a single top-level mezzanine source, submit this to your cloud infrastructure, and activate as many servers (running your encoder image) as you needed to produce all the formats. You have quickly delivered the service, with the advantage that you only need to turn on all the servers a few minutes before they are needed, let them boot, confirm they are ready, and away you go.

Once the encoding task is complete, you could turn off all the cloud servers, and in a pay-as-you-go IaaS cloud (such as AWS or Azure) you would also be turning off your costs until the next time they are needed. (Note that traditional Gen1 infrastructure, even if turned off, would still cost warehousing or office space, security, manpower to turn it back on when needed, and so on.)
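
As a rough sketch of that burst-encode workflow (AWS EC2 via boto3 is used here only as an example of a pay-as-you-go IaaS API, and the AMI ID, instance type, and rendition count are all hypothetical):

```python
import boto3  # AWS SDK for Python; assumes credentials are already configured

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Spin up one encoder instance per output rendition, booted from a
# hypothetical machine image that contains the encoder software.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical encoder image
    InstanceType="c5.4xlarge",
    MinCount=6,
    MaxCount=6,
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]

# ... submit the mezzanine source, wait for all renditions to land in storage ...

# Once the job is done, terminating the instances also terminates the cost.
ec2.terminate_instances(InstanceIds=instance_ids)
```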

The Gen2 model, where images of the “old” appliances are run on ephemeral resources, is a great “Lego brick” introduction to virtualization. Most engineers and architects can understand it, and indeed we have seen many engineers who until 4 or 5 years ago had very little regard for distributed computing and virtualization embrace the cloud wholeheartedly as they learned to scale their applications. What is notable here is that, in the application world, this scale gives even a very small company the ability to compete for opportunities with large, heavily invested network operators.

To return quickly to Weinschenk’s definition of service velocity: “Since those prospects are bigger, they likely are being courted by other providers as well. If so, they most likely can provide services more quickly—i.e., with higher service velocity—than carriers that are starting from scratch and who also have to spend some time determining if they even want the business.”

The Gen2 model has given application developers a service velocity that can compete with that of a much larger organization, since small businesses usually have much shorter decision-making cycles and more agile leadership. The traditional advantage an operator had when deciding to offer a service like TV or voice online was monopolistic, allowing the operator to set the pace of service velocity and almost completely de-risk even vast capital investment. Now applications and services that deliver large subscriber revenues can be rolled out worldwide and launched in a few hours.

This vast change in service velocity, coupled with deregulation in the telecoms sector over the past 20 years, has created a vast, thriving market in which OTT online operators take money from the underlying telco operators’ subscribers.

Looking back at the ’90s, and even into the mid-2000s, telcos were still very much expecting to provide the shopping portals, the walled-garden models of content, and other managed services. These managed on-net services are nearly always so close (just a press of a remote control) to equivalent OTT providers offering a better, cheaper, or more optimized service that they have to leverage the operator’s ownership of the subscriber’s access network (by offering higher-resolution images, better variety, easier discovery, and so on), or else the subscriber will simply shop OTT.

Net neutrality has also been a consideration. While everyone gets the idea that the market must be competitive, only one ISP serves any particular end user. That access circuit has extra physical install costs, and the ISP—and the ISP alone—has an opportunity to offer “advanced managed network services” over that connection. That is a headache both for the telco that invested in that infrastructure expecting to upsell managed services and provide ROI to its own shareholders, and for the regulators, who would like the telcos to continue investing in developing access networks more widely even as the telcos find themselves threatened by the risk of opening very unequal markets in which only those with large capital funds can deploy managed services within the operator network footprints.

What does this all have to do with service velocity?

Let’s recap:

  • Gen1—The “traditional” network with intelligent appliances at the edges and a dumb pipe in the middle.
  • Gen2—The “virtualized” appliance where the application is no longer fixed permanently at the edges of the network but can now be moved to the resource most suitable for the optimized delivery of the service, and the use of hardware resources is strictly tied to real demand for the delivery of service.

Gen3 Service Velocity

So what could Gen3 be?

While there are many applications that can run on networks, ranging from gaming to banking to TV, there are (currently) only subsets that can run within networks. Almost exclusively, these are network protocols of one type or another.

Let’s drill into some of the more familiar network applications, and then broaden into some applications that cross the line between the network and the pure application space.

First, a couple of obvious ones: the domain name system (DNS), which turns www.streamingmedia.com into a number that routers can use to route a request, and the internet protocol (IP), the very essence of how data gets from one place to another in a mixed mesh of networks of networks.
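
The DNS half of that can be seen from any machine with a few lines of Python’s standard library (the addresses printed will vary by resolver and over time):

```python
import socket

# Ask the local resolver to turn the hostname into routable addresses.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
    "www.streamingmedia.com", 443
):
    print(family.name, sockaddr[0])
```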

There are others, too. MPLS is a way of carrying IP over various forms of fixed-line networks, and it can be used by operators to designate priorities for traffic, ensuring that low-latency banking data can be shipped faster than, say, a background software update.

MPEG-TS is one that will be closer to the Streaming Media audience, and is just one of many layers of coordinated media and network protocols that are implemented at either end of a data transmission to ensure that a video signal can be transmitted and received.
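
To give a flavour of what that layer looks like on the wire, the sketch below walks the fixed 188-byte packets of a transport stream (a local capture.ts file is assumed) and tallies the packet identifiers (PIDs) that tell a receiver which streams are present:

```python
from collections import Counter

PACKET_SIZE = 188   # every MPEG-TS packet is exactly 188 bytes
SYNC_BYTE = 0x47    # and begins with the sync byte 0x47

pid_counts = Counter()
with open("capture.ts", "rb") as ts:          # hypothetical local capture
    while packet := ts.read(PACKET_SIZE):
        if len(packet) < PACKET_SIZE or packet[0] != SYNC_BYTE:
            break                              # truncated file or lost sync
        pid = ((packet[1] & 0x1F) << 8) | packet[2]   # 13-bit packet identifier
        pid_counts[pid] += 1

for pid, count in pid_counts.most_common(5):
    print(f"PID 0x{pid:04x}: {count} packets")
```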

In the Gen1 and Gen2 world, we have often taken blocks of these protocols (e.g., IP/HTTP/HLS) and, using application-specific appliances or virtual appliances, processed a group of them to acquire a video signal, then passed it along a “conveyor belt” of appliances to be reprocessed until it is ready for the end user to consume.

That conveyor belt has become much more agile in the Gen2 world. We can daisy chain a limitless number of virtual appliances together to translate almost any combination of protocol sources to any combination of output protocols, leading to a world of virtual encoding, transmuxing, etc.
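
As a minimal sketch of two links in such a chain (assuming ffmpeg is installed and a local source.ts exists; neither is specified in the article), the first stage rewraps an acquired transport stream into a mezzanine file and the second repackages it as HLS with no re-encoding, which is exactly the kind of transmuxing step a virtual appliance performs:

```python
import subprocess

# Stage 1: acquire the source and rewrap it as a mezzanine file (no re-encode).
subprocess.run(
    ["ffmpeg", "-y", "-i", "source.ts", "-c", "copy", "mezzanine.mp4"],
    check=True,
)

# Stage 2: hand the mezzanine to the next "appliance", which packages it as HLS.
subprocess.run(
    ["ffmpeg", "-y", "-i", "mezzanine.mp4", "-c", "copy",
     "-f", "hls", "-hls_time", "6", "playlist.m3u8"],
    check=True,
)
```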

But in pure computational terms, these compute units/appliances are inefficient. If I want to “stack” three tiny network application protocols, and for some reason that requires three separate Gen2 appliances, then I may need to boot three large operating systems on three compute resources to complete my task.

Gen3 computing architectures look beyond that model. They assume not only COTS hardware, but also a common OS, live and running on the host computer. Instead of needing to provision new hardware and network addresses, then distribute the OS image, boot that, clear security, and finally launch the network application you need for that task, a Gen3 model would expect to run the network application on the already available COTS resource, and to launch that application almost instantaneously, not least because it is significantly smaller than the “full Gen2” image previously used.
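
Containers are one widely used embodiment of this idea (the article does not name a specific technology): because the host kernel is already running, only the small application payload has to start. A minimal sketch, assuming Docker is installed and using nginx purely as a stand-in for a small network application:

```python
import subprocess
import time

start = time.perf_counter()

# No OS image to copy and boot: the shared kernel is already running,
# so only the (much smaller) application payload has to start.
subprocess.run(
    ["docker", "run", "--rm", "-d", "--name", "gen3-demo", "nginx:alpine"],
    check=True,
)
print(f"Network application launched in {time.perf_counter() - start:.2f}s")

# Tear the demo container down again.
subprocess.run(["docker", "rm", "-f", "gen3-demo"], check=True)
```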

Historically, we have seen facilities that used to be data warehouses gradually convert to providing cloud hosting. Traditionally, these data centers were large customers of the network operators or, in some cases, valuable channels for selling to common customers.

The network itself, however, has always had a Gen1 architecture. Core exchanges and routing infrastructures terminated very specific network types, which meant that the networks underpinning all these hosted services were themselves very inflexible.

While the network interface cards themselves always require specific interfacing, the routing core that these interfaces connect to has, over the years, begun to look increasingly like a traditional COTS computer core—largely a testament to the progress of commodity chipsets. A £1 million Cisco router of 5 years ago probably has significantly less processing power than a contemporary Dell desktop. Accordingly, those looking to increase service velocity in the currently Gen1 network operator space are sharply focused at the moment on how the actual network itself can be virtualized.

Given that this means managing a distributed cloud of resources, and given that Gen2 is already being eclipsed by the more agile application developers’ thirst for Gen3 architecture (because it is cheaper, more dynamic, and more scalable still than Gen2), these network operators are almost all exploring Gen3 virtualization models. They want maximum availability and maximum service velocity. That’s because where they see an opportunity to offer a managed service, they want to create and expand into that market, leveraging their ownership of the network before the regulator prevents them from doing so in order to protect the OTT players that compete in the same space but have no ability to manage the network.

What may emerge, and I believe this will be the case, is that operators will initially take advantage of this massively increased service velocity to “show the way” and to ensure the capability is well-defined, and will then open up the ability to offer managed network services on-net to third parties, in a continuation of the IaaS model currently available only on COTS in data warehouses.

CDNs, IoT platforms, retail, and fintech will all see advantages they can leverage within the managed services environment. The ability to run applications deep within operator networks will create a new generation of network services, many of which we cannot imagine today, but all of which will become available almost as soon as they are invented, at scale and with quality of service and reliability that will surpass our expectations today.

So expect the very ground the streaming media sector is built on to start moving significantly over the next few years. Traditional alliances may change dramatically, some for the better and some for the worse. Operators may rapidly deploy their own CDN models, or they may open up to invite those inexperienced with managed networks to come and evolve with them.

Whatever the outcome, as the powerful network operators fully virtualize, this inexorable evolution will radically change the service velocity they bring to market, and this will have deep and far-reaching effects.
