Does Streaming's Future Success Depend On Discoverability?
As streaming has grown in demand and popularity over the last several years, a problem has begun to take shape. But don’t get me wrong, it’s a good problem: too much content. It’s not just the content on a single provider. Netflix, Disney+, Warner Bros. Discovery, and Peacock all have deep and rich libraries of content. But it’s the combination of services and their deep libraries that creates the underlying issue.
Let’s do a little math. If the average consumer has three streaming subscriptions and each streaming platform has 50,000 titles, that’s 150,000 titles to choose from. How does someone figure out what to watch, not only in a single service, but across all three?
Solving this problem will take a significant amount of effort. In fact, many streaming operators already provide some sort of recommendation engine. But that does not address the bigger picture of how content gets recommended across service providers. The answer, though, might not be one of functionality, but of data.
The core issue with unifying content discovery is that each service provider may represent title metadata differently, which makes cross-service search unreliable. If a user types in a specific set of criteria and the metadata differs between services, the search won’t return the most appropriate titles. But searching for content is only half the battle. In many cases, viewers don’t know what they want to watch, and when that’s the case, the service providers must recommend content to them. And that brings us back to the metadata: if that data differs between providers, then each service recommends content differently, based on its own algorithms and its own approach. The upshot of this search-and-recommendation problem is that viewers may ultimately pare their subscriptions down to the services that most reliably surface content suited to their needs and wants.
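To make the schema problem concrete, here is a minimal sketch. The field names, titles, and genre labels are all invented for illustration: two providers describe the same film with different metadata schemas, and a naive search that assumes one schema silently misses half the catalog.

```python
# Hypothetical example: the same film represented by two providers
# using different metadata schemas (all field names are assumptions).
provider_a = {"title": "The Voyage", "genre": "Sci-Fi", "runtime_min": 118}
provider_b = {"name": "The Voyage", "category": "science fiction", "duration": "1h58m"}

def search_by_genre(catalog, genre):
    """Naive cross-catalog search that assumes a single 'genre' field."""
    return [t for t in catalog if t.get("genre", "").lower() == genre.lower()]

# The search matches the title in provider A's catalog but misses the
# identical title in provider B's, because both the field name and the
# genre vocabulary differ between schemas.
print(search_by_genre([provider_a], "sci-fi"))  # finds the title
print(search_by_genre([provider_b], "sci-fi"))  # finds nothing
```

A shared metadata standard would collapse `genre`/`category` (and their vocabularies) into one canonical field, so a single query works against every provider.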
Solving this problem, then, requires a two-part solution. The first part is standardizing the data. One technology company, Vionlabs, is attempting to do this with an added twist: relating the metadata to viewer sentiment. By analyzing content frame by frame, they can determine the emotion to which that frame, and ultimately the content, is best attributed. This allows them to categorize content more granularly for optimized recommendations. With this kind of sentiment analysis baked into a standardized set of title metadata (and technology that content platforms can use to do their own analysis), recommendations can be more finely tuned. And if every content provider uses the same classifications and metadata, recommendations become consistent across all platforms.
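The consistency claim can be sketched in a few lines. Everything here is invented for illustration (the tag vocabulary, titles, and scores are assumptions, not Vionlabs' actual model): once titles are scored against a shared emotion vocabulary, even a trivial similarity ranking produces the same recommendation order on any platform that adopts the standard.

```python
# Hypothetical sketch: titles scored against a shared emotion vocabulary.
# Tags, titles, and scores are invented; the point is that any platform
# using the same standardized scores produces the same ranking.
SHARED_TAGS = ["tension", "joy", "melancholy"]

catalog = {
    "Title A": {"tension": 0.9, "joy": 0.1, "melancholy": 0.3},
    "Title B": {"tension": 0.2, "joy": 0.8, "melancholy": 0.1},
}

def recommend(viewer_profile, catalog):
    """Rank titles by dot-product similarity with the viewer's emotion profile."""
    def score(tags):
        return sum(viewer_profile[t] * tags[t] for t in SHARED_TAGS)
    return sorted(catalog, key=lambda name: score(catalog[name]), reverse=True)

# A viewer who favors tense content gets "Title A" first on every
# platform that shares the same tags and scores.
print(recommend({"tension": 1.0, "joy": 0.0, "melancholy": 0.0}, catalog))
```

The design point is that the ranking logic can stay proprietary per platform; it's the shared vocabulary and scores that make results comparable across services.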
The second part is for streaming platforms to open up their content libraries for search. This is already starting to happen, but in some cases content pages sit behind paywalls and access credentials, which keeps them out of search results entirely. Ultimately, this doesn’t make much sense: if the content information were exposed in search results, it could convince new subscribers to join.
Of course, the lack of unified search and recommendations isn’t going to stop streaming from growing, but it will change viewer behavior. Rather than three, four, or five services, viewers may end up with just two or even one, making it harder for services to find and retain subscribers. As marketing costs go up, content production budgets could suffer, resulting in less unique and original content over time.
The future of streaming is about unification built on top of a single open standard of content metadata. Right now, it’s a matter of seeing the forest for the trees. Individual streaming operators see only themselves and their content (the trees), but neglecting to see their services from the viewers’ perspective (the forest) will ultimately keep industry conflict high and viewer satisfaction low, inhibiting the complete transition from broadcast to streaming.