At IBC, an AI Startup Hacks the Brain for Audience Insights


A golden rule of narrative art is to know your audience, but the industry is on the cusp of taking that rule to its logical extreme: hacking the emotional responses of viewers' brains.

“We are taking the tools that Netflix has built and many that it hasn’t yet thought of in order to share with everybody else,” explained Yves Bergquist, CEO of artificial intelligence startup Corto, in an IBC keynote.

Netflix famously derives much of its content-commissioning and recommendation insight from data culled through behavioral analysis of its audience, but Bergquist says this barely scratches the surface of what's possible.

“What would happen if you could truly understand the cognitive relationship between content and audiences?” he asked rhetorically. “There would be a massive revolution in how stories are told and an explosion of value in media companies. Media companies know how to tell stories today, but they could be far more financially successful tomorrow if they could always tell stories that are right.”

Corto is building a comprehensive knowledge engine to help media and entertainment companies develop deep “genomics”-type insights into how content resonates with audiences.

“We will be able to extract every possible genomic type of information about a viewer and what they are watching,” he explained. “We can map how every single emotion is represented in a film and map the emotional journey of each character in every scene. We can output a numerical value to correlate against box office returns and TV ratings.”
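
To make that concrete, here is a minimal sketch of the kind of correlation Bergquist describes, assuming a single per-film emotional score. The scores, box office figures, and variable names below are invented for illustration; this is not Corto's actual data or method.

```python
# A minimal sketch of correlating a content-derived "genomic" signal against
# a commercial outcome. All values below are invented placeholders.
import numpy as np

# Hypothetical per-film score (e.g., averaged scene-level emotional
# intensity on a 0-1 scale) and each film's box office gross in $M.
emotion_scores = np.array([0.62, 0.48, 0.81, 0.55, 0.73])
box_office_usd_m = np.array([210.0, 95.0, 340.0, 120.0, 275.0])

# Pearson correlation between the content signal and the commercial outcome.
r = np.corrcoef(emotion_scores, box_office_usd_m)[0, 1]
print(f"correlation between emotional score and box office: r = {r:.2f}")
```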

More than that, Bergquist says, the analysis can track viral pathways, showing which social communities are driving buzz into the mainstream.

In related projects at the Entertainment Technology Center in Los Angeles, where Bergquist is also director of AI and Neuroscience, a team is working to apply machine intelligence to model the algorithmic structures of narrative in film and TV, and to build neurobiological models of audience emotions using MRI.

“We are working to extract every element from a piece of content: colors, music, edit cuts, frame composition, white balance, and so on. If we create a semantic representation of every component of content, we may eventually be able to ask questions that are impossible to comprehend now, like ‘What is the meaning of green?’ and understand why a certain tonality of song works in the context of the characters in a particular scene.

“Next year we will use MRI scans to measure brain activity to infer what emotional response a character or narrative elicits. That really is the ultimate. There is no greater level of granularity beyond this.”

He added, “It will happen.”
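
As a thought experiment, a “semantic representation of every component of content” might look something like the record below. This is a speculative sketch based only on the component list Bergquist names; it is not Corto's or ETC's schema, and every field name is hypothetical.

```python
# A speculative per-scene "semantic representation," drawn from the
# components Bergquist mentions (colors, music, edit cuts, frame
# composition, white balance). All field names here are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SceneGenome:
    scene_id: str
    dominant_colors: list = field(default_factory=list)    # e.g. ["green", "amber"]
    music_key: Optional[str] = None                         # tonality of the score
    cuts_per_minute: float = 0.0                            # editing pace
    white_balance_kelvin: int = 5600                        # color temperature
    character_emotions: dict = field(default_factory=dict)  # character -> emotion label

# Once every scene is encoded this way, a question like "what is the
# meaning of green?" becomes an aggregation over scenes that share a
# dominant color.
scene = SceneGenome(
    scene_id="ep1_sc12",
    dominant_colors=["green"],
    music_key="D minor",
    cuts_per_minute=8.5,
    character_emotions={"Ana": "dread"},
)
print(scene)
```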
