The Future of AI-Driven Products: Don't Believe the Hype


I was absolutely delighted that at IBC this year, all the marketing teams who had wanted to emblazon “AI-driven” all over their booths had clearly been tied up and locked in a cupboard, while the technical product managers and programme managers (by and large) quickly changed the term on all their printed materials to read “machine learning.”

Thank goodness for that, else I would have been arriving on their stands and asking awkward questions at the top of my voice to highlight quite how much vapourware they were marketing.

When it comes to buzzwords, the term “AI-driven” is a great candidate for most hyped of 2018.

While I am no authority on the subject, I did, unlike almost anyone in any of the companies trying to hype their products with the term, actually study artificial intelligence at Sussex University in the early ’90s.

Admittedly I didn’t complete the degree (going pro as a webcaster took over about halfway through), but in the time I was there—and in the years before, when as a kid I used to hang out in some of the university’s open lectures with some of the pioneering visionaries in the field—I did get a fair amount of exposure to the topic. Anyone for duodenary intelligence modelled on hypothetical 12-armed jellyfish as an alternative to humans and their two-handed binary? (Kudos to Professor Boden for that one!)

In the late ’80s, before I went to university, I recall attempting to recreate the theme of my favourite film of all time, War Games. I adapted some routines from the book Exploring AI on Your BBC Micro and tried really hard to bring the Cold War to a standstill by making my BBC computer play Noughts and Crosses (or, as it’s called in the U.S., Tic-Tac-Toe). That very simple set of routines perfectly exemplifies what AI still is today. (And yes, I failed to bring about the end of the Cold War, at least I don’t think it was me, but the attempt did generate an “out of memory” error on the BBC Micro quite quickly.)

The fact is, there is no genuine Artificial General Intelligence (AGI), nor any self-awareness, in any system available today, and we are many years away from either.

There is no breakthrough in AI driving “smart speakers” or “face recognition” or “the robots taking over our jobs.” It is not “trend inference,” nor is it “chat bots.” It is not “working out sports statistics by analysing video” nor is it “changing network routing in response to end device feedback.”

All those things are byproducts of what just 3 years ago was called “Big Data” being combined with existing machine learning and expert system approaches. Frankly, those approaches haven’t changed much in years, apart from having the big data plugged in, along with numerous deep learning and neural network techniques for mining that data.

Some experts may term these specific solutions “weak AI” or “narrow AI,” and I might just about tolerate either phrase were I to see it used in marketing material.

Narrow AI attempts to solve specific problems. Even Siri, Alexa, DeepMind, and IBM’s Watson are strictly narrow or weak AI models. They are not any form of general AI.

Narrow AI systems are considered “brittle” because they can fail when the questions put to them fall outside the limits of the application; they can break down unexpectedly if the scope of the task moves even slightly outside their design.

Many years ago we had no expectation that standalone narrow AI systems would be used in production systems, because we knew of their limits. The datasets that an application could analyse were very, very limited, and so we would never have relied on them.

The number of people that a voice dictation application on a standalone machine could take a voiceprint from was essentially one: the user of the local machine.

Fast-forward to today, and Alexa and Siri can compare voiceprints from every user of their platforms in a logically centralised big data set. This means they appear to work without training when, in fact, from the moment you turn these devices on, they are already well trained by tens of millions of users.

This is expert system engineering combined with big data and machine learning algorithms that can quickly analyse that data to find approximate matches and store new, incremental patterns as they appear. Machine learning is fine as a way to describe this (in my own pedantic book of technical terminology).
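
To illustrate what I mean by approximate matching at scale, here is a minimal, purely hypothetical sketch in Python. It is my own simplification, not Apple’s or Amazon’s actual pipeline: the fixed-length voiceprint vectors, the cosine-similarity threshold, and every name in it are assumptions for illustration only.

import numpy as np

class VoiceprintStore:
    """Toy sketch: approximate matching of fixed-length voiceprint vectors
    against a centrally held collection, with incremental storage of
    anything that looks like a new pattern."""

    def __init__(self, threshold=0.85):
        self.threshold = threshold      # cosine similarity required to call it a match
        self.prints = []                # the "big data" pool of stored voiceprints

    def match(self, voiceprint):
        """Return (index, score) of the closest stored print, or (None, 0.0)."""
        if not self.prints:
            return None, 0.0
        stored = np.stack(self.prints)
        sims = stored @ voiceprint / (
            np.linalg.norm(stored, axis=1) * np.linalg.norm(voiceprint))
        best = int(np.argmax(sims))
        return best, float(sims[best])

    def observe(self, voiceprint):
        """Match an incoming print; store it as a new pattern if nothing is close."""
        idx, score = self.match(voiceprint)
        if idx is None or score < self.threshold:
            self.prints.append(voiceprint)   # keep the new incremental pattern
            return None
        return idx                           # recognised an existing pattern

# The store "appears trained" to a brand-new user only because it already
# holds patterns gathered from everyone who came before.
store = VoiceprintStore()
rng = np.random.default_rng(0)
for _ in range(1000):                        # stand-in for a vast population of users
    store.observe(rng.normal(size=64))
print(len(store.prints), "patterns held centrally")

The point of the toy is that the “intelligence” a new user perceives is really the scale of the stored data; there is no reasoning in the matching code at all.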

But we should be cautious about describing this as “AI-driven” to the general public, since Siri and Alexa are a long way from offering AGI.

The general public (and even industry players) has a vision that the AI in “AI-driven” means there is an AGI being tasked with a specific focus. But this is far from the truth.

Try asking DeepMind to stop playing Go and make you a coffee (the Wozniak test). Or blow Watson up completely by asking it to stop playing Jeopardy! and interpret the meaning behind a poem (another of Boden’s favourites).

And therein lies the problem with muddling people’s expectations of what AI means through hyped and careless marketing.

If you shout “Help me!” at Siri, it has no real ability to help you, nor to intelligently work out what help you might need. At best, all it can do is offer you a range of very limited, basic options which you have to choose from using your own intelligence.

The same goes for face recognition, and indeed many other types of video or data analysis relevant to our industry: we see pattern matching in sports video, but that is essentially no different to voiceprint matching in digital assistants. It is just massive datasets being established and patterns being derived. They still need primary keying (teaching); just as “Hey Siri” gets the system going, so too does video analytics software need seeding by an intelligent human. There is no AGI behind any of this. There has been no great evolution of computing in the past decade that merits the use of AI in marketing.

Fundamentally, these technologies are no different to my late ’80s Tic-Tac-Toe game where I had to teach the emotionally dead, uncreative, and unintelligent computer what a winning state was within an extremely limited set of game rules.

In fact, let me return to that Tic-Tac-Toe game for a moment. The way that programme worked was quite simple: initially, it played against me, the operator (later I made it play itself). I would place an X, and the computer would place an O at random. This would repeat until I had won (at first, at least). The winning state was then stored in an array.

In each subsequent game, the computer would refer back to the dataset of winning states and evaluate if the current game play was comparable to any of those winning states—a simple process of pattern matching. Should there be a winning state in the array which matched the currently played layout, then the computer would fill one of the remaining spaces on the board with an O or an X that matched an archived winning state.
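
For the curious, here is a minimal sketch in modern Python of that same scheme, showing the version where the program plays itself. It is my own reconstruction of the idea from memory, not the book’s BBC BASIC listing, and every function and variable name is my own: play at random, archive any board layout that produced a win, and on later turns copy a move from any archived winning layout that is still consistent with the current board.

import random

# Board is a list of 9 cells: 'X', 'O', or ' ' (empty), indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def consistent(board, archived):
    """True if every filled cell on the current board also appears,
    with the same mark, in the archived winning layout."""
    return all(cur == ' ' or cur == old for cur, old in zip(board, archived))

def computer_move(board, mark, win_archive):
    """Copy a move from an archived winning layout if one fits; else play at random."""
    empties = [i for i, cell in enumerate(board) if cell == ' ']
    for archived in win_archive:
        if consistent(board, archived):
            for i in empties:
                if archived[i] == mark:      # reuse the archived winning placement
                    return i
    return random.choice(empties)            # nothing matched: pure trial and error

def play_self(win_archive, games=500):
    """Let the program play itself, archiving every winning layout it reaches."""
    for _ in range(games):
        board = [' '] * 9
        mark = 'X'
        while ' ' in board and not winner(board):
            board[computer_move(board, mark, win_archive)] = mark
            mark = 'O' if mark == 'X' else 'X'
        if winner(board):
            win_archive.append(tuple(board)) # store the winning state in the array

archive = []
play_self(archive)
print(f"{len(archive)} winning layouts archived")

There is no evaluation function, no lookahead, and no understanding of the game in there: just an array of past wins and a comparison loop.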

Very quickly the archive of winning states filled up, and after just a few games the computer would seem to exhibit “intelligence”: where it had initially just placed a piece at random, it would now quickly block my obvious forthcoming wins using its accrued dataset.

To be fair, it was an elegant bit of code, and it was almost creepy watching a 32KB computer go, in just a few rounds, from a machine I could easily beat to one that forced a constant draw.

The exercise in the book was designed to help understand attempts to emulate learning, but I always preferred the term that my tutors at university used: the programme “adapted” its behaviour as it learned and emulated human responses.

As in evolutionary theory, random changes at each generation may or may not help a species survive; there is no way to know the potential outcome at the point the change happens. In the same way, random game play in each Tic-Tac-Toe game may or may not help the computer win: we simply don’t know until the game comes to a conclusion.

This trial-and-error method was fairly limited on a non-networked 32KB 8-bit computer. Fortunately, Tic-Tac-Toe is not a complex scenario. This model was explicitly a very narrow AI.

Fast-forward, with exactly the same approach, to a 4GB 64-bit computer that is part of a global network of similar machines, and the amount of trial and error that can be performed in a matter of seconds is almost ludicrously different to that of a late ’80s 32KB computer. As these networked datasets have become distributed and diverse, they have needed new algorithms, such as backpropagation, generative adversarial networks, and one I find very interesting called long short-term memory (LSTM), but these are essentially only new ways to map or search datasets quickly.

The underlying trial and error of “Is this a win or not?” continues, and it has been essentially unchanged for decades. The machines cannot intelligently, deterministically, or intuitively go directly to a result. They have no gut feeling, no creativity, and no will to survive. Most critically, they most certainly have no sense of self-awareness.

They are still very narrow AI.

When I finally managed to rig the Tic-Tac-Toe routine in the book to make my computer play itself, I jumped around the room punching my fist in the air, knowing that my creative desire to realise this had driven me to find a way. And trust me, at 15 years old, I was no expert; there was a lot of intuitive guesswork going on! But I was the one bringing the general intelligence to the application, not the computer.

Meanwhile, the computer would record each winning state with absolutely no expression of success, no sense of achievement, and no ability to intuitively improve its game. It didn’t have the intelligence to want to win.

There is little doubt in my mind that we are a long way from a computer creatively thinking, “You know this Tic-Tac-Toe example ... I wonder if I can make it play itself, and if I could, would it be satisfying or in some way make me happier?” And that critical creative step is what would start to make me think computers are achieving true AGI.

We are simply not there. At all.

While bigger data, faster computers, and better networks are putting these neat capabilities more and more in the public eye, we, as intelligent and creative beings, can see where this trend might lead us. But despite the hype, artificial intelligence per se is still pretty much the same as it has been for decades. It is still weak or narrow. Where those limits are acknowledged and deployments are scoped properly to handle the brittleness, these systems are clearly becoming useful.

But I do not see marketing shouting, “Driven by weak AI” or “Underpinned by the narrow AI capabilities we offer.”

I see instead, “Look at our AI-driven product.” That is a scary misuse of the term to win some marketing profile in the eyes of the uneducated, and it sets some terrible expectations.

There are many discussions about the ethics of AI making headline news because the industry is so prolifically hyping the term. The image of Terminators killing us all is creating fear. The sales teams are selling into that fear: a fear of AI, and a fear of not having AI. It’s silly, it’s daft, and, to be honest, it is likely to set the industry back.

So please do not claim your service or product is “AI-driven” when it is, in fact, a great example of machine learning or delivers an amazing expert system.

Be proud of the machine learning and the expert system, but don’t try to dress it up as the arrival of HAL from 2001: A Space Odyssey.

We are still some decades (if not more) away from any AGI in the sense that the public understands the term “AI.”

Frankly, careless use of the term “AI-driven” in your marketing today makes you look unintelligent.

[This article appears in the Winter 2018 issue of Streaming Media Magazine European Edition as "Artificial Unintelligence."]
