AI - Quo Vadis?

by Martin Treder

The game of Go is widely considered one of the most complex games on earth. In 2016, DeepMind’s AlphaGo defeated the South Korean professional Lee Sedol, one of the world’s strongest players. Many saw it as a major step in the development of Artificial Intelligence (AI).

I just read an article stating: “The important outcome from Sedol’s defeat is not that DeepMind’s AI can learn to conquer Go, but that by extension it can learn to conquer anything easier than Go – which amounts to a vast number of things.”

Sounds reasonable? Well, maybe it isn’t.

The main problem I have with this statement is the word “easier”. Its comparative degree suggests that we are talking about a one-dimensional scale. In other words, it suggests that any two intelligence-based performances can be sorted by “ease” (at least if one of the two is “playing Go”).

But how do you define “easy”: Is it easier to defeat the reigning world champion in Go, or is it easier to convince a kidnapper to release all captives and to give up? Is it easier to win Jeopardy!, or to convince someone of a different view during a discussion? We seem to face multiple dimensions of human brain performance here.

Ask a computer like AlphaGo to calculate the best movements of a football player in real time, and even the author of these lines would turn out to be superior (which says a lot). Whoever has ever watched RoboCup, the world championship in robot football, knows what I mean…

I don’t know how quickly AI is going to progress. However, one indicator is the investment of money in research. My recent discussions with companies suggest that many of them are moving from the phase of “We need to be part of it by all means” to the phase of “Show me the value”. The real money will go to AI research that comes with a business case. (Please don’t focus too much on those public-relations-driven “Innovation Hubs” of big corporations, where a handful of researchers can play around without commercial pressure as long as they publish nice marketing stories from time to time.)

So, put yourself in the shoes of a company. How would you think and act?

Any adequately managed company will inevitably ask “How can AI make us more successful?”, considering the usual stakeholder triarchy of customers, shareholders and employees. Even universities and non-profit organisations are increasingly driven by the need to work towards creating tangible value. Let’s be honest: It will take a huge, long-term investment before true Artificial Intelligence (in the sense of copying the human brain) adds value to our society (or even to individual companies). So, what drives companies?

Firstly, companies generally prefer solutions where you have a dedicated specialist for each task. This finding is not specific to AI. Nobody would want to invest in a vehicle that can transport bulky goods, win a Formula 1 race and deliver a luxurious cabriolet feeling at the same time. Three specialised solutions will usually do a better job in their specific areas, at lower overall costs. Nobody would perceive the fact that you need three solutions for three different tasks as something embarrassing, for good reasons.

Look at robotics: The media is full of humanoid robots that can smile and shake hands. Yet most research is still being done on specialised robots which can execute one single task in a near-perfect way (and which require at least substantial re-configuration before being able to perform another job). The industry has been working on this for decades, and its success is impressive.

Secondly, further developing something that is far behind existing capabilities (in this case, current human abilities) makes for a bad business case. The return on investment lies far in the future and is uncertain. The odds are that even an improved solution will remain inferior to available alternatives.

That is why I am reasonably sure that development in AI will continue to focus on getting better in specific areas - mostly where AI already delivers performance superior to that of humans today, through specialised solutions. And re-using ingenious mechanisms of the human brain, such as a neuron’s way of working, will contribute to the success of this approach.

At the same time, at least in the foreseeable future, I don’t expect any substantial progress in developing clones of human brains that don’t need to be re-programmed for new challenges.

Does this sound pessimistic? I don’t mean it that way! I am a great believer in AI’s growing capabilities in complementing the abilities of human beings. But why should we use AI to mimic something that already exists?

Just as God has not created us godlike (which is easy to believe after watching the evening news), we should not waste our precious brainpower on developing human-like devices. There are too many other opportunities for AI to add tangible value.

This article was contributed by Martin Treder, author of The Chief Data Officer Management Handbook.