Raising the bar on AI

So the media last week was absolutely full of the latest Sure Sign that the robocalypse is imminent: apparently, Google-backed DeepMind have now managed to create an AI so very sophisticated that it has beaten human champions at the ancient Chinese board game of Go. DeepMind’s AlphaGo has defeated the European champion, which marks another important development in the progress of AI research, trumping IBM’s Deep Blue and its victory over Garry Kasparov at chess back in 1997: Go is, apparently, a much more difficult game for humans – and, it was thought, for computers – to master, due to its complexity and the need for players to recognise complex patterns.

I expected, when setting off to write a note about this achievement, to find the usual sources in the popular press, with their characteristically subtle declarations, heralding that the End of the Human Race is Nigh!; however, thankfully, responses seem to be more sanguine and muted. The British tabloids have even avoided using that picture of Terminator that almost invariably accompanies their reports on new developments in AI and robotics.

So perhaps this is a sign that things are changing, and that the popular press are becoming more sensible, and more responsible, in their technology reporting. (Let’s see how many weeks – or even days – we can go without this sort of thing before claiming victory, or even that we’ve turned a significant corner.)

But there is a lot that is interesting about DeepMind’s success, from a cultural perspective, even if it hasn’t stirred the usual panic about the robopocalypse. It made me recall a conversation I had at a EURobotics event in Bristol in November. We humans, it seems, like to think that we’re special. And maybe the possibility that robots or AI are a threat to that special status is another reason why we are so afraid of them. Maybe we fear another blow to our narcissism, like when that crazy astronomer Copernicus spoiled things by showing that the earth wasn’t the centre of the Universe, or when that Victorian pooh-pooher Darwin demonstrated that we merely evolved on this earth and weren’t placed here at the behest of some Divine Creator. Maybe we don’t really fear that robots and AI will destroy all of humanity – well, maybe we fear that, too – but maybe part of what we fear is that robots and AI will destroy another one of those special places we reserve for ourselves as unique beings amidst creation.

And yet our scientists aren’t going to let us sit wrapped in the warmth of our unique being. They keep pushing ahead and developing more and more sophisticated AI that threatens our… specialness. So how do we, as a culture, respond to such a persistent challenge? Like any good politician, it seems we have decided to confront the inevitability of our failure by constantly changing the rules.

Choose your sporting metaphor: we ‘move the goalposts’, we ‘raise the bar’.

Once upon a time, it was enough for us humans to think of ourselves as the rational animal, the sole species on earth endowed with the capacity for reason. As evidence for reason as the basis for a unique status for humanity crumbled – thanks both to proof that other animals were capable of sophisticated thought and the lack of proof that humans were, in fact, rational – we tried to shift those goalposts. We then transformed ourselves into the symbolic animal, the sole species on earth endowed with the capacity to manipulate signs and represent.

Then we learned that whales, dolphins and all sorts of animals were communicating with each other all the time, even if we weren’t listening. And that’s before we taught chimps how to use sign language (for which Charlton Heston will never thank us).

And then computers arrived to make things even worse. After some early experiments with hulking machines that struggled to add 2 + 2, computers soon progressed to leave us in their wake. Computers can clearly think more accurately, and faster, than any human being. And they can solve complex mathematical equations, demonstrating that they are pretty adept with symbols.

Ah, BUT…

Humans could find some solace in the comforting thought that computers were good at some things, yes, but they weren’t so smart. Not really. A computer would never beat a human being at chess, for example. Until May 1997, when chess champion Garry Kasparov lost to IBM’s Deep Blue. But that was always going to happen. A computer could never, we consoled ourselves, win at a game that required linguistic dexterity. Until 2011, when IBM’s Watson beat Ken Jennings and Brad Rutter at Jeopardy!, the hit US game show. And now, Google’s DeepMind has conquered all, winning the hardest game we can imagine…

So what is interesting about DeepMind’s victory is how human beings have responded – again – to the challenges to our self-conception posed by robots and AI. Because if we were under any illusion that we were special, alone among the gods’ creations as a thinking animal, or a symbolising animal, or a playing animal, that status has been usurped by our own progeny, again and again, in that all-too-familiar Greek-Frankenstein-Freudian way.

Animal rationabile had to give way to animal symbolicum, who in turn gave way to animal ludens… what’s left now for poor, biologically-limited humanity?

A glimpse of our answer to this latest provocation can be seen in Star Trek: The Next Generation: Lieutenant Commander Data is a self-aware android with cognitive abilities far beyond those of any human being. And yet, despite these tremendous capabilities, Data is always regarded – by himself and all the humans around him – as tragically, inevitably inferior, as less than human. Despite the lessons in Shakespeare and sermons on human romantic ideals from his mentor, the ship’s captain Jean-Luc Picard, Data is doomed to be forever inferior to humans.

It seems that now that AI can think and solve problems as well as humans, we’ve raised the bar again, changing the definition of ‘human’ to preserve our unique, privileged status.

We might now be animal permotionem – the emotional animal – except that, while that would be fine for distinguishing between us and robots (at least until we upload the elusive ‘consciousness.dat’ file, as in Neill Blomkamp’s recent film, Chappie), this new moniker won’t help us remain distinct from the rest of the animals, because to be an emotional animal, to be a creature ruled by impulse and feeling, is… just to be an animal, according to all of our previous definitions. (We’ve sort of painted ourselves into a corner with that one.)

We might find some refuge, then, following Gene Roddenberry’s example, in the notion of humans as the unique animal artis, the animal that creates, or engages in artistic work.

(The clever among you will have realised some time ago that I’m no classical scholar and that my attempts to feign Latin fell apart some time ago. Artis seems to imply something more akin to ‘skill’, which robots could arguably have already achieved; ars simply means ‘technique’ or ‘science’. Neither really captures what I’m trying to get at; suggestions are more than welcome below, please.)

The idea that human beings are defined by a particular creative impulse is not terribly new; attempts to redefine ‘the human’ along these lines have been evident since the latter half of the twentieth century. For example, if we flip back one hundred years, we might see Freud defining human beings (civilised human beings, of course, we should clarify) as uniquely able to follow rules. But by the late 1960s, Freud’s descendants, such as the British psychoanalyst D. W. Winnicott, were arguing almost the exact opposite – that what makes us human is creativity, the ability to participate fully in our being in an engaged, productive way. (I will doubtless continue this thought in a later post, as psychoanalysis is a theoretical model very close to my heart.)

What’s a poor AI to do? It was once enough for an artificial intelligence to be deemed sufficiently impressive, maybe even ‘human’, if it could prove capable of reason, or symbolic representation, or win at chess, or Jeopardy!, or Go. Now, we expect nothing less than Laurence Olivier, Lord Byron and Jackson Pollock, all in one.

(How far away is AI under this measure? Is this any good? Or this? Maybe this?)

This reminds me of Chris Columbus’s 1999 film Bicentennial Man (based, of course, on a story by Isaac Asimov). Robin Williams’s Andrew Martin begins his… ‘life’, for lack of a better word… as a simple robot, who over the decades becomes more and more like a human – he becomes sentient, he demonstrates artistic skill, he learns to feel genuine emotion, etc. At each stage, it seems, he hopes that he will be recognised as being at least on a par with humans. No, he’s told at first, you’re not sentient. Then, when he’s sentient, he’s told he cannot feel. Then he’s told he cannot love. No achievement, it seems, is enough.

Even once he has achieved just about everything, and become like a human in every respect – or perhaps even ‘superhuman’ – he is told that it is too much, that he has to be less than he is. In almost a complete reversal of the Aristotelian notion of the thinking, superior animal, Andrew is told that he has to make mistakes. He is too perfect. He cannot be homo sapiens – he needs to be homo errat – the man that screws up. To err is human, or perhaps in this case, to err defines the human. (Though artificial intelligence may not be long in catching on to this as well, as suggested in another of Asimov’s stories.)

It is not until Andrew is on his deathbed and is drawing his very last breaths that the Speaker of the World Congress declares, finally, that the world will recognise Andrew as a human.

And perhaps this will be the final line; this is perhaps the one definition of human that will endure and see out every single challenge posed by robots and artificial intelligence, no matter the level of technological progress, and regardless of how far artificial life leaves human beings behind: we will be homo mortuum. The rational animal that can die.

If Singularity enthusiasts and doomsayers alike are to be believed, this final self-conception is not long off. Though perhaps humans’ greatest strength – the ability to adapt, and the talent to re-invent ourselves – might mean that there’s some life in the old species yet. Regardless, it will serve us very well to create a conception both of ourselves and of artificial life forms that tries to demarcate the boundaries, to decide when those boundaries might be crossed, and to consider what the implications of crossing that line will be.