Lessons from Brexit for robotics research

As part of the UK’s National Robotics Week, The University of Sheffield hosted the 17th Towards Autonomous Robotic Systems (TAROS) conference from 28-30 June. Among the papers and discussions on the development of autonomous robotics research, two sessions on the last day looked at robots in the public eye and explored the issue of responsible research in robotics. Speakers at these panels included Sheffield Robotics’ Director Tony Prescott, Amanda Sharkey from Sheffield’s Department of Computer Science, and Bristol’s Professor Alan Winfield. The sessions looked broadly at the ethical issues confronting robotics research, but a particularly useful discussion, led by Hilary Sutcliffe, Director of MATTER, examined how robots are regarded in the public imagination, and the vital need to confront these sometimes negative perceptions as we move forward with responsible research.

When I first started researching the public perception of robotics, I was tasked with answering a specific question: ‘Why are we afraid of robots?’ The – very legitimate – concern among many roboticists was that their research and innovations would crash upon the rocks of public resistance. Realising that robots are held in deep suspicion by a public fed a constant stream of dystopian science-fiction and newspapers that can’t get enough of THAT picture of the Terminator, many people researching robotics and AI were desperate to avoid the fraught battles faced by research into genetically modified organisms: dubbed ‘Frankenstein foods’, GMOs have met such hostility in the UK, and the EU more generally, that they have been largely rejected by a public caught in a war of competing interests and a cacophony of voices from scientists, corporations, environmentalists, the media and politicians.

When we look at the case of GMOs, we are not seeking to determine whether GMOs are ‘good’ or ‘bad’, but merely how the debate was shaped by particular interests, how the reassurances of scientists and experts working in the field can be met by sceptical audiences and, especially, how public fear has driven decision-making.

Hilary Sutcliffe and MATTER have been working with the University of Sheffield across a number of departments and faculties to create an agenda for future responsible research and innovation; more than merely putting plasters over public concerns, we are trying to genuinely work with both internal and external stakeholders, bringing together researchers, industry and specific public interests, to include all concerns at every level of decision-making and research initiatives. The case of GMOs has long been one of keen interest to us as we try to learn lessons from the past and move forward with new ideas about how to do things differently. But the case of Brexit – the British vote to leave the European Union – has provided a new set of questions and some new lessons, and made clear that there is a new set of challenges facing public attitudes to innovation.

This is a summary of some of Sutcliffe’s ideas, to which I have added some more specific and historical contexts, the core of my own research into the cultural and social influences and impacts of robotics, in an attempt to understand not only where we are but how we got here.

Anxieties about loss of control

Sutcliffe pointed out that ‘protests are proxies for other issues’. Brexit wasn’t just about the people of Britain being unhappy with the direction of the EU, or with the EU’s regulation of trade, the European Convention on Human Rights, or even, arguably, immigration to Britain from the rest of the EU. The vote in favour of Brexit, and the debate that led up to that vote, were clearly about wider concerns among the British population: a feeling of a loss of control, of being ignored by a distant, elite class of politicians and ‘experts’ who too often have little time for the concerns of the public.

The principal slogan for the pro-Brexit campaigners was ‘Take Control‘ or ‘Take Back Control‘, and it is clear that a primary motivation for people voting in favour of Brexit was a reclaiming of some imagined powers that have been lost. Whether or not there has been a genuine loss of power or control is not the point: for now, let us set aside criticisms that the Leave campaign flatly lied, misrepresenting the powers claimed by the EU and the extent to which the European Commission, for example, controls aspects of life in Britain. (Though make no mistake – these were terrible, consciously manipulated falsehoods, and the Leave campaign exploited both fear and ignorance in propagating these lies.) We also need to set aside, for now, the challenge that people did not have ‘control’, i.e. political authority, of their lives before Britain joined the EEC in 1973.

Because whatever the validity of the feelings, sentiments and ideas that give rise to anxiety, it is important for us to remember that the anxiety itself is real. We can and should counter fears of immigration with the genuine statistics about the levels and impacts of immigration (especially since so many in Britain, and particularly Leave voters, overestimate the levels of immigration in their country), just as we should reassure people that the increasing automation of workplaces will mean more job opportunities elsewhere (or perhaps even more leisure time) in a society that can be more, not less, equal. But to provide these arguments while dismissing the public’s fears, assuming that reasonable arguments will easily triumph over the irrationality of anxiety, is both to underestimate the power of storytelling and the imagination, and to patronise whole sections of our societies that have genuine worries that the rosy utopia promised by politicians, policy-makers and scientists might not come to pass.

While it is clear that this loss of control, real or imagined, can be traced to the vicissitudes of global (late-)capitalism, attributing it to such a nebulous, complex force complicates the argument, and so this genuine problem often escapes blame for much of what it has wrought upon modern societies: it is hard to explain and, most problematically, it is faceless. It is much easier to identify and blame a bogeyman – and it is much easier for politicians and the media to ‘sell’ bogeymen as responsible for stealing control away from people and their traditional communities. The greatest of these bogeymen, the one that bears the brunt of people’s frustration, is the immigrant, as the Brexit debate demonstrated. The face that has to bear the blame is dark-skinned, with a beard or a hijab (and not at all representative of the dominant ethnic make-up of the EU itself).

However, after the dreaded ‘Schrödinger’s immigrant’, little in modern life threatens to take control away from people and traditional communities as much as robots. Like immigrants, robots are imagined to be mysterious beings with inexplicable motivations, and though we may at first welcome them and the promise they offer, we will soon rue our decision as they move to steal our jobs and leave our communities empty and bereft of the power to sustain themselves. The robot even has a face that we can identify and fear: just as so many stories about immigrants – including UKIP’s shameless poster of refugees trying to enter Eastern Europe – are accompanied by pictures of Muslims in order to associate all immigration with the image most feared by the British public (the radical, mysterious, religious terrorist), so many stories about robots are accompanied by pictures of the Terminator, associating all robots in the minds of the British public with a mercilessly rational killing machine bent on the extermination of the human race.

(This is why, for example, robots – with their humanoid appearance, not to mention how well they can be cast as villains in a Hollywood plot – cause more anxiety in the population than the much more immediate threats of, for example, climate change.)

The danger of hype

Part of the problem faced by the Remain camp was that their arguments, warning of the negative consequences of Brexit (many of which have already come to pass, incidentally, or look soon to do so) were easily branded as ‘Project Fear’ by Brexit supporters (ignoring the irony, of course, of the Leave campaign’s own scaremongering, particularly around the issue of immigration). By thus portraying the Remain camp’s warnings, pro-Brexit campaigners were not only able to largely discredit these potential consequences, but also to increase hostility towards Remain more generally.

The lesson from this for robotics is simply that hype breeds distrust. Big, shouty arguments, made in CAPITAL LETTERS!, seem to be losing their effectiveness. Again, if we look at this historically, we might notice how we have become a society not only inured against sensationalism – the grandiose promise of New! Improved! and the dire warnings of the consequences of our clothes being anything less than spotlessly white – but one that may be increasingly angered by this hysterical consumerism.

As academics, we are sometimes our own worst enemies in this regard, by hyping either what we can do (e.g. ‘if you provide us with the £80K grant, in two years we will have a fully cognizant humanoid robot’), or by overstating the threat posed by what is on the horizon (e.g. we’ve often wondered how much money we could attract if we ran a Sheffield Centre for Robots Will Kill Us All, Run! Run for Your Lives!).

Lack of faith in ‘expertise’

This apparent immunity to hype and hyperbole is compounded by an apparent confusion as to where reliable information can be found. Time and again in my crusades on social media, people expressed a desire to find ‘objective’ facts. There was a frustration that the government were not providing such objectivity, and were instead, in leading the campaign to Remain, mired in the same games of spin and hyperbole as the Leave campaign. (But while the government, and by extension the Remain campaign, were subject to blame and frustration for their campaign, the Leave campaign, much more demonstrably based on lies and falsehoods, were excused and indulged in their fallacious arguments. It is as if such deceitful behaviour was expected of the pro-Brexiters, and the anger directed at the pro-EU leaders was less about what they said and more about their perceived betrayal of the electorate, for daring to take sides at all. This allowed the Brexiters to portray themselves – laughably, given their funding, leadership and core demographic – as ‘anti-establishment’, and somehow therefore blameless for all of the feelings of confusion, powerlessness and anxiety that lay behind much of the Brexit support among the electorate.)

Sutcliffe noted that Brexit demonstrated that people simply aren’t listening to ‘experts’ anymore. Michael Gove, one of the most visible cheerleaders for Brexit, gave perhaps the soundbite of the campaign (on so many different levels), with his assertion that ‘I think people in this country have had enough of experts‘. The distrust of politicians and corporations, it seems, has infected other professions, including economists and scientists (despite the courageous efforts of groups like Scientists for the EU). In this respect alone, academics seeking to engage the public about robotics research are facing an uphill struggle as we try to counter the mythologies that have so grabbed the popular imagination, from Frankenstein-styled genocidal killing machines to completely empathetic, mass-unemployment-inducing humanoids with vastly superior intellects.

The loss of faith in expertise is a complex issue that has a long-festering history, little of which is easily remedied. Cultural theorists have pointed out for decades how, since the catastrophes of WWII, we have moved to a ‘postmodern condition‘, a general mistrust of ‘metanarratives‘ – any argument that poses as ‘The Truth’. (To what extent academics themselves, with a healthy ‘hermeneutics of suspicion‘ or by promoting a radical relativism, have sown the seeds of this popular uprising is a topic to be investigated in more depth another time.)

This questioning of science also has historical foundations in the fear of hubris, long-nurtured in the Romantic imagination in the Frankenstein mythology and what Tony Stark – yes, Iron Man – describes in Avengers: Age of Ultron as the ‘man was not meant to meddle’ medley. The public (such as ‘they’ can be described as a homogeneous entity) has had over 200 years to get used to the idea that scientists’ ambition knows no bounds, and that their arrogance will lead not only to their own downfall but, eventually, inevitably, to the downfall of the entire human race. Scientists, we have been told over and over again in novels, in film, and now in video games, are so enamoured of technology, of what is possible, that they never stop to ask if they should.

And while these fears were at the heart of the GMO debates, no ‘experts’ suffer more from this fear of hubris than roboticists, who are very literally building the machines that will, of course, destroy the entire human race. We see parallels in the way that the Faustian and Frankenstein mythology has been employed so widely in the popular conception of both GMOs (‘Frankenstein foods’) and robotics: time and again, for example, even our most recent films about robots are just versions of this mythology, from Metropolis to Terminator to last year’s Ex_Machina and Chappie.

(There has been a noticeable shift, which is interesting to note, too, from the fear of the hubris of the individual scientist – as evident in Frankenstein and Metropolis – to mistrust of corporations, such as in Chappie, which might give scientists some hope that public opinion is moving, or can be swayed, in their favour.)

Nostalgia for simpler times

Sutcliffe noted how both the anxiety about the loss of personal control and the mistrust of expert opinion can be seen as a nostalgia for simpler times, for an era when people enjoyed the comfort of metanarratives and felt that they had a more direct stake in the direction of their lives. What historical analyses demonstrate, however, is that such an era is a fantasy. Communities may have enjoyed the illusion of stability under earlier, less sophisticated forms of capitalism, but they were always ultimately subject to the whims of the market and the decision-making of industrial powers.

And while it may have taken time for the postmodern ideas of academics to seep down into the popular imagination, mistrust of those in power, or perceived to be in power, is a process that started with the hermeneutics of suspicion undertaken in the nineteenth century (thinking of Marx and Freud, for example). We could go even further back, and note how the lionisation of rebellion, opposition to authority and anti-establishment sentiment were to be found at the very heart of the Romantic project, at the very birth of our new, brave (isolated) modern individual. (Consider, for example, Blake’s re-conception of Milton’s Satan as a Romantic hero.) All of this happened long before even the Second World War which, according to the ‘official’ postmodern line, marked the ‘death of metanarratives’ – so we’ve had at least 50 years, if not 250 years, to get used to these ideas.

The lessons, and what to do

There is perhaps little that we can do, immediately, to overturn 200 years of post-Romantic ideology (which would involve nothing less than a complete reconception of what it means to be human). However, there are some lessons from Brexit that we can more readily and easily implement with regard to robotics research and innovation:

  • Get a good soundbite! It may sound trivial, but in our media age this is a potent weapon in the war for hearts and minds. GMOs were permanently tarred with the label ‘Frankenfoods‘, and the Brexit campaign used the simple, if wholly vague and inaccurate, slogan of #TakeControl.
  • Develop a strong narrative. There is little roboticists can do to counter such terrific stories as Frankenstein, Metropolis, The Terminator and the scores of other films, books and video games with such mass appeal without slipping into the hyperbole of utopias, and so falling victim to hype. For every Andrew from Bicentennial Man, there are a thousand Terminators (and we all know who would win that fight). You do not need to be a relativist to understand and accept the importance of the popular imagination in constructing reality. Do not underestimate the power of a really good story.
  • Be careful what you promise. Sutcliffe cites Richard Jones and the ‘economy of promises’: the balance between ‘optimism’ and ‘hype’ from research proposals to headlines in popular tabloids.
  • It is vital, therefore, to communicate your vision carefully, authentically and early. It is important to engage from the outset, to break up counter-narratives and prevent them from catching hold of the popular imagination, from becoming ‘common sense’. Try countering common beliefs such as ‘GMO foods harm the environment’ or ‘There are too many immigrants in this country’. Once a belief takes hold, and has been supported by a large number of people, the laws governing cognitive dissonance make it very hard to shift opinions.

As we’ve seen from Brexit, because of the coming together of these factors, of history, of popular mood and cultural climate, if you wait to engage the public and other stakeholders until there is a clearly defined problem that you need to counteract, it’s probably already too late.

But messages also need to be communicated authentically: many people seem unable to discriminate between competing claims of truth (e.g. can’t judge what is ‘true’ between an academic study supported by strong evidence and a screeching headline at the top of an opinion piece in an ideologically-interested tabloid). In the absence of skills that allow such distinctions to be made, people seem more prepared to believe the voice that is most akin to their own. Nigel Farage and his folksy populism is more readily perceived as ‘authentic’ than David Cameron’s convoluted and carefully focus-grouped statements. Boris the Bumbler is regarded as more honest than Ed Miliband’s carefully argued ideas. ‘Authenticity’, or the perception of authenticity, matters. People feel as though they are capable of sniffing out bullshit, even if they are, in fact, not.

This makes a strong case for more and varied people to enter into STEM subjects, so that we have more and varied voices speaking on their behalf. And we need to learn how to speak in a language that is understood by wider audiences. This is not the same as ‘dumbing down’, and it is important not to be seen to be patronising those stakeholders with whom we need to engage. But if we do not learn how to articulate ourselves in a language that can be understood, then our messages will always fall on deaf ears, and we will always lose to those who are more effective at communicating.


New survey on public attitudes towards robots: comfortable or confused?

So, the British Science Association has released a survey on the British public’s attitudes toward robotics and AI. Their headlines:

  • 60% of people think that the use of robots or programmes equipped with artificial intelligence (AI) will lead to fewer jobs within ten years
  • 36% of the public believe that the development of AI poses a threat to the long term survival of humanity.

Some other highlights:

  • 46% oppose the idea of robots or AI being programmed with a personality

We would not trust robots to do some jobs…

  • 53% would not trust robots to perform surgery
  • 49% would not trust robots to drive public buses
  • 62% would not trust robots to fly commercial aircraft

but would trust them to do others:

  • 49% want robots to perform domestic tasks for the elderly or the disabled
  • 48% want robots to fly unmanned search and rescue missions
  • 45% want robots to fly unmanned military aircraft
  • 70% want robots to monitor crops

There are also results showing some predictable divisions along the lines of gender (only 17% of women are optimistic about the development of robots, whereas 28% of men are) and age (of 18-24 year olds, 55% could see robots as domestic servants in their household, 28% could see having a robot as a co-worker, and 10% could even imagine a robot being a friend).

A reply has come from the UK-RAS Network (the EPSRC-funded organisation representing academic bodies working in robotics and autonomous systems) that explains that, while there is a need to examine these issues and carefully plan our future, there’s really nothing to worry about. They cite a European Commission report that shows there is no evidence for automation having a negative (or a positive) impact on levels of human employment, and point to genuine benefits of robots in the workplace, suggesting how robots ‘can help protect jobs by preventing manufacturing moving from the UK to other countries, and by creating new skilled jobs related to building and servicing these systems.’

The popular press also seems to have seized upon the issue of robots and AI replacing human labour – though a lot of this in recent weeks has been in response to other studies and speeches. The Daily Mail, however, can always be relied upon to strike fear into the heart of its readers, and they haven’t disappointed. Though their rather restrained headline on the BSA study seems innocent, ‘Do you fear AI taking over? A third of people believe computers will pose a threat to humanity and more fear they’ll steal jobs‘, the article (again) resuscitates Stephen Hawking’s and Elon Musk’s dire warnings about the future threat posed by AI. In case this wasn’t sufficiently terrifying – and it really isn’t – The Mail slaps up another one of THOSE TERMINATOR PICTURES to accompany the article, with the helpful caption that ‘There are mounting fears among the public about the threat posed by artificial intelligence.’ Well, honestly, I’m sure no one can imagine why.

(Sigh.) Someone needs to sit down with The Daily Mail’s photo editor and have a nice, long, very slow chat.

But what does this survey tell us? Simply, that there is still a problem with people’s perceptions of robotics and AI that must be addressed, and it seems that we are not even heading in the right direction. A Eurobarometer survey on the public’s attitudes to robotics conducted in late 2014 shows that 64% then had a generally positive view of robots (which, if added to the 36% in the BSA survey that believes robots and AI are a threat to the future of humanity, just about accounts for everyone). In that 2014 study, however, just 36% of respondents thought that a robot could do their job, and only 4% thought that a robot could fully replace them, so clearly this is an area of heightened concern. A 2013 Sciencewise survey reported almost exactly the same general results: 67% held a generally positive view (though this survey reports that 90% would be uncomfortable with the idea of children or elderly parents being cared for by a robot, so compared to the 49% that want robots to help take care of the disabled and elderly in the latest study there might be some progress there… or else people are just so desperate to deal with an increasingly ageing population that they’re perfectly happy to dispense with their elderly relatives by dumping them with psychotic, genocidal toasters.) However, a 2012 Eurobarometer report told us that as many as 70% of Europeans were generally positive about robots.

These comparisons are very rough and cannot tell us much without more rigorous analyses (and the BSA hasn’t provided a link to the full survey). But they suggest that there has been little movement in attitudes towards robotics, and in fact an increase in anxiety that robots will displace more humans in the workforce. Without more specific scrutiny, it’s hard to say what we’ve got here. It could well be the case that what we have is very unremarkable. But though it may be encouraging to see that a majority of Europeans are consistently generally positive in their perception of robots and AI, there is still a sizeable minority that could prove very disruptive to the development of future applications of robotics and AI, whose anxieties cannot – and should not – be ignored.

One way to alleviate a great deal of these concerns, particularly regarding the loss of jobs, is to explicitly undertake to address what is emerging as the vital question in the public imagination: what does this increasing automation mean for our societies? Because it is not in any way inevitable that more working robots and AI means more poverty for unemployed humans. We get to choose what the consequences are of this mechanisation; and these decisions will be taken by human beings, not left to the whims of sentient robots, or even the indifference of disembodied market forces. If we decide to divide the advantages of such automation more equally (for example, with the introduction of a Universal Basic Income), then it could be a very good thing indeed. (It is worth remembering that two thirds (or more) of us don’t like our jobs anyway, so more robots could mean less drudgery and freedom for a disaffected workforce.)

Again, without more scrutiny, it is difficult to judge what these numbers mean. It seems to suggest that the public are very ambivalent about the forthcoming developments in robotics and AI: if 46% oppose the idea of robots or AI being programmed with a personality, then it could mean that around 54% of people could be perfectly fine with emotionally engaged robots. If half of us don’t want robots driving public buses (49%, according to the BSA survey), half might be happy for them to do so.

We might look at this study and say that we are ambivalent about robots and AI – meaning not ‘indifferent’ (as ambivalent is often, incorrectly, taken to mean now), but that we have mixed feelings. However, this could be a terrible misreading of the numbers. What if people aren’t deeply ambivalent, but radically split? If 50% are reporting that they are worried, the other 50% might not be; they might even be very enthusiastic about the possibilities.

Again, there is no evidence in this study to support this notion, necessarily. There is clearly a need for more research into the specific concerns – and their sources – in order to properly address these issues, and to understand these anxieties more thoroughly (which will need a very different sort of study). However, the cultural record offers some unique insights. Because what films, for example, show us is that we are not at all indifferent to robots and AI, or ambivalent. There is no middle ground: when it comes to robots and AI, we are deeply terrified OR wildly optimistic; we seem to be convinced that robots will either spell certain doom for the human race or be our last, our greatest, hope for salvation from all of the terrible things that threaten us (including, inevitably, other robots and ourselves).

Let’s look again at the Terminator. (And why not? Since so many seem unable to leave it alone, we might as well make good use of it.) The first Terminator, from 1984, embodies for many what it is we fear about robots: the relentless, unstoppable, rational monster whose sole purpose is the destruction of human life. But already in the next film, Arnold Schwarzenegger is the Good Guy, the only hope to save John Connor and our entire species, and subsequent instalments – including the aptly-named Terminator: Salvation and the latest Terminator: Genisys [sic] – build on this theme. In our cultural imaginations, robots are both to be feared and embraced; they are either genocidal psychopaths or benevolent messiahs.

Such diametrically opposed perceptions – such dread or aspiration – do not facilitate the sort of reasoned, rational debate that will be necessary to properly assess both the challenges and the opportunities that real robots and AI represent, outside the pages and reels of science fiction. And yet we are fed a steady diet of such vicissitudes. In my next post I’ll look at another example, when I finally get around to a full review of the latest Avengers offering, The Age of Ultron.

Raising the bar on AI

So the media last week was absolutely full of the latest Sure Sign that the robocalypse is imminent: apparently, Google-backed DeepMind have now managed to create an AI so very sophisticated that it has beaten human champions at the ancient Chinese board game of Go. DeepMind’s AlphaGo has defeated the European champion, which marks another important development in the progress of AI research, trumping IBM Deep Blue’s victory over Garry Kasparov at chess back in 1997: Go is, apparently, a much more difficult game for humans – and, it was thought, for computers – to master, due to its complexity and the need for players to recognise complex patterns.

I expected, when setting off to write a note about this achievement, to find the usual sources in the popular press, with their characteristically subtle declarations, heralding that the End of the Human Race is Nigh! However, thankfully, responses seem to be more sanguine and muted. The British tabloids have even avoided using that picture of the Terminator that almost invariably accompanies their reports on new developments in AI and robotics.

So perhaps this is a sign that things are changing, and that the popular press are becoming more sensible, and more responsible, in their technology reporting. (Let’s see how many weeks – or even days – we can go without this sort of thing before claiming victory, or even that we’ve turned a significant corner.)

But there is a lot that is interesting about DeepMind’s success, from a cultural perspective, even if it hasn’t stirred the usual panic about the robopocalypse. It made me recall a conversation I had at a euRobotics event in Bristol in November. We humans, it seems, like to think that we’re special. And maybe the possibility that robots or AI are a threat to that special status is another reason why we are so afraid of them. Maybe we fear another blow to our narcissism, like when that crazy astronomer Copernicus spoiled things by showing that the earth wasn’t the centre of the Universe, or that Victorian pooh-pooher Darwin demonstrated that we merely evolved on this earth and were not placed here at the behest of some Divine Creator. Maybe we don’t really fear that robots and AI will destroy all of humanity – well, maybe we fear that, too – but maybe part of what we fear is that robots and AI will destroy another one of those special places we reserve for ourselves as unique beings amidst creation.

And yet our scientists aren’t going to let us sit wrapped in the warmth of our unique being. They keep pushing ahead and developing more and more sophisticated AI that threatens our… specialness. So how do we, as a culture, respond to such a persistent challenge? Like any good politician, it seems we have decided to confront the inevitability of our failure by constantly changing the rules.

Choose your sporting metaphor: we ‘move the goalposts‘, we ‘raise the bar’.

Once upon a time, it was enough for us humans to think of ourselves as the rational animal, the sole species on earth endowed with the capacity for reason. As evidence for reason as the basis for a unique status for humanity crumbled – thanks both to proof that other animals were capable of sophisticated thought and the lack of proof that humans were, in fact, rational – we tried to shift those goalposts. We then transformed ourselves into the symbolic animal, the sole species on earth endowed with the capacity to manipulate signs and represent.

Then we learned that whales, dolphins and all sorts of animals were communicating with each other all the time, even if we weren’t listening. And that’s before we taught chimps how to use sign language (for which Charlton Heston will never thank us).

And then computers arrived to make things even worse. After some early experiments with hulking machines that struggled to add 2 + 2, computers soon progressed to leave us in their wake. Computers can clearly think more accurately, and faster, than any human being. And they can solve complex mathematical equations, demonstrating that they are pretty adept with symbols.

Ah, BUT…

Humans could find some solace in the comforting thought that computers were good at some things, yes, but they weren’t so smart. Not really. A computer would never beat a human being at chess, for example. Until May 1997, when chess champion Garry Kasparov lost to IBM’s Deep Blue. But that was always going to happen. A computer could never, we consoled ourselves, win at a game that required linguistic dexterity. Until 2011, when IBM’s Watson beat Ken Jennings and Brad Rutter at Jeopardy!, the hit US game show. And now, Google’s DeepMind has conquered all, winning the hardest game we can imagine…

So what is interesting about DeepMind’s victory is how human beings have responded – again – to the challenges to our self-conception posed by robots and AI. Because if we were under any illusion that we were special, alone among the gods’ creations as a thinking animal, or a symbolising animal, or a playing animal, that status has been usurped by our own progeny, again and again, in that all-too-familiar Greek-Frankenstein-Freudian way.

Animal rationabile had to give way to animal symbolicum, who in turn gave way to animal ludens… what’s left now for poor, biologically-limited humanity?

A glimpse of our answer to this latest provocation can be seen in Star Trek: The Next Generation: Lieutenant Commander Data is a self-aware android with cognitive abilities far beyond those of any human being. And yet, despite these tremendous capabilities, Data is always regarded – by himself and all the humans around him – as tragically, inevitably, inferior, as less than human. Despite the lessons in Shakespeare and sermons on human romantic ideals from his mentor, the ship’s captain Jean-Luc Picard, Data is doomed to be forever inferior to humans.

It seems that now that AI can think and solve problems as well as humans, we’ve raised the bar again, changing the definition of ‘human’ to preserve our unique, privileged status.

We might now be animal permotionem – the emotional animal – except that, while that would be fine for distinguishing between us and robots, at least until we upload the elusive ‘consciousness.dat’ file (as in Neill Blomkamp’s recent film, Chappie), this new moniker won’t help us remain distinct from the rest of the animals. To be an emotional animal, to be a creature ruled by impulse and feeling, is… just to be an animal, according to all of our previous definitions. (We’ve sort of painted ourselves into a corner with that one.)

We might find some refuge, then, following Gene Roddenberry’s example, in the notion of humans as the unique animal artis, the animals that create, or engage in artistic work.

(The clever among you will have realised some time ago that I’m no classical scholar and that my attempts to feign Latin fell apart some time ago. Artis seems to imply something more akin to ‘skill’, which robots could arguably have already achieved; ars simply means ‘technique’ or ‘science’. Neither really captures what I’m trying to get at; suggestions are more than welcome below, please.)

The idea that human beings are defined by a particular creative impulse is not terribly new; attempts to redefine ‘the human’ along these lines have been evident since the latter half of the twentieth century. If we flip back one hundred years, for example, we find Freud defining human beings (civilised human beings, of course, we should clarify) as uniquely able to follow rules. But by the late 1960s, Freud’s descendants, such as the British psychoanalyst D. W. Winnicott, were arguing almost the exact opposite – that what makes us human is creativity, the ability to participate fully in our being in an engaged, productive way. (I will doubtless continue this thought in a later post, as psychoanalysis is a theoretical model very close to my heart.)

What’s a poor AI to do? It was once enough for an artificial intelligence to be deemed sufficiently impressive, maybe even ‘human’, if it could prove capable of reason, or symbolic representation, or win at chess, or Jeopardy!, or Go. Now, we expect nothing less than Laurence Olivier, Lord Byron and Jackson Pollock, all in one.

(How far away is AI under this measure? Is this any good? Or this? Maybe this?)

This reminds me of Chris Columbus’s 1999 film Bicentennial Man (based, of course, on a story by Isaac Asimov). Robin Williams’s Andrew Martin begins his… ‘life’, for lack of a better word… as a simple robot, who over the decades becomes more and more like a human – he becomes sentient, he demonstrates artistic skill, he learns to feel genuine emotion, and so on. At each stage, it seems, he hopes that he will be recognised as being at least on par with humans. No, he’s told at first, you’re not sentient. Then, when he’s sentient, he’s told he cannot feel. Then he’s told he cannot love. No achievement, it seems, is enough.

Even once he has achieved just about everything, and become like a human in every respect – or perhaps even ‘superhuman’ – he is told that it is too much, that he has to be less than he is. In almost a complete reversal of the Aristotelian notion of the thinking, superior animal, Andrew is told that he has to make mistakes. He is too perfect. He cannot be homo sapiens – he needs to be homo errat – the man that screws up. To err is human, or perhaps in this case, to err defines the human. (Though artificial intelligence will not be long in catching on to this as well, as suggested in another of Asimov’s stories.)

It is not until Andrew is on his deathbed and is drawing his very last breaths that the Speaker of the World Congress declares, finally, that the world will recognise Andrew as a human.

And perhaps this will be the final line; this is perhaps the one definition of human that will endure and see out every single challenge posed by robots and artificial intelligence, no matter the level of technological progress, and regardless of how far artificial life leaves human beings behind: we will be homo mortuum. The rational animal that can die.

If Singularity enthusiasts and doomsayers alike are to be believed, this inevitable self-conception is not long off. Though perhaps humans’ greatest strength – the ability to adapt, and the talent to re-invent ourselves – might mean that there’s some life in the old species yet. Regardless, it will serve us well to develop conceptions of both ourselves and of artificial life forms that demarcate the boundaries, and to decide when these boundaries might be crossed, and what the implications of crossing that line will be.

Engaging Robots

On Monday, 2 March, we at Sheffield Robotics had the chance to brag about some of the many activities we do to bring robots to the people – known in the modern institution as ‘public engagement’ – at the University of Sheffield’s Public Engagement Symposium.

Our talk was presented by Ana MacIntosh and Michael Szollosy, who took the opportunity to highlight some of Sheffield’s many activities where we take robots out of the lab and out into the world. We talked about our recent demonstration of swarm robots at the Science Museum in London, our New Age of Robotics lecture series at the University’s Festival of the Mind (which is now available on iTunes), school visits, our associations with MIRO (watch this space) and the Sheffield Showroom cinema, and a whole bunch of other activities, including this very blog (and twitter feed) which, after all, is dedicated to the critical review and analysis of robots as they exist in the public imagination.

It was a terrific day, listening to the various speakers and discussing how other departments at the University approach the challenges and opportunities of public engagement. One thing that was reinforced for us was that to do public engagement you must make certain you engage the public, meaning that you must have ideas and research that people are genuinely interested in, and that you have to take your ideas and research to places that people want to go, such as the Festival of the Mind and its excellent Spiegeltent (pictured here) in downtown Sheffield. So, many thanks are due to the entire public engagement team at the University for an enjoyable, educational symposium, and particularly to Professor Vanessa Toulmin and Greg Oldfield, who so brilliantly lead and support public engagement activities throughout the University.

And on Friday later that week (6 March), we had the opportunity to see some further public engagement first-hand, as Dr. Roderich Gross offered a lecture on swarm robotics at the Sheffield Festival of Science and Engineering. What was terrific about that talk – other than the exciting news about developments in swarm robotics (more about which another time) – was that it was attended by such a tremendous range of people; young and old, men and women, all from a diverse range of backgrounds.

In other words, ‘the public’.

And continuing our involvement with the Sheffield Festival of Science and Engineering, Sheffield Robotics is taking part in Discovery Night 2015, on Friday, 13 March, where at Firth Court we will offer a Robot Foundry, displaying our range of robots for all ages to enjoy and learn from. It’s all completely free, and completely open to the public. If you are in and around the Sheffield area, please come down and visit us!

New report on public attitudes to robotics

Very exciting news for all here at Dreaming Robots — and especially so with our workshop on Societal Impacts of Living Machines at the forthcoming Living Machines 2013 conference in London — is the recent publication of this report from Sciencewise on what the public thinks of robotics and autonomous systems.

The Executive Summary reads thus:

Robotics and Autonomous Systems (RAS) were identified as a key growth area by Chancellor George Osborne in his speech to the Royal Society in 2012 and confirmed by David Willetts in the report Eight Great Technologies. Public opinion towards RAS tends to be broadly optimistic, stating that these technologies are good for society and could solve problems but there are many issues associated with them (such as requiring careful management or having the potential to impact on employment). Some issues prove to be particularly controversial such as the use of robots in warfare or for the care of children or the elderly.

We’ll be poring over the report in the next few days, so stay tuned to Dreaming Robots for more information and analysis.