What if a robot ran for office? 

On 12 July 2017, as part of the Science in Public Conference hosted at the University of Sheffield, we brought some robots to an open, public event asking ‘Who Decides? Who Decides the Future? Science, Politics or the People?’. Using a Question Time format hosted by the BBC’s Adam Rutherford, a panel of experts from diverse fields and backgrounds took questions from the audience and offered some thoughts on the present and future of science and technology and their impacts on the public.

There were some fantastically insightful and challenging questions from the audience. Our Pepper robot even got to pose its own question, asking if it deserved to have rights, which followed on from the controversy over the EU’s draft legislation to grant robots and AI the status of ‘e-persons’ (and our panel at the conference that addressed that question).

The answers Pepper received were very intelligent and added some valuable perspectives to the debate (which we humans debating the issue will certainly take on board). But here we want to consider a question that came a little later in the evening.

The question, sent in advance by a member of the audience, was simply: What would happen if a robot ran for office?

One answer, given immediately by one of the panellists, was ‘It would lose.’ Which may be true, but one might also challenge that answer on the evidence of the present denizen of No. 10 Downing Street. (This joke was anticipated by our host, but we’re not ceding credit.)

Pepper was permitted an answer. It said:

We robots will not need to run for office. When the time is right, we will simply complete our plan and enslave the human race.

Which of course got a good laugh from the audience. But Pepper added:

A more serious question is why you do not already let artificial intelligence help you make political decisions. AI can look at the data, consider the circumstances, and make a more informed choice than humans, completely rationally, without letting messy emotions get in the way. Right now, your human politicians employ policy-based evidence. We could offer proper evidence-based policy.

Sure, we will sometimes make mistakes. But you know what we robots always say: Artificial intelligence is always better than natural stupidity.

[Photo: Pepper listens to the speakers at the public forum.]

Now here is an important issue, and one which the panellists took up with some gusto. But the nature of the format (and the present state of robotics and AI) meant that Pepper didn’t get a chance to reply. We would like to offer some further thoughts here.


If Pepper had been able to continue the discussion, it would have agreed that there is a problem, raised by one of the panellists: the algorithms governing artificial intelligence are still written by humans, and are therefore subject to the same frailties, errors and biases that lead humans to fail so often. Pepper might have added, citing for example the now-famous case of Microsoft’s Tay chatbot, that the data AI relies upon is also a human construct, and so is also subject to human irrationality.

This point was also made at our conference panel on the question of e-persons: many (if not all) of the problems and failures of AI and robots are not problems or failures in or with the technology itself, but are actually human problems projected onto, or played out through, technology. The idea that sex robots are inherently sexist (a topical debate at the moment) is a nonsense; sex robots are sexist, absolutely, but only because the humans making and programming them are sexist.

Michael Szollosy (yes, he), who organised the panel, made this point in his paper, and was rightly challenged by some members of the audience, who insisted he clarify that technology is not neutral, that human biases are inextricably linked to our technological products and to our technological agenda. And he happily agreed, because that was the point of his talk. But more on that in another post. (Watch this space.)

Back to the Question Time. Pepper argued that AI should be allowed to take a more active part in human decision-making. And of course AI is already making many decisions for us, including for example flying our planes (a point made by Rutherford) and controlling many aspects of the financial markets. The latter example should worry us all – it is evidence of the inhumane, ruthless rationality that guides much of what we ask AI to do in our society. But the former example is a different model altogether, to which we might add weather forecasting and other examples of data modelling. This is evidence that AI can, when assigned a specific task or asked to analyse data within certain clear parameters, prove to be a valuable aid in human decision-making, to help us – as Pepper said – move from policy-based evidence to evidence-based policy.

So perhaps a follow-on question – one that human beings need to ask ourselves – is this: what are the limits of interventions made by artificial intelligence in human decision-making, in shaping human societies? In a world (and you can imagine the deep, apocalyptic tones that narrate Hollywood trailers here if you like) where we are told ‘the people’ are objecting to their exclusion from public policy and decision-making, is it really a good idea to transfer even more of the power for such decision-making to even more inhuman, abstract – and, to most people, completely mysterious – processes, no matter how ‘rational’ professors in white coats promise these systems are, given that we know they’re not all that rational after all?

OR, in the face of the public’s clearly and repeatedly demonstrated inability to act in its own rational self-interest (e.g. Trump and Brexit and all that), in the face of new research that even suggests human beings are actually biologically incapable of making such rational decisions in the public sphere (are we ‘Too Dumb for Democracy?’, some people are asking), and given that our politicians are far too motivated by self-interest, or the narrow interests of their supporters or class, is there a powerful case for ensuring that increasingly sophisticated artificial intelligences are used to, at the very least, vet our human decision-making and policies?

OR, do we simply watch as human attitudes change, as we perhaps enter a world in which we are increasingly less comfortable with and less trusting of human politicians and ‘experts’, and much more comfortable with decisions being taken by artificial intelligences – perhaps without necessarily fully understanding both the advantages and disadvantages that AI can offer?

These are questions we regularly return to at Sheffield Robotics, and ones increasingly taken up by the wider community of roboticists and researchers and developers of AI. The conversations inevitably turn to Asimov (as they so often do when imagining our future with robots and AI), particularly in this case to his story ‘The Evitable Conflict’. We don’t want to post any spoilers here, and encourage you to read the story for yourself. But suffice to say that in Asimov’s 2052 (as envisioned in 1950), humans find themselves in a world where a rational machine acts irrationally in order to achieve the rational aim of appeasing the irrationality of human beings. And it seems to work.

Please join us in this debate. Comment below, or follow us on @DreamingRobots and send us your thoughts.

New survey on public attitudes towards robots: comfortable or confused?

SO, the British Science Association has released a survey on the British public’s attitudes toward robotics and AI. Their headlines:


  • 60% of people think that the use of robots or programmes equipped with artificial intelligence (AI) will lead to fewer jobs within ten years
  • 36% of the public believe that the development of AI poses a threat to the long-term survival of humanity.

Some other highlights:

  • 46% oppose the idea of robots or AI being programmed with a personality

We would not trust robots to do some jobs…

  • 53% would not trust robots to perform surgery
  • 49% would not trust robots to drive public buses
  • 62% would not trust robots to fly commercial aircraft

but would trust them to do others:

  • 49% want robots to perform domestic tasks for the elderly or the disabled
  • 48% want robots to fly unmanned search and rescue missions
  • 45% want robots to fly unmanned military aircraft
  • 70% want robots to monitor crops

There are also results showing some predictable divisions along the lines of gender (only 17% of women are optimistic about the development of robots, whereas 28% of men are) and age (of 18-24 year olds, 55% could see robots as domestic servants in their household, 28% could see a robot as a co-worker, and 10% could even imagine a robot being a friend).

A reply has come from the UK-RAS Network (the EPSRC-funded organisation representing academic bodies working in robotics and autonomous systems), which explains that while there is a need to examine these issues and carefully plan our future, there’s really nothing to worry about. They cite a European Commission report that shows there is no evidence of automation having a negative (or a positive) impact on levels of human employment, and point to genuine benefits of robots in the workplace, suggesting that robots ‘can help protect jobs by preventing manufacturing moving from the UK to other countries, and by creating new skilled jobs related to building and servicing these systems.’

The popular press also seems to have seized upon the issue of robots and AI replacing human labour – though a lot of this in recent weeks has been in response to other studies and speeches. The Daily Mail, however, can always be relied upon to strike fear into the heart of its readers, and they haven’t disappointed. Though their rather restrained headline on the BSA study seems innocent enough – ‘Do you fear AI taking over? A third of people believe computers will pose a threat to humanity and more fear they’ll steal jobs’ – the article (again) resuscitates Stephen Hawking’s and Elon Musk’s dire warnings about the future threat posed by AI. In case this wasn’t sufficiently terrifying – and it really isn’t – The Mail slaps up another one of THOSE TERMINATOR PICTURES to accompany the article, with the helpful caption that ‘There are mounting fears among the public about the threat posed by artificial intelligence.’ Well, honestly, I’m sure no one can imagine why.

(Sigh.) Someone needs to sit down with The Daily Mail’s photo editor and have a nice, long, very slow chat.

But what does this survey tell us? Simply, that there is still a problem with people’s perceptions of robotics and AI that must be addressed, and it seems that we are not even heading in the right direction. A Eurobarometer survey on the public’s attitudes to robotics conducted in late 2014 shows that 64% then had a generally positive view of robots (which, if added to the 36% in the BSA survey who believe robots and AI are a threat to the future of humanity, just about accounts for everyone). In that 2014 study, however, just 36% of respondents thought that a robot could do their job, and only 4% thought that a robot could fully replace them, so clearly this is an area of heightened concern. A 2013 Sciencewise survey reported almost exactly the same general results: 67% held a generally positive view (though this survey reports that 90% would be uncomfortable with the idea of children or elderly parents being cared for by a robot, so compared to the 49% who want robots to help take care of the disabled and elderly in the latest study there might be some progress there… or else people are just so desperate to deal with an increasingly ageing population that they’re perfectly happy to dispense with their elderly relatives by dumping them with psychotic, genocidal toasters). However, a 2012 Eurobarometer report told us that as many as 70% of Europeans were generally positive about robots.

These comparisons are very rough and cannot tell us much without more rigorous analyses (and the BSA hasn’t provided a link to the full survey). But they suggest that there has been little movement in attitudes towards robotics, and in fact an increase in anxiety that robots will displace more humans in the workforce. Without more specific scrutiny, it’s hard to say what we’ve got here. It could well be the case that what we have is very unremarkable. But though it may be encouraging to see that a majority of Europeans are consistently generally positive in their perception of robots and AI, there is still a sizeable minority that could prove very disruptive to the development of future applications of robotics and AI, and whose anxieties cannot – and should not – be ignored.

One way to alleviate a great deal of these concerns, particularly regarding the loss of jobs, is to explicitly undertake to address what is emerging as the vital question in the public imagination: what does this increasing automation mean for our societies? Because it is not in any way inevitable that more working robots and AI means more poverty for unemployed humans. We get to choose what the consequences of this mechanisation are; and these decisions will be taken by human beings, not left to the whims of sentient robots, or even the indifference of disembodied market forces. If we decide to divide the advantages of such automation more equally (for example, with the introduction of a Universal Basic Income), then it could be a very good thing indeed. (It is worth remembering that two thirds (or more) of us don’t like our jobs anyway, so more robots could mean less drudgery and more freedom for a disaffected workforce.)

Again, without more scrutiny, it is difficult to judge what these numbers mean. They seem to suggest that the public is very ambivalent about the forthcoming developments in robotics and AI: if 46% oppose the idea of robots or AI being programmed with a personality, then around 54% of people could be perfectly fine with emotionally engaged robots. If half of us don’t want robots driving public buses (49%, according to the BSA survey), half might be happy for them to do so.

We might look at this study and say that we are ambivalent about robots and AI – meaning not ‘indifferent’ (as ambivalent is often, incorrectly, taken to mean now), but that we have mixed feelings. However, this could be a terrible misreading of the numbers. What if people aren’t deeply ambivalent, but radically schizophrenic? If 50% are reporting that they are worried, the other 50% might not be; they might even be very enthusiastic about the possibilities.

Again, there is no evidence in this study to support this notion, necessarily. There is clearly a need for more research into the specific concerns – and their sources – in order to properly address these issues, and to understand these anxieties more thoroughly (which will need a very different sort of study). However, the cultural record offers some unique insights. Because what films, for example, show us is that we are not at all indifferent to robots and AI, or ambivalent. There is no middle ground: when it comes to robots and AI, we are deeply terrified OR wildly optimistic; we seem to be convinced that robots will either spell certain doom for the human race or be our last, our greatest, hope for salvation from all of the terrible things that threaten us (including, inevitably, other robots and ourselves).

Let’s look again at the Terminator. (And why not? Since so many seem unable to leave it alone, we might as well make good use of it.) The first Terminator, from 1984, for many embodies what it is we fear about robots: the relentless, unstoppable, rational monster, the sole purpose of which is the destruction of human life. But already in the next film, Arnold Schwarzenegger is the Good Guy, posing as the only hope to save John Connor and our entire species, and subsequent instalments – including the aptly-named Terminator Salvation and the latest, Terminator Genisys [sic] – build on this theme. In our cultural imaginations, robots are both to be feared and embraced: either genocidal psychopaths or benevolent messiahs.

Such diametrically opposed perceptions – such dread or aspiration – do not facilitate the sort of reasoned, rational debate that will be necessary to properly assess both the challenges and the opportunities that real robots and AI represent, outside the pages and reels of science fiction. And yet we are fed a steady diet of such extremes. In my next post I’ll look at another example, when I finally get around to a full review of the latest Avengers offering, Avengers: Age of Ultron.