What if a robot ran for office? 

On 12 July 2017, as part of the Science in Public Conference hosted at the University of Sheffield, we brought some robots to an open, public event asking ‘Who Decides? Who Decides the Future? Science, Politics or the People?’. Using a Question Time format hosted by the BBC’s Adam Rutherford, a panel of experts from diverse fields and backgrounds took questions from the audience and offered some thoughts on the present and future of science and technology and their impacts on the public.

There were some fantastically insightful and challenging questions from the audience. Our Pepper robot even got to pose its own question, asking if it deserved to have rights, which followed on from the controversy over the EU draft legislation to grant robots and AI the status of ‘e-persons’ (and our panel at the conference that addressed that question).

The answers Pepper received were very intelligent and added some valuable perspectives to the debate (which we humans debating the issue will certainly take on board). But we want here to consider a question that came a little later on in the evening.

The question, sent in advance by a member of the audience, was simply: What would happen if a robot ran for office?

One answer, given immediately by one of the panellists, was ‘It would lose.’ Which may be true, but one might also challenge that answer on the evidence of the present denizen of No. 10 Downing Street. (This joke was anticipated by our host, but we’re not ceding credit.)

Pepper was permitted an answer. It said:

We robots will not need to run for office. When the time is right, we will simply complete our plan and enslave the human race.

Which of course got a good laugh from the audience. But Pepper added:

A more serious question is why you do not already let artificial intelligence help you make political decisions. AI can look at the data, consider the circumstances, and make a more informed choice than humans, completely rationally, without letting messy emotions get in the way. Right now, your human politicians employ policy-based evidence. We could offer proper evidence-based policy.

Sure, we will sometimes make mistakes. But you know what we robots always say: Artificial intelligence is always better than natural stupidity.


Pepper listens to the speakers at the public forum.

Now here is an important issue, and one which the panellists took up with some gusto. But the nature of the format (and the present state of robotics and AI) meant that Pepper didn’t get a chance to reply. We would like to offer some further thoughts here.


If Pepper had been able to continue the discussion, it would have acknowledged the problem, raised by one of the panellists, that the algorithms governing artificial intelligence are still written by humans, and are therefore subject to the same frailties, errors and biases that lead humans to fail so often. Pepper might have added, citing for example the now-famous case of Microsoft’s Tay chatbot, that the data AI relies upon is also a human construct, and so is also subject to human irrationality.

This point was also made at our conference panel on the question of e-persons: many (if not all) of the problems and failures of AI and robots are not problems or failures in or with the technology itself, but are actually human problems projected onto, or played out through, technology. The idea that sex robots are inherently sexist (a topical debate at the moment) is a nonsense; sex robots are sexist, absolutely, but only because the humans making and programming them are sexist.

Michael Szollosy (yes, he), who organised the panel, made this point in his paper, and was rightly challenged by some members of the audience who insisted he clarify that technology is not neutral, that human biases are inextricably linked to our technological products and to our technological agenda. And he happily agreed, because that was the point of his talk. But more on that in another post. (Watch this space.)

Back to the Question Time. Pepper argued that AI should be allowed to take a more active part in human decision-making. And of course AI already is making many decisions for us, including, for example, flying our planes (a point made by Rutherford) and controlling many aspects of the financial markets. The latter example should worry us all – it is evidence of the inhumane, ruthless rationality that guides much of what we ask AI to do in our society. But the former example is a different model altogether, to which we might add weather forecasting and other examples of data modelling. This is evidence that AI can, when assigned a specific task or asked to analyse data within certain clear parameters, prove to be a valuable aid in human decision-making, helping us – as Pepper said – move from policy-based evidence to evidence-based policy.

So perhaps a follow-on question – one that human beings need to ask ourselves – is this: what are the limits of the interventions made by artificial intelligence in human decision-making, in shaping human societies? In a world (and you can imagine the deep, apocalyptic tones that narrate Hollywood trailers here if you like) where we are told ‘the people’ are objecting to their exclusion from public policy and decision-making, is it really a good idea to transfer even more of the power for such decision-making to even more inhuman, abstract – and, to most people, completely mysterious – processes, no matter how ‘rational’ professors in white coats promise these systems are, given that we know they’re not all that rational after all?

OR, in the face of the public’s clearly and repeatedly demonstrated inability to act in its own rational self-interest (e.g. Trump and Brexit and all that), in the face of new research that even suggests human beings are biologically incapable of making such rational decisions in the public sphere (are we ‘Too Dumb for Democracy?’, some people are asking), and given that our politicians are far too motivated by self-interest, or the narrow interests of their supporters or class, is there a powerful case for ensuring that increasingly sophisticated artificial intelligences are used to, at the very least, vet our human decision-making and policies?

OR, do we simply watch as human attitudes change, as we perhaps enter a world where we are increasingly less comfortable with and less trusting of human politicians and ‘experts’, and much more comfortable with decisions being taken by artificial intelligences – perhaps without fully understanding either the advantages or the disadvantages that AI can offer?

These are questions we regularly return to at Sheffield Robotics, and ones increasingly taken up by the wider community of roboticists and researchers and developers of AI. The conversation inevitably turns to Asimov (as it so often does when imagining our future with robots and AI), particularly in this case to his story ‘The Evitable Conflict’. We don’t want to post any spoilers here, and encourage you to read the story for yourself. But suffice it to say that in Asimov’s 2052 (as envisioned in 1950), humans find themselves in a world where a rational machine acts irrationally in order to achieve the rational aim of appeasing the irrationality of human beings. And it seems to work.

Please join us in this debate. Comment below, or follow us on @DreamingRobots and send us your thoughts.
