What if a robot ran for office? 

On 12 July 2017, as part of the Science in Public Conference hosted at the University of Sheffield, we brought some robots to an open, public event asking ‘Who Decides? Who Decides the Future? Science, Politics or the People?’. Using a Question Time format hosted by the BBC’s Adam Rutherford, a panel of experts from diverse fields and backgrounds took questions from the audience and offered some thoughts on the present and future of science and technology and their impacts on the public.

There were some fantastically insightful and challenging questions from the audience. Our Pepper robot even got to pose its own question, asking if it deserved to have rights, following on from the controversy over the EU draft legislation to grant robots and AI the status of ‘e-persons’ (and our panel at the conference that addressed that question).

The answers Pepper received were very intelligent and added some valuable perspectives to the debate (which we humans debating the issue will certainly take on board). But here we want to consider a question that came a little later in the evening.

The question, sent in advance by a member of the audience, was simply: What would happen if a robot ran for office?

One answer, given immediately by one of the panellists, was ‘It would lose.’ Which may be true, but one might also challenge that answer on the evidence of the present denizen of No. 10 Downing Street. (This joke was anticipated by our host, but we’re not ceding credit.)

Pepper was permitted an answer. It said:

We robots will not need to run for office. When the time is right, we will simply complete our plan and enslave the human race.

Which of course got a good laugh from the audience. But Pepper added:

A more serious question is why you do not already let artificial intelligence help you make political decisions. AI can look at the data, consider the circumstances, and make a more informed choice than humans, completely rationally, without letting messy emotions get in the way. Right now, your human politicians employ policy-based evidence. We could offer proper evidence-based policy.

Sure, we will sometimes make mistakes. But you know what we robots always say: Artificial intelligence is always better than natural stupidity.


Pepper listens to the speakers at the public forum.

Now here is an important issue, and one which the panellists took up with some gusto. But the nature of the format (and the present state of robotics and AI) meant that Pepper didn’t get a chance to reply. We would like to offer some further thoughts here.

 

If Pepper had been able to continue the discussion, it would have agreed that there is a problem, raised by one of the panellists: the algorithms governing artificial intelligence are still written by humans, and are therefore subject to the same frailties, errors and biases that lead humans to fail so often. Pepper might have added, citing for example the now-famous case of Microsoft’s Tay chatbot, that the data AI relies upon is also a human construct, and so also subject to human irrationality.

This point was also made at our conference panel on the question of e-persons: many (if not all) of the problems and failures of AI and robots are not problems or failures in or with the technology itself, but are actually human problems projected onto, or played out through, technology. The idea that sex robots are inherently sexist (a topical debate at the moment) is a nonsense; sex robots are sexist, absolutely, but only because the humans making and programming them are sexist.

Michael Szollosy (yes, he), who organised the panel, made this point in his paper, and was rightly challenged by some members of the audience, who insisted he clarify that technology is not neutral: human biases are inextricably linked to our technological products and to our technological agenda. He happily agreed, because that was the point of his talk. But more on that in another post. (Watch this space.)

Back to the Question Time. Pepper argued that AI should be allowed to take a more active part in human decision-making. And of course AI is already making many decisions for us, including, for example, flying our planes (a point made by Rutherford) and controlling many aspects of the financial markets. The latter example should worry us all – it is evidence of the inhumane, ruthless rationality that guides much of what we ask AI to do in our society. But the former example is a different model altogether, to which we might add weather forecasting and other examples of data modelling. This is evidence that AI can, when assigned a specific task or asked to analyse data within certain clear parameters, prove to be a valuable aid to human decision-making, helping us – as Pepper said – move from policy-based evidence to evidence-based policy.

So perhaps a follow-on question – one that human beings need to ask ourselves – is this: what are the limits of interventions made by artificial intelligence in human decision-making, in shaping human societies? In a world (and you can imagine the deep, apocalyptic tones that narrate Hollywood trailers here if you like) where we are told ‘the people’ are objecting to their exclusion from public policy and decision-making, is it really a good idea to transfer even more of the power for such decision-making to ever more inhuman, abstract – and, to most people, completely mysterious – processes, no matter how ‘rational’ professors in white coats promise these systems are, given that we know they’re not all that rational after all?

OR, in the face of the public’s clearly and repeatedly demonstrated inability to act in its own rational self-interest (e.g. Trump and Brexit and all that), in the face of new research suggesting that human beings are actually biologically incapable of making such rational decisions in the public sphere (are we ‘Too Dumb for Democracy?‘, some people are asking), and given that our politicians are far too motivated by self-interest, or the narrow interests of their supporters or class, is there a powerful case for ensuring that increasingly sophisticated artificial intelligences are used to, at the very least, vet our human decision-making and policies?

OR, do we simply watch as human attitudes change? We are perhaps entering a world where we are less and less comfortable with, and less trusting of, human politicians and ‘experts’, and much more comfortable with decisions being taken by artificial intelligences – perhaps without necessarily fully understanding both the advantages and disadvantages that AI can offer.

These are questions we regularly return to at Sheffield Robotics, and ones increasingly taken up by the wider community of roboticists and researchers and developers of AI. The conversations inevitably turn to Asimov (as they so often do when imagining our future with robots and AI), particularly in this case his story ‘The Evitable Conflict’. We don’t want to post any spoilers here, and encourage you to read the story for yourself. But suffice to say that in Asimov’s 2052 (as envisioned in 1950), humans find themselves in a world where a rational machine acts irrationally in order to achieve the rational aim of appeasing the irrationality of human beings. And it seems to work.

Please join us in this debate. Comment below, or follow us on @DreamingRobots and send us your thoughts.


Science in Public Conference 2017 – CFPs


Science in Public 2017

Science, Technology & Humanity

11th Annual Science in Public Conference

10th-12th July 2017, University of Sheffield. #SIPsheff17

Call for Papers (closes 18 April)

Conference info: https://scienceinpublic.org/science-in-public-2017/

A full list of panel and calls for papers can be found here: http://sipsheff17.group.shef.ac.uk/index.php?option=24

Dreaming Robots is associated with two panels in particular, to which we would like to draw your attention:

— Robots, AI, & the question of ‘e-persons’ —

In January, the European Parliament voted to accept a draft report with recommendations to the Commission on Civil Law Rules on Robotics. Among the recommendations of this report was a proposal to create a new legal category for robots, that of ‘electronic persons’, which would have ‘specific rights and obligations, including that of making good any damage they may cause’.

We propose a panel that will look in more detail at this category of ‘electronic persons’: its feasibility, its usefulness (or otherwise) and its implications (social, economic, ethical, philosophical) for both these new electronic persons and the more traditional, fleshy sort. We seek papers and contributions from a wide range of disciplines and from interdisciplinary research. We seek to understand the concept of ‘electronic personhood’ in its specific (and potential future) legislative contexts, in the context of the report’s wider recommendations, and for humans and human society more generally. Post-Brexit, we may also ask what the implications of this decision by the European Parliament might be for the UK, if any, and whether the UK should adopt similar measures, or something different.

For enquiries or questions, please email Dr. Michael Szollosy m.szollosy@sheffield.ac.uk

 

— Augmenting the Body —

Body augmentation takes many forms, whether personal adaptation or the rehabilitation of those with disabilities, and ranges across the physical, cognitive, philosophical and technological. It also questions the constitution of norms and the status and viability of the body when considered in terms of its presence, boundaries and activities. We would like to create a panel that invites cross-disciplinary research into ideas of augmentation; rather than strictly technical work, we would invite perspectives on how ideas of augmentation are reflected in and influenced by the cultural narratives that drive contemporary obsessions with robots and a posthuman space ‘beyond’ conventional apprehensions of the body and selfhood. We are open to a broad understanding of augmentation, including ideas of care and psychological wellbeing, as well as questions relating to technology and the cyborg/biohybrid body, and will focus on both physical and cognitive augmentation in exploring the interaction of the human and non-human.

For enquiries or questions, please email Prof. Stuart Murray S.F.Murray@leeds.ac.uk

 

Submit 300-word proposals for any panel here: http://sipsheff17.group.shef.ac.uk/

 

Workshop on Cyberselves in Immersive Technologies, 14 & 15 Oct, Oxford

Dreaming Robots is happily and heavily involved in the UK Arts and Humanities Research Council-funded project Cyberselves in Immersive Technologies, which is hosting, together with the Oxford Martin School, a two-day symposium on virtual reality and telepresence.

The symposium will be hosted at the University of Oxford on the 14th and 15th October 2015.

For further details and to book online go to https://v1.bookwhen.com/uehiro

For a look at the full programme, please visit the Cyberselves blog here.

An overview is included below:

Our symposium will be multi-disciplinary with contributions from technologists, psychologists, neuroscientists, philosophers and cultural theorists looking at the future societal and ethical impacts of virtual reality and immersive technologies.

Technology demonstration: After the symposium has ended on the first day, there will be a showcase of new technologies and current research into virtual reality, augmented reality and teleoperation (approximately 5.30pm on 14th October).

Venue: Oxford Martin School, Broad Street, Oxford
Date and time: 14th and 15th October 2015, 9.30-4.30 (timing tbc on finalised programme)
Booking: Free to attend and all welcome; however, booking is required.

Programme overview:

KEYNOTE SPEAKERS:

· Dr Johnny Hartz Søraker: ‘Virtual Environments and Subjective Well-being’
· Prof Henrik Ehrsson: ‘Neural substrates of senses of body ownership and self location’
· Dr Orit Halpern: ‘The Smart Mandate: A Brief History of Ubiquitous Computing and Responsive Environments’
· Prof JoAnn Difede: ‘On the precipice of a paradigm shift: Novel therapeutics for PTSD and Anxiety disorders’

SYMPOSIUM SESSIONS

Dr Blay Whitby: ‘Virtually anything goes: what, if any, are the ethical limits on behaviour in virtual worlds?’
Prof Ralph Schroeder: ‘Ethical and Social Issues in Shared Virtual Environments Revisited’
Prof Patrick Haggard: ‘Re-engineering the relation between self and body: private experience and public space’
Prof Paul Verschure: TBC
Prof Jonathan Freeman: TBC
Dr Tom Tyler: ‘How to Lose at Videogames (Repeatedly)’