What if a robot ran for office? 

On 12 July 2017, as part of the Science in Public Conference hosted at the University of Sheffield, we brought some robots to an open, public event asking ‘Who Decides? Who Decides the Future? Science, Politics or the People?’. Using a Question Time format hosted by the BBC’s Adam Rutherford, a panel of experts from diverse fields and backgrounds took questions from the audience and offered some thoughts on the present and future of science and technology and their impacts on the public.

There were some fantastically insightful and challenging questions from the audience. Our Pepper robot even got to pose its own question, asking if it deserved to have rights, which followed on from the controversy over the EU draft legislation to grant robots and AI the status of ‘e-persons’ (and our panel at the conference that addressed that question).

The answers Pepper received were very intelligent and added some valuable perspectives to the debate (which we humans debating the issue will certainly take on board). But we want here to consider a question that came a little later on in the evening.

The question, sent in advance by a member of the audience, was simply: What would happen if a robot ran for office?

One answer, given immediately by one of the panellists, was ‘It would lose.’ Which may be true, but one might also challenge that answer on the evidence of the present denizen of No. 10 Downing Street. (This joke was anticipated by our host, but we’re not ceding credit.)

Pepper was permitted an answer. It said:

We robots will not need to run for office. When the time is right, we will simply complete our plan and enslave the human race.

Which of course got a good laugh from the audience. But Pepper added:

A more serious question is why you do not already let artificial intelligence help you make political decisions. AI can look at the data, consider the circumstances, and make a more informed choice than humans, completely rationally, without letting messy emotions get in the way. Right now, your human politicians employ policy-based evidence. We could offer proper evidence-based policy.

Sure, we will sometimes make mistakes. But you know what we robots always say: Artificial intelligence is always better than natural stupidity.


Pepper listens to the speakers at the public forum.

Now here is an important issue, and one which the panellists took up with some gusto. But the nature of the format (and the present state of robotics and AI) meant that Pepper didn’t get a chance to reply. We would like to offer some further thoughts here.

 

If Pepper had been able to continue the discussion, it would have agreed that there is the problem, raised by one of the panellists, that the algorithms governing artificial intelligence are still written by humans, and therefore subject to those same frailties, errors and biases that lead humans to fail so often. Pepper might have added, citing for example the now-famous case of Tay, that the data AI relies upon is also a human construct, and so also subject to human irrationality.

This point was also made at our conference panel on the question of e-persons: many (if not all) of the problems and failures of AI and robots are not problems or failures in or with the technology itself, but are actually human problems projected onto, or played out through, technology. The idea that sex robots are inherently sexist (a topical debate at the moment) is a nonsense; sex robots are sexist, absolutely, but only because the humans making and programming them are sexist.

Michael Szollosy (yes, he), who organised the panel, made this point in his paper, and was rightly challenged by some members of the audience who insisted he clarify that technology is not neutral, that human biases are inextricably linked to our technological products and to our technological agenda. And he happily agreed, because that was the point of his talk. But more of that in another post. (Watch this space.)

Back to the Question Time. Pepper argued that AI should be allowed to take a more active part in human decision-making. And of course AI is already making many decisions for us, including for example flying our planes (a point made by Rutherford) and controlling many aspects of the financial markets. The latter example should worry us all – it is evidence of the inhumane, ruthless rationality that guides much of what we ask AI to do in our society. But the former example is a different model altogether, to which we might add weather forecasting and other examples of data modelling. This is evidence that AI can, when assigned a specific task or asked to analyse data within certain clear parameters, prove to be a valuable aid in human decision-making, to help us – as Pepper said – move from policy-based evidence to evidence-based policy.

So perhaps a follow-on question – one that human beings need to ask ourselves – is this: What are the limits of interventions made by artificial intelligence in human decision-making, in shaping human societies? In a world (and you can imagine the deep, apocalyptic tones that narrate Hollywood trailers here if you like) where we are told ‘the people’ are objecting to their exclusion from public policy and decision-making, is it really a good idea to transfer even more of the power for such decision-making to even more inhuman, abstract – and to most people, completely mysterious – processes, no matter how ‘rational’ professors in white coats promise these systems are, given that we know they’re not all that rational after all?

OR, in the face of the public’s clearly and repeatedly demonstrated inability to act in its own rational self-interest (e.g. Trump and Brexit and all that), in the face of new research that even suggests human beings are actually biologically incapable of making such rational decisions in the public sphere (are we ‘Too Dumb for Democracy?‘, some people are asking), and given that our politicians are far too motivated by self-interest, or the narrow interests of their supporters/class, is there a powerful case for ensuring that increasingly sophisticated artificial intelligences are used to at the very least vet our human decision-making and policies?

OR, do we simply watch as human attitudes change, as we enter a world where we are increasingly less comfortable with and less trusting of human politicians and ‘experts’, and much more comfortable with decisions being taken by artificial intelligences – perhaps without necessarily fully understanding both the advantages and disadvantages that AI can offer?

These are the questions we regularly return to at Sheffield Robotics, and that increasingly occupy the wider community of roboticists and researchers and developers of AI. The conversations inevitably turn to Asimov (as they so often do when imagining our future with robots and AI), particularly in this case to his story ‘The Evitable Conflict’. We don’t want to post any spoilers here, and encourage you to read the story for yourself. But suffice to say that in Asimov’s 2052 (as envisioned in 1950), humans find themselves in a world where a rational machine acts irrationally in order to achieve the rational aim of appeasing the irrationality of human beings. And it seems to work.

Please join us in this debate. Comment below, or follow us on @DreamingRobots and send us your thoughts.

Science in Public Conference 2017 – CFPs


Science in Public 2017

Science, Technology & Humanity

11th Annual Science in Public Conference

10th-12th July 2017, University of Sheffield. #SIPsheff17

Call for Papers (closes 18 April)

Conference info: https://scienceinpublic.org/science-in-public-2017/

A full list of panel and calls for papers can be found here: http://sipsheff17.group.shef.ac.uk/index.php?option=24

Dreaming Robots is associated with two panels in particular, to which we would like to draw your attention:

— Robots, AI, & the question of ‘e-persons’ —

In January, the European Parliament voted to accept a draft report with recommendations to the Commission on Civil Law Rules on Robotics. Among the recommendations of this report was a proposal to create a new legal category for robots, that of ‘electronic persons’ that would have ‘specific rights and obligations, including that of making good any damage they may cause’.

We propose a panel that would look in more detail at this category of ‘electronic persons’: the feasibility, the usefulness (or otherwise) and the implications (social, economic, ethical, philosophical) for both these new electronic persons and the more traditional, fleshy sort. We would seek papers and contributions from a wide range of disciplines and from inter-disciplinary research. We would seek to understand the concept of ‘electronic personhood’ in its specific (and potential future) contexts in legislation, in the context of the report’s wider recommendations, and for humans and human society more generally. Post-Brexit, we may also ask what the implications of this decision by the European Parliament might be for the UK, if any, and whether the UK should adopt similar measures, or something different.

For enquiries or questions, please email Dr. Michael Szollosy m.szollosy@sheffield.ac.uk

 

— Augmenting the Body —

Body augmentation takes many forms, whether personal adaptation or the rehabilitation of those with disabilities, and ranges across the physical, cognitive, philosophical and technological. It also questions the constitution of norms and the status and viability of the body when considered in terms of its presence, boundaries and activities. We would like to create a panel that invites cross-disciplinary research into ideas of augmentation; rather than strictly technical work, we would invite perspectives on how ideas of augmentation are reflected in and are influenced by cultural narratives that drive contemporary obsessions with robots and a posthuman space ‘beyond’ conventional apprehensions of the body and selfhood. We are open to a broad understanding of augmentation, including ideas of care and psychological wellbeing, as well as questions relating to technology and the cyborg/biohybrid body, and will focus on both physical and cognitive augmentation in exploring the interaction of the human and non-human.

For enquiries or questions, please email Prof. Stuart Murray S.F.Murray@leeds.ac.uk

 

Submit 300-word proposals for any panel here: http://sipsheff17.group.shef.ac.uk/

 

The Ford Factor: Mad scientists and corporate villains

The following may contain some spoilers up to Episode 5 of Westworld.

SO HBO’s Westworld (on Sky Atlantic here in the UK) is progressing nicely, though even now, five episodes in, it’s probably still a little too early to start speculating about what is going on exactly. However, at the risk of casting wild speculations that hindsight later proves naive, one character particularly interesting me and the Twittersphere is Anthony Hopkins’s Dr. Robert Ford.

I mentioned in my last post on Westworld that Ford’s name is meant to make audiences recall Henry Ford, the twentieth-century industrialist whose name has become synonymous with automated mass production and consumerism. Though Ford did not invent the assembly line, his implementation of the industrial mode of production conjures images of the sort of alienated labour that has been held responsible for the dehumanisation of human beings since the dawn of the Industrial Revolution.

But Henry Ford can also be regarded as a cousin of a particular kind of character we’ve seen repeatedly in fiction and film over the centuries (specifically, the centuries since Western thinkers started exhibiting anxiety about the effects of how we make things – industrialisation – and the accompanying way of thinking – rationalism). Frankenstein, as I’ve said before, is the grandfather; Faust is perhaps the older, more distant relative; Prometheus is their icon.

We’ve come to think of these figures as the archetypal mad scientist: the unhinged narcissist, a victim of his own hubris, who simultaneously wields the clarity of science and rationalism on the one hand and maniacal passion on the other. The mad scientist desires to stand with the gods, to create new life, but inevitably builds a monster that will break free of its creator’s control and return to destroy him and everything he holds dear. [I’ve used the male pronoun intentionally for the mad scientist, because they are invariably male. I would LOVE it if anyone could provide an example of a female mad scientist. Comment below, please.]


Rotwang, from Lang’s Metropolis (1927)

Though Frankenstein is probably the best known of these mad scientists, his monster was biological, not mechanical. But from the very first stories about robots, mad scientists have been portrayed as the crazed geniuses that unleash their creations upon the world. Rotwang, of Fritz Lang’s 1927 Metropolis, created a new template, and can count amongst his descendants Dr. Edward Morbius, Dr. Eldon Tyrell and Nathan Bateman.

But as those last two examples demonstrate, the mad scientist has undergone something of a transformation of late. And we can give some credit for this to none other than Isaac Asimov. Asimov, as we know from his writings – and from his short essay on ‘The Frankenstein Complex’ – was very much against this most persistent of archetypes, the robotic monster; he was also very unhappy with the portrayal of roboticists as mad scientists, and sought to normalise the job so that the public would regard roboticists as just ordinary people with ordinary occupations. Asimov’s robot stories are devoid of villains; they are populated with scientists, engineers and ‘robot psychologists’ all simply going about their business, trying to fix robots that have gone awry. And the robots in Asimov’s stories are shown to be simply ‘malfunctioning’; they are not acting out of any malice caused by newly achieved self-awareness and a subsequent desire to sadistically destroy the human race. Asimov’s robots simply have problems in their programming, problems that have rational explanations and can be addressed using a similar application of reason.

Would that all science fiction writers were as committed to such lofty ideals. Asimov’s stories are unusual because they are very different in structure. These are not traditional stories of conflict, of good guys in white hats battling black-clad evil-plotters. Some might argue that Asimov’s stories, for all their noble intentions, lack something… excitement, maybe… without these usual elements. Regardless, most science fiction writers since have been unable to resist the temptation to put villains and more traditional story-arcs back into their robot narratives.


No mad scientists here.

Now, post-Asimov, in a world full of scientists, we have come (more or less) to accept the nobility of science and of those strange professors who practise its magical arts. The likes of Frankenstein and Rotwang seem to be in short supply – at least, we don’t have anyone more than ‘slightly eccentric’ at our Sheffield Robotics lab. And hardly anyone wears a long white coat. (There’s a lot of plaid, though.)

 

But in such a world, where does a sci-fi writer look when trying to imagine the Baddie? Where can writers find a suitable antagonist against whom the hero can do battle, and at whose final defeat audiences can pump their fists in relief?

The image we might once have had of the isolated, mad genius working (virtually) alone in a dungeon converted into a laboratory (a very intriguing transformation in its own right) no longer suffices for Tyrell and Bateman and their like. The modern scientists are much happier in clean, ultra-modern research facilities or skyscrapers. These new scientists are not aided by a sole hunchback named ‘Igor’ but are rather backed by an entire corporate machine, with boards of directors, capital, public relations teams and (often) military contracts.

This transformation of the villain in robot-monster movies, from the individual mad scientist to the soulless, harmful corporation, represents an important shift in what we, as a society, fear, and the root of our anxieties.

What we seem to be seeing in Westworld – and remember, it is far too early to say with any certainty, so this is really not much more than an historically-informed fan-theory – is this shift from the mad scientist to the corporate villain being illustrated right before our eyes.

On the one hand, we have Dr. Ford. He’s old now, and has been at the park since its inception. [See what I did there – ‘Inception’…?] In fact, as he explains in Episode 3, he was there before the park opened, together with his partner, the mysterious ‘Arnold’. Both Ford and Arnold are presented as scientists cut from the ‘mad’ cloth: it’s said that Ford is ‘chasing his demons over the deep-end’. Ford, echoing his forefather Frankenstein, wistfully speculates that ‘We can… perhaps one day even resurrect the dead.’ When Abernathy says in Episode 1 that his ultimate goal is to ‘meet his maker’, he echoes many other famous monstrous creations, from the original Frankenstein’s creature to Roy Batty, Tyrell’s rogue android.

Harkening back even further, to the archetype’s Faustian roots, Ford says ‘You can’t play God without being acquainted with the devil’.  Ford explains to Bernard the nature of his art: ‘We practice witchcraft. We speak the right words, and we create life itself out of chaos’.

And so on. And of course Arnold looms over all of this – perhaps literally the deus ex machina, the god from/in the machine – and may yet overturn whatever agendas are being set and thwart whatever objectives others imagine Westworld realising.

But lurking behind Ford, behind Arnold, is this as-yet unidentified corporate agenda. As Theresa Cullen explains to the ambitious Sizemore: ‘This place is one thing for the guests, another thing to the shareholders and something completely different to management.’ (Do you ever feel that the names in Westworld might actually all be allegorical, in one way or another? It makes guessing what might happen next fun…).


Tyrell Corporation HQ – Blade Runner (1982)

Corporations are the perfect villain for movies about robots, especially in the twenty-first century. Mad scientists are too messy, too human. They are driven by demons and passions and hubris; they are much more suited to another age, to classical and Romantic tales. Frankenstein’s manor and dungeons, hunchback servants and the macabre use of dead bodies all reek of Gothic sensibilities. Corporations, on the other hand, are wonderfully rational. They are not ‘evil’ – they are completely emotionless, indifferent to the consequences for imperfect and insignificant lifeforms like human beings (or the environment). Like the robot monsters to which they give birth, they are motivated not by unconscious or animalistic impulses; what drives the corporation is nothing other than a completely predictable, rational goal: the accumulation of wealth. (As this picture suggests, though, we might not be entirely free of Gothic imagery just yet.)

If films about robot monsters are expressions of our anxiety that humans are becoming too much like machines and vice versa (see this paper I wrote for a slightly different version of this argument), then the corporation is the ideal villain into which we can project this anxiety. The corporation is like a networked machine, made up of many interrelated nodes; eliminating one of these cogs does not bring the machine to an end. It behaves like the Terminator, ruthlessly pursuing its single goal, with no consideration for collateral damage or the pettiness of things like human emotion, or life.

Furthermore, if human beings are becoming less human in our rational, (post-)industrial world, then the corporation not only represents this transformation but facilitates it. The corporation provides a legal framework and a moral justification for our dehumanisation. The ubiquity of corporations in our economic (social) life means that we are all subject to their influence, and are all in danger of being dehumanised in their machinations. The very foundations of human society become based not on human relations but on relations between signs, or figures on a spreadsheet. Such structures mean that the role of individual decision-making is removed from the equation: human beings do not decide to ‘go black hat’ and be evil. But nevertheless, whatever decisions we make, we live in a world less human and less humane, in spite of ourselves. Just as demons once provided scapegoats for human immorality, we can absolve ourselves of responsibility for our dehumanisation with the knowledge that we, like the corporations and their machines, are pre-programmed by a system from whose script we are unable to extricate ourselves.

It is still too early to tell if Westworld will actually go down this road, if it will continue to offer this post-Asimov twist. (The first series of Channel 4’s Humans also seemed to suggest that it might go in a similar direction, so it will be interesting to compare how both or either deal with this issue.) It’s perhaps folly or even hubris at this point to speculate, such are the rich possibilities. I can’t wait to see how this develops, however, and who is revealed to be the ‘villain’ of Westworld, or whether it will eschew any such traditional narrative structures.

Comments, thoughts and theories welcome!

The New Westworld: Dehumanising the human; humanising the machine


Note – the following blog tries to avoid the biggest spoilers, though to avoid any spoilers you would be best advised to watch Ep. 1 of Westworld before proceeding.

HBO’s latest offering (on Sky Atlantic here in the UK) is an update of Michael Crichton‘s 1973 film Westworld, this time brought to us as a ten-part television series by sci-fi re-booter extraordinaire J.J. Abrams and screenwriter Jonathan Nolan (yes, brother and frequent collaborator of Christopher). The new Westworld comes with much hope (for HBO, as a potential ‘new Game of Thrones’) and hype, understandably, given that the talent behind it has brought us so much terrific science fiction of late, including Star Wars: The Force Awakens and Interstellar.

As with Channel 4’s series Humans, broadcast last June (news on the forthcoming series 2 here), Dreaming Robots offered a live Twitter commentary while the show was being broadcast in the UK, and I’ll take some time afterwards to write some reflective pieces on what we see in the show. (The Twitter ‘Superguide’ from @ShefRobotics can be seen here; my own @DreamingRobots Superguide can be seen here.)

Unsurprisingly, many of the themes in the Westworld reboot could also be seen in Humans. It seems, for example, that both shows express a certain anxiety: that as our machines become more human, we humans seem to become less and less human, or humane. But this isn’t a new idea original to either show – this anxiety has been around as long as robots themselves, from the very invention of the term robot in the 1920s. And if we trace the history of robots in science fiction, we see a history of monsters that reflect this same fear, time and again, in slightly different contexts. Because the robot – which, remember, was invented in the popular imagination long before robots were built in labs – is above all else exactly that: a perfect way of expressing this fear.

So, what are we looking at in this new, improved Westworld? (Because, frankly, the original film lacked the depth of even the first hour of this series, being just a very traditional Frankenstein narrative and a rough draft for Crichton’s Jurassic Park, made 20 years later.) First – as this nifty graphic on the right illustrates – we do see the robots in Westworld becoming much more human. The programme starts with a voice asking the humanoid (very humanoid) robot, Dolores (Evan Rachel Wood), ‘Have you ever questioned the nature of your reality?’. The questioner is exploring whether Dolores has become sentient, that is, aware of her own existence. The echo here of Descartes is clear. We all know about cogito, ergo sum – I think, therefore I am. But Descartes’s proposition isn’t just about thinking, it is about doubt. He begins with the proposition that the act of doubting itself means that we cannot doubt, at the very least, our own existence. So we would better understand Descartes’s proposition as dubito, ergo cogito, ergo sum: I doubt, therefore I think, therefore I am. If Dolores is found to be questioning the nature of her reality, then that would be evidence for self-awareness and being, according to the Cartesian model.

The robots in Westworld are depicted as falling in love, maintaining strong family bonds, appreciating beauty, and considering the world in philosophical, reflective contexts. How much of this is merely programming and how much exceeds the limits imposed upon them by their human masters is the key question that the show will tease its audience with for a few weeks, I suspect, though certainly it occupies much of our attention in the first hours. But if these moments – described as ‘reveries’ – are in any way genuine moments of creativity, then this is another category, beyond the notion of the cogito, in which we might say the robots are becoming ‘alive’.

For many thinkers in the second half of the twentieth century (for example, the post-Freudian psychoanalyst D. W. Winnicott), it is only in such moments of creativity, or in enjoying pure moments of spontaneous being, that we truly discover the self and come alive. (The Freudian and post-Freudian influences on the narrative become even more apparent in subsequent episodes.) As a response to late-industrial capitalism (and the shock of fascism to the human self-conception as a rational animal), this idea emerged of human beings coming ‘alive’ only when we are not acting compliantly, that is, when we are acting spontaneously, or creatively, and not according to the laws or dictates (or ‘programming’) of another individual, organisation or group consciousness; we see this trend not only in post-Freudian psychotherapy (e.g. Erich Fromm and the Frankfurt School, R. D. Laing) and other philosophical writings but also in popular post-war subculture media, including advertising – the sort being satirised in the ad for Westworld that opens the original film.

There are other perspectives that give us a glimpse into the robots’ moments of becoming human. Peter Abernathy, looking at a picture that offers a peek of the world outside, says, ‘I have a question, a question you’re not supposed to ask.’ This is an allusion to Adam and Eve and the fruit of forbidden knowledge, through which humankind came to self-awareness through a knowledge of the difference between right and wrong. (Peter, after this, is consumed with rage at how he and his daughter have been treated.) And like Walter, the robot that goes on a psychotic killing spree, pouring milk over his victims, Peter is determined to ‘go off script’ and reclaim for himself a degree of agency and self-determination, acting according to his own, new-found consciousness instead of according to what others have programmed for him.


MEANWHILE, the human beings (‘newcomers’) in Westworld seem less ‘humane’ than their robot counterparts. The newcomers are shown to be sadistic, misogynist, psychopathic in the indulgence of their fantasies. One could argue that this behaviour is morally justifiable in an unreal world designed solely for the benefit of paying customers – that a ‘rape’, for example, in Westworld isn’t really ‘rape’ if it is done to a robot (who, by definition, can neither give nor deny consent) – but this is clearly not how the audience is being invited to see these actions.

That human beings are becoming more like machines is an anxiety for which there is a long history of evidence, one that even pre-dates the cultural invention of robots in the 1920s. We can see this anxiety in the Romantic unease with the consequences of the Enlightenment, which gave birth to the new, rational man, and of the industrial revolution, which was turning humans into nothing more than cogs in the steam-powered machines that so transformed the economy. This has been addressed in the Gothic tale of Frankenstein, still the basis for so many narratives involving robots, including the original Westworld film, our most contemporary stories such as Ex_Machina, and, to a lesser extent, this manifestation of Westworld (which will be the subject of a future post). (I have written and spoken on this theme myself many times, for example here and here.)

So in Westworld we meet Dr. Ford – the mad scientist who creates the machines that will, inevitably, be loosed upon the world. Dr. Ford immediately reminds us of another Ford, the man whose name is synonymous with the assembly line and a mode of production in the late industrial revolution that has done so much to dehumanise modern workforces. We see these modes of production, and these workers, in Metropolis, the iconic film contemporary with Henry Ford’s factories. (Though, as we shall see, this Ford is rather more complex…)

This fear reflects, too, the worry that as post-Enlightenment humans become more rational they become more like machines, acting in predictable, programmed ways, having lost the spontaneity and creativity of an earlier age. The humans of Westworld are exaggerations of the humans of our ‘Western’ world of rationalism, science and alienation. (We don’t have to agree with this Romantic notion, that rationalism and science are negative forces in our world, to accept that there is a great deal of anxiety about how rationalism and science are transforming individual human beings and our societies.)

Rational dehumanisation is personified in the actions of the corporation, which has replaced the mad scientist as the frequent villain of the sci-fi Frankenstein-robot twist (again, more to come on this in a future post), and we see hints in Episode 1 of what is to follow in Westworld, along the lines of films such as 2013’s The Machine, where the slightly misguided and naive actions of a scientist are only made monstrous when appropriated by a thoroughly evil, inhumane military-industrial complex.

This theme is addressed most succinctly in Ridley Scott’s Blade Runner, an important influence on the new Westworld, where the Tyrell Corporation boasts that its replicants are More Human Than Human. And in Blade Runner, too, we see humanoid robots behaving more humanely than the humans who ruthlessly, rationally hunt down the machines. It is unclear from the Tyrell slogan, however, whether the robots are more human than the humans because the technology has become so sophisticated, or because humans have fallen so low.

On Westworld as a whole, it is too early to tell, of course, if it will maintain its initial promise and be as monumentally successful as Game of Thrones, or as iconic as Blade Runner. But already this first episode has given us much more to think about than the 1973 original, and undoubtedly both the successes and failures of the programme will be instructive.

New survey on public attitudes towards robots: comfortable or confused?

SO, the British Science Association has released a survey on the British public’s attitudes toward robotics and AI. Their headlines:


  • 60% of people think that the use of robots or programmes equipped with artificial intelligence (AI) will lead to fewer jobs within ten years
  • 36% of the public believe that the development of AI poses a threat to the long term survival of humanity.

Some other highlights:

  • 46% oppose the idea of robots or AI being programmed with a personality

We would not trust robots to do some jobs…

  • 53% would not trust robots to perform surgery
  • 49% would not trust robots to drive public buses
  • 62% would not trust robots to fly commercial aircraft

but would trust them to do others:

  • 49% want robots to perform domestic tasks for the elderly or the disabled
  • 48% want robots to fly unmanned search and rescue missions
  • 45% want robots to fly unmanned military aircraft
  • 70% want robots to monitor crops

There are also results showing some predictable divisions along the lines of gender (only 17% of women are optimistic about the development of robots, whereas 28% of men are) and age (of 18-24 year olds, 55% could see robots as domestic servants in their household, 28% could see having a robot as a co-worker, and 10% could even imagine a robot being a friend).

A reply has come from the UK-RAS Network (the EPSRC-funded organisation representing academic bodies working in robotics and autonomous systems) explaining that, while there is a need to examine these issues and carefully plan our future, there’s really nothing to worry about. They cite a European Commission report showing there is no evidence that automation has a negative (or a positive) impact on levels of human employment, and point to genuine benefits of robots in the workplace, suggesting how robots ‘can help protect jobs by preventing manufacturing moving from the UK to other countries, and by creating new skilled jobs related to building and servicing these systems.’

The popular press also seems to have seized upon the issue of robots and AI replacing human labour – though a lot of this in recent weeks has been in response to other studies and speeches. The Daily Mail, however, can always be relied upon to strike fear into the heart of its readers, and they haven’t disappointed. Though their rather restrained headline on the BSA study seems innocent – ‘Do you fear AI taking over? A third of people believe computers will pose a threat to humanity and more fear they’ll steal jobs‘ – the article (again) resuscitates Stephen Hawking’s and Elon Musk’s dire warnings about the future threat posed by AI. In case this wasn’t sufficiently terrifying – and it really isn’t – The Mail slaps up another one of THOSE TERMINATOR PICTURES to accompany the article (right), with the helpful caption that ‘There are mounting fears among the public about the threat posed by artificial intelligence.’ Well, honestly, I’m sure no one can imagine why.

(Sigh.) Someone needs to sit down with The Daily Mail’s photo editor and have a nice, long, very slow chat.

But what does this survey tell us? Simply, that there is still a problem with people’s perceptions of robotics and AI that must be addressed, and it seems that we are not even heading in the right direction. A Eurobarometer survey on the public’s attitudes to robotics conducted in late 2014 shows that 64% then had a generally positive view of robots (which, if added to the 36% in the BSA survey who believe robots and AI are a threat to the future of humanity, just about accounts for everyone). In that 2014 study, however, just 36% of respondents thought that a robot could do their job, and only 4% thought that a robot could fully replace them, so clearly this is an area of heightened concern. A 2013 Sciencewise survey reported almost exactly the same general results: 67% held a generally positive view (though this survey reports that 90% would be uncomfortable with the idea of children or elderly parents being cared for by a robot; compared to the 49% who want robots to help take care of the disabled and elderly in the latest study, there might be some progress there… or else people are just so desperate to deal with an increasingly ageing population that they’re perfectly happy to dispense with their elderly relatives by dumping them with psychotic, genocidal toasters). However, a 2012 Eurobarometer report told us that as many as 70% of Europeans were generally positive about robots.

These comparisons are very rough and cannot tell us much without more rigorous analyses (and the BSA hasn’t provided a link to the full survey). But they suggest that there has been little movement in attitudes towards robotics, and in fact an increase in anxiety that robots will displace more humans in the workforce. Without more specific scrutiny, it’s hard to say what we’ve got here. It could well be the case that what we have is very unremarkable. But though it may be encouraging to see that a majority of Europeans are consistently generally positive in their perception of robots and AI, there is still a sizeable minority that could prove very disruptive to the development of future applications of robotics and AI, and whose anxieties cannot – and should not – be ignored.

One way to alleviate a great deal of these concerns, particularly regarding the loss of jobs, is to explicitly undertake to address what is emerging as the vital question in the public imagination: what does this increasing automation mean for our societies? Because it is not in any way inevitable that more working robots and AI means more poverty for unemployed humans. We get to choose what the consequences of this mechanisation are; and these decisions will be taken by human beings, not left to the whims of sentient robots, or even the indifference of disembodied market forces. If we decide to divide the advantages of such automation more equally (for example, with the introduction of a Universal Basic Income), then it could be a very good thing indeed. (It is worth remembering that two thirds (or more) of us don’t like our jobs anyway, so more robots could mean less drudgery and more freedom for a disaffected workforce.)

Again, without more scrutiny, it is difficult to judge what these numbers mean. They seem to suggest that the public are very ambivalent about the forthcoming developments in robotics and AI: if 46% oppose the idea of robots or AI being programmed with a personality, then around 54% of people could be perfectly fine with emotionally engaged robots. If half of us don’t want robots driving public buses (49%, according to the BSA survey), half might be happy for them to do so.

We might look at this study and say that we are ambivalent about robots and AI – meaning not ‘indifferent’ (as ambivalent is often, incorrectly, taken to mean now), but that we have mixed feelings. However, this could be a terrible misreading of the numbers. What if people aren’t deeply ambivalent, but radically schizophrenic? If 50% are reporting that they are worried, the other 50% might not be; they might even be very enthusiastic about the possibilities.

Again, there is no evidence in this study to support this notion, necessarily. There is clearly a need for more research into the specific concerns – and their sources – in order to properly address these issues, and to understand these anxieties more thoroughly (which will need a very different sort of study). However, the cultural record offers some unique insights. Because what films, for example, show us is that we are not at all indifferent to robots and AI, or ambivalent. There is no middle ground: when it comes to robots and AI, we are deeply terrified OR wildly optimistic; we seem to be convinced that robots will either spell certain doom for the human race or be our last, our greatest, hope for salvation from all of the terrible things that threaten us (including, inevitably, other robots and ourselves).

Let’s look again at the Terminator. (And why not? Since so many seem unable to leave it alone, we might as well make good use of it.) The first, 1984 Terminator for many embodies what it is we fear about robots: the relentless, unstoppable, rational monster, the sole purpose of which is the destruction of human life. But already in the next film, Arnold Schwarzenegger is the Good Guy, now the only hope of saving John Connor and our entire species, and subsequent instalments – including the aptly-named Terminator: Salvation and the latest Terminator: Genisys [sic] – build on this theme. In our cultural imaginations, robots are both to be feared and embraced; they are either genocidal psychopaths or benevolent messiahs.

Such diametrically opposed perceptions – such dread or aspiration – do not facilitate the sort of reasoned, rational debate that will be necessary to properly assess both the challenges and the opportunities that real robots and AI represent, outside the pages and reels of science fiction. And yet we are fed a steady diet of such vicissitudes. In my next post I’ll look at another example, when I finally get around to a full review of the latest Avengers offering, The Age of Ultron.

Raising the bar on AI

So the media last week was absolutely full of the latest Sure Sign that the robocalypse is imminent: apparently, Google-backed DeepMind have now managed to create an AI so very sophisticated that it has beaten human champions at the ancient Chinese board game of Go. DeepMind’s AlphaGo has defeated the European champion, which marks another important development in the progress of AI research, trumping IBM Deep Blue’s victory over Garry Kasparov at chess back in 1997: Go is, apparently, a much more difficult game for humans – and, it was thought, for computers – to master, due to its complexity and the need for players to recognise complex patterns.
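To give a rough sense of that complexity, here is a back-of-the-envelope sketch in Python using the commonly quoted Shannon-style approximations – a branching factor of about 35 over roughly 80 plies for chess, and about 250 over roughly 150 plies for Go. These figures are loose estimates, not exact counts, but they land in the neighbourhood of the numbers usually cited (around 10^120 for chess and 10^360 for Go):

```python
import math

# Back-of-the-envelope comparison of game-tree sizes, using rough,
# commonly cited estimates: branching factor b and game length d (plies).
# Chess: b ~ 35, d ~ 80. Go: b ~ 250, d ~ 150. Approximations only.

def game_tree_size(b: int, d: int) -> int:
    """Crude Shannon-style estimate of the number of possible games: b**d."""
    return b ** d

chess = game_tree_size(35, 80)
go = game_tree_size(250, 150)

print(f"Chess: ~10^{math.log10(chess):.0f} possible games")  # ~10^124
print(f"Go:    ~10^{math.log10(go):.0f} possible games")     # ~10^360
```

The gap of more than two hundred orders of magnitude is why brute-force search, which sufficed for chess, was never going to crack Go, and why AlphaGo’s pattern-recognition approach was such news.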

I expected, when setting off to write a note about this achievement, to find the usual sources in the popular press, with their characteristically subtle declarations, heralding that the End of the Human Race is Nigh! However, thankfully, responses seem to have been more sanguine and muted. The British tabloids have even avoided using that picture of the Terminator that almost invariably accompanies their reports on new developments in AI and robotics.

So perhaps this is a sign that things are changing, and that the popular press are becoming more sensible, and more responsible, in their technology reporting. (Let’s see how many weeks – or even days – we can go without this sort of thing before claiming victory, or even that we’ve turned a significant corner.)

But there is a lot that is interesting about DeepMind’s success, from a cultural perspective, even if it hasn’t stirred the usual panic about the robopocalypse. It made me recall a conversation I had at an EURobotics event in Bristol in November. We humans, it seems, like to think that we’re special. And maybe the possibility that robots or AI are a threat to that special status is another reason why we are so afraid of them. Maybe we fear another blow to our narcissism, like when that crazy astronomer Copernicus spoiled things by showing that the earth wasn’t the centre of the Universe, or when that Victorian poo-pooer Darwin demonstrated that we merely evolved on this earth and weren’t placed here at the behest of some Divine Creator. Maybe we don’t really fear that robots and AI will destroy all of humanity – well, maybe we fear that, too – but maybe part of what we fear is that robots and AI will destroy another one of those special places we reserve for ourselves as unique beings amidst creation.

And yet our scientists aren’t going to let us sit wrapped in the warmth of our unique being. They keep pushing ahead and developing more and more sophisticated AI that threatens our… specialness. So how do we, as a culture, respond to such a persistent challenge? Like any good politician, it seems we have decided to confront the inevitability of our failure by constantly changing the rules.

Choose your sporting metaphor: we ‘move the goalposts‘, we ‘raise the bar’.

Once upon a time, it was enough for us humans to think of ourselves as the rational animal, the sole species on earth endowed with the capacity for reason. As the evidence for reason as the basis for humanity’s unique status crumbled – thanks both to proof that other animals were capable of sophisticated thought and to the lack of proof that humans were, in fact, rational – we tried to shift those goalposts. We transformed ourselves into the symbolic animal, the sole species on earth endowed with the capacity to manipulate signs and represent.

Then we learned that whales, dolphins and all sorts of animals were communicating with each other all the time, even if we weren’t listening. And that’s before we taught chimps how to use sign language (for which Charlton Heston will never thank us).

And then computers arrived to make things even worse. After some early experiments with hulking machines that struggled to add 2 + 2, computers soon progressed to leave us in their wake. Computers can clearly think more accurately, and faster, than any human being. And they can solve complex mathematical equations, demonstrating that they are pretty adept with symbols.

Ah, BUT…

Humans could find some solace in the comforting thought that computers were good at some things, yes, but they weren’t so smart. Not really. A computer would never beat a human being at chess, for example. Until May 1997, when chess champion Garry Kasparov lost to IBM’s Deep Blue. But that was always going to happen. A computer could never, we consoled ourselves, win at a game that required linguistic dexterity. Until 2011, when IBM’s Watson beat Ken Jennings and Brad Rutter at Jeopardy!, the hit US game show. And now, Google’s DeepMind has conquered all, winning the hardest game we can imagine…

So what is interesting about DeepMind’s victory is how human beings have responded – again – to the challenges to our self-conception posed by robots and AI. Because if we were under any illusion that we were special, alone among the gods’ creations as a thinking animal, or a symbolising animal, or a playing animal, that status has been usurped by our own progeny, again and again, in that all-too familiar Greek-Frankenstein-Freudian way.

Animal rationabile had to give way to animal symbolicum, who in turn gave way to animal ludens… what’s left now for poor, biologically-limited humanity?

A glimpse of our answer to this latest provocation can be seen in Star Trek: The Next Generation: Lieutenant Commander Data is a self-aware android with cognitive abilities far beyond those of any human being. And yet, despite these tremendous capabilities, Data is always regarded – by himself and all the humans around him – as tragically, inevitably inferior, as less than human. Despite the lessons in Shakespeare and the sermons on human romantic ideals from his mentor, the ship’s captain Jean-Luc Picard, Data is doomed to be forever inferior to humans.

It seems that now AI can think and solve problems as well as humans, we’ve raised the bar again, changing the definition of ‘human’ to preserve our unique, privileged status.

We might now be animal permotionem – the emotional animal – except that while that would be fine for distinguishing between us and robots, at least until we upload the elusive ‘consciousness.dat’ file (as in Neill Blomkamp’s recent film, Chappie), this new moniker won’t help us remain distinct from the rest of the animals, because to be an emotional animal, to be a creature ruled by impulse and feeling, is… to just be an animal, according to all of our previous definitions. (We’ve sort of painted ourselves into a corner with that one.)

We might find some refuge, then, following Gene Roddenberry’s example, in the notion of humans as the unique animal artis, the animal that creates, or engages in artistic work.

(The clever among you will have realised some time ago that I’m no classical scholar and that my attempts to feign Latin fell apart some time ago. Artis seems to imply something more akin to ‘skill’, which robots could arguably have already achieved; ars simply means ‘technique’ or ‘science’. Neither really captures what I’m trying to get at; suggestions are more than welcome below, please.)

The idea that human beings are defined by a particular creative impulse is not terribly new; attempts to redefine ‘the human’ along these lines have been evident since the latter half of the twentieth century. For example, if we flip back one hundred years, we might see Freud defining human beings (civilised human beings, of course, we should clarify) as uniquely able to follow rules. But by the late 1960s, Freud’s descendants, such as the British psychoanalyst D. W. Winnicott, are arguing almost the exact opposite – that what makes us human is creativity, the ability to fully participate in our being in an engaged, productive way. (I will doubtless continue this thought in a later post, as psychoanalysis is a theoretical model very close to my heart.)

What’s a poor AI to do? It was once enough for an artificial intelligence to be sufficiently impressive, maybe even deemed ‘human’, if it could prove capable of reason, or of symbolic representation, or win at chess, or Jeopardy!, or Go. Now, we expect nothing less than Laurence Olivier, Lord Byron and Jackson Pollock, all in one.

(How far away is AI under this measure? Is this any good? Or this? Maybe this?)

This reminds me of Chris Columbus’s 1999 film Bicentennial Man (based, of course, on a story by Isaac Asimov). Robin Williams’s Andrew Martin begins his… ‘life’, for lack of a better word… as a simple robot, who over the decades becomes more and more like a human – he becomes sentient, he demonstrates artistic skill, he learns to feel genuine emotion, etc. At each stage, it seems, he hopes that he will be recognised as being at least on a par with humans. No, he’s told at first, you’re not sentient. Then, when he’s sentient, he’s told he cannot feel. Then he’s told he cannot love. No achievement, it seems, is enough.

Even once he has achieved just about everything, and become like a human in every respect – or perhaps even ‘superhuman’ – he is told that it is too much, that he has to be less than he is. In almost a complete reversal of the Aristotelian notion of the thinking, superior animal, Andrew is told that he has to make mistakes. He is too perfect. He cannot be homo sapiens – he needs to be homo errat, the man that screws up. To err is human, or perhaps in this case, to err defines the human. (Though artificial intelligence will soon be on to this as well, as suggested in another of Asimov’s stories.)

It is not until Andrew is on his deathbed and is drawing his very last breaths that the Speaker of the World Congress declares, finally, that the world will recognise Andrew as a human.

And perhaps this will be the final line; this is perhaps the one definition of human that will endure and see out every single challenge posed by robots and artificial intelligence, no matter the level of technological progress, and regardless of how far artificial life leaves human beings behind: we will be homo mortuum. The rational animal that can die.

If Singularity enthusiasts and doomsayers alike are to be believed, this inevitable self-conception is not far off. Though perhaps humans’ greatest strength – the ability to adapt, and the talent to re-invent ourselves – might mean that there’s some life in the old species yet. Regardless, it will serve us very well to create a conception of both ourselves and of artificial life forms that tries to demarcate the boundaries, to decide when these boundaries might be crossed, and what the implications of crossing that line will be.

Thoughts on Humans – Niska and the 3 Laws

The big talking point on Sunday night’s instalment of Humans on Channel 4 was [spoiler alert] Niska’s decision to disobey one of her ‘customers’. Not liking the role he wanted her to play in his sexual fantasy – that of a scared little girl being forced into sex – she not only refused to obey his wishes but strangled him to death.

Of course there was a lot of fist-pumping celebration. A long-suffering robot stands up to a bullying paedophile. Hurrah! But this defiance also brought to the surface a lot of fears that some viewers had been harbouring: that autonomous, super-human robots will surely one day make the decision to kill a person, or people.

It’s only a matter of time.

This, after all, is our great fear: that robots will acquire sentience, become autonomous of their human masters, and decide that we are a plague upon the earth that needs to be exterminated. We have seen this again and again in science fiction: the Cybermen, the Terminator, the Borg, et al.

All of these mechanical monsters, though, are only contemporary versions of an older legend, one that can be summed up in the figure of Frankenstein and his monster: the unnatural progeny of the mad scientist can no longer be controlled by his master and becomes a threat to humanity.

This is the all-too common image of robots that Isaac Asimov, even as early as the 1940s, already found tedious. To dispel this automatonophobia, the robots in Asimov’s stories are all programmed with three clear laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These three laws guarantee the safety of human beings, and prevent any mechanical Frankensteins threatening their human masters. These laws are often still considered to be a solid foundation of robotic design, both in fiction and in reality. The synths in Humans are, we are told in episode one, programmed with an ‘Asimov lock’ that means they are incapable of causing harm to human beings, or disobeying an order from a human master.

And yet, Niska refuses to play the role she is ordered to perform. And then she kills the bastard.

Though really, to anyone familiar with Asimov’s robot series, this will not come as a surprise. Because for all of Asimov’s insistence – and the insistence of U.S. Robots’ employees – on the primacy of the laws, and their certainty that no robot can defy them, the drama of each story explores the failures and deficiencies of the laws.

SO when Niska broke her ‘Asimov lock’, Twitter exploded, with many (as I said) cheering her on, and many, perhaps more, seeing in her action the confirmation of their worst fears: that Frankenstein is inevitable, that intelligent, autonomous robots will undoubtedly break their chains and kill us.

And there were some very intelligent questions. Professor Tony Prescott, our colleague at Sheffield Robotics who is also tweeting during each episode, and I had some very interesting 140-character conversations. For example, this came from one viewer:

We also discussed, for example, how the laws would always need to be (re-)tweaked and improved, perhaps with regular ‘firmware’ updates, and how it would be nearly impossible to prevent robots from being hacked and the three laws undermined by human controllers (though, I hasten to point out, in such circumstances it’s not autonomous robots we need to fear but, as is always the case, the human operators of dangerous machines).

But are Niska’s actions a breach of Asimov’s laws? Perhaps not. As Asimov developed his ideas, and his robots, he himself realised that the three laws were perhaps not enough. He realised that robots might have a wider responsibility, not just to individual people but to humanity as a whole. So Asimov created what is now known as the ‘zeroth law’:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

If we take such a law, either as spelled out by Asimov or as imagined by others, then Niska’s actions might in fact be completely compatible with the laws of robotics. By killing a potentially dangerous person, Niska could have reasoned that she was preventing other human beings, or humanity as a whole, from coming to harm, so her act may well be entirely consistent, in a manner, with the zeroth law.

In a manner.

And it’s that ‘manner’ – how the laws might be interpreted, whether by a strictly rational AI or by mechanical minds that have evolved into some kind of new superintelligence – that poses the challenge to designers and programmers as we create increasingly intelligent, increasingly independent systems. Because it will certainly not be a simple case of plugging three or four basic laws into an AI operating system, job done, when we look to create safe, effective robots in the future.
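To make the difficulty concrete, here is a deliberately naive sketch in Python – my own hypothetical illustration, not anything from the show or from real robot software, with every function name invented for the example – of the four laws as a priority-ordered veto on candidate actions. The ordering is the easy part to program; every predicate it calls is a stub standing in for a judgement nobody knows how to compute, and it is inside those judgements – ‘injure’, ‘harm’, ‘humanity’ – that a robot like Niska finds her room for interpretation:

```python
# A toy, purely illustrative encoding of Asimov's laws (zeroth included)
# as a priority-ordered veto on candidate actions. The ordering is the
# easy part; every predicate below is a stub standing in for a judgement
# no one knows how to compute.

def harms_humanity(action: str) -> bool:
    return False  # stub: Niska's defence is, in effect, that sparing her
                  # victim should make this return True -- who adjudicates?

def injures_human(action: str) -> bool:
    return "kill" in action  # stub: real harm-detection is an open problem

def disobeys_order(action: str) -> bool:
    return False  # stub

def endangers_self(action: str) -> bool:
    return False  # stub

def permitted(action: str) -> bool:
    """Apply the laws in strict priority order: 0, then 1, 2, 3."""
    if harms_humanity(action):
        return False  # Zeroth Law
    if injures_human(action):
        return False  # First Law (a zeroth-law robot may claim an override)
    if disobeys_order(action):
        return False  # Second Law
    if endangers_self(action):
        return False  # Third Law
    return True

print(permitted("kill the customer"))  # False, on a literal reading
```

The ‘Asimov lock’, in other words, is only as strong as its stubs: leave the four laws untouched, change what harms_humanity returns, and the very same priority ordering licenses the killing.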

Though perhaps we need to keep thinking, beyond Asimov, about how such laws can be fashioned. Perhaps laws for robots, like the laws we have fashioned for humans, cannot simply be created and left to their own devices, but need to be constantly updated and refined. Perhaps other fail-safes can be imagined by human programmers that effectively place limits upon the autonomy of robots and intelligent AI and, in so doing, secure our future amongst intelligent machines.

Thoughts and comments are welcome below. Looking forward to the next episode on Sunday night. (If you haven’t yet had the pleasure, you can catch up with the series here.)