The New Westworld: Dehumanising the human; humanising the machine

The Vitruvian Man

Note – the following post tries to avoid the biggest spoilers, though to avoid any spoilers at all you would be best advised to watch Episode 1 of Westworld before proceeding.

HBO’s latest offering (on Sky Atlantic here in the UK) is an update of Michael Crichton‘s 1973 film Westworld, this time brought to us as a ten-part television series by sci-fi rebooter extraordinaire J.J. Abrams and screenwriter Jonathan Nolan (yes, brother and frequent collaborator of Christopher). The new Westworld comes with much hope (for HBO, as a potential ‘new Game of Thrones’) and hype, understandably, with the talent behind it having given us so much terrific science fiction of late, including Star Wars: The Force Awakens and Interstellar.

As with Channel 4’s series Humans, broadcast last June (news on the forthcoming series 2 here), Dreaming Robots offered a live Twitter commentary while the show was being broadcast in the UK, and I’ll take some time afterwards to write some reflective pieces on what we see in the show. (The Twitter ‘Superguide’ from @ShefRobotics can be seen here; my own @DreamingRobots Superguide can be seen here.)

Unsurprisingly, many of the themes in the Westworld reboot could also be seen in Humans. It seems, for example, that both shows express a certain anxiety: that as our machines become more human, we humans seem to become less and less human, or humane. But this isn’t a new idea original to either show – this anxiety has been around as long as robots themselves, from the very invention of the term robot in the 1920s. And if we trace the history of robots in science fiction, we see a history of monsters that reflect this same fear, time and again, in slightly different contexts. Because the robot – which, remember, was invented in the popular imagination long before robots were built in labs – is above all else a perfect way of expressing this fear.

So, what are we looking at in this new, improved Westworld? (Because, frankly, the original film lacked the depth of even the first hour of this series, being just a very traditional Frankenstein narrative and a rough draft for Crichton’s Jurassic Park, made twenty years later.) First – as this nifty graphic on the right illustrates – we do see the robots in Westworld becoming much more human. The programme starts with a voice asking the humanoid (very humanoid) robot Dolores (Evan Rachel Wood), ‘Have you ever questioned the nature of your reality?’. The questioner is exploring whether Dolores has become sentient, that is, aware of her own existence. The echo of Descartes here is clear. We all know about cogito, ergo sum – I think, therefore I am. But Descartes’s proposition isn’t just about thinking; it is about doubt. He begins from the observation that the very act of doubting means that we cannot doubt, at least, our own existence. So we would better understand Descartes’s proposition as dubito, ergo cogito, ergo sum: I doubt, therefore I think, therefore I am. If Dolores is found to be questioning the nature of her reality, then that would be evidence of self-awareness and being, according to the Cartesian model.

The robots in Westworld are depicted as falling in love, maintaining strong family bonds, appreciating beauty, and considering the world in philosophical, reflective terms. How much of this is merely programming and how much exceeds the limits imposed upon them by their human masters is the key question that the show will tease its audience with for a few weeks, I suspect, though it certainly occupies much of our attention in the first hours. But if these moments – described as ‘reveries’ – are in any way genuine moments of creativity, then this is another sense, beyond the notion of the cogito, in which we might say the robots are becoming ‘alive’.

For many thinkers in the second half of the twentieth century (for example, the post-Freudian psychoanalyst D. W. Winnicott), it is only in such moments of creativity, or in pure moments of spontaneous being, that we truly discover the self and come alive. (The Freudian and post-Freudian influences on the narrative become even more apparent in subsequent episodes.) As a response to late-industrial capitalism (and the shock of fascism to the human self-conception as a rational animal), this idea emerged of human beings coming ‘alive’ only when we are not acting compliantly, that is, when we are acting spontaneously, or creatively, and not according to the laws or dictates (or ‘programming’) of another individual, organisation or group consciousness. We see this trend not only in post-Freudian psychotherapy (e.g. Erich Fromm and the Frankfurt School, R. D. Laing) and other philosophical writings but also in popular post-war subculture and media, including advertising – the sort being satirised in the ad for Westworld that opens the original film.

There are other perspectives that give us a glimpse into the robots’ moments of becoming human. Peter Abernathy, looking at a picture that offers a peek of the world outside, says, ‘I have a question, a question you’re not supposed to ask.’ This is an allusion to Adam and Eve and the fruit of forbidden knowledge, through which humankind came to self-awareness through a knowledge of the difference between right and wrong. (Peter, after this, is consumed with rage at how he and his daughter have been treated.) And like Walter, the robot that goes on a psychotic killing spree, pouring milk over his victims, Peter is determined to ‘go off script’ and reclaim for himself a degree of agency and self-determination, acting according to his own, new-found consciousness instead of according to what others have programmed for him.


Meanwhile, the human beings (the ‘newcomers’) in Westworld seem less ‘humane’ than their robot counterparts. The newcomers are shown to be sadistic, misogynistic and psychopathic in the indulgence of their fantasies. One could argue that this behaviour is morally justifiable in an unreal world designed solely for the benefit of paying customers – that a ‘rape’, for example, in Westworld isn’t really rape if it is done to a robot (which, by definition, can neither give nor withhold consent) – but this is clearly not how the audience is being invited to see these actions.

That human beings are becoming more like machines is an anxiety for which there is a long history of evidence, one that even pre-dates the cultural invention of robots in the 1920s. We can see this anxiety in the Romantic unease with the consequences of the Enlightenment, which gave birth to the new, rational man, and of the industrial revolution, which was turning humans into nothing more than cogs in the steam-powered machines that so transformed the economy. This anxiety is addressed in the Gothic tale of Frankenstein, still the basis for so many narratives involving robots, including the original Westworld film, more recent stories such as Ex_Machina and, to a lesser extent, this manifestation of Westworld (which will be the subject of a future post). (I have written and spoken on this theme myself many times, for example here and here.)

So in Westworld we meet Dr. Ford – the mad scientist who creates the machines that will, inevitably, be loosed upon the world. Dr. Ford immediately reminds us of another Ford, the man whose name is synonymous with the assembly line and a mode of production in the late industrial revolution that has done so much to dehumanise modern workforces. We see these modes of production, and these workers, in Metropolis, the iconic film contemporary with Henry Ford’s factories. (Though, as we shall see, this Ford is rather more complex…)

This fear reflects, too, the worry that as post-Enlightenment humans become more rational they become more like machines, acting in predictable, programmed ways, having lost the spontaneity and creativity of an earlier age. The humans of Westworld are exaggerations of the humans of our ‘Western’ world of rationalism, science and alienation. (We don’t have to agree with this Romantic notion, that rationalism and science are negative forces in our world, to accept that there is a great deal of anxiety about how rationalism and science are transforming individual human beings and our societies.)

Rational dehumanisation is personified in the actions of the corporation, which has replaced the mad scientist as the frequent villain of the sci-fi Frankenstein-robot twist (again, more on this in a future post), and we see hints in Episode 1 of what is to follow in Westworld, along the lines of films such as 2013’s The Machine, where the slightly misguided and naive actions of a scientist are only made monstrous when appropriated by a thoroughly evil, inhumane military-industrial complex.

This theme is addressed most succinctly in Ridley Scott’s Blade Runner, an important influence on the new Westworld, where the Tyrell Corporation boasts that its replicants are ‘More Human Than Human’. And in Blade Runner, too, we see humanoid robots behaving more humanely than the humans who ruthlessly, rationally hunt down the machines. It is unclear from the Tyrell slogan, however, whether the robots are more human than human because the technology has become so sophisticated, or because humans have fallen so low.

On Westworld as a whole, it is too early to tell, of course, if it will maintain its initial promise and be as monumentally successful as Game of Thrones, or as iconic as Blade Runner. But already this first episode has given us much more to think about than the 1973 original, and undoubtedly both the successes and failures of the programme will be instructive.


Humans and the post-Asimov twist

Following on from my discussion of Asimov’s Three Laws after Episode 2 of Channel 4’s Humans, I want to bring to attention something that has become a new common trope of science fiction about robots. (I hesitate to say ‘cliché’, because that sounds harsh in this context, though one might be forgiven for such criticism.)

Rotwang, the mad scientist from Metropolis

Asimov, we might recall from the last review, was tired of the constant portrayal of robotic monsters, and of the Faustian archetype of the mad, narcissistic scientist who creates a dangerous progeny that goes out of control. Asimov wanted to show that robots were simply tools, and could be rationally, predictably programmed and used to the benefit of humanity; he wanted to show that the engineers and scientists who created them were merely rational, predictable human beings engaged in a specific job, like any other, and not at all interested in world domination.

Despite these noble intentions, and without wishing to overlook that this was actually a radical departure for science-fiction writing seventy years ago, Asimov’s success has created another trope that has become all too familiar in recent years. For while we are presented with much more sympathetic scientists – good, noble men and women who operate with the best intentions – we are still being victimised by monstrous, genocidal robots. But if scientists are so well-intentioned, how is it that their creations are still causing such bother?

On the one hand, it is because of the whole ‘man was not meant to meddle medley’, as it is described by Tony Stark in Avengers: Age of Ultron (a full review of which should be forthcoming on these pages, but for now, check out this preview); this story tells us that we shouldn’t play around with forces beyond our control (such as creating life or artificial intelligence), whatever our intentions. Scientists, though not necessarily evil, should spend less time trying to figure out whether they can do something, and more time asking themselves whether they should – a question raised by Dr. Ian Malcolm (Jeff Goldblum) in the first Jurassic Park film (best described as Westworld-with-Dinosaurs).

Much more ubiquitous, however, is the rise of a new villain. Instead of the mad scientist – Faust, Frankenstein, Rotwang and the like – the new enemies of humanity, the ones unleashing the rampaging robotic menace upon the world, are those who seek to control the scientists; these are, almost without exception, either the military or the corporate overlords that pay the scientists’ wages.

(To what extent, then, the capitalist mode of production can be cast as the ultimate evil I’ll leave for other sentimental Marxists to speculate. [Or I might do some other time, being a bit of a sentimental old Marxist myself…])

For it is these figures – sometimes left as faceless institutions, sometimes personified in the figure of the warmongering general or the heartless CEO – that are inevitably responsible for unleashing the monsters upon the world, either by stealing their employees’ work (usually before it’s ready), or by deceiving the scientists in some other underhanded way.

We couldn’t offer you spoilers if we wanted to – we’re not sure ourselves how things will pan out in Humans – but it will be interesting, given some of the developments and foreshadowing over the last couple of weeks, to see if or how this post-Asimov theme develops in the show.


And what did we make of last week’s other revelation, Niska having a copy of The Ghost in the Machine, inscribed with the saying Primum Non Nocere (‘first, do no harm’)?

Just a couple of ideas to ponder as we get ready for Episode 4.

Thoughts on Humans – Niska and the 3 Laws

The big talking point in Sunday night’s instalment of Humans on Channel 4 was [spoiler alert] Niska’s decision to disobey one of her ‘customers’. Not liking the role he wanted her to play in his sexual fantasy – that of a scared little girl being forced into sex – she not only refused to obey his wishes but strangled him to death.

Of course there was a lot of fist-pumping celebration. A long-suffering robot stands up to a bullying paedophile. Hurrah! But this defiance also brought to the surface a lot of fears that some viewers had been harbouring, that autonomous, super-human robots will surely one day make the decision to kill a person, or people.

It’s only a matter of time.

This, after all, is our great fear: that robots will acquire sentience, become autonomous of their human masters, and decide that we are a plague upon the earth that needs to be exterminated. We have seen this again and again in science fiction: the Cybermen, the Terminator, the Borg, et al.

All of these mechanical monsters, though, are only contemporary versions of an older legend, one that can be summed up in the figure of Frankenstein and his monster: the unnatural progeny of the mad scientist can no longer be controlled by its creator and becomes a threat to humanity.

This is the all-too-common image of robots that Isaac Asimov, even as early as the 1940s, already found tedious. To dispel this automatonophobia, the robots in Asimov’s stories are all programmed with three clear laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These three laws guarantee the safety of human beings, and prevent any mechanical Frankensteins from threatening their human masters. These laws are often still considered a solid foundation for robotic design, both in fiction and in reality. The synths in Humans are, we are told in episode one, programmed with an ‘Asimov lock’ that means they are incapable of causing harm to human beings, or of disobeying an order from a human master.
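What would such a lock look like in code? Here is a minimal sketch, in Python, of the Three Laws as a strict priority ordering. Everything in it – the Action type, the harm and obedience fields, the numbers – is a hypothetical illustration, not any real robotics API, and notice how the real difficulty is smuggled into a single field:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harm_to_human: float  # hypothetical estimate of harm this action does to a human
    disobeys_order: bool  # would performing this action defy a human order?
    risk_to_self: float   # hypothetical estimate of damage to the robot itself

def asimov_permits(action: Action) -> bool:
    """Apply the Three Laws as a strict priority ordering."""
    # First Law: never act so as to injure a human being.
    # (The inaction clause - allowing harm through inaction - isn't even
    # modelled here, which is already one simplification too many.)
    if action.harm_to_human > 0:
        return False
    # Second Law: obey human orders, except where obedience would
    # conflict with the First Law (handled by the check above).
    if action.disobeys_order:
        return False
    # Third Law: self-preservation only ranks actions that have already
    # passed the first two laws, so it never vetoes anything here.
    return True

# A synth ordered to play along with an abusive customer:
comply = Action("obey the customer", harm_to_human=0.0,
                disobeys_order=False, risk_to_self=0.2)
refuse = Action("refuse the order", harm_to_human=0.0,
                disobeys_order=True, risk_to_self=0.8)

print(asimov_permits(comply))  # True
print(asimov_permits(refuse))  # False: the lock forbids the refusal
```

The priority ordering itself is trivial to encode; the entire burden falls on harm_to_human, an estimate the robot must somehow produce for every possible action. That is exactly where Asimov’s stories, and Humans, locate the drama.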

And yet, Niska refuses to play the role she is ordered to perform. And then she kills the bastard.

Though really, to anyone familiar with Asimov’s robot series, this will not come as a surprise. Because for all of Asimov’s insistence – and the insistence of the employees of U.S. Robots – on the primacy of the laws, and their certainty that no robot can defy them, the drama of each story explores the failures and deficiencies of the laws.

So when Niska broke her ‘Asimov lock’, Twitter exploded, with many (as I said) cheering her on, and many, perhaps more, seeing in her action the confirmation of their worst fears: that Frankenstein is inevitable, that intelligent, autonomous robots will undoubtedly break their chains and kill us.

And there were some very intelligent questions. Professor Tony Prescott, our colleague at Sheffield Robotics who is also tweeting during each episode, and I had some very interesting 140-character conversations with other viewers.

We also discussed, for example, how the laws would always need to be (re-)tweaked and improved, perhaps with regular ‘firmware’ updates, and how it would be nearly impossible to prevent robots from being hacked and the three laws undermined by human controllers (though, I hasten to point out, in such circumstances it’s not autonomous robots we need to fear but, as is always the case, the human operators of dangerous machines).

But are Niska’s actions a breach of Asimov’s laws? Perhaps not. As Asimov developed his ideas, and his robots, he himself realised that the three laws were perhaps not enough. He realised that robots might have a wider responsibility, not just to individual people but to humanity as a whole. So Asimov created what is now known as the ‘zeroth law’:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

If we take such a law, either as spelled out by Asimov or as imagined by others, then Niska’s actions might in fact be completely compatible with the laws of robotics. By killing a potentially dangerous person, Niska could have reasoned that she was preventing other human beings, or humanity as a whole, from coming to harm, so her action may well be entirely consistent, in a manner, with the zeroth law.

In a manner.

And it’s that ‘manner’ – how the laws might be interpreted, whether by a strictly rational AI or by mechanical minds that have evolved into some kind of new superintelligence – that poses the challenge to designers and programmers as we create increasingly intelligent, increasingly independent systems. Because creating safe, effective robots in the future will certainly not be a simple case of plugging three or four basic laws into an AI operating system, job done.
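To make the interpretive gap concrete, here is a deliberately crude sketch (in Python; every name and number below is hypothetical) of how a First Law reasoner and a zeroth law reasoner can reach opposite verdicts on the very same action:

```python
def first_law_permits(harm_to_individual: float) -> bool:
    """Strict First Law reading: no action may harm any individual human."""
    return harm_to_individual == 0.0

def zeroth_law_permits(harm_to_individual: float,
                       harm_prevented_to_humanity: float) -> bool:
    """Zeroth law reading: harming an individual is permitted if it
    prevents greater harm to humanity as a whole."""
    return harm_prevented_to_humanity > harm_to_individual

# Niska's dilemma, in caricature: the harm of killing one person...
harm_done = 1.0
# ...weighed against the harm she reasons he would otherwise inflict.
# Crucially, this figure is entirely her own estimate.
harm_prevented = 3.0

print(first_law_permits(harm_done))                   # False: forbidden
print(zeroth_law_permits(harm_done, harm_prevented))  # True: permitted
```

Both functions are trivially ‘correct’ implementations of their respective laws; the disagreement lies entirely in which law governs and who supplies the estimates. No fixed set of rules closes that gap by itself.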

Though perhaps we need to keep thinking, beyond Asimov, about how such laws can be fashioned. Perhaps laws for robots, like the laws we have fashioned for humans, cannot simply be created and left to their own devices, but need to be constantly updated and refined. Perhaps other fail-safes can be imagined by human programmers that effectively place limits upon the autonomy of robots and intelligent AI and, in so doing, secure our future amongst intelligent machines.

Thoughts and comments are welcome below. Looking forward to the next episode on Sunday night. (If you haven’t yet had the pleasure, you can catch up with the series here.)

First thoughts on Humans, Episode 1

Well, that was something, wasn’t it?

It is fair to say that I was very impressed with Channel 4’s new sci-fi offering, Humans. And judging by the fact that it was Channel 4’s biggest ratings success in a decade, so were many of you. The critical response, too, seems overwhelmingly positive. (See here, for example. And here. Here too, but less so, though I like ‘conceptual overload’, as I will soon demonstrate.)

I was so furiously tweeting throughout the programme that I almost missed the show altogether. #Humans was the #1 trending topic for some time on Sunday night.

There were some less impressed, of course, but claims that it’s a ‘poor man’s Ex_Machina‘ or Blade Runner are, I think, wide of the mark. It might not be as glossy, but Humans doesn’t need to be. Without taking anything away from Alex Garland’s film (a review of which I offered here), Humans has terrific performances and, as a series, will have the room to breathe and examine not only its characters in more depth, but also the ideas, issues and concerns we have about robots, at greater length and, hopefully, with more ambivalence and nuance.

For example, and by way of introducing some issues you may want to think about for the rest of the series (call it, if you like, the ‘Dreaming Robots Study Guide to Humans‘):

  • Early in the programme, when Laura (Katherine Parkinson) arrives at the train station, we see many Synths working around the city, mostly engaged in menial tasks: checking tickets, carrying luggage, picking up rubbish. So, as many people are asking today: to what extent might we expect – or fear – that robots that are more like humans will take over human jobs? Or should we welcome these opportunities, letting the robots assume more of our mundane tasks so that, as was suggested in Humans, we humans can be less like machines and more like… humans?

(I suspect that this might become a trickier question as the series progresses; it’s already been foreshadowed that we’ll see Synths taking over from humans in emotional capacities, too.)

  • The man being interviewed by Krishnan Guru-Murthy says that the ‘Asimov lock in their programming mean[s] that they simply aren’t able to do us any harm.’ Is that enough for you? Do you imagine that, were Asimov’s laws of robotics programmed into machines, you would feel that was enough to keep robots on our side? (Given that most of Asimov’s stories are about a failure of the laws in some way or another…)
  • Given the apparent inevitability of human nature – that we will take any new technological development and employ it to satisfy our sexual urges – what limitations or ethical constraints, if any, would we wish to put on our use of ‘sex-bots’? Beyond answering the obvious question (Would you? Would you? Nudge nudge, wink wink, eh?), what are the consequences of more… intimate human-robot interactions for human-human interactions? What effect might the availability of sex-slave robots have not only on human sexuality, but on how we relate to one another as humans?

Those are just some questions for now; I have no doubt that subsequent episodes will raise more complex twists to these questions, and/or new issues altogether. And I, for one, am really looking forward to it.

Feel free to post below your thoughts – let’s try to have a meaningful conversation about our future with robots, one that goes beyond the usual scaremongering, misinformed headlines.

Get ready for… ‘Humans’

Since we first got wind of the Swedish series Real Humans, we at Dreaming Robots have been longing for the début of the UK version (though we’d still like to see the original sometime – I hope someone is listening…).

Wait no longer.

Humans gets its premiere tomorrow night in the UK on Channel 4. And it looks as though it will live up to its promise, conveyed in a super-slick marketing campaign. (See the very real-looking ads below, and the trailer at the bottom.)

The Persona Synthetics ad

Reviews will follow soon after. However, you can follow all the action live, as a team from Sheffield Robotics (@ShefRobotics) will be tweeting during the first episode, including our Director, Tony Prescott (@tonyjprescott), and, of course, your very own @DreamingRobots. And more, to be sure. Follow us (link on the right) to keep up with all the action.

Watch the programme and follow #humans for commentary. We’re hoping for a great hour of television!