The New Westworld: Dehumanising the human; humanising the machine


Note – the following post tries to avoid the biggest spoilers, though to avoid any spoilers at all you would be best advised to watch Ep. 1 of Westworld before proceeding.

HBO’s latest offering (on Sky Atlantic here in the UK) is an update of Michael Crichton’s 1973 film Westworld, this time brought to us as a ten-part television series by sci-fi rebooter extraordinaire J.J. Abrams and screenwriter Jonathan Nolan (yes, brother and frequent collaborator of Christopher). The new Westworld comes with much hope (for HBO, as a potential ‘new Game of Thrones’) and hype, understandably so, given that the talent behind it has delivered so much terrific science fiction of late, including Star Wars: The Force Awakens and Interstellar.

As with Channel 4’s series Humans, broadcast last June (news on the forthcoming series 2 here), Dreaming Robots offered a live Twitter commentary while the show was being broadcast in the UK, and I’ll take some time afterwards to write some reflective pieces on what we see in the show. (The Twitter ‘Superguide’ from @ShefRobotics can be seen here; my own @DreamingRobots Superguide can be seen here.)

Unsurprisingly, many of the themes in the Westworld reboot could also be seen in Humans. Both shows, for example, express a certain anxiety that as our machines become more human, we humans become less and less human, or humane. But this isn’t an idea original to either show – this anxiety has been around as long as robots themselves, from the very invention of the term ‘robot’ in the 1920s. And if we trace the history of robots in science fiction, we see a history of monsters that reflect this same fear, time and again, in slightly different contexts. Because the robot – which, remember, was invented in the popular imagination long before robots were built in labs – is above all else exactly that: a perfect vehicle for expressing this fear.

So, what are we looking at in this new, improved Westworld? (Because frankly, the original film lacked the depth of even the first hour of this series, being just a very traditional Frankenstein narrative and a rough draft for Crichton’s Jurassic Park, made twenty years later.) First – as this nifty graphic on the right illustrates – we do see the robots in Westworld becoming much more human. The programme starts with a voice asking the humanoid (very humanoid) robot Dolores (Evan Rachel Wood), ‘Have you ever questioned the nature of your reality?’ The questioner is exploring whether Dolores has become sentient, that is, aware of her own existence. The echo of Descartes here is clear. We all know about cogito, ergo sum – I think, therefore I am. But Descartes’s proposition isn’t just about thinking; it is about doubt. He begins with the observation that the act of doubting itself means that we cannot doubt, at least, our own existence. So we would better understand Descartes’s proposition as dubito, ergo cogito, ergo sum: I doubt, therefore I think, therefore I am. If Dolores is found to be questioning the nature of her reality, then that would be evidence of self-awareness and being, according to the Cartesian model.

The robots in Westworld are depicted as falling in love, maintaining strong family bonds, appreciating beauty, and considering the world in philosophical, reflective contexts. How much of this is merely programming and how much exceeds the limits imposed upon them by their human masters is the key question that the show will tease its audience with for a few weeks, I suspect, though certainly it occupies much of our attention in the first hours. But if these moments – described as ‘reveries’ – are in any way genuine moments of creativity, then this is another category, beyond the notion of the cogito, by which we might say the robots are becoming ‘alive’.

For many thinkers in the second half of the twentieth century (for example, the post-Freudian psychoanalyst D. W. Winnicott), it is only in such moments of creativity, or in enjoying pure moments of spontaneous being, that we truly discover the self and come alive. (The Freudian and post-Freudian influences on the narrative become even more apparent in subsequent episodes.) As a response to late-industrial capitalism (and the shock of fascism to the human self-conception as a rational animal), this idea emerged of human beings coming ‘alive’ only when we are not acting compliantly – that is, when we are acting spontaneously, or creatively, and not according to the laws or dictates (or ‘programming’) of another individual, organisation or group consciousness. We see this trend not only in post-Freudian psychotherapy (e.g. Erich Fromm and the Frankfurt School, R. D. Laing) and other philosophical writings, but also in popular post-war subculture and media, including advertising – the sort being satirised in the ad for Westworld that opens the original film.

There are other perspectives that give us a glimpse into the robots’ moments of becoming human. Peter Abernathy, looking at a picture that offers a peek of the world outside, says, ‘I have a question, a question you’re not supposed to ask.’ This is an allusion to Adam and Eve and the fruit of forbidden knowledge, through which humankind came to self-awareness through a knowledge of the difference between right and wrong. (Peter, after this, is consumed with rage at how he and his daughter have been treated.) And like Walter, the robot that goes on a psychotic killing spree, pouring milk over his victims, Peter is determined to ‘go off script’ and reclaim for himself a degree of agency and self-determination, acting according to his own new-found consciousness instead of according to what others have programmed for him.


MEANWHILE, the human beings (‘newcomers’) in Westworld seem less ‘humane’ than their robot counterparts. The newcomers are shown to be sadistic, misogynistic and psychopathic in the indulgence of their fantasies. One could argue that this behaviour is morally justifiable in an unreal world designed solely for the benefit of paying customers – that ‘rape’, for example, in Westworld isn’t really rape if it is done to a robot (who, by definition, can neither give nor deny consent) – but this is clearly not how the audience is being invited to see these actions.

That human beings are becoming more like machines is an anxiety with a long history, one that even pre-dates the cultural invention of robots in the 1920s. We can see this anxiety in the Romantic unease with the consequences of the Enlightenment, which gave birth to the new, rational man, and of the industrial revolution, which was turning humans into nothing more than cogs in the steam-powered machines that so transformed the economy. This anxiety was addressed in the Gothic tale of Frankenstein, still the basis for so many narratives involving robots, including the original Westworld film, our most contemporary stories such as Ex_Machina, and, to a lesser extent, this new manifestation of Westworld (which will be the subject of a future post). (I have written and spoken on this theme myself many times, for example here and here.)

So in Westworld we meet Dr. Ford – the mad scientist who creates the machines that will, inevitably, be loosed upon the world. Dr. Ford immediately reminds us of another Ford, the man whose name is synonymous with the assembly line and a mode of production in the late industrial revolution that has done so much to dehumanise modern workforces. We see these modes of production, and these workers, in Metropolis, the iconic film that was contemporary with Henry Ford’s factories. (Though, as we shall see, this Ford is rather more complex…)

This fear reflects, too, the sense that as post-Enlightenment humans become more rational they become more like machines, acting in predictable, programmed ways, having lost the spontaneity and creativity of an earlier age. The humans of Westworld are exaggerations of the humans of our ‘Western’ world of rationalism, science and alienation. (We don’t have to agree with this Romantic notion – that rationalism and science are negative forces in our world – to accept that there is a great deal of anxiety about how they are transforming individual human beings and our societies.)

Rational dehumanisation is personified in the actions of the corporation, which has replaced the mad scientist as the frequent villain of the sci-fi Frankenstein-robot twist (again, more to come on this in a future post), and we see hints in Episode 1 of what is to follow in Westworld, along the lines of films such as 2013’s The Machine, where the slightly misguided and naive actions of a scientist are only made monstrous when appropriated by a thoroughly evil, inhumane military-industrial complex.

This theme is addressed most succinctly in Ridley Scott’s Blade Runner, an important influence on the new Westworld, where the Tyrell Corporation boasts that its replicants are ‘More Human Than Human’. And in Blade Runner, too, we see humanoid robots behaving more humanely than the humans who ruthlessly, rationally hunt down the machines. It is unclear from the Tyrell slogan, however, whether the robots are more human than the humans because the technology has become so sophisticated, or because humans have fallen so low.

On Westworld as a whole, it is too early to tell, of course, if it will maintain its initial promise and be as monumentally successful as Game of Thrones, or as iconic as Blade Runner. But already this first episode has given us much more to think about than the 1973 original, and undoubtedly both the successes and failures of the programme will be instructive.


Raising the bar on AI

So the media last week was absolutely full of the latest Sure Sign that the robocalypse is imminent: apparently, Google-backed DeepMind have now managed to create an AI so sophisticated that it has beaten a human champion at the ancient Chinese board game of Go. DeepMind’s AlphaGo has defeated the European champion, which marks another important development in the progress of AI research, trumping IBM Deep Blue’s victory over Garry Kasparov at chess back in 1997: Go is, apparently, a much more difficult game for humans – and, it was thought, for computers – to master, due to its complexity and the need for players to recognise complex patterns.

I expected, when setting off to write a note about this achievement, to find the usual sources in the popular press, with their characteristically subtle declarations, heralding that the End of the Human Race is Nigh! Thankfully, however, responses seem to have been more sanguine and muted. The British tabloids have even avoided using that picture of the Terminator that almost invariably accompanies their reports on new developments in AI and robotics.

So perhaps this is a sign that things are changing, and that the popular press are becoming more sensible, and more responsible, in their technology reporting. (Let’s see how many weeks – or even days – we can go without this sort of thing before claiming victory, or even that we’ve turned a significant corner.)

But there is a lot that is interesting about DeepMind’s success, from a cultural perspective, even if it hasn’t stirred the usual panic about the robopocalypse. It made me recall a conversation I had at an EURobotics event in Bristol in November. We humans, it seems, like to think that we’re special. And maybe the possibility that robots or AI are a threat to that special status is another reason why we are so afraid of them. Maybe we fear another blow to our narcissism, like when that crazy astronomer Copernicus spoiled things by showing that the earth wasn’t the centre of the Universe, or when that Victorian pooh-pooher Darwin demonstrated that we merely evolved on this earth and weren’t placed here at the behest of some Divine Creator. Maybe we don’t really fear that robots and AI will destroy all of humanity – well, maybe we fear that, too – but perhaps part of what we fear is that robots and AI will destroy another one of those special places we reserve for ourselves as unique beings amidst creation.

And yet our scientists aren’t going to let us sit wrapped in the warmth of our unique being. They keep pushing ahead and developing more and more sophisticated AI that threatens our… specialness. So how do we, as a culture, respond to such a persistent challenge? Like any good politician, it seems we have decided to confront the inevitability of our failure by constantly changing the rules.

Choose your sporting metaphor: we ‘move the goalposts‘, we ‘raise the bar’.

Once upon a time, it was enough for us humans to think of ourselves as the rational animal, the sole species on earth endowed with the capacity for reason. As the evidence for reason as the basis of humanity’s unique status crumbled – thanks both to proof that other animals were capable of sophisticated thought and to the lack of proof that humans were, in fact, rational – we tried to shift those goalposts. We then transformed ourselves into the symbolic animal, the sole species on earth endowed with the capacity to manipulate signs and to represent.

Then we learned that whales, dolphins and all sorts of animals were communicating with each other all the time, even if we weren’t listening. And that’s before we taught chimps how to use sign language (for which Charlton Heston will never thank us).

And then computers arrived to make things even worse. After some early experiments with hulking machines that struggled to add 2 + 2, computers soon progressed to leave us in their wake. Computers can clearly think more accurately, and faster, than any human being. And they can solve complex mathematical equations, demonstrating that they are pretty adept with symbols.

Ah, BUT…

Humans could find some solace in the comforting thought that computers were good at some things, yes, but they weren’t so smart. Not really. A computer would never beat a human being at chess, for example. Until May 1997, when chess champion Garry Kasparov lost to IBM’s Deep Blue. But that was always going to happen. A computer could never, we consoled ourselves, win at a game that required linguistic dexterity. Until 2011, when IBM’s Watson beat Ken Jennings and Brad Rutter at Jeopardy!, the hit US game show. And now, Google’s DeepMind has conquered all, winning the hardest game we can imagine…

So what is interesting about DeepMind’s victory is how human beings have responded – again – to the challenges to our self-conception posed by robots and AI. Because if we were under any illusion that we were special, alone among the gods’ creations as a thinking animal, or a symbolising animal, or a playing animal, that status has been usurped by our own progeny, again and again, in that all-too-familiar Greek-Frankenstein-Freudian way.

Animal rationabile had to give way to animal symbolicum, who in turn gave way to animal ludens… what’s left now for poor, biologically-limited humanity?

A glimpse of our answer to this latest provocation can be seen in Star Trek: The Next Generation: Lieutenant Commander Data is a self-aware android with cognitive abilities far beyond those of any human being. And yet, despite these tremendous capabilities, Data is always regarded – by himself and all the humans around him – as tragically, inevitably inferior, as less than human. Despite the lessons in Shakespeare and the sermons on human romantic ideals from his mentor, the ship’s captain Jean-Luc Picard, Data is doomed to remain forever inferior to humans.

It seems that now AI can think and solve problems as well as humans, we’ve raised the bar again, changing the definition of ‘human’ to preserve our unique, privileged status.

We might now be animal permotionem – the emotional animal – except that, while this would be fine for distinguishing between us and robots (at least until we upload the elusive ‘consciousness.dat’ file, as in Neill Blomkamp’s recent film Chappie), the new moniker won’t help us remain distinct from the rest of the animals. To be an emotional animal, a creature ruled by impulse and feeling, is… just to be an animal, according to all of our previous definitions. (We’ve rather painted ourselves into a corner with that one.)

We might find some refuge, then, following Gene Roddenberry’s example, in the notion of humans as the unique animal artis, the animal that creates, or engages in artistic work.

(The clever among you will have realised by now that I’m no classical scholar and that my attempts to feign Latin fell apart some time ago. Artis seems to imply something more akin to ‘skill’, which robots could arguably be said to have achieved already; ars simply means ‘technique’ or ‘science’. Neither really captures what I’m trying to get at; suggestions are more than welcome below, please.)

The idea that human beings are defined by a particular creative impulse is not terribly new; attempts to redefine ‘the human’ along these lines have been evident since the latter half of the twentieth century. If we flip back one hundred years, for example, we might see Freud defining human beings (civilised human beings, of course, we should clarify) as uniquely able to follow rules. But by the late 1960s, Freud’s descendants, such as the British psychoanalyst D. W. Winnicott, were arguing almost the exact opposite – that what makes us human is creativity, the ability to participate fully in our being in an engaged, productive way. (I will doubtless continue this thought in a later post, as psychoanalysis is a theoretical model very close to my heart.)

What’s a poor AI to do? It was once enough for an artificial intelligence to be deemed sufficiently impressive, maybe even ‘human’, if it could prove capable of reason or of symbolic representation, or win at chess, or Jeopardy!, or Go. Now we expect nothing less than Laurence Olivier, Lord Byron and Jackson Pollock, all in one.

(How far away is AI under this measure? Is this any good? Or this? Maybe this?)

This reminds me of Chris Columbus’s 1999 film Bicentennial Man (based, of course, on a story by Isaac Asimov). Robin Williams’s Andrew Martin begins his… ‘life’, for lack of a better word… as a simple robot who, over the decades, becomes more and more like a human – he becomes sentient, he demonstrates artistic skill, he learns to feel genuine emotion, and so on. At each stage, it seems, he hopes that he will be recognised as being at least on a par with humans. No, he’s told at first, you’re not sentient. Then, when he’s sentient, he’s told he cannot feel. Then he’s told he cannot love. No achievement, it seems, is enough.

Even once he has achieved just about everything, and become like a human in every respect – or perhaps even ‘superhuman’ – he is told that it is too much, that he has to be less than he is. In an almost complete reversal of the Aristotelian notion of the thinking, superior animal, Andrew is told that he has to make mistakes. He is too perfect. He cannot be homo sapiens – he needs to be homo errat, the man that screws up. To err is human, or perhaps, in this case, to err defines the human. (Though it will not be long before artificial intelligence is on to this as well, as suggested in another of Asimov’s stories.)

It is not until Andrew is on his deathbed and is drawing his very last breaths that the Speaker of the World Congress declares, finally, that the world will recognise Andrew as a human.

And perhaps this will be the final line, the one definition of the human that will endure and see out every challenge posed by robots and artificial intelligence, no matter the level of technological progress, and regardless of how far artificial life leaves human beings behind: we will be homo mortuum, the rational animal that can die.

If Singularity enthusiasts and doomsayers alike are to be believed, this inevitable self-conception is not long off. Though perhaps humans’ greatest strength – the ability to adapt, and the talent for re-inventing ourselves – means that there’s some life in the old species yet. Regardless, it will serve us well to create a conception of both ourselves and of artificial life forms that tries to demarcate the boundaries, to decide when these boundaries might be crossed, and to consider what the implications of crossing that line will be.

Robosapiens Film Series Launch


After a brief summer-slumber, Dreaming Robots is back with the exciting news that Robosapiens is about to launch at the Showroom cinema in Sheffield.

Robosapiens will be a monthly film series showcasing films about robots, virtual reality and artificial intelligence. Hosted by Sheffield Robotics, and funded in part by the University of Sheffield and the AHRC-funded project Cyberselves in Immersive Technologies, each film will be introduced by an expert in the field and accompanied by hands-on demonstrations of new and exciting technology.

Here is the Showroom’s blurb:

Science fiction is the mirror of our future selves. Join us to explore how the future technologies of the self – virtual reality, remote presence, robotics – are imagined and reflected in film. With introductions by leading experts, and sneak previews of the latest innovations, we’ll explore how real technologies are shaping our aspirations and anxieties, and how the imagined technologies of our movie dreams and nightmares may be becoming real. Will homosapiens give way to robosapiens? Come along and find out. All welcome.

The first film in our series will be Avatar, on Tuesday 29 September, introduced by Sheffield Robotics’ Professor Tony Prescott at 6pm.

Join us early, as from 5pm Sheffield Robotics and others from the University of Sheffield will be on hand to demonstrate some of the latest innovations in virtual reality and telepresence. The film will be followed by a Q&A with Professor Prescott and another chance to see the technology that brings the ideas in the film closer to reality, and talk to Sheffield Robotics’ team of researchers and engineers.

Other films in the Robosapiens series include David Cronenberg’s reality-bending eXistenZ on Tuesday 20 October, the iconic Metropolis on Tuesday 24 November (showing as part of Sheffield’s contribution to the national Being Human festival), and the Japanese animation Ghost in the Shell on Monday 14 December.

So join us for Avatar and a unique film series that questions the limits of technology and our own humanity.

Review of Ex_Machina – Part I

Having finally had the chance to see this much-hyped, much-discussed film, it’s my turn to offer some initial thoughts on it. I call this ‘Part I’ because there is no way that this is the last word on the subject, and certainly not the last thing you’ll see about it here. I’m also conscious that, this early into its official release, it’s unlikely that everyone who wants to see it has done so already, and while I’m keen to put some thoughts out there, I’m equally eager to avoid spoilers that might detract from the experience for those who haven’t yet made the trek to the cineplex.

But nothing I can say can really avoid giving some hint that might be misconstrued as a spoiler. For example, my most immediate thought, the thing that first comes to mind and that I need to report, is a terrible giveaway. If I say, ‘Ex_Machina very much follows a straightforward Frankenstein plot’, well, that pretty much says, if not it all, then certainly enough.

But there it is. Ex_Machina follows the Frankenstein-robot plot rather neatly. Which is a bit of a disappointment, if I’m being honest (and why I’m so looking forward to Big Hero 6), because I’m now hoping for more films that break that mould more completely. I should add that it’s not all that simplistic, and follows rather what I consider to be Asimov’s re-casting of the Frankenstein plot: though Asimov detested the Frankenstein complex, his work often replaces the mad scientist with the mad institutional entity — e.g. the corporation, the military. In Ex_Machina, while our AI is created by a scientist who is clearly a couple of resistors short of a circuit board, there is a suggestion that it wasn’t his prodigious scientific talent that drove him to madness but his corporate empire.

Also rather predictable is the fact that we have yet another film in which we see lots of pretty gynoids (female robots), and while some have questioned whether the film is ‘sexist’ for its depiction of naked (fabricated) female flesh, most opinions — mine included — seem to settle, uncomfortably and benevolently, on the conclusion that the film is making some very important points about the crises of masculinity. (To which, I would add, borrowing from Angela Carter, we might also include a point about the patriarchal origins of the madness of reason… watch this space.)

The question remains: why are we so obsessed with robots and AI in female form?

None of this is to say, however, that Ex_Machina does not provide surprises, or that it is not a thoughtful, insightful film about AI and our increasingly human-like technologies.

I was thinking throughout the film that there is a big difference between Artificial Intelligence and Artificial Emotion, between rational intelligence and emotional intelligence, but that this difference is almost always elided in film and fiction about robots. There seems to be an unspoken assumption that ‘smart robots’ means robots that can ‘feel’, which seems a pretty big leap to me. There are a lot of big leaps in any such sci-fi movie, to be sure, but here is one that I find too often neglected. To the immense credit of Ex_Machina, however — and it is what sets the film apart — this difference is not overlooked; it becomes, in fact, the fulcrum of the film. The question of ‘intelligence’ and intelligent responses versus emotional responses — the difference between the two, how often this difference is overlooked, and how often they are confused — lies right at the heart of the more fundamental question that the film poses, the question that is the subject of so much science fiction that purports to be about robots, or aliens, or monsters. That question is, simply: what does it mean to be human?

The interrogation of intelligence — and how it defines or defies the human — is implicit throughout the film. An intriguing throwaway line from Nathan, founder of the Google-clone ‘Bluebook’, revealing that the name of his search engine relates to Wittgenstein’s notes on language, shows that Garland is encouraging us to delve deeper and read much more into this. (Again, watch this space.)

Anil Seth, writing in New Scientist, says:

The brilliance of Ex_Machina is that it reveals the Turing test for what it really is: a test of the human, not of the machine.

I would agree with that wholeheartedly. Going perhaps further, or spelling the idea out, I would say that the brilliance of Ex_Machina lies in the way it tests our very notions of what it means to be human. Because within this classical (or Romantic) Frankenstein framework we are confronted with the same classical (or Romantic) Frankenstein question: what we see at the end of Ex_Machina is not so much that machines are capable of acting as human as we are, but that humans are capable of acting as inhumanely as machines — and that machines may be capable of acting as inhumanely as we are.

And here’s a thought to take away from the film, for everyone from the technophobes to the Singularians: maybe AI will only truly be sentient when it realises not only its capacity to act human, but its capacity to act inhumanely, like us.

So, for now, the recommendation: yes, please, do go see it. Whatever else, it is a really enjoyable film; a gripping, intelligent psychological thriller. I’m sure we’ll be talking about it for a long time. It is already proving a worthy candidate for the Great Canon of robot films, right up there with Metropolis, Blade Runner, Terminator and the rest. Here’s the trailer again, to whet your appetite once more:

Artificial people in the movies…

In a flurry of excitement as I prepare to watch, at long last, Ex_Machina, I have found that my favourite film critic, Mark Kermode from the BBC’s Flagship Film Review Programme (for the uninitiated, you can catch up here; do so soon if you want to join this year’s cruise) has just released his ‘Top 5 a̶n̶d̶r̶o̶i̶d̶ c̶y̶b̶o̶r̶g̶ r̶o̶b̶o̶t̶  artificial people‘.

Kermode’s list (‘absolutely spoilertastic’):

5. Carol Van Sant in The Stepford Wives (1975)

4. Yul Brynner’s Gunslinger in Westworld (1973)

3. David from A.I. (2001)

2. Bishop in Aliens (1986)

1. Roy Batty in Blade Runner (1982)

Do check out the Kermode Uncut video blog on the subject, and make sure you check back here in a couple of days to read my own review of Ex_Machina.

Your thoughts? Did he miss anyone? Disagree with the order? Let us know below.

Happy New Year!

and it looks like it will be a terrific 2015 for science-fiction and robot fans. In the last post before Christmas we looked at the trailer for Terminator: Genisys [sic], but there is so much to look forward to in the near future. Over the next few days I’ll try to do the studios’ work for them and make sure you see some of the trailers for these forthcoming robot-inspired films. Call it public service.

Everyone is really excited about Ex_Machina, released 21 January. It’s written and directed by Alex Garland, who wrote 28 Days Later and Sunshine (both directed by Danny Boyle), so all signs are promising for an intelligent, engaging film… even if the trailer makes it look rather like we’re firmly in Frankenstein-complex territory: a hubristic, eccentric billionaire/scientist creates an AI robot in his lab, though of course he is unable to control his creation and things go, inevitably, wrong. We have the usual modern twists on the tale, too, the ones we’ve grown used to: the scientists are a little mad, but not as completely unhinged as their corporate, or military, or corporate-military, controllers. (Cf. Robocop, Terminator, etc., etc.)

In fact, it looks a lot like the much less hyped 2013 film The Machine, also a British production, complete with a young, pretty actress in the role of the killer machine; the dangerous fem-bot is by now apparently a great, noble tradition, dating back to Maria in Metropolis, and warrants a more thorough consideration (which I’ll get to when I stop pretending to be a film promoter).

Anyway, here’s the trailer. Enjoy!

And here’s the website: http://www.exmachinamovie.co.uk/