3 initial thoughts on Ready Player One

The long-anticipated, Steven Spielberg-helmed Ready Player One was released in UK cinemas this week, and as a film of obvious interest to DreamingRobots and Cyberselves everywhere, we went along to see what the Maestro of the Blockbuster has done with Ernest Cline’s 2011 novel (which the author himself helped to adapt for the screen).

We went in with a lot of questions, not least of which included:

  • How would Cline & Spielberg update the material? (In terms of VR technology, 2011 is so… 2011.)
  • How would the film engage with the modern politics of the Internet and gaming?
  • How would Spielberg use the most up-to-date cinematic techniques and effects to enhance the film? (would this be another game changer?)
  • What would the film have to say about our future? The future of gaming? Of our interconnectedness? Of social media? What would the film have to say about the future of humanity itself?

A one-time viewing and a next-day review are, of course, too early to answer such big questions with any certainty. Fortunately, however you feel about the film itself, it will reward multiple viewings on DVD, as even the most unsatisfied viewer won’t be able to resist pausing the action frame-by-frame to catch all the references and fleeting glimpses of their favourite video game characters of the past.

But for now, here are 3 initial responses for discussion/debate:

1. Ready Player One is a morality tale about corporate power and the Internet

Cline’s original novel was very much a paean to plucky independent gamers resisting the ruthless greed and world-conquering ambition of the Corporate Villain (while simultaneously, strangely, lionising the richest and most world-conquering of them all, James Halliday, the Gates-Jobs figure transformed here into the benevolent deus ex machina who built his trillions on creating the OASIS). The film remains true to Cline’s vision, and perhaps even heightens this black-and-white, goodie-versus-baddie framing, with a brilliantly cast Ben (‘Commander Krennic’) Mendelsohn and a tragically under-used Hannah John-Kamen heading IOI’s army of faceless corporate infantry.

But while this wouldn’t have been at the forefront of Cline’s thinking in 2011, it is impossible to watch this film now and not think of the erosion of net neutrality set in motion by the FCC’s December 2017 decision or, more recently, of the Facebook–Cambridge Analytica data scandal, which has finally woken more people up to the reality of mass surveillance: what personal data the corporate giants hold, and how it might be misused.

There is little chance that Spielberg and Cline had either of these dangers in mind when the film went into production, and such issues shouldn’t be oversimplified in serious journalism. But storytelling has always been a good way to help people understand complex issues and to motivate them to action, and if RPO‘s simple story of goodies and baddies can become a cultural rallying-point against the dangerous mix of unchecked capitalism and our social interconnectedness, then that is a Good Thing.

2. Spielberg’s film goes a certain way into correcting some of the problems of the original novel (though could have gone further).

Through no real fault of the author, opinions on Cline’s once much-lauded book were revised post-#gamergate. What was once seen as an innocent tale of a young (definitely boy) geek’s nostalgic journey from social outsider to saviour of the world (cf. also Cline’s Armada) came to be seen by some instead as a symptom of everything behind the vile misogyny of white male gamers, lashing out at anyone who didn’t see how they were the best and most privileged of all people on this earth.

Let’s be clear: the gender politics in the film are far from ideal. How is it, for example, as another reviewer notes, that two of the main female protagonists are so ignorant of basic Halliday-lore? And there is still a bit too much of the White Boy Champion of the World in even this version of Cline’s tale. Having said that, other critics, too, have noticed a much-improved gender consciousness in the film.

But what is clear from Spielberg’s offering is that women are as much a part of gaming culture as men, that they have every right to occupy the same space, and that anyone who thinks otherwise can be gone. Without wanting to give anything away, it is enough to note that Art3mis is a legend in the OASIS, a skilled gamer whom Parzival worships, and that one of the OASIS’s best designers/builders (or programmers) is also a woman. Outside of the VR world, the real women behind the avatars are among the best-drawn characters (albeit in a film not overburdened with character depth, but then this is a Spielberg popcorn speciality, not one of his Oscar worthies). Both Olivia Cooke and Lena Waithe are given space to live and to be (the former, in particular, being a much more interesting protagonist than poor Wade Watts, who really is little more than his avatar), and as previously mentioned, John-Kamen is a much more frightening villain than Mendelsohn’s corporate puppet.

This film shouldn’t be heralded as a feminist triumph or a shining example of post-Weinstein Hollywood, but it is a step in the right direction, and it might mean a few more people can forgive Cline for the white-boy wank-fest that they perceive (not without some good reason) the original novel to be.

3. Despite some nods to progressive politics, the film holds deeply conservative views on human nature.

A big attraction of the novel, and of the excitement around the film, for DreamingRobots and Cyberselves, was the way the novel created worlds within a new reality and explored what humans could become in spaces no longer bound by the physical limitations of our birth. It’s what we’re looking at with our experiments in VR and teleoperative technologies, asking: what happens to human beings when we can be transformed by such technologies? What might our posthuman future look like?

The film does not ask these questions. In this respect, again, the film does not deviate from the original novel. The novel, for all its creativity in imagining such virtual realities before they were fully realised in real-world technology, was still very much about recognisably human worlds. The film actually regresses to a vision of human experience in which the worlds of flesh-reality and virtual-reality are more clearly demarcated. In the book, there was at least a certain bleeding between these two worlds, as events in the virtual world could have consequences in the real world and vice versa. In the film, however, only real-world events have impacts on the virtual world. Events in the virtual world do not impact upon the real, and the two storylines, the two battles between goodies and baddies in the virtual and real worlds, are clearly separate. (This is highlighted by the fact that there are distinct villains for each location: John-Kamen’s F’Nale Zandor never enters the virtual world, while T.J. Miller’s I-R0k exists only in the virtual. Mendelsohn’s Sorrento is the only villain who crosses that boundary.)

Spielberg’s vision of 2045 is clearly dystopian: you can see it in the ‘Stacks’, where so many impoverished people are forced to live, in the utter dominance of mega-corporations, and in the inability (or unwillingness) of the state to provide for or protect its citizens. But while so many of the citizens of 2045 take refuge in the paradise that is the OASIS, Spielberg makes it clear that this world is merely a symptom of the dystopian world of the flesh. The opium of these alienated masses, in fact, amplifies their miserable situation. We’re supposed to pity the people we see, caged in their headsets, who can’t play tennis on a real tennis court, or dance in a real nightclub, or find love… wherever real people find love.

This is clear at the film’s conclusion, but as we don’t want to give away spoilers, we’ll leave it for you to see for yourselves. But what is evident throughout is that the virtual world should only be a place where gamers go to play – it is not a place where humans can live. And it is only in the world of flesh that humans can really, successfully exist. Again, this is evident in Cline’s novel: ‘That was when I realized, as terrifying and painful as reality can be, it’s also the only place where you can find true happiness. Because reality is real.’

As one reviewer has so succinctly put it:

But here’s the thing. Ready Player One is a tragedy. What seems like a fun adventure movie is actually a horror movie with a lot to say about the way we live now, the way we might live in the future, and the pitfalls and perils of loving video games too much. This is Spielberg reflecting on the culture he helped create, and telling the audience he made mistakes.

The only objection I have to the above quotation is the idea that the film has a lot to say about the way we might live in the future. Our future will most certainly be posthuman, and this film cannot shake its humanist origins or its deeply conservative understanding of how we might use technology. In this film, that posthuman being, and the technology that enables it, is as much of a threat to human life as a Great White shark or rampaging dinosaurs.

The film, therefore, cannot at all accommodate what will be the most imperative issues for human beings in the very near future. Such a binary understanding comes straight from the classic humanist guidebook: fantasy is fine, technology can be fun, but what’s real is what’s real, and what is human is human. The notions that meddling with humanity’s true nature can never bring us happiness, and that only by eschewing anything external to that nature can we be truly happy, or truly human, are the usual humanist scaremongering about technology that we’ve seen time and again, ever since Mary Shelley’s classic Frankenstein did so much to create our present fantasies.

Never mind that such a worldview ignores the fact that there has never been such a creature, a human being unimpacted by technology. Never mind, too, that Spielberg’s entire cinematic oeuvre is fantastically, stubbornly, deeply and, sometimes, beautifully humanist (even when, or perhaps especially when, he’s telling stories about big fish or aliens). It is nevertheless a disappointment that such an opportunity, such a potentially transformative film about the future and how we can be re-shaped by technology, plays it safe and retreats to a nostalgia for a kind of human being that is increasingly becoming obsolete. It would have been nice if Ready Player One were a great film about posthumanism, addressing the vital issues about technology that we are increasingly facing. But alas… Perhaps we should dive back into Spielberg’s catalogue and watch A.I.

Having said that, Ready Player One is a fun film, and we will be taking our children to see it (ironically, perhaps, given its message that games are fun but sometimes, yes, you do need to turn them off). It is definitely worth its 12 Certificate, though, so parents of younger children be warned. And of course we’ll buy it on DVD, to catch another glimpse of our favourite gaming characters.

(Which films do you think better address our posthuman future? Suggestions below, please!)


The New Westworld: Dehumanising the human; humanising the machine


Note – the following blog tries to avoid the biggest spoilers, though to avoid any spoilers you would be best advised to watch Ep. 1 of Westworld before proceeding.

HBO’s latest offering (on Sky Atlantic here in the UK) is an update of Michael Crichton‘s 1973 film Westworld, this time brought to us as a ten-part television series by sci-fi re-booter extraordinaire J.J. Abrams and screenwriter Jonathan Nolan (yes, brother and frequent collaborator of Christopher). The new Westworld comes with much hope (for HBO, as a potential ‘new Game of Thrones’) and hype, understandably, given that the talent behind it has given us so much terrific science fiction of late, including Star Wars: The Force Awakens and Interstellar.

As with Channel 4’s series Humans, broadcast last June (news on the forthcoming series 2 here), Dreaming Robots offered a live Twitter commentary while the show was being broadcast in the UK, and I’ll take some time afterwards to write some reflective pieces on what we see in the show. (The Twitter ‘Superguide’ from @ShefRobotics can be seen here; my own @DreamingRobots Superguide can be seen here.)

Unsurprisingly, many of the themes in the Westworld reboot could also be seen in Humans. It seems, for example, that both shows express a certain anxiety: that as our machines become more human, we humans seem to become less and less human, or humane. But this isn’t a new idea original to either show – this anxiety has been around as long as robots themselves, from the very invention of the term robot in the 1920s. And if we trace the history of robots in science fiction, we see a history of monsters that reflect this same fear, time and again, in slightly different contexts. Because the robot – which, remember, was invented in the popular imagination long before robots were built in labs – is above all else exactly that: a perfect way of expressing this fear.

So, what are we looking at in this new, improved Westworld? (Frankly, the original film lacked the depth of even the first hour of this series, being just a very traditional Frankenstein narrative and a rough draft for Crichton’s Jurassic Park, made 20 years later.) First – as this nifty graphic on the right illustrates – we do see the robots in Westworld becoming much more human. The programme starts with a voice asking the humanoid (very humanoid) robot Dolores (Evan Rachel Wood), ‘Have you ever questioned the nature of your reality?’. The questioner is exploring whether Dolores has become sentient, that is, aware of her own existence. The echo of Descartes here is clear. We all know about cogito, ergo sum – I think, therefore I am. But Descartes’s proposition isn’t just about thinking; it is about doubt. He begins with the proposition that the act of doubting itself means that we cannot doubt, at the very least, our own existence. So we would better understand Descartes’s proposition as dubito, ergo cogito, ergo sum: I doubt, therefore I think, therefore I am. If Dolores is found to be questioning the nature of her reality, then that would be evidence for self-awareness and being, according to the Cartesian model.

The robots in Westworld are depicted as falling in love, maintaining strong family bonds, appreciating beauty, and considering the world in philosophical, reflective contexts. How much of this is merely programming and how much exceeds the limits imposed upon them by their human masters is the key question that the show will tease its audience with for a few weeks, I suspect, though certainly it occupies much of our attention in the first hours. But if these moments – described as ‘reveries’ – are in any way genuine moments of creativity, then this is another category, beyond the notion of the cogito, that we might say the robots are becoming ‘alive’.

For many thinkers in the second half of the twentieth century (for example, the post-Freudian psychoanalyst D. W. Winnicott), it is only in such moments of creativity, or in enjoying pure moments of spontaneous being, that we truly discover the self and come alive. (The Freudian and post-Freudian influences on the narrative become even more apparent in subsequent episodes.) As a response to late-industrial capitalism (and the shock of fascism to the human self-conception as a rational animal), this idea emerged of human beings coming ‘alive’ only when we are not acting compliantly, that is, when we are acting spontaneously, or creatively, and not according to the laws or dictates (or ‘programming’) of another individual, organisation or group consciousness. We see this trend not only in post-Freudian psychotherapy (e.g. Erich Fromm and the Frankfurt School, R. D. Laing) and other philosophical writings but also in popular post-war subculture media, including advertising – the sort being satirised in the ad for Westworld that opens the original film.

There are other perspectives that give us a glimpse into the robots’ moments of becoming human. Peter Abernathy, looking at a picture that offers a peek of the world outside, says, ‘I have a question, a question you’re not supposed to ask.’ This is an allusion to Adam and Eve and the fruit of forbidden knowledge, through which humankind came to self-awareness through a knowledge of the difference between right and wrong. (Peter, after this, is consumed with rage at how he and his daughter have been treated.) And like Walter, the robot who goes on a psychotic killing spree, pouring milk over his victims, Peter is determined to ‘go off script’ and reclaim for himself a degree of agency and self-determination, acting according to his own, new-found consciousness instead of according to what others have programmed for him.


MEANWHILE, the human beings (the ‘newcomers’) in Westworld seem less ‘humane’ than their robot counterparts. The newcomers are shown to be sadistic, misogynist and psychopathic in the indulgence of their fantasies. One could argue that this behaviour is morally justifiable in an unreal world designed solely for the benefit of paying customers – that a ‘rape’, for example, in Westworld isn’t really ‘rape’ if it is done to a robot (who, by definition, can neither give nor deny consent) – but this is clearly not how the audience is being invited to see these actions.

That human beings are becoming more like machines is an anxiety for which there is a long history of evidence, one that even pre-dates the cultural invention of robots in the 1920s. We can see this anxiety in the Romantic unease with the consequences of the Enlightenment, which gave birth to the new, rational man, and of the industrial revolution, which was turning humans into nothing more than cogs in the steam-powered machines that so transformed the economy. This anxiety was addressed in the Gothic tale of Frankenstein, still the basis for so many narratives involving robots, including the original Westworld film and, more recently, even our most contemporary stories such as Ex_Machina and, to a lesser extent, this manifestation of Westworld (which will be the subject of a future post). (I have written and spoken on this theme myself many times, for example here and here.)

So in Westworld we meet Dr. Ford, the mad scientist who creates the machines that will, inevitably, be loosed upon the world. Dr. Ford immediately reminds us of another Ford, the man whose name is synonymous with the assembly line and a mode of production in the late industrial revolution that has done so much to dehumanise modern workforces. We see these modes of production, and these workers, in Metropolis, the iconic film contemporary with Henry Ford’s factories. (Though, as we shall see, this Ford is rather more complex…)

This fear reflects, too, that as post-Enlightenment humans become more rational they become more like machines, acting in predictable, programmed ways, having lost the spontaneity and creativity of an earlier age. The humans of Westworld are exaggerations of the humans of our ‘Western’ world of rationalism, science and alienation. (We don’t have to agree with this Romantic notion, that rationalism and science are negative forces in our world, to accept that there is a great deal of anxiety about how rationalism and science are transforming individual human beings and our societies.)

Rational dehumanisation is personified in the actions of the corporation, which has replaced the mad scientist as the frequent villain of the sci-fi Frankenstein-robot-twist (again, more to come on this in a future post), and we see hints in Episode 1 of what is to follow in Westworld, along the lines of films such as 2013’s The Machine, where the slightly misguided and naive actions of a scientist are only made monstrous when appropriated by a thoroughly evil, inhumane military-industrial complex.

This theme is addressed succinctly in Ridley Scott’s Blade Runner, an important influence on the new Westworld, where the Tyrell Corporation boasts that its replicants are More Human Than Human. And in Blade Runner, too, we see humanoid robots behaving more humanely than the humans who ruthlessly, rationally hunt down the machines. It is unclear from the Tyrell slogan, however, whether the robots are more human than human because the technology has become so sophisticated, or because humans have fallen so low.

On Westworld as a whole, it is too early to tell, of course, if it will maintain its initial promise and be as monumentally successful as Game of Thrones, or as iconic as Blade Runner. But already this first episode has given us much more to think about than the 1973 original, and undoubtedly both the successes and failures of the programme will be instructive.

Raising the bar on AI

So the media last week was absolutely full of the latest Sure Sign that the robocalypse is imminent: apparently, Google-backed DeepMind have now managed to create an AI so sophisticated that it has beaten human champions at the ancient Chinese board game of Go. DeepMind’s AlphaGo has defeated the European champion, which marks another important development in the progress of AI research, trumping IBM Deep Blue’s victory over Garry Kasparov at chess back in 1997: Go is, apparently, a much more difficult game for humans – and, it was thought, for computers – to master, due to its complexity and the need for players to recognise complex patterns.

I expected, when setting off to write a note about this achievement, to find the usual sources in the popular press, with their characteristically subtle declarations, heralding that the End of the Human Race is Nigh! Thankfully, however, responses seem to have been more sanguine and muted. The British tabloids have even avoided using that picture of the Terminator that almost invariably accompanies their reports on new developments in AI and robotics.

So perhaps this is a sign that things are changing, and that the popular press is becoming more sensible, and more responsible, in its technology reporting. (Let’s see how many weeks – or even days – we can go without this sort of thing before claiming victory, or even that we’ve turned a significant corner.)

But there is a lot that is interesting about DeepMind’s success, from a cultural perspective, even if it hasn’t stirred the usual panic about the robopocalypse. It made me recall a conversation I had at a EURobotics event in Bristol in November. We humans, it seems, like to think that we’re special. And maybe the possibility that robots or AI are a threat to that special status is another reason why we are so afraid of them. Maybe we fear another blow to our narcissism, like when that crazy astronomer Copernicus spoiled things by showing that the earth wasn’t the centre of the Universe, or when that Victorian poo-pooer Darwin demonstrated that we merely evolved on this earth and weren’t placed here at the behest of some Divine Creator. Maybe we don’t really fear that robots and AI will destroy all of humanity – well, maybe we fear that, too – but maybe part of what we fear is that robots and AI will destroy another one of those special places we reserve for ourselves as unique beings amidst creation.

And yet our scientists aren’t going to let us sit wrapped in the warmth of our unique being. They keep pushing ahead and developing more and more sophisticated AI that threatens our… specialness. So how do we, as a culture, respond to such a persistent challenge? Like any good politician, it seems we have decided to confront the inevitability of our failure by constantly changing the rules.

Choose your sporting metaphor: we ‘move the goalposts‘, we ‘raise the bar’.

Once upon a time, it was enough for us humans to think of ourselves as the rational animal, the sole species on earth endowed with the capacity for reason. As the evidence for reason as the basis for a unique human status crumbled – thanks both to proof that other animals were capable of sophisticated thought and to the lack of proof that humans were, in fact, rational – we tried to shift those goalposts. We transformed ourselves into the symbolic animal, the sole species on earth endowed with the capacity to manipulate signs and represent.

Then we learned that whales, dolphins and all sorts of animals were communicating with each other all the time, even if we weren’t listening. And that’s before we taught chimps how to use sign language (for which Charlton Heston will never thank us).

And then computers arrived to make things even worse. After some early experiments with hulking machines that struggled to add 2 + 2, computers soon progressed to leave us in their wake. Computers can clearly think more accurately, and faster, than any human being. And they can solve complex mathematical equations, demonstrating that they are pretty adept with symbols.

Ah, BUT…

Humans could find some solace in the comforting thought that computers were good at some things, yes, but they weren’t so smart. Not really. A computer would never beat a human being at chess, for example. Until May 1997, when chess champion Garry Kasparov lost to IBM’s Deep Blue. But that was always going to happen. A computer could never, we consoled ourselves, win at a game that required linguistic dexterity. Until 2011, when IBM’s Watson beat Ken Jennings and Brad Rutter at Jeopardy!, the hit US game show. And now Google’s DeepMind has conquered all, winning the hardest game we can imagine…

So what is interesting about DeepMind’s victory is how human beings have responded – again – to the challenges to our self-conception posed by robots and AI. Because if we were under any illusion that we were special, alone among the gods’ creations as a thinking animal, or a symbolising animal, or a playing animal, that status has been usurped by our own progeny, again and again, in that all-too-familiar Greek-Frankenstein-Freudian way.

Animal rationabile had to give way to animal symbolicum, who in turn gave way to animal ludens… what’s left now for poor, biologically-limited humanity?

A glimpse of our answer to this latest provocation can be seen in Star Trek: The Next Generation: Lieutenant Commander Data is a self-aware android with cognitive abilities far beyond those of any human being. And yet, despite these tremendous capabilities, Data is always regarded – by himself and all the humans around him – as tragically, inevitably inferior, as less than human. Despite the lessons in Shakespeare and sermons on human romantic ideals from his mentor, the ship’s captain Jean-Luc Picard, Data is doomed to be forever inferior to humans.

It seems that now that AI can think and solve problems as well as humans, we’ve raised the bar again, changing the definition of ‘human’ to preserve our unique, privileged status.

We might now be animal permotionem – the emotional animal – except that, while that would be fine for distinguishing between us and robots (at least until we upload the elusive ‘consciousness.dat’ file, as in Neill Blomkamp’s recent film, Chappie), this new moniker won’t help us remain distinct from the rest of the animals. To be an emotional animal, a creature ruled by impulse and feeling, is… just to be an animal, according to all of our previous definitions. (We’ve sort of painted ourselves into a corner with that one.)

We might find some refuge, then, following Gene Roddenberry’s example, in the notion of humans as the unique animal artis, the animal that creates, or engages in artistic work.

(The clever among you will have realised some time ago that I’m no classical scholar and that my attempts to feign Latin fell apart some time ago. Artis seems to imply something more akin to ‘skill’, which robots could arguably have already achieved; ars simply means ‘technique’ or ‘science’. Neither really captures what I’m trying to get at; suggestions are more than welcome below, please.)

The idea that human beings are defined by a particular creative impulse is not terribly new; attempts to redefine ‘the human’ along these lines have been evident since the latter half of the twentieth century. For example, if we flip back one hundred years, we might see Freud defining human beings (civilised human beings, of course, we should clarify) as uniquely able to follow rules. But by the late 1960s, Freud’s descendants, such as the British psychoanalyst D. W. Winnicott, were arguing almost the exact opposite – that what makes us human is creativity, the ability to fully participate in our being in an engaged, productive way. (I will doubtless continue this thought in a later post, as psychoanalysis is a theoretical model very close to my heart.)

What’s a poor AI to do? It was once enough for an artificial intelligence to be sufficiently impressive, maybe even deemed ‘human’, if it could prove capable of reason, or of symbolic representation, or win at chess, or Jeopardy!, or Go. Now we expect nothing less than Laurence Olivier, Lord Byron and Jackson Pollock, all in one.

(How far away is AI under this measure? Is this any good? Or this? Maybe this?)

This reminds me of Chris Columbus’s 1999 film Bicentennial Man (based, of course, on a story by Isaac Asimov). Robin Williams’s Andrew Martin begins his… ‘life’, for lack of a better word… as a simple robot who, over the decades, becomes more and more like a human – he becomes sentient, he demonstrates artistic skill, he learns to feel genuine emotion, and so on. At each stage, it seems, he hopes that he will be recognised as being at least on a par with humans. No, he’s told at first, you’re not sentient. Then, when he’s sentient, he’s told he cannot feel. Then he’s told he cannot love. No achievement, it seems, is enough.

Even once he has achieved just about everything, and become like a human in every respect – or perhaps even ‘superhuman’ – he is told that it is too much, that he has to be less than he is. In almost a complete reversal of the Aristotelian notion of the thinking, superior animal, Andrew is told that he has to make mistakes. He is too perfect. He cannot be homo sapiens – he needs to be homo errat, the man who screws up. To err is human, or perhaps in this case, to err defines the human. (Though artificial intelligence will soon be on to this as well, as suggested in another of Asimov’s stories.)

It is not until Andrew is on his deathbed and is drawing his very last breaths that the Speaker of the World Congress declares, finally, that the world will recognise Andrew as a human.

And perhaps this will be the final line; this is perhaps the one definition of human that will endure and see out every single challenge posed by robots and artificial intelligence, no matter the level of technological progress, and regardless of how far artificial life leaves human beings behind: we will be homo mortuum. The rational animal that can die.

If Singularity enthusiasts and doomsayers alike are to be believed, this inevitable self-conception is not long off. Though perhaps humans’ greatest strength – the ability to adapt, and the talent to re-invent ourselves – might mean that there’s some life in the old species yet. Regardless, it will serve us very well to create a conception of both ourselves and of artificial life forms that tries to demarcate the boundaries, to decide when those boundaries might be crossed, and to consider what the implications of crossing them will be.

Robosapiens Film Series Launch


After a brief summer-slumber, Dreaming Robots is back with the exciting news that Robosapiens is about to launch at the Showroom cinema in Sheffield.

Robosapiens will be a monthly film series showcasing films about robots, virtual reality, and artificial intelligence. Hosted by Sheffield Robotics, and funded in part by the University of Sheffield and the AHRC-funded project Cyberselves in Immersive Technologies, each film will be introduced by an expert in the field and accompanied by hands-on demonstrations of new and exciting technology.

Here is the Showroom’s blurb:

Science fiction is the mirror of our future selves. Join us to explore how the future technologies of the self – virtual reality, remote presence, robotics – are imagined and reflected in film. With introductions by leading experts, and sneak previews of the latest innovations, we’ll explore how real technologies are shaping our aspirations and anxieties, and how the imagined technologies of our movie dreams and nightmares may be becoming real. Will homo sapiens give way to robosapiens? Come along and find out. All welcome.

The first film in our series will be Avatar, on Tuesday 29 September, introduced by Sheffield Robotics’ Professor Tony Prescott at 6pm.

Join us early, as from 5pm Sheffield Robotics and others from the University of Sheffield will be on hand to demonstrate some of the latest innovations in virtual reality and telepresence. The film will be followed by a Q&A with Professor Prescott and another chance to see the technology that brings the ideas in the film closer to reality, and talk to Sheffield Robotics’ team of researchers and engineers.

Other films in the Robosapiens series include David Cronenberg’s reality-bending eXistenZ on Tuesday, 20 October, the iconic Metropolis on Tuesday, 24 November (showing as part of Sheffield’s contribution to the national Being Human festival), and the Japanese animation Ghost in the Shell on Monday, 14 December.

So join us for Avatar and a unique film series that questions the limits of technology and our own humanity.

Review of Ex_Machina – Part I

Having finally had the chance to see this much-hyped, much-discussed film, it’s my turn to offer some initial thoughts on it. I call this ‘Part I’, because there is no way that this is the last word on the subject, and certainly not the last thing you’ll see about it here. I’m also conscious that this early into its official release, it’s unlikely that everyone who wants to see it has already done so, and while I’m keen to put some thoughts out there, I’m equally eager to avoid spoilers that might detract from the experience for those who haven’t yet made the trek to the cineplex.

But nothing I can say can really avoid giving some hint that might be misconstrued as a spoiler. For example, my most immediate thought, the thing that first comes to mind that I need to report, is a terrible giveaway. If I say, ‘Ex_Machina very much follows a straight-forward Frankenstein plot‘, well, that pretty much says, if not it all, then certainly enough.

But there it is. Ex_Machina follows the Frankenstein-robot plot rather neatly. Which is a bit of a disappointment, if I’m being honest (and why I’m so looking forward to Big Hero 6), because I’m hoping for more films now that more completely break that mould. I should add that it’s not all that simplistic, and follows rather what I consider to be Asimov’s re-casting of the Frankenstein plot: though Asimov detested the Frankenstein complex, his work often replaces the mad scientist with the mad institutional entity — e.g. the corporation, the military. In Ex_Machina, while our AI is created by a scientist who is clearly a couple of resistors short of a circuit board, there is a suggestion that it wasn’t his prodigious scientific talent that drove him to madness but his corporate empire.

Also rather predictable is the fact that we have yet another film featuring lots of pretty gynoids (female robots), and while some have questioned whether the film is ‘sexist’ for its depiction of naked (fabricated) female flesh, most opinions — mine included — seem to uncomfortably, benevolently settle on the conclusion that the film is making some very important points about the crises of masculinity. (To which, I would add, borrowing from Angela Carter, we might also include a point about the patriarchal origins of the madness of reason… watch this space.)

The question remains: why are we so obsessed with robots and AI in female form?

None of this is to say, however, that Ex_Machina does not provide surprises, or that it is not a thoughtful, insightful film about AI and our increasingly human-like technologies.

I was thinking throughout the film that there is a big difference between Artificial Intelligence and Artificial Emotion, between rational intelligence and emotional intelligence, but that this is almost always elided in film and fiction about robots. There seems to be an unspoken assumption that ‘smart robots’ mean robots that can ‘feel’, which seems a pretty big leap to me. There are a lot of big leaps in any such sci-fi movie, to be sure, but here’s one that I find too often neglected. To the immense credit of Ex_Machina, however – and this is what sets it apart – this difference is not overlooked; this difference, in fact, becomes the fulcrum of the film. The question of ‘intelligence’ and intelligent responses versus emotional responses — the difference between the two, how often this difference is overlooked, and how often they are confused — lies right at the very heart of the more fundamental question that the film poses, which is the subject of so much science-fiction that purports to be about robots, or aliens, or monsters. That question is, simply: What does it mean to be human?

The interrogation of intelligence — and how it defines or defies the human — is implicit throughout the film. An intriguing throwaway line from Nathan, founder of the Google-clone ‘Bluebook’, that the name of his search engine relates to Wittgenstein’s notes on language, shows that Garland is encouraging us to delve and read much more into this. (Again, watch this space.)

Anil Seth, writing in New Scientist, says:

The brilliance of Ex_Machina is that it reveals the Turing test for what it really is: a test of the human, not of the machine.

I would agree with that, wholeheartedly. Going maybe further, or spelling that idea out, I would say that the brilliance of Ex_Machina lies in the way that it tests our very notions of what it means to be human. Because within this classical (or Romantic) Frankenstein framework we are confronted with the same classical (or Romantic) Frankenstein question: what we see at the end of Ex_Machina is not that machines are capable of acting as human as we are, but that humans are capable of acting as inhumanely as machines – and, perhaps, that machines may be capable of acting as inhumanely as we are.

And here’s a thought to take away from the film, for everyone from the technophobes to the Singularians: maybe AI will only truly be sentient when it realises not only its capacity to act human, but its capacity to act inhumanely, like us.

So, for now, the recommendation: Yes, please, do go see it. Whatever else, it is a really enjoyable film; it is a gripping, intelligent psychological thriller. I’m sure we’ll be talking about it for a long time. It is already proving a worthy candidate in the Great Canon of robot films, right up there with Metropolis, Blade Runner, Terminator and the rest. Here’s the trailer again, to whet your appetite once more: