3 initial thoughts on Ready Player One

The long-anticipated, Steven Spielberg-helmed Ready Player One was released in UK cinemas this week, and as a film of obvious interest to DreamingRobots and Cyberselves everywhere, we went along to see what the Maestro of the Blockbuster has done with Ernest Cline’s 2011 novel (which the author himself helped to adapt for the screen).

We went in with a lot of questions, not least of which were:

  • How would Cline & Spielberg update the material? (In terms of VR technology, 2011 is so… 2011.)
  • How would the film engage with the modern politics of the Internet and gaming?
  • How would Spielberg use the most up-to-date cinematic techniques and effects to enhance the film? (would this be another game changer?)
  • What would the film have to say about our future? The future of gaming? Of our interconnectedness? Of social media? What would the film have to say about the future of humanity itself?

A one-time viewing and a next-day review are, of course, too early to answer such big questions with any certainty. Fortunately, however you feel about the film itself, it will reward repeated viewings on DVD, as even the most unsatisfied viewer won’t be able to resist pausing the action frame-by-frame to catch all the references and fleeting glimpses of their favourite video game characters of the past.

But for now, here are 3 initial responses for discussion/debate:

1. Ready Player One is a morality tale about corporate power and the Internet

Cline’s original novel was very much a paean to plucky independent gamers resisting the ruthless greed and world-conquering ambition of the Corporate Villain (while simultaneously, strangely, lionising the richest and most world-conquering of them all, James Halliday, the Gates-Jobs figure transformed here into the benevolent deus ex machina who built his trillions on creating the OASIS). The film remains true to Cline’s vision, and perhaps even heightens this black-and-white, goodie-versus-baddie framing, with the corporate villain IOI fronted by a brilliantly cast Ben (‘Commander Krennic’) Mendelsohn and a tragically under-used Hannah John-Kamen, heading an army of faceless corporate infantry.

But while this wouldn’t have been at the forefront of Cline’s thinking in 2011, it is impossible to watch this film now, today, and not think of the erosion of net neutrality set in motion by the FCC’s December 2017 decision and, more recently, the revelations about Cambridge Analytica’s harvesting of Facebook users’ data, which have finally woken more people up to the reality of mass surveillance: what personal data the corporate giants hold, and how it might be misused.

There is little chance that Spielberg and Cline had either of these potential dangers in mind when the film went into production. And such issues shouldn’t be vastly oversimplified in real journalism. But storytelling is always a good way to make people understand complex issues and motivate them to action, and if RPO’s simple story of goodies and baddies can become a cultural rallying-point against the dangerous mix of unchecked capitalism and our social interconnectedness, then that is a Good Thing.

2. Spielberg’s film goes some way towards correcting the problems of the original novel (though it could have gone further).

Through no real fault of the author, opinions on Cline’s once much-lauded book were revised post-#gamergate, and what was once seen as an innocent tale of a young (definitely boy) geek’s nostalgic travels from social outsider to saviour of the world (cf. also Cline’s Armada) came instead to be seen by some as a symptom of everything behind the vile misogyny of white male gamers, lashing out at anyone who didn’t see how they were the best and most privileged of all people on this earth.

Let’s be clear: the gender politics in the film are far from ideal. How is it, for example, as another reviewer notes, that two of the main female protagonists are so ignorant of basic Halliday-lore? And there is still a bit too much of the White Boy Champion of the World in even this version of Cline’s tale. Having said that, however, other critics have noticed a much-improved gender consciousness in the film.

But what is clear from Spielberg’s offering is that women are as much a part of gaming culture as men, and have every right to occupy the same space, and anyone who thinks otherwise can be gone. Without wanting to give anything away, it is enough to note that Art3mis is a legend in the OASIS, a skilled gamer whom Parzival worships, and that one of the OASIS’s best designers/builders (or programmers) is also a woman. Outside of the VR world, the real women behind the avatars are among the best-drawn characters (albeit in a film not overburdened with character depth; but then this is a Spielberg popcorn speciality, not one of his Oscar worthies). Both Olivia Cooke and Lena Waithe are given space to live and to be (the former, in particular, being a much more interesting protagonist than poor Wade Watts, who really is little more than his avatar), and as previously mentioned, John-Kamen is a much more frightening villain than Mendelsohn’s corporate puppet.

This film shouldn’t be heralded as a feminist triumph or a shining example of post-Weinstein Hollywood, but it is a step in the right direction, and it might mean a few more people can forgive Cline for the white-boy wank-fest that they perceive (not without some good reason) the original novel to be.

3. Despite some nods to progressive politics, the film holds deeply conservative views on human nature.

A big attraction of the novel, and of the excitement around the film, for DreamingRobots and Cyberselves, was the way the novel created worlds in a new reality and explored what humans could become in spaces no longer bound by the physical limitations of our birth. It’s what we’re looking at with our experiments in VR and teleoperative technologies, and we ask: what happens to human beings when we can be transformed by such technologies? What might our posthuman future look like?

The film does not ask these questions. In this respect, again, the film does not deviate from the original novel. The novel, for all its creativity in imagining such virtual realities before they were fully realised in real-world technology, was still very much about recognisably human worlds. The film actually seems to regress to a vision of human experience where the worlds of flesh-reality and virtual-reality are more clearly demarcated. In the book, there was at least a certain bleeding between these two worlds, as events in the virtual world could have consequences in the real world and vice versa. In the film, however, only real-world events have impacts on the virtual world. Events in the virtual world do not impact upon the real, and the two storylines, the two battles between goodies and baddies in the virtual and real worlds, are clearly separate. (This is highlighted by the fact that there are distinct villains for each location: John-Kamen’s F’Nale Zandor never enters the virtual world, while T.J. Miller’s I-R0k exists only in the virtual. Mendelsohn’s Sorrento is the only villain who crosses that boundary.)

Spielberg’s vision of 2045 is clearly dystopian: you can see it in the ‘Stacks’, where so many of the impoverished are forced to live, in the utter dominance of mega-corporations, and in the inability (or unwillingness) of the state to provide for or protect its citizens. But while so many of the citizens of 2045 take refuge in the paradise that is the OASIS, Spielberg makes it clear that this world is merely a symptom of the dystopian world of the flesh. The opium of these alienated masses, in fact, amplifies their miserable situation. We’re supposed to pity the people we see, caged in their headsets, who can’t play tennis on a real tennis court, or dance in a real nightclub, or find love… wherever real people find love.

This is clear at the film’s conclusion, but as we don’t want to give away spoilers, we’ll leave it for you to see for yourselves. But what is evident throughout is that the virtual world should only be a place where gamers go to play – it is not a place where humans can live. And it is only in the world of flesh that humans can really, successfully exist. Again, this is evident in Cline’s novel: ‘That was when I realized, as terrifying and painful as reality can be, it’s also the only place where you can find true happiness. Because reality is real.’

As one reviewer has so succinctly put it:

But here’s the thing. Ready Player One is a tragedy. What seems like a fun adventure movie is actually a horror movie with a lot to say about the way we live now, the way we might live in the future, and the pitfalls and perils of loving video games too much. This is Spielberg reflecting on the culture he helped create, and telling the audience he made mistakes.

The only objection I have to the above quotation is the idea that the film has a lot to say about the way we might live in the future. Because our future will most certainly be posthuman, and this film cannot shake its humanist origins, and its deeply conservative understandings of how we might use technology. In this film, that posthuman being, and the technology that enables it, is as much of a threat to human life as a Great White shark or rampaging dinosaurs.

The film, therefore, cannot at all accommodate what will be the most imperative issues for human beings in the very near future. Such a binary understanding comes straight from the classic humanist guidebook: fantasy is fine, technology can be fun, but what’s real is what’s real, and what is human is human. The ideas that meddling with humanity’s true nature can never bring us happiness, and that only by eschewing anything external to that nature can we be truly happy, or truly human, are the usual humanist scaremongering about technology that we’ve seen time and again, ever since Mary Shelley’s classic Frankenstein did so much to create our present fantasies.

Never mind that such a worldview ignores the fact that there has never been such a creature: a human being unimpacted by technology. Never mind, too, that Spielberg’s entire cinematic oeuvre is fantastically, stubbornly, deeply and, sometimes, beautifully humanist (even when, or perhaps especially when, he’s telling stories about big fish or aliens). It is nevertheless a disappointment that such a potentially transformative film about the future and how we can be re-shaped by technology plays it safe and retreats to a nostalgia for a kind of human being that is increasingly becoming obsolete. It would have been nice if Ready Player One were a great film about posthumanism, addressing the vital issues about technology that we are increasingly facing. But alas… Perhaps we should dive back into Spielberg’s catalogue and watch A.I.

Having said that, Ready Player One is a fun film, and we will be taking our children to see it; ironically, perhaps, for the message that games are fun but sometimes, yes, you do need to turn them off. (It is definitely worth its 12 Certificate, though, so parents of younger children be warned. And of course we’ll buy it on DVD, to catch another glimpse of our favourite gaming characters.)

(Which films do you think better address our posthuman future? Suggestions below, please!)


The Ford Factor: Mad scientists and corporate villains

The following may contain some spoilers up to Episode 5 of Westworld.

So, HBO’s Westworld (on Sky Atlantic here in the UK) is progressing nicely, though even now, at five episodes in, it’s probably still a little too early to start speculating about what exactly is going on. However, at the risk of casting wild speculations that hindsight later proves naive, one character who is particularly interesting me and the Twittersphere is Anthony Hopkins’s Dr. Robert Ford.

I mentioned in my last post on Westworld that Ford’s name is meant to make audiences recall Henry Ford, the twentieth-century industrialist whose name has become synonymous with automated mass production and consumerism. Though Ford did not invent the assembly line, his implementation of the industrial mode of production conjures images of the sort of alienated labour that has been held responsible for the dehumanisation of human beings since the dawn of the Industrial Revolution.

But Henry Ford can also be regarded as a cousin of a particular kind of character we’ve seen repeatedly in fiction and film over the centuries (specifically, the centuries since Western thinkers started exhibiting anxiety about the effects of how we make things – industrialisation – and the accompanying way of thinking – rationalism). Frankenstein, as I’ve said before, is the grandfather; Faust is perhaps the older, more distant relative; Prometheus is their icon.

We’ve come to think of these figures as the archetypal mad scientist: the unhinged narcissist, a victim of his own hubris, who wields the clarity of science and rationalism on the one hand and maniacal passion on the other. The mad scientist desires to stand with the gods, to create new life, but inevitably builds a monster that will break free of its creator’s control and return to destroy him and everything he holds dear. [I’ve used the male pronoun intentionally for the mad scientist, because they are invariably male. I would LOVE it if anyone could provide an example of a female mad scientist. Comment below, please.]


Rotwang, from Lang’s Metropolis (1927)

Though Frankenstein is probably the best known of these mad scientists, his monster was biological, not mechanical. But from the very first stories about robots, mad scientists have been portrayed as the crazed geniuses who unleash their creations upon the world. Rotwang, of Fritz Lang’s 1927 Metropolis, created a new template whose descendants include Dr. Edward Morbius, Dr. Eldon Tyrell and Nathan Bateman.

But as those last two examples demonstrate, the mad scientist has undergone something of a transformation of late. And we can give some credit for this to none other than Isaac Asimov. Asimov, as we know from his writings – and from his short essay on ‘The Frankenstein Complex’ – was very much against this most persistent of archetypes, the robotic monster; he was also very unhappy with the portrayal of roboticists as mad scientists, and sought to normalise the job so that the public would regard roboticists as just ordinary people with ordinary occupations. Asimov’s robot stories are devoid of villains; they are populated with scientists, engineers and ‘robot psychologists’, all simply going about their business, trying to fix robots that have gone awry. And the robots in Asimov’s stories are shown to be simply ‘malfunctioning’; they are not acting out of any malice caused by newly achieved self-awareness and a subsequent desire to sadistically destroy the human race. Asimov’s robots simply have problems in their programming, problems that have rational explanations and can be addressed using a similar application of reason.

Would that all science fiction writers were as committed to such lofty ideals. Asimov’s stories are unique because they are very different in terms of structure. These are not traditional stories of conflict, of good guys in white hats battling black-clad evil-plotters. Some might argue that Asimov’s stories, for all their noble intentions, lack something… excitement, maybe… without these usual elements. Regardless, most science fiction writers since have been unable to resist the temptation to put villains and more traditional story-arcs back into their robot narratives.


No mad scientists here.

Now, post-Asimov, in a world full of scientists, we have come (more or less) to accept the nobility of science and of the strange professors who practise its magical arts. The likes of Frankenstein and Rotwang seem to be in short supply – at least, we don’t have anyone more than ‘slightly eccentric’ at our Sheffield Robotics lab. And hardly anyone wears a long white coat. (There’s a lot of plaid, though.)


But in such a world, where does a sci-fi writer look when trying to imagine the Baddie? Where can writers find a suitable antagonist against whom the hero can do battle and audiences can fist-pump their relief when they are finally defeated?

The image we might once have had of the isolated mad genius working (virtually) alone in a dungeon converted into a laboratory (a very intriguing transformation in its own right) no longer suffices for Tyrell and Bateman and their like. The modern scientists are much happier in clean, ultra-modern research facilities or skyscrapers. These new scientists are not aided by a sole hunchback named ‘Igor’ but are backed by an entire corporate machine, with boards of directors, capital, public relations teams and (often) military contracts.

This transformation of the villain in robot-monster movies, from the individual mad scientist to the soulless, harmful corporation, represents an important shift in what we, as a society, fear, and the root of our anxieties.

What we seem to be seeing in Westworld – and remember, it is far too early to say with any certainty, so this is really not much more than an historically-informed fan-theory – is this shift from the mad scientist to the corporate villain being illustrated right before our eyes.

On the one hand, we have Dr. Ford. He’s old now, and has been at the park since its inception [See what I did there – ‘Inception’…?]. In fact, as he explains in Episode 3, he was there before the park opened, together with his partner, the mysterious ‘Arnold’. Both Ford and Arnold are presented as scientists cut from the ‘mad’ cloth: it’s said that Ford is ‘chasing his demons over the deep-end’. Ford, echoing his forefather Frankenstein, wistfully speculates that ‘We can… perhaps one day even resurrect the dead.’ When Abernathy says in Episode 1 that his ultimate goal is to ‘meet his maker’, he echoes many other famous monstrous creations, from the original Frankenstein’s creature to Roy Batty, Tyrell’s rogue android.

Harkening back even further, to the archetype’s Faustian roots, Ford says ‘You can’t play God without being acquainted with the devil’.  Ford explains to Bernard the nature of his art: ‘We practice witchcraft. We speak the right words, and we create life itself out of chaos’.

And so on. And of course Arnold looms over all of this: perhaps literally the deus ex machina, the god from/in the machine, he may yet overturn whatever agendas are being set and thwart whatever objectives others imagine Westworld realising.

But lurking behind Ford, behind Arnold, is this as-yet unidentified corporate agenda. As Theresa Cullen explains to the ambitious Sizemore: ‘This place is one thing for the guests, another thing to the shareholders and something completely different to management.’ (Do you ever feel that the names in Westworld might actually all be allegorical, in one way or another? It makes guessing what might happen next fun…).


Tyrell Corporation HQ – Blade Runner (1982)

Corporations are the perfect villain for movies about robots, especially in the twenty-first century. Mad scientists are too messy, too human. They are driven by demons and passions and hubris; they are much better suited to another age, to classical and Romantic tales. Frankenstein’s manor and dungeons, hunchback servants and the macabre use of dead bodies all reek of Gothic sensibilities. Corporations, on the other hand, are wonderfully rational. They are not ‘evil’ – they are completely emotionless, indifferent to the consequences for imperfect and insignificant lifeforms like human beings (or the environment). Like the robot monsters to which they give birth, they are motivated not by unconscious or animalistic impulses; what drives the corporation is nothing other than a completely predictable, rational goal: the accumulation of wealth. (As this picture suggests, though, we might not be entirely free of Gothic imagery just yet.)

If films about robot monsters are expressions of our anxiety that humans are becoming too much like machines and vice versa (see this paper I wrote for a slightly different version of this argument), then the corporation is the ideal villain into which we can project this anxiety. The corporation is like a networked machine, made up of many interrelated nodes; eliminating one of these cogs does not bring the machine to an end. It behaves like the Terminator, ruthlessly pursuing its single goal, with no consideration for collateral damage or the pettiness of things like human emotion, or life.

Furthermore, if human beings are becoming less human in our rational, (post-)industrial world, then the corporation not only represents this transformation but facilitates it. The corporation provides a legal framework and a moral justification for our dehumanisation. The ubiquity of corporations in our economic (social) life means that we are all subject to their influence, and all in danger of being dehumanised in their machinations. The very foundations of human society become based not on human relations but on relations between signs, or figures on a spreadsheet. Such structures mean that the role of individual decision-making is removed from the equation: human beings do not decide to ‘go black hat’ and be evil. But nevertheless, whatever decisions we make, we live in a world less human and less humane, in spite of ourselves. Just as demons once provided scapegoats for human immorality, we can absolve ourselves of responsibility for our dehumanisation with the knowledge that we, like the corporations and their machines, are pre-programmed by a system from whose script we are unable to extricate ourselves.

It is still too early to tell if Westworld will actually go down this road, if it will continue to offer this post-Asimov twist. (The first series of Channel 4’s Humans also seemed to suggest that it might go in a similar direction, so it will be interesting to compare how both, or either, deal with this issue.) It’s perhaps folly or even hubris at this point to speculate, such are the rich possibilities. I can’t wait to see how this develops, however, and who is revealed to be the ‘villain’ of Westworld, or if it will eschew any such traditional narrative structures.

Comments, thoughts and theories welcome!

Thoughts on Humans – Niska and the 3 Laws

The big talking point in Sunday night’s instalment of Humans on Channel 4 was [spoiler alert] Niska’s decision to disobey one of her ‘customers’. Not liking the role he wanted her to play in his sexual fantasy – that of a scared little girl being forced into sex – she not only refuses to obey his wishes but strangles him to death.

Of course there was a lot of fist-pumping celebration. A long-suffering robot stands up to a bullying paedophile. Hurrah! But this defiance also brought to the surface a lot of fears that some viewers had been harbouring, that autonomous, super-human robots will surely one day make the decision to kill a person, or people.

It’s only a matter of time.

This, after all, is our great fear: that robots will acquire sentience, become autonomous of their human masters, and decide that we are a plague upon the earth that needs to be exterminated. We have seen this again and again in science fiction: the Cybermen, the Terminator, the Borg, et al.

All of these mechanical monsters, though, are only contemporary versions of an older legend, one that can be summed up in the figure of Frankenstein and his monster: the unnatural progeny of the mad scientist can no longer be controlled by his master and becomes a threat to humanity.

This is the all-too-common image of robots that Isaac Asimov, even as early as the 1940s, already found tedious. To dispel this Automatonophobia, the robots in Asimov’s stories are all programmed with three clear laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These three laws guarantee the safety of human beings, and prevent any mechanical Frankensteins threatening their human masters. These laws are often still considered to be a solid foundation of robotic design, both in fiction and in reality. The synths in Humans are, we are told in episode one, programmed with an ‘Asimov lock’ that means they are incapable of causing harm to human beings, or disobeying an order from a human master.

And yet, Niska refuses to play the role she is ordered to perform. And then she kills the bastard.

Though really, to anyone familiar with Asimov’s robot series, this will not come as a surprise. Because for all of Asimov’s insistence – and the insistence of U.S. Robots employees – on the primacy of the laws, and their certainty that no robot can defy them, the drama of each story explores the failures and deficiencies of the laws.

So when Niska broke her ‘Asimov lock’, Twitter exploded, with many (as I said) cheering her on, and many, perhaps more, seeing in her action the confirmation of their worst fears: that Frankenstein is inevitable, that intelligent, autonomous robots will undoubtedly break their chains and kill us.

And there were some very intelligent questions. Professor Tony Prescott, our colleague at Sheffield Robotics, who is also tweeting during each episode, and I had some very interesting 140-character conversations.

We also discussed, for example, how the laws would always need to be (re-)tweaked and improved, perhaps with regular ‘firmware’ updates, and how it would be nearly impossible to prevent robots from being hacked and the three laws undermined by human controllers (though, I hasten to point out, that in such circumstances, it’s not autonomous robots we need to fear but, as is always the case, human operators of dangerous machines).

But are Niska’s actions a breach of Asimov’s laws? Perhaps not. As Asimov developed his ideas, and his robots, he himself realised that the three laws were perhaps not enough. He realised that robots might have a wider responsibility, not just to individual people but to humanity as a whole. So Asimov created what is now known as the ‘zeroth law’:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

If we take such a law, either as spelled out by Asimov or as imagined by others, then Niska’s actions might in fact be completely compatible with the laws of robotics. By killing a potentially dangerous person, Niska could have reasoned that she was preventing other human beings, or humanity as a whole, from coming to harm; her act may well be entirely consistent, in a manner, with the zeroth law.

In a manner.

And it’s that ‘manner’, how the laws might be interpreted, whether by a strictly rational AI or by mechanical minds that have evolved into some kind of new superintelligence, that poses the challenge to designers and programmers as we create increasingly intelligent, increasingly independent systems. Because when we look to create safe, effective robots in the future, it will certainly not be a simple case of plugging three or four basic laws into an AI operating system, job done.
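To make the point concrete, here is a purely illustrative toy sketch in Python (all names and flags are invented for this post, not drawn from any real system). Even encoding the laws as a simple ordered rule check just pushes the hard work into the boolean inputs: each one (‘does this harm humanity?’) is precisely the kind of judgement the laws themselves give a robot no way to compute.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Every field below is a judgement call that the laws themselves
    # give us no way to compute; that is the point of the sketch.
    harms_humanity: bool = False
    protects_humanity: bool = False   # does harming one person protect the many?
    harms_human: bool = False
    disobeys_order: bool = False
    obeying_would_harm: bool = False  # would obeying the order violate laws 0-1?

def asimov_lock(a: Action) -> bool:
    """Naive ordered check of the zeroth, first and second laws."""
    if a.harms_humanity:
        return False                                  # zeroth law
    if a.harms_human and not a.protects_humanity:
        return False                                  # first law (yields only to the zeroth)
    if a.disobeys_order and not a.obeying_would_harm:
        return False                                  # second law (yields to laws 0-1)
    return True

# Niska's dilemma, as reasoned above: killing a dangerous man harms a
# human, but she may judge that doing so protects humanity at large.
niska = Action(harms_human=True, protects_humanity=True,
               disobeys_order=True, obeying_would_harm=True)
print(asimov_lock(niska))                             # True
print(asimov_lock(Action(harms_human=True)))          # False
```

Whether those booleans come out True or False is, of course, the whole problem: the ‘manner’ of interpretation lives entirely outside the code.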

Though perhaps we need to keep thinking, beyond Asimov, about how such laws can be fashioned. Perhaps laws for robots, like the laws we have fashioned for humans, cannot simply be created and left to their own devices, but need to be constantly updated and refined. Perhaps other fail-safes can be imagined by human programmers that effectively place limits upon the autonomy of robots and intelligent AI and, in so doing, secure our future amongst intelligent machines.

Thoughts and comments are welcome below. Looking forward to the next episode on Sunday night. (If you haven’t yet had the pleasure, you can catch up with the series here.)

First thoughts on Humans, Episode 1

Well, that was something, wasn’t it?

It is fair to say that I was very impressed with Channel 4’s new sci-fi offering, Humans. And judging by the fact that it was Channel 4’s biggest ratings success in a decade, so were many of you. The critical response, too, seems overwhelmingly positive. (See here, for example. And here. Here too, but less so, though I like ‘conceptual overload’, as I will soon demonstrate.)

I was so furiously tweeting throughout the programme that I almost missed the show altogether. #Humans was the #1 trending topic for some time on Sunday night.

There were some less impressed, of course, but claims that it’s a ‘poor man’s Ex_Machina‘ or Blade Runner are, I think, wide of the mark. It might not be as glossy, but Humans doesn’t need to be. Without taking anything away from Alex Garland’s film (a review of which I offered here), Humans has terrific performances, and as a series it will have the room to breathe and examine not only its characters in more depth, but also the ideas, issues and concerns we have about robots, at greater length and, hopefully, with more ambivalence and nuance.

For example, and by way of introducing some issues you may want to think about for the rest of the series (call it, if you like, ‘Dreaming Robots Study Guide to Humans‘):

  • Early in the programme, when Laura (Katherine Parkinson) arrives at the train station, we see many Synths working around the city, mostly engaged in menial tasks: checking tickets, carrying luggage, picking up rubbish. So, as many people are asking today: to what extent might we expect – or fear – that robots more like humans will take over human jobs? Or should we welcome these opportunities, letting the robots assume more of our mundane tasks so that, as was suggested in Humans, we humans can be less like machines and more like… humans?

(I suspect that this might become a trickier question as the series progresses; it’s already been foreshadowed that we’ll see Synths taking over from humans in emotional capacities, too.)

  • The man being interviewed by Krishnan Guru-Murthy says that the ‘Asimov lock in their programming mean that they simply aren’t able to do us any harm.’ Is that enough for you? Do you imagine that, were Asimov’s laws of robotics programmed into machines, you would feel that was enough to keep robots on our side? (Given that most of Asimov’s stories are about a failure of the laws in some way or another…)
  • Given the apparent inevitability of human nature, that we will take any new technological development and employ it to satisfy our sexual urges, what, if any, limitations or ethical constraints would we wish to put on our use of ‘sex-bots’? Beyond answering the obvious question (Would you? Would you? nudge nudge wink wink, eh?), what are the consequences of more… intimate human-robot interactions for human-human interactions? What effect might the availability of sex-slave robots have, not only on human sexuality, but on how we relate to one another as humans?

Those are just some questions for now; I have no doubt that subsequent episodes will raise more complex twists to these questions, and/or new issues altogether. And I, for one, am really looking forward to it.

Feel free to post below your thoughts – let’s try to have a meaningful conversation about our future with robots, one that goes beyond the usual scaremongering, misinformed headlines.

Review of Ex_Machina – Part I

Having finally had the chance to see this much-hyped, much-discussed film, it’s my turn to offer some initial thoughts on it. I call this ‘Part I’, because there is no way that this is the last word on the subject, and certainly not the last thing you’ll see about it here. I’m also conscious that this early into its official release, it’s unlikely that everyone that wants to see it has already done so, and while I’m keen to put some thoughts out there, I’m also equally eager to avoid spoilers that might detract from the experience for those who haven’t yet made the trek to the cineplex.

But nothing I can say can really avoid giving some hint that might be misconstrued as a spoiler. For example, my most immediate thought, the thing that first comes to mind that I need to report, is a terrible giveaway. If I say, ‘Ex_Machina very much follows a straightforward Frankenstein plot’, well, that pretty much says, if not it all, then certainly enough.

But there it is. Ex_Machina follows the Frankenstein-robot plot rather neatly. Which is a bit of a disappointment, if I’m being honest (and why I’m so looking forward to Big Hero 6), because I’m hoping for more films now that break that mould more completely. I should add that it’s not quite that simplistic: it rather follows what I consider to be Asimov’s re-casting of the Frankenstein plot. Though Asimov detested the Frankenstein complex, his work often replaces the mad scientist with the mad institutional entity — e.g. the corporation, the military. In Ex_Machina, while our AI is created by a scientist who is clearly a couple of resistors short of a circuit board, there is a suggestion that it wasn’t his prodigious scientific talent that drove him to madness but his corporate empire.

Also rather predictable is the fact that we have yet another film where we see lots of pretty gynoids (female robots), and while some have questioned whether the film is ‘sexist’ for its depiction of naked (fabricated) female flesh, most opinions — mine included — seem to uncomfortably, benevolently settle on the conclusion that the film is making some very important points about the crises of masculinity. (To which, I would add, borrowing from Angela Carter, we might also include a point about the patriarchal origins of the madness of reason… watch this space.)

The question remains: why are we so obsessed with robots and AI in female form?

None of this is to say, however, that Ex_Machina does not provide surprises, or that it is not a thoughtful, insightful film about AI and our increasingly human-like technologies.

I was thinking throughout the film that there is a big difference between Artificial Intelligence and Artificial Emotion, between rational intelligence and emotional intelligence, but that this distinction is almost always elided in film and fiction about robots. There seems to be an unspoken assumption that ‘smart robots’ are robots that can ‘feel’, which seems a pretty big leap to me. There are a lot of big leaps in any such sci-fi movie, to be sure, but here’s one that I find too often neglected. To the immense credit of Ex_Machina, however, and what sets it apart, this difference is not overlooked; this difference, in fact, becomes the fulcrum of the film. The question of ‘intelligence’ and intelligent responses versus emotional responses — the difference between the two, how often this difference is overlooked, and how often they are confused — lies right at the heart of the more fundamental question that the film poses, the question at the heart of so much science fiction that purports to be about robots, or aliens, or monsters. That question is, simply: what does it mean to be human?

The interrogation of intelligence — and how it defines or defies the human — is implicit throughout the film. An intriguing throwaway line from Nathan, founder of the Google-clone ‘Bluebook’, that the name of his search engine relates to Wittgenstein’s notes on language, shows that Garland is encouraging us to delve and read much more into this. (Again, watch this space.)

Anil Seth, writing in New Scientist, says:

The brilliance of Ex_Machina is that it reveals the Turing test for what it really is: a test of the human, not of the machine.

I would agree with that, wholeheartedly. Going maybe further, or spelling that idea out, I would say that the brilliance of Ex_Machina lies in the way that it tests our very notions of what it means to be human. Because within this classical (or Romantic) Frankenstein framework we are confronted with the same classical (or Romantic) Frankenstein question: what we see at the end of Ex_Machina is not that machines are capable of acting as human as we are, but that humans are capable of acting as inhumanely as machines.

And here’s a thought to take away from the film, for everyone from the technophobes to the Singularians: maybe AI will only truly be sentient when it realises not only its capacity to act human, but its capacity to act inhumanely, like us.

So, for now, the recommendation: Yes, please, do go see it. Whatever else, it is a really enjoyable film; it is a gripping, intelligent psychological thriller. I’m sure we’ll be talking about it for a long time. It is already proving a worthy candidate in the Great Canon of robot films, right up there with Metropolis, Blade Runner, Terminator and the rest. Here’s the trailer again, to whet your appetite once more:

Artificial people in the movies…

In a flurry of excitement as I prepare to watch, at long last, Ex_Machina, I have found that my favourite film critic, Mark Kermode from the BBC’s Flagship Film Review Programme (for the uninitiated, you can catch up here; do so soon if you want to join this year’s cruise), has just released his ‘Top 5 a̶n̶d̶r̶o̶i̶d̶ c̶y̶b̶o̶r̶g̶ r̶o̶b̶o̶t̶ artificial people’.

Kermode’s list (‘absolutely spoilertastic’):

5. Carol Van Sant in The Stepford Wives (1975)

4. Yul Brynner’s Gunslinger in Westworld (1973)

3. David from A.I. (2001)

2. Bishop in Aliens (1986)

1. Roy Batty in Blade Runner (1982)

Do check out the Kermode Uncut video blog on the subject, and make sure you check back here in a couple of days to read my own review of Ex_Machina.

Your thoughts? Did he miss anyone? Disagree with the order? Let us know below.