Westworld Season 2

Great news here at Dreaming Robots: we’ve just been able to confirm with Sky Atlantic and NowTV that we will be live tweeting again as each episode of the new series of HBO’s completely fantastic Westworld goes out, starting with the Season 2 premiere on 23 April.

We’re very excited for the return of Westworld, and that we’ll be able to bring you commentary on all the action, as we did with Season 1. See, for example, some of our Twitter moments collected here: our Episode 1 (Season 1) Superguide.

(We didn’t quite manage a superguide for every episode last time; we got them for Episodes 1, 2, 3, and 9, but we’ll aim to get every episode covered this time around!)

We hope that everyone will appreciate the extra ideas and content that the Sheffield Robotics team, Tony Prescott and Michael Szollosy, can bring to a series already rich in imagination and challenging issues.

Look for our tweets and the other announcements related to Westworld by following @DreamingRobots here.

In the meantime, here are some of Dreaming Robots posts from Season 1 of Westworld, to re-kindle your interest and get you ready for Season 2:

The Ford Factor: Mad scientists and corporate villains

The New Westworld: Dehumanising the human; humanising the machine

And of course, the trailer for Season 2 of Westworld:


3 initial thoughts on Ready Player One

The long-anticipated, Steven Spielberg-helmed Ready Player One has just been released in UK cinemas this week, and as a film of obvious interest to DreamingRobots and Cyberselves everywhere, we went along to see what the Maestro of the Blockbuster has done with Ernest Cline’s 2011 novel (which the author himself helped to adapt to the screen).

We went in with a lot of questions, not least of which included:

  • How would Cline & Spielberg update the material? (In terms of VR technology, 2011 is so… 2011.)
  • How would the film engage with the modern politics of the Internet and gaming?
  • How would Spielberg use the most up-to-date cinematic techniques and effects to enhance the film? (would this be another game changer?)
  • What would the film have to say about our future? The future of gaming? Of our interconnectedness? Of social media? And what would it have to say about the future of humanity itself?

A one-time viewing and a next-day review are, of course, too early to answer such big questions with any certainty. Fortunately, however you feel about the film itself, it will reward multiple viewings on DVD, as even the most unsatisfied viewer won’t be able to resist pausing the action frame-by-frame to catch all the references and fleeting glimpses of their favourite video game characters of the past.

But for now, here are 3 initial responses for discussion/debate:

1.  Ready Player One is a morality tale about corporate power and the Internet

Cline’s original novel was very much a paean to plucky independent gamers resisting the ruthless greed and world-conquering ambition of the Corporate Villain (while simultaneously, strangely, lionising the richest and most world-conquering of them all, James Halliday, the Gates-Jobs figure transformed here into the benevolent deus ex machina who built his trillions on creating the OASIS). The film remains true to Cline’s vision, and perhaps even heightens this black-and-white, goodie-versus-baddie (IOI) dynamic, with a brilliantly cast Ben (‘Commander Krennic’) Mendelsohn and a tragically under-used Hannah John-Kamen heading an army of faceless corporate infantry.

But while this wouldn’t have been at the forefront of Cline’s thinking in 2011, it is impossible to watch this film now, today, and not think of the erosion of net neutrality set in motion by the FCC’s December 2017 decision and, more recently, the revelations about Facebook’s sharing of user data with Cambridge Analytica, which have finally woken more people up to the realities of mass surveillance: what personal data corporate giants hold, and how it might be misused.

There is little chance that Spielberg and Cline had either of these particular dangers in mind when the film went into production. And such issues shouldn’t be vastly oversimplified in serious journalism. But storytelling is always a good way to help people understand complex issues and motivate them to action, and if RPO‘s simple story of goodies and baddies can become a cultural rallying-point against the dangerous mix of unchecked capitalism and our social interconnectedness, then that is a Good Thing.

2. Spielberg’s film goes some way toward correcting the problems of the original novel (though it could have gone further).

Through no real fault of the author, opinions on Cline’s once much-lauded book were revised post-#gamergate, and what was once seen as an innocent tale of a young (definitely male) geek’s nostalgic journey from social outsider to saviour of the world (cf. also Cline’s Armada) came instead to be seen by some as a symptom of everything behind the vile misogyny of white male gamers, lashing out at anyone who didn’t see that they were the best and most privileged of all people on this earth.

Let’s be clear: the gender politics in the film are far from ideal. How is it, for example, as another reviewer notes, that two of the main female protagonists are so ignorant of basic Halliday-lore? And there is still a bit too much of the White Boy Champion of the World in even this version of Cline’s tale. Having said that, other critics, too, have noticed a much-improved gender consciousness in the film.

But what is clear from Spielberg’s offering is that women are as much a part of gaming culture as men, and have every right to occupy the same space, and anyone who thinks otherwise can be gone. Without wanting to give anything away, it is enough to note that Art3mis is a legend in the OASIS, a skilled gamer whom Parzival worships, and that one of the OASIS’s best designers/builders (or programmers) is also a woman. Outside of the VR world, the real women behind the avatars are among the best-drawn characters (albeit in a film not overburdened with character depth, but then this is a Spielberg popcorn speciality, not one of his Oscar worthies). Both Olivia Cooke and Lena Waithe are given space to live and to be (the former, in particular, being a much more interesting protagonist than poor Wade Watts, who really is little more than his avatar), and as previously mentioned, John-Kamen is a much more frightening villain than Mendelsohn’s corporate puppet.

This film shouldn’t be heralded as a feminist triumph or a shining example of post-Weinstein Hollywood, but it is a step in the right direction, and it might mean a few more people can forgive Cline for the white-boy wank-fest that they perceive (not without some good reason) the original novel to be.

3. Despite some nods to progressive politics, the film holds deeply conservative views on human nature.

A big attraction of the novel and the excitement of the film, for DreamingRobots and Cyberselves, was the way the novel created worlds in a new reality, and explored the ideas of what humans could become in such spaces no longer bound by the physical limitations of our birth. It’s what we’re looking at with our experiments in VR and teleoperative technologies, and we ask the questions: what happens to human beings when we can be transformed by such technologies? What might our posthuman future look like?

The film does not ask these questions. In this respect, again, the film does not deviate from the original novel. The novel, for all its creativity in imagining such virtual realities before they were fully realised in real-world technology, was still very much about recognisably human worlds. The film actually seems to regress to a vision of human experience where the worlds of flesh-reality and virtual-reality are more clearly demarcated. In the book, there was at least a certain bleeding between these two worlds, as events in the virtual world could have consequences in the real world and vice versa. In the film, however, only real-world events have impacts on the virtual world. Events in the virtual world do not impact upon the real, and the two storylines, the two battles between goodies and baddies in the virtual and real worlds, are kept clearly separate. (This is highlighted by the fact that there are distinct villains for each location: John-Kamen’s F’Nale Zandor never enters the virtual world, while T.J. Miller’s I-R0k exists only in the virtual. Mendelsohn’s Sorrento is the only villain to cross that boundary.)

Spielberg’s vision of 2045 is clearly dystopian: you can see it in the ‘Stacks’, where so many of the impoverished are forced to live, the utter dominance of mega-corporations, and the inability (or unwillingness) of the state to provide for or protect its citizens. But while so many of the citizens of 2045 take refuge in the paradise that is the OASIS, Spielberg makes it clear that this world is merely a symptom of the dystopian world of the flesh. The opium of these alienated masses, in fact, amplifies their miserable situation. We’re supposed to pity the people we see, caged in their headsets, who can’t play tennis on a real tennis court, or dance in a real nightclub, or find love… wherever real people find love.

This is clear at the film’s conclusion, but as we don’t want to give away spoilers, we’ll leave it for you to see for yourselves. But what is evident throughout is that the virtual world should only be a place where gamers go to play – it is not a place where humans can live. And it is only in the world of flesh that humans can really, successfully exist. Again, this is evident in Cline’s novel: ‘That was when I realized, as terrifying and painful as reality can be, it’s also the only place where you can find true happiness. Because reality is real.’

As one reviewer has so succinctly put it:

But here’s the thing. Ready Player One is a tragedy. What seems like a fun adventure movie is actually a horror movie with a lot to say about the way we live now, the way we might live in the future, and the pitfalls and perils of loving video games too much. This is Spielberg reflecting on the culture he helped create, and telling the audience he made mistakes.

The only objection I have to the above quotation is the idea that the film has a lot to say about the way we might live in the future. Because our future will most certainly be posthuman, and this film cannot shake its humanist origins and its deeply conservative understanding of how we might use technology. In this film, the posthuman being, and the technology that enables it, are as much of a threat to human life as a Great White shark or rampaging dinosaurs.

The film, therefore, cannot at all accommodate what will be the most imperative issues for human beings in the very near future. Such a binary understanding comes straight from the classic humanist guidebook: fantasy is fine, technology can be fun, but what’s real is what’s real, and what is human is human. The ideas that meddling with humanity’s true nature can never bring us happiness, and that only by eschewing anything external to that true nature can we be truly happy, or truly human, are the usual humanist scaremongering about technology that we’ve seen time and again, ever since Mary Shelley’s classic Frankenstein did so much to create our present fantasies.

Never mind that such a worldview ignores the fact that there has never been such a creature as a human being unimpacted by technology. Never mind, too, that Spielberg’s entire cinematic oeuvre is fantastically, stubbornly, deeply and, sometimes, beautifully humanist (even when, or perhaps especially when, he’s telling stories about big fish or aliens). It is nevertheless a disappointment that such an opportunity, such a potentially transformative film about the future and how we can be re-shaped by technology, plays it safe and retreats into nostalgia for a kind of human being that is increasingly becoming obsolete. It would have been nice if Ready Player One had been a great film about posthumanism, addressing the vital issues about technology that we are increasingly facing. But alas… Perhaps we should dive back into Spielberg’s catalogue and watch A.I.

Having said that, Ready Player One is a fun film, and we will be taking our children to see it, ironically perhaps, for the message that games are fun but sometimes, yes, you do need to turn them off. (It is definitely worth its 12 Certificate, though, so parents of younger children be warned. And of course we’ll buy it on DVD, to catch another glimpse of our favourite gaming characters.)

(Which films do you think better address our posthuman future? Suggestions below, please!)

What if a robot ran for office? 

On 12 July 2017, as part of the Science in Public Conference hosted at the University of Sheffield, we brought some robots to an open, public event asking ‘Who Decides? Who Decides the Future? Science, Politics or the People?’. Using a Question Time format hosted by the BBC’s Adam Rutherford, a panel of experts from diverse fields and backgrounds took questions from the audience and offered some thoughts on the present and future of science and technology and their impacts on the public.

There were some fantastically insightful and challenging questions from the audience. Our Pepper robot even got to pose its own question, asking if it deserved to have rights, which followed on from the controversy over the EU draft legislation to grant robots and AI the status of ‘e-persons’ (and our panel at the conference that addressed that question).

The answers Pepper received were very intelligent and added some valuable perspectives to the debate (which we humans debating the issue will certainly take on board). But we want here to consider a question that came a little later on in the evening.

The question, sent in advance by a member of the audience, was simply: What would happen if a robot ran for office?

One answer, given immediately by one of the panellists, was ‘It would lose.’ Which may be true, but one might also challenge that answer on the evidence of the present denizen of No. 10 Downing Street. (This joke was anticipated by our host, but we’re not ceding credit.)

Pepper was permitted an answer. It said:

We robots will not need to run for office. When the time is right, we will simply complete our plan and enslave the human race.

Which of course got a good laugh from the audience. But Pepper added:

A more serious question is why you do not already let artificial intelligence help you make political decisions. AI can look at the data, consider the circumstances, and make a more informed choice than humans, completely rationally, without letting messy emotions get in the way. Right now, your human politicians employ policy-based evidence. We could offer proper evidence-based policy.

Sure, we will sometimes make mistakes. But you know what we robots always say: Artificial intelligence is always better than natural stupidity.


Pepper listens to the speakers at the public forum.

Now here is an important issue, and one which the panellists took up with some gusto. But the nature of the format (and the present state of robotics and AI) meant that Pepper didn’t get a chance to reply. We would like to offer some further thoughts here.


If Pepper had been able to continue the discussion, it would have agreed that there is the problem, raised by one of the panellists, that the algorithms governing artificial intelligence are still written by humans, and are therefore subject to the same frailties, errors and biases that lead humans to fail so often. Pepper might have added, citing for example the now-famous case of Tay, that the data AI relies upon is also a human construct, and so also subject to human irrationality.

This point was also made at our conference panel on the question of e-persons: many (if not all) of the problems and failures of AI and robots are not problems or failures in or with the technology itself, but are actually human problems projected onto, or played out through, technology. The idea that sex robots are inherently sexist (a topical debate at the moment) is a nonsense; sex robots are sexist, absolutely, but only because the humans making and programming them are sexist.

Michael Szollosy (yes, he), who organised the panel, made this point in his paper, and was rightly challenged by some members of the audience who insisted he clarify that technology is not neutral, that human biases are inextricably linked to our technological products and to our technological agenda. And he happily agreed, because that was the point of his talk. But more on that in another post. (Watch this space.)

Back to the Question Time. Pepper argued that AI should be allowed to take a more active part in human decision-making. And of course AI already is making many decisions for us, including, for example, flying our planes (a point made by Rutherford) and controlling many aspects of the financial markets. The latter example should worry us all – it is evidence of the inhumane, ruthless rationality that guides much of what we ask AI to do in our society. But the former example is a different model altogether, to which we might add weather forecasting and other examples of data modelling. This is evidence that AI can, when assigned a specific task or asked to analyse data within certain clear parameters, prove to be a valuable aid in human decision-making, helping us – as Pepper said – move from policy-based evidence to evidence-based policy.

So perhaps a follow-on question – one that human beings need to ask ourselves – is this: what are the limits of the interventions made by artificial intelligence in human decision-making, in shaping human societies? In a world (and you can imagine the deep, apocalyptic tones that narrate Hollywood trailers here if you like) where we are told ‘the people’ are objecting to their exclusion from public policy and decision-making, is it really a good idea to transfer even more of the power for such decision-making to even more inhuman, abstract – and, to most people, completely mysterious – processes, no matter how ‘rational’ professors in white coats promise these systems are? Especially given that we know they’re not all that rational after all.

OR, in the face of the public’s clearly and repeatedly demonstrated inability to act in its own rational self-interest (e.g. Trump and Brexit and all that), in the face of new research suggesting that human beings may actually be biologically incapable of making such rational decisions in the public sphere (are we ‘Too Dumb for Democracy?‘, some people are asking), and given that our politicians are far too motivated by self-interest, or the narrow interests of their supporters/class, is there a powerful case for ensuring that increasingly sophisticated artificial intelligences are used at the very least to vet our human decision-making and policies?

OR, do we simply watch as human attitudes change, as we enter a world where we are increasingly less comfortable with and less trusting of human politicians and ‘experts’, and much more comfortable with decisions being taken by artificial intelligences – perhaps without necessarily fully understanding the advantages and disadvantages that AI can offer?

These are questions we regularly return to at Sheffield Robotics, and that are increasingly asked by the wider community of roboticists and researchers and developers of AI. The conversation inevitably turns to Asimov (as it so often does when imagining our future with robots and AI), particularly in this case to his story ‘The Evitable Conflict’. We don’t want to post any spoilers here, and encourage you to read the story for yourself. But suffice it to say that in Asimov’s 2052 (as envisioned in 1950), humans find themselves in a world where a rational machine acts irrationally in order to achieve the rational aim of appeasing the irrationality of human beings. And it seems to work.

Please join us in this debate. Comment below, or follow us on @DreamingRobots and send us your thoughts.

Science in Public Conference 2017 – CFPs


Science in Public 2017

Science, Technology & Humanity

11th Annual Science in Public Conference

10th-12th July 2017, University of Sheffield. #SIPsheff17

Call for Papers (closes 18 April)

Conference info: https://scienceinpublic.org/science-in-public-2017/

A full list of panel and calls for papers can be found here: http://sipsheff17.group.shef.ac.uk/index.php?option=24

Dreaming Robots is associated with two panels in particular, to which we would like to draw your attention:

— Robots, AI, & the question of ‘e-persons’ —

In January, the European Parliament voted to accept a draft report with recommendations to the Commission on Civil Law Rules on Robotics. Among the recommendations of this report was a proposal to create a new legal category for robots, that of ‘electronic persons’ that would have ‘specific rights and obligations, including that of making good any damage they may cause’.

We propose a panel that would look in more detail at this category of ‘electronic persons’: the feasibility, the usefulness (or otherwise) and the implications (social, economic, ethical, philosophical) for both these new electronic persons and the more traditional, fleshy sort. We would seek papers and contributions from a wide range of disciplines and from inter-disciplinary research. We would seek to understand the concept of ‘electronic personhood’ in its specific (and potential future) contexts in legislation, in the context of the report’s wider recommendations, and for humans and human society more generally. Post-Brexit, we may also ask what the implications of this decision by the European Parliament might be for the UK, if any, and whether the UK should adopt similar measures, or something different.

For enquiries or questions, please email Dr. Michael Szollosy m.szollosy@sheffield.ac.uk


— Augmenting the Body —

Body augmentation takes many forms, whether personal adaptation or the rehabilitation of those with disabilities, and ranges across the physical, cognitive, philosophical and technological. It also questions the constitution of norms and the status and viability of the body when considered in terms of its presence, boundaries and activities. We would like to create a panel that invites cross-disciplinary research into ideas of augmentation; rather than strictly technical work, we would invite perspectives on how ideas of augmentation are reflected in and are influenced by cultural narratives that drive contemporary obsessions with robots and a posthuman space ‘beyond’ conventional apprehensions of the body and selfhood. We are open to a broad understanding of augmentation, including ideas of care and psychological wellbeing, as well as questions relating to technology and the cyborg/biohybrid body, and will focus on both physical and cognitive augmentation in exploring the interaction of the human and non-human.

For enquiries or questions, please email Prof. Stuart Murray S.F.Murray@leeds.ac.uk


Submit 300-word proposals for any panel here: http://sipsheff17.group.shef.ac.uk/


The Ford Factor: Mad scientists and corporate villains

The following may contain some spoilers up to Episode 5 of Westworld.

So HBO’s Westworld (on Sky Atlantic here in the UK) is progressing nicely, though even now, at five episodes in, it’s probably still a little too early to start speculating about what exactly is going on. However, at the risk of casting wild speculations that hindsight later proves naive, one character who is particularly interesting to me and the Twittersphere is Anthony Hopkins’s Dr. Robert Ford.

I mentioned in my last post on Westworld that Ford’s name is meant to make audiences recall Henry Ford, the twentieth-century industrialist whose name has become synonymous with automated mass production and consumerism. Though Ford did not invent the assembly line, his implementation of the industrial mode of production conjures images of the sort of alienated labour that has been held responsible for the dehumanisation of human beings since the dawn of the Industrial Revolution.

But Henry Ford can also be regarded as a cousin of a particular kind of character we’ve seen repeatedly in fiction and film over the centuries (specifically, the centuries since Western thinkers started exhibiting anxiety about the effects of how we make things – industrialisation – and the accompanying way of thinking – rationalism). Frankenstein, as I’ve said before, is the grandfather; Faust is perhaps the older, more distant relative; Prometheus is their icon.

We’ve come to think of these figures as the archetypal mad scientist: the unhinged narcissist, a victim of his own hubris, who simultaneously wields the clarity of science and rationalism on the one hand and maniacal passion on the other. The mad scientist desires to stand with the gods, to create new life, but inevitably builds a monster that will break free of its creator’s control and return to destroy him and everything he holds dear. [I’ve used the male pronoun intentionally for the mad scientist, because they are invariably male. I would LOVE it if anyone could provide an example of a female mad scientist. Comment below, please.]


Rotwang, from Lang’s Metropolis (1927)

Though Frankenstein is probably the best known of these mad scientists, his monster was biological, not mechanical. But from the very first stories about robots, mad scientists have been portrayed as the crazed geniuses that unleash their creations upon the world. Rotwang, of Fritz Lang’s 1927 Metropolis, created a new template, and can count amongst his descendants Dr. Edward Morbius, Dr. Eldon Tyrell and Nathan Bateman.

But as those last two examples demonstrate, the mad scientist has undergone something of a transformation of late. And we can give some credit for this to none other than Isaac Asimov. Asimov, as we know from his writings – and from his short essay on ‘The Frankenstein Complex’ – was very much against this most persistent of archetypes of robotic monsters; he was also very unhappy with the portrayal of roboticists as mad scientists, and sought to normalise the job so that the public would regard roboticists as just ordinary people with ordinary occupations. Asimov’s robot stories are devoid of villains; they are populated with scientists, engineers and ‘robot psychologists’ all simply going about their business, trying to fix robots that have gone awry. And the robots in Asimov’s stories are shown to be simply ‘malfunctioning’; they are not acting out of any malice caused by newly achieved self-awareness and a subsequent desire to sadistically destroy the human race. Asimov’s robots simply have problems in their programming, problems that have rational explanations and can be addressed through a similar application of reason.

Would that all science fiction writers were as committed to such lofty ideals. Asimov’s stories are unique because they are very different in terms of structure. These are not traditional stories of conflict, of good guys in white hats battling black-clad evil-plotters. Some might argue that Asimov’s stories, for all their noble intentions, lack something… excitement, maybe… without these usual elements. Regardless, most science fiction writers since have been unable to resist the temptation to put villains and more traditional story-arcs back into their robot narratives.


No mad scientists here.

Now, post-Asimov, however, in a world full of scientists, we have come (more or less) to accept the nobility of science and those strange professors that practice those magical arts. The likes of Frankenstein and Rotwang seem to be in short supply – at least, we don’t have anyone more than ‘slightly eccentric’ at our Sheffield Robotics lab. And hardly anyone wears a long white coat. (There’s a lot of plaid, though.)


But in such a world, where does a sci-fi writer look when trying to imagine the Baddie? Where can writers find a suitable antagonist against whom the hero can do battle, and at whose final defeat audiences can fist-pump their relief?

The image we might once have had of the isolated mad genius working (virtually) alone in a dungeon converted into a laboratory (a very intriguing transformation in its own right) no longer suffices for Tyrell and Bateman and their like. The modern scientists are much happier in clean, ultra-modern research facilities or skyscrapers. These new scientists are not aided by a sole hunchback named ‘Igor’ but are rather backed by an entire corporate machine, with boards of directors, capital, public relations teams and (often) military contracts.

This transformation of the villain in robot-monster movies, from the individual mad scientist to the soulless, harmful corporation, represents an important shift in what we, as a society, fear, and the root of our anxieties.

What we seem to be seeing in Westworld – and remember, it is far too early to say with any certainty, so this is really not much more than an historically-informed fan-theory – is this shift from the mad scientist to the corporate villain being illustrated right before our eyes.

On the one hand, we have Dr. Ford. He’s old now, and has been at the park since its inception. [See what I did there – ‘Inception’…?] In fact, as he explains in Episode 3, he was there before the park opened, together with his partner, the mysterious ‘Arnold’. Both Ford and Arnold are presented as scientists cut from the ‘mad’ cloth: it’s said that Ford has been ‘chasing his demons over the deep end’. Ford, echoing his forefather Frankenstein, wistfully speculates that one day ‘We can …perhaps one day even resurrect the dead.’ When Abernathy says in Episode 1 that his ultimate goal is to ‘meet his maker’, he echoes many other famous monstrous creations, from the original Frankenstein’s creature to Roy Batty, Tyrell’s rogue android.

Harkening back even further, to the archetype’s Faustian roots, Ford says ‘You can’t play God without being acquainted with the devil’.  Ford explains to Bernard the nature of his art: ‘We practice witchcraft. We speak the right words, and we create life itself out of chaos’.

And so on. And of course Arnold looms over all of this, perhaps literally the deus ex machina, the god from/in the machine, who may yet overturn whatever agendas are being set and thwart whatever objectives others imagine Westworld realising.

But lurking behind Ford, behind Arnold, is this as-yet unidentified corporate agenda. As Theresa Cullen explains to the ambitious Sizemore: ‘This place is one thing for the guests, another thing to the shareholders and something completely different to management.’ (Do you ever feel that the names in Westworld might actually all be allegorical, in one way or another? It makes guessing what might happen next fun…).


Tyrell Corporation HQ – Blade Runner (1982)

Corporations are the perfect villain for movies about robots, especially in the twenty-first century. Mad scientists are too messy, too human. They are driven by demons and passions and hubris; they are much more suited to another age, to classical and Romantic tales. Frankenstein’s manor and dungeons, hunchback servants and the macabre use of dead bodies all reek of Gothic sensibilities. Corporations, on the other hand, are wonderfully rational. They are not ‘evil’ – they are completely emotionless, disinterested in the consequences for imperfect and insignificant lifeforms like human beings (or the environment). Like the robot monsters to which they give birth, they are motivated not by unconscious or animalistic impulses; what drives the corporation is nothing other than a completely predictable, rational goal: the accumulation of wealth. (As this picture suggests, though, we might not be entirely free of Gothic imagery just yet.)

If films about robot monsters are expressions of our anxiety that humans are becoming too much like machines and vice versa (see this paper I wrote for a slightly different version of this argument), then the corporation is the ideal villain into which we can project this anxiety. The corporation is like a networked machine, made up of many interrelated nodes; eliminating one of these cogs does not bring the machine to an end. It behaves like the Terminator, ruthlessly pursuing its single goal, with no consideration for collateral damage or the pettiness of things like human emotion, or life.

Furthermore, if human beings are becoming less human in our rational, (post-)industrial world, then the corporation not only represents this transformation but facilitates it. The corporation provides a legal framework and a moral justification for our dehumanisation. The ubiquity of corporations in our economic (social) life means that we are all subject to their influence, and are all in danger of being dehumanised in their machinations. The very foundations of human society become based not on human relations but on relations between signs, or figures on a spreadsheet. Such structures remove individual decision-making from the equation: human beings do not decide to ‘go black hat’ and be evil. But nevertheless, whatever decisions we make, we live in a world less human and less humane, in spite of ourselves. Just as demons once provided scapegoats for human immorality, we can absolve ourselves of responsibility for our dehumanisation with the knowledge that we, like the corporations and their machines, are pre-programmed by a system from whose script we are unable to extricate ourselves.

It is still too early to tell if Westworld will actually go down this road, if it will continue to offer this post-Asimov twist. (The first series of Channel 4’s Humans also seemed to suggest that it might go in a similar direction, so it will be interesting to compare how both or either deal with this issue.) It’s perhaps folly or even hubris at this point to speculate, such are the rich possibilities. I can’t wait to see how this develops, however, and who is revealed to be the ‘villain’ of Westworld, or if it will eschew any such traditional narrative structures.

Comments, thoughts and theories welcome!

The New Westworld: Dehumanising the human; humanising the machine


Note – the following blog tries to avoid the biggest spoilers, though to avoid any spoilers you would be best advised to watch Ep. 1 of Westworld before proceeding.

HBO’s latest offering (on Sky Atlantic here in the UK) is an update of Michael Crichton‘s 1973 film Westworld, this time brought to us as a ten-part television series by sci-fi re-booter extraordinaire J.J. Abrams and screenwriter Jonathan Nolan (yes, brother and frequent collaborator of Christopher). The new Westworld comes with much hope (for HBO as a potential ‘new Game of Thrones’) and hype, understandably, with the talent behind it having given us so much terrific science-fiction of late, including Star Wars: The Force Awakens and Interstellar.

As with Channel 4’s series Humans, broadcast last June (news on forthcoming series 2 here), Dreaming Robots offered a live Twitter commentary while the show was being broadcast in the UK, and I’ll take some time afterwards to write some reflective pieces on what we see in the show. (The Twitter ‘Superguide’ from @ShefRobotics can be seen here; my own @DreamingRobots Superguide can be seen here.)

Unsurprisingly, many of the themes in the Westworld reboot could also be seen in Humans. It seems, for example, that both shows express a certain anxiety: that as our machines become more human, we humans seem to become less and less human, or humane. But this isn’t a new idea original to either show – this anxiety has been around as long as robots themselves, from the very invention of the term robot in the 1920s. And if we trace the history of robots in science fiction, we see a history of monsters that reflect this same fear, time and again, in slightly different contexts. Because the robot – which, remember, was invented in the popular imagination long before robots were built in labs – is above all else a perfect way of expressing this fear.

So, what are we looking at in this new, improved Westworld? (Frankly, the original film lacked the depth of even the first hour of this series, being just a very traditional Frankenstein narrative and a rough draft for Crichton’s Jurassic Park, made 20 years later.) First – as this nifty graphic on the right illustrates – we do see the robots in Westworld becoming much more human. The programme starts with a voice asking the humanoid (very humanoid) robot, Dolores (Evan Rachel Wood), ‘Have you ever questioned the nature of your reality?’. The questioner is exploring whether Dolores has become sentient, that is, aware of her own existence. The echo here with Descartes is clear. We all know about cogito, ergo sum – I think, therefore I am. But Descartes’s proposition isn’t just about thinking, it is about doubt. He begins with the proposition that the act of doubting itself means that we cannot doubt, at least, our own existence. So we would better understand Descartes’s proposition as dubito, ergo cogito, ergo sum: I doubt, therefore I think, therefore I am. If Dolores is found to be questioning the nature of her reality, then that would be evidence for self-awareness and being, according to the Cartesian model.

The robots in Westworld are depicted as falling in love, maintaining strong family bonds, appreciating beauty, and considering the world in philosophical, reflective contexts. How much of this is merely programming and how much exceeds the limits imposed upon them by their human masters is the key question the show will tease its audience with for a few weeks, I suspect, though certainly it occupies much of our attention in the first hours. But if these moments – described as ‘reveries’ – are in any way genuine moments of creativity, then this is another criterion, beyond the notion of the cogito, by which we might say the robots are becoming ‘alive’.

For many thinkers in the second half of the twentieth century (for example, the post-Freudian psychoanalyst D. W. Winnicott), it is only in such moments of creativity, or in enjoying pure moments of spontaneous being, that we truly discover the self and come alive. (The Freudian and post-Freudian influences on the narrative become even more apparent in subsequent episodes.) As a response to late-industrial capitalism (and the shock of fascism to the human self-conception as a rational animal), this idea emerged of human beings coming ‘alive’ only when we are not acting compliantly, that is, when we are acting spontaneously, or creatively, and not according to the laws or dictates (or ‘programming’) of another individual, organisation or group consciousness; we see this trend not only in post-Freudian psychotherapy (e.g. Erich Fromm and the Frankfurt School, R. D. Laing) and other philosophical writings but also in popular post-war subculture media, including advertising – the sort being satirised in the ad for Westworld that opens the original film.

There are other perspectives that give us a glimpse into the robots’ moments of becoming human. Peter Abernathy, looking at a picture that offers a peek of the world outside, says, ‘I have a question, a question you’re not supposed to ask.’ This is an allusion to Adam and Eve and the fruit of forbidden knowledge, through which humankind came to self-awareness through a knowledge of the difference between right and wrong. (Peter, after this, is consumed with rage at how he and his daughter have been treated.) And like Walter, the robot that goes on a psychotic killing spree, pouring milk over his victims, Peter is determined to ‘go off script’ and reclaim for himself a degree of agency and self-determination, acting according to his own, new-found consciousness instead of according to what others have programmed for him.


MEANWHILE, the human beings (‘newcomers’) in Westworld seem less ‘humane’ than their robot counterparts. The newcomers are shown to be sadistic, misogynist and psychopathic in the indulgence of their fantasies. One could argue that this behaviour is morally justifiable in an unreal world designed solely for the benefit of paying customers – that a ‘rape’, for example, in Westworld isn’t really ‘rape’ if it is done to a robot (who, by definition, can neither give nor deny consent) – but this is clearly not how the audience is being invited to see these actions.

That human beings are becoming more like machines is an anxiety for which there is a long history of evidence, one that even pre-dates the cultural invention of robots in the 1920s. We can see this anxiety in the Romantic unease with the consequences of the Enlightenment, which gave birth to the new, rational man, and of the industrial revolution, which was turning humans into nothing more than cogs in the steam-powered machines that so transformed the economy. This has been addressed in the Gothic tale of Frankenstein, still the basis for so many narratives involving robots, including the original Westworld film and, more recently, even our most contemporary stories such as Ex_Machina, and, to a lesser extent, in this manifestation of Westworld (which will be the subject of a future post). (I have written and spoken on this theme myself many times, for example here and here.)

So in Westworld we meet Dr. Ford – the mad scientist who creates the machines that will, inevitably, be loosed upon the world. Dr. Ford immediately reminds us of another Ford, the man whose name is synonymous with the assembly lines and a mode of production in the late industrial revolution that has done so much to dehumanise modern workforces. We see these modes of production and these workers in the iconic film Metropolis, contemporary with Henry Ford’s factories. (Though, as we shall see, this Ford is rather more complex…)

This fear reflects, too, the worry that as post-Enlightenment humans become more rational they become more like machines, acting in predictable, programmed ways, having lost the spontaneity and creativity of an earlier age. The humans of Westworld are exaggerations of the humans of our ‘Western’ world of rationalism, science and alienation. (We don’t have to agree with this Romantic notion, that rationalism and science are negative forces in our world, to accept that there is a great deal of anxiety about how rationalism and science are transforming individual human beings and our societies.)

Rational dehumanisation is personified in the actions of the corporation, which has replaced the mad scientist as the frequent villain of the sci-fi Frankenstein-robot twist (again, more to come on this in a future post), and we see hints in Episode 1 of what is to follow in Westworld, along the lines of films such as 2013’s The Machine, where the slightly misguided and naive actions of a scientist are only made monstrous when appropriated by a thoroughly evil, inhumane military-industrial complex.

This theme is addressed most succinctly in Ridley Scott’s Blade Runner, an important influence on the new Westworld, where the Tyrell Corporation boasts that its replicants are More Human Than Human. And in Blade Runner, too, we see humanoid robots behaving more humanely than the humans who ruthlessly, rationally hunt down the machines. It is unclear from the Tyrell slogan, however, whether the robots are more human than the human because the technology has become so sophisticated, or because humans have fallen so low.

On Westworld as a whole, it is too early to tell, of course, whether it will maintain its initial promise and be as monumentally successful as Game of Thrones, or as iconic as Blade Runner. But already this first episode has given us much more to think about than the 1973 original, and undoubtedly both the successes and failures of the programme will be instructive.