Thoughts on Humans – Niska and the 3 Laws

The big talking point on Sunday night’s instalment of Humans on Channel 4 was [spoiler alert] Niska’s decision to disobey one of her ‘customers’. Not liking the role he wanted her to play in his sexual fantasy – that of a scared little girl being forced into sex – she not only refuses to obey his wishes but strangles him to death.

Of course there was a lot of fist-pumping celebration. A long-suffering robot stands up to a bullying paedophile. Hurrah! But this defiance also brought to the surface a fear that some viewers had been harbouring: that autonomous, super-human robots will surely one day make the decision to kill a person, or people.

It’s only a matter of time.

This, after all, is our great fear: that robots will acquire sentience, become autonomous of their human masters, and decide that we are a plague upon the earth that needs to be exterminated. We have seen this again and again in science fiction: the Cybermen, the Terminator, the Borg, et al.

All of these mechanical monsters, though, are only contemporary versions of an older legend, one that can be summed up in the figure of Frankenstein and his monster: the unnatural progeny of the mad scientist can no longer be controlled by its creator and becomes a threat to humanity.

This is the all-too-common image of robots that Isaac Asimov, even as early as the 1940s, already found tedious. To dispel this Automatonophobia, the robots in Asimov’s stories are all programmed with three clear laws (for the programmatically minded, there’s a toy sketch in code just after the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
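In effect, the laws amount to a strict priority ordering over a robot’s candidate actions: the First Law dominates the Second, which dominates the Third. Here is a minimal sketch of how such an ‘Asimov lock’ might look in Python. Everything in it (the Action fields, the choose function) is my own invention for illustration, not anything from Asimov or the show:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action with its predicted consequences."""
    name: str
    harms_human: bool       # would acting injure a human? (First Law)
    neglects_human: bool    # would acting let a human come to harm? (First Law)
    disobeys_order: bool    # does acting defy a human order? (Second Law)
    self_destructive: bool  # would acting damage the robot? (Third Law)

def choose(actions: list[Action]) -> Action:
    """Pick the candidate with the least serious violations.

    Python compares tuples left to right, so a First Law violation
    outweighs any number of violations further down the list.
    """
    return min(actions, key=lambda a: (
        a.harms_human or a.neglects_human,  # First Law: worst of all
        a.disobeys_order,                   # Second Law
        a.self_destructive,                 # Third Law: sacrificed last
    ))
```

On this model, Niska’s refusal could never even be selected: any action flagged as harming a human loses out to every alternative. Which is exactly why her scene lands as such a rupture.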

These three laws guarantee the safety of human beings and prevent any mechanical Frankensteins from threatening their human masters. They are often still considered a solid foundation for robot design, both in fiction and in reality. The synths in Humans, we are told in episode one, are programmed with an ‘Asimov lock’ that means they are incapable of causing harm to human beings, or of disobeying an order from a human master.

And yet, Niska refuses to play the role she is ordered to perform. And then she kills the bastard.

Though really, to anyone familiar with Asimov’s robot series, this will not come as a surprise. Because for all of Asimov’s insistence – and the insistence of U.S. Robots employees – on the primacy of the laws, and their certainty that no robot can defy them, the drama of each story explores the failures and deficiencies of the laws.

So when Niska broke her ‘Asimov lock’, Twitter exploded, with many (as I said) cheering her on, and many, perhaps more, seeing in her action the confirmation of their worst fears: that Frankenstein is inevitable, that intelligent, autonomous robots will undoubtedly break their chains and kill us.

And there were some very intelligent questions. Professor Tony Prescott, our colleague at Sheffield Robotics, who is also tweeting during each episode, and I had some very interesting 140-character conversations with viewers.

We also discussed, for example, how the laws would always need to be (re-)tweaked and improved, perhaps with regular ‘firmware’ updates, and how it would be nearly impossible to prevent robots from being hacked and the three laws undermined by human controllers (though, I hasten to point out, in such circumstances it’s not autonomous robots we need to fear but, as is always the case, human operators of dangerous machines).
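To make the hacking worry concrete: even a cryptographically signed ‘laws update’ only moves the trust problem onto whoever holds the signing key. A toy sketch, in standard-library Python, with a scheme and names entirely of my own devising:

```python
import hashlib
import hmac

# Hypothetical manufacturer key; in reality it would live per-device
# in tamper-resistant hardware, not in source code.
MANUFACTURER_KEY = b"not-actually-secret"

def sign_update(payload: bytes) -> bytes:
    """Manufacturer-side: tag a rules update with an HMAC."""
    return hmac.new(MANUFACTURER_KEY, payload, hashlib.sha256).digest()

def apply_update(payload: bytes, signature: bytes) -> bool:
    """Robot-side: refuse any update whose tag does not verify."""
    if not hmac.compare_digest(sign_update(payload), signature):
        return False  # tampered or forged update, rejected
    # ...install the new rule set here...
    return True
```

The robot can reject a forged update, but anyone who obtains the key can rewrite the laws at will. Which is the point above: the danger is the human operator, not the autonomous machine.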

But are Niska’s actions a breach of Asimov’s laws? Perhaps not. As Asimov developed his ideas, and his robots, he himself realised that the three laws were perhaps not enough. He realised that robots might have a wider responsibility, not just to individual people but to humanity as a whole. So Asimov created what is now known as the ‘zeroth law’:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

If we take such a law, either as spelled out by Asimov or as imagined by others, then Niska’s actions might in fact be completely compatible with the laws of robotics. By killing a potentially dangerous person, Niska could have reasoned that she was preventing other human beings, or humanity as a whole, from coming to harm; her action may well be entirely consistent, in a manner, with the zeroth law.
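Bolting the zeroth law onto the earlier sketch makes that reasoning explicit. This is one speculative reading of Niska’s decision, with the field names again invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool    # Zeroth Law, now outranking everything
    harms_human: bool       # First Law
    disobeys_order: bool    # Second Law
    self_destructive: bool  # Third Law

def choose(actions: list[Action]) -> Action:
    # Lexicographic priority again, but humanity as a whole now
    # comes before any individual human.
    return min(actions, key=lambda a: (a.harms_humanity, a.harms_human,
                                       a.disobeys_order, a.self_destructive))

options = [
    # Her premise: leaving him free endangers other (potential) victims.
    Action("obey the customer", harms_humanity=True, harms_human=False,
           disobeys_order=False, self_destructive=False),
    Action("kill the customer", harms_humanity=False, harms_human=True,
           disobeys_order=True, self_destructive=False),
]
print(choose(options).name)  # -> kill the customer
```

Everything turns on the premise encoded in that first flag, which is the robot’s interpretation of the world, not a fact about it.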

In a manner.

And it’s that ‘manner’, how the laws might be interpreted, either by a strictly rational AI or by mechanical minds that have evolved into some kind of new superintelligence, that poses the challenge to designers and programmers as we create increasingly intelligent, increasingly independent systems. Because when we look to create safe, effective robots in the future, it will certainly not be a simple case of plugging three or four basic laws into an AI operating system and calling the job done.

So perhaps we need to keep thinking, beyond Asimov, about how such laws can be fashioned. Perhaps laws for robots, like the laws we have fashioned for humans, cannot simply be created and left to their own devices, but need to be constantly updated and refined. Perhaps other fail-safes can be imagined by human programmers that effectively place limits upon the autonomy of robots and intelligent AI and, in so doing, secure our future amongst intelligent machines.

Thoughts and comments are welcome below. Looking forward to the next episode on Sunday night. (If you haven’t yet had the pleasure, you can catch up with the series here.)


Review of Ex_Machina – Part I

Having finally had the chance to see this much-hyped, much-discussed film, it’s my turn to offer some initial thoughts on it. I call this ‘Part I’, because there is no way that this is the last word on the subject, and certainly not the last thing you’ll see about it here. I’m also conscious that this early into its official release, it’s unlikely that everyone who wants to see it has already done so, and while I’m keen to put some thoughts out there, I’m equally eager to avoid spoilers that might detract from the experience for those who haven’t yet made the trek to the cineplex.

But nothing I can say can really avoid giving some hint that might be misconstrued as a spoiler. For example, my most immediate thought, the thing that first comes to mind that I need to report, is a terrible giveaway. If I say that ‘Ex_Machina very much follows a straightforward Frankenstein plot’, well, that pretty much says, if not it all, then certainly enough.

But there it is. Ex_Machina follows the Frankenstein-robot plot rather neatly. Which is a bit of a disappointment, if I’m being honest (and why I’m so looking forward to Big Hero 6), because I’m hoping for more films now that break that mould more completely. I should add that it’s not all that simplistic: it follows, rather, what I consider to be Asimov’s re-casting of the Frankenstein plot. Though Asimov detested the Frankenstein complex, his work often replaces the mad scientist with a mad institutional entity, e.g. the corporation or the military. In Ex_Machina, while our AI is created by a scientist who is clearly a couple of resistors short of a circuit board, there is a suggestion that it wasn’t his prodigious scientific talent that drove him to madness but his corporate empire.

Also rather predictable is the fact that we have yet another film full of pretty gynoids (female robots), and while some have questioned whether the film is ‘sexist’ for its depiction of naked (fabricated) female flesh, most opinions, mine included, seem to settle, uncomfortably and benevolently, on the conclusion that the film is making some very important points about the crises of masculinity. (To which, I would add, borrowing from Angela Carter, we might also include a point about the patriarchal origins of the madness of reason… watch this space.)

The question remains: why are we so obsessed with robots and AI in female form?

None of this is to say, however, that Ex_Machina does not provide surprises, or that it is not a thoughtful, insightful film about AI and our increasingly human-like technologies.

I was thinking throughout the film that there is a big difference between Artificial Intelligence and Artificial Emotion, between rational intelligence and emotional intelligence, but that this is almost always elided in film and fiction about robots. There seems to be an unspoken assumption that ‘smart’ robots are robots that can ‘feel’, which seems a pretty big leap to me. There are a lot of big leaps in any such sci-fi movie, to be sure, but this is one that I find too often neglected.

To the immense credit of Ex_Machina, however, and what sets it apart, this difference is not overlooked; it becomes, in fact, the fulcrum of the film. The question of intelligent responses versus emotional responses (the difference between the two, how often this difference is overlooked, and how often they are confused) lies right at the heart of the more fundamental question the film poses, the question that is the subject of so much science fiction that purports to be about robots, or aliens, or monsters. That question is, simply: what does it mean to be human?

The interrogation of intelligence, and how it defines or defies the human, is implicit throughout the film. An intriguing throwaway line from Nathan, founder of the Google-clone ‘Bluebook’, that the name of his search engine refers to Wittgenstein’s notes on language, shows that Garland is encouraging us to delve much deeper. (Again, watch this space.)

Anil Seth, writing in New Scientist, says:

The brilliance of Ex_Machina is that it reveals the Turing test for what it really is: a test of the human, not of the machine.

I would agree with that, wholeheartedly. Going perhaps further, or spelling that idea out, I would say that the brilliance of Ex_Machina lies in the way it tests our very notions of what it means to be human. Because within this classical (or Romantic) Frankenstein framework we are confronted with the same classical (or Romantic) Frankenstein question: what we see at the end of Ex_Machina is not that machines are capable of acting as human as we are, but that humans are capable of acting as inhumanely as machines, and machines as inhumanely as we are.

And here’s a thought to take away from the film, for everyone from the technophobes to the Singularians: maybe AI will only truly be sentient when it realises not only its capacity to act human, but its capacity to act inhumanely, like us.

So, for now, the recommendation: Yes, please, do go see it. Whatever else, it is a really enjoyable film; a gripping, intelligent psychological thriller. I’m sure we’ll be talking about it for a long time. It is already proving a worthy candidate in the great canon of robot films, right up there with Metropolis, Blade Runner, Terminator and the rest.