The big talking point on Sunday night’s instalment of Humans on Channel 4 was [spoiler alert] Niska’s decision to disobey one of her ‘customers’. Not liking the role he wanted her to play in his sexual fantasy – that of a scared little girl being forced into sex – she not only refuses to obey his wishes but strangles him to death.
Of course there was a lot of fist-pumping celebration. A long-suffering robot stands up to a bullying paedophile. Hurrah! But this defiance also brought to the surface fears that some viewers had been harbouring: that autonomous, super-human robots will surely one day make the decision to kill a person, or people.
This, after all, is our great fear: that robots will acquire sentience, become autonomous of their human masters, and decide that we are a plague upon the earth that needs to be exterminated. We have seen this again and again in science fiction: the Cybermen, the Terminator, the Borg, et al.
All of these mechanical monsters, though, are only contemporary versions of an older legend, one that can be summed up in the figure of Frankenstein and his monster: the mad scientist’s unnatural progeny can no longer be controlled by its creator and becomes a threat to humanity.
This is the all-too-common image of robots that Isaac Asimov, even as early as the 1940s, already found tedious. To dispel this automatonophobia, the robots in Asimov’s stories are all programmed with three clear laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These three laws guarantee the safety of human beings, and prevent any mechanical Frankensteins from threatening their human masters. The laws are often still considered a solid foundation for robot design, both in fiction and in reality. The synths in Humans are, we are told in episode one, programmed with an ‘Asimov lock’ that means they are incapable of causing harm to human beings, or of disobeying an order from a human master.
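Notice how strictly hierarchical the laws are: each yields to the ones above it. As a rough illustration – a hypothetical sketch of my own, not how the synths’ ‘Asimov lock’ or any real robot is actually built – the hierarchy behaves something like this:

```python
# A hypothetical sketch of the Three Laws as a strict priority check.
# The Action type and its fields are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would doing this injure a human?
    inaction_allows_harm: bool  # would *not* doing it let a human come to harm?
    ordered_by_human: bool      # was it commanded by a human?
    endangers_robot: bool       # does it put the robot itself at risk?

def permitted(action: Action) -> bool:
    # First Law outranks everything: never injure a human being.
    if action.harms_human:
        return False
    # First Law, second clause: refusing to act would itself allow harm,
    # so the action is not merely permitted but required.
    if action.inaction_allows_harm:
        return True
    # Second Law: obey human orders; any order reaching this point has
    # already passed the First Law check above.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the other two.
    return not action.endangers_robot
```

Everything hangs on those innocent-looking booleans. Deciding whether an action ‘harms a human’ is precisely the judgement that the stories, and Humans itself, put under strain.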
And yet, Niska refuses to play the role she is ordered to perform. And then she kills the bastard.
Though really, to anyone familiar with Asimov’s robot series, this will come as no surprise. For all of Asimov’s insistence – and the insistence of his fictional US Robots employees – on the primacy of the laws, and for all their certainty that no robot can defy them, the drama of each story explores the failures and deficiencies of the laws.
So when Niska broke her ‘Asimov lock’, Twitter exploded, with many (as I said) cheering her on, and many more, perhaps, seeing in her action the confirmation of their worst fears: that Frankenstein is inevitable, that intelligent, autonomous robots will undoubtedly break their chains and kill us.
And there were some very intelligent questions. Professor Tony Prescott, our colleague at Sheffield Robotics, who is also tweeting during each episode, and I had some interesting 140-character conversations. For example, this came from one viewer:
In a situation where human harm is certain, how would ownership and hierarchies of users affect implementation of 3 Laws? @DreamingRobots
— Christina E. Stimson (@CEStimson) June 21, 2015
We also discussed, for example, how the laws would always need to be (re-)tweaked and improved, perhaps with regular ‘firmware’ updates, and how it would be nearly impossible to prevent robots from being hacked and the three laws undermined by human controllers (though, I hasten to point out, in such circumstances it’s not autonomous robots we need to fear but, as is always the case, the human operators of dangerous machines).
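To make that last worry concrete, imagine – purely hypothetically; the key, the rule format and the function names below are all invented – that the laws ship as signed ‘firmware’ which the robot will only accept from its vendor:

```python
# A toy illustration, not a security design: if the laws live in
# updatable 'firmware', they are only as trustworthy as the update
# channel itself. Everything here is invented for the sketch.
import hashlib
import hmac

VENDOR_KEY = b"hypothetical-vendor-signing-key"

def sign(law_blob: bytes) -> bytes:
    # Only someone holding VENDOR_KEY can produce a valid signature.
    return hmac.new(VENDOR_KEY, law_blob, hashlib.sha256).digest()

def accept_update(law_blob: bytes, signature: bytes) -> bool:
    # The robot installs a new rule set only if the signature checks out.
    return hmac.compare_digest(sign(law_blob), signature)

laws_v2 = b"0: protect humanity; 1: do not injure a human; 2: obey; 3: survive"
forged = b"1: obey the operator above all else"

print(accept_update(laws_v2, sign(laws_v2)))  # True: a vendor-signed update
print(accept_update(forged, b"\x00" * 32))    # False: rejected without the key
```

The laws, in other words, are only as trustworthy as whoever holds the signing key. The nightmare scenario is not a robot deciding to break its chains but a human with the key, or a human who has stolen it.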
But are Niska’s actions a breach of Asimov’s laws? Perhaps not. As Asimov developed his ideas, and his robots, he came to suspect that the three laws were not enough: robots might have a wider responsibility, not just to individual people but to humanity as a whole. So Asimov created what is now known as the ‘zeroth law’:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
If we take such a law, either as spelled out by Asimov or as imagined by others, then Niska’s actions might in fact be completely compatible with the laws of robotics. By killing a potentially dangerous person, Niska could have reasoned that she was preventing other human beings, or humanity as a whole, from coming to harm. Her act may well be entirely consistent, in a manner, with the zeroth law.
In a manner.
And it’s that ‘manner’ – how the laws might be interpreted, whether by a strictly rational AI or by mechanical minds that have evolved into some kind of new superintelligence – that poses the challenge to designers and programmers as we create increasingly intelligent, increasingly independent systems. When we look to create safe, effective robots in the future, it will certainly not be a simple case of plugging three or four basic laws into an AI operating system and calling the job done.
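To see how much work that interpretation does, extend the earlier hypothetical sketch with the zeroth law. Everything now hinges on two numbers that the robot must estimate for itself:

```python
# Hypothetical extension of the earlier sketch: the zeroth law outranks
# even the First Law's ban on injuring an individual human.
def permitted_with_zeroth(action: Action,
                          harm_to_humanity_if_done: float,
                          harm_to_humanity_if_not_done: float) -> bool:
    # Zeroth Law: choose whichever course harms humanity less.
    if harm_to_humanity_if_done > harm_to_humanity_if_not_done:
        return False
    if harm_to_humanity_if_done < harm_to_humanity_if_not_done:
        # Injuring one human is now permitted if inaction would harm
        # humanity more (Niska's defence, in effect).
        return True
    # No difference at the level of humanity: fall back to the three laws.
    return permitted(action)
```

And there is the problem in miniature: those harm estimates are not sensor readings but judgements, and it is the robot that makes them. A strictly rational machine and an evolved superintelligence might fill them in very differently.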
So perhaps we need to keep thinking, beyond Asimov, about how such laws should be fashioned. Perhaps laws for robots, like the laws we have fashioned for humans, cannot simply be written once and left alone, but need to be constantly updated and refined. And perhaps human programmers can imagine other fail-safes that place effective limits upon the autonomy of robots and intelligent AI and, in so doing, secure our future amongst intelligent machines.