So, continuing to look at Cambridge’s new Centre for the Study of Existential Risk (the announcement of which, and the subsequent less-than-accurate reporting, this blog covered here), I looked at the article linked on their thus far very minimalist website, by Huw Price, Bertrand Russell Professor of Philosophy, Cambridge, and Jaan Tallinn, co-founder of Skype.
The article, ‘Artificial intelligence – can we keep it in the box?’, is a very sober assessment of the two sides of the debate on the possibility and consequences of the technological singularity. They present the cases for the ‘optimists’ and the ‘pessimists’ (though sometimes it’s unclear which side is which), and remind us of some important considerations that evangelical enthusiasts at both extremes sometimes neglect.
They point out, for example, that if/when AI achieves human-level intelligence, that intelligence will have a very different evolutionary history from our own. Of course, this can be read in two ways.
By default, then, we seem to have no reason to think that intelligent machines would share our values. The good news is that we probably have no reason to think they would be hostile, as such: hostility, too, is an animal emotion.
The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen.
As to the question, ‘What to do?’, they offer the following:
A good first step, we think, would be to stop treating intelligent machines as the stuff of science fiction, and start thinking of them as a part of the reality that we or our descendants may actually confront, sooner or later.
A step with which this blog wholeheartedly agrees. And we’d like to help, by disabusing ourselves of the fantasies behind our fictions, and not just ‘fictions’ in the sense of films and books about marching cyborgs with laser blasters, but also those behind so much of the debate, the desires and fears, of both those extreme ‘optimists’ and ‘pessimists’.
(But don’t stop at the end of the article. The comments below are well worth a perusal, partly for the informed, detailed additions some make to the discussion, the suggested further reading, and some… well. One guy insists that aliens are a bigger threat, and another hopes that some of us heed the warnings about the singularity, because he wants to have some friends from this century with him a thousand years from now.)