Cambridge to open Centre for Study of Existential Risk – Part 1

We stopped dreaming of robots this morning and became conscious of BBC Radio 4’s Today programme, where Sarah Montague interviewed Jaan Tallinn, one of the inventors of Skype and now one of the co-founders of the Centre for the Study of Existential Risk at the University of Cambridge. (The whole programme can be heard, in the UK at least, here. Tallinn’s interview is about 1 hour and 45 minutes in.)

Details on the new Centre are still emerging (so look forward to a Part 2 to this post in the near-ish future), but it will seek to examine what its founders perceive to be the four greatest threats to the future of the human species:
  • Artificial Intelligence
  • Climate Change
  • Nuclear War
  • Rogue Nanotechnology
At least two of these (AI and rogue nanotechnology) are of much interest to this blog, so we will eagerly await their findings and research. For now, I couldn’t help but notice the way this new Centre is being greeted in the press. 
Despite recent floods on both sides of the Atlantic and hysteria at the mere possibility of countries such as Iran developing nuclear technology (and despite a hint of incredulity in Montague’s questions – ‘Are you seriously?… at Cambridge?’), The Sun has gone with this angle:
Him. Again.

‘Terminator centre’ to open at Cambridge University: New faculty to study threat to humans from artificial intelligence

Their treatment focusses, unsurprisingly, almost exclusively on the threat posed by AGI and robots. (How happy we must be now that nuclear Armageddon seems so trivial. It wasn’t always so, as those of us of a certain age might remember. I guess the films weren’t as interesting.)
Here, likewise, is the completely not-overhyped reaction from the Daily Mail, printed the following day:

Let’s make sure he WON’T be back! Cambridge to open ‘Terminator centre’ to study threat to humans from artificial intelligence.

Let’s hope that the Centre’s future research is met with similarly dispassionate scrutiny, and conveyed to the public in a similarly accurate manner.

New Paper on the ‘Uncanny Valley’

SCentRo’s own Roger K. Moore has just had a new paper, entitled ‘A Bayesian explanation of the “Uncanny Valley” effect and related psychological phenomena’, published in nature.com’s Scientific Reports.

Here is the abstract:

There are a number of psychological phenomena in which dramatic emotional responses are evoked by seemingly innocuous perceptual stimuli. A well known example is the ‘uncanny valley’ effect whereby a near human-looking artifact can trigger feelings of eeriness and repulsion. Although such phenomena are reasonably well documented, there is no quantitative explanation for the findings and no mathematical model that is capable of predicting such behavior. Here I show (using a Bayesian model of categorical perception) that differential perceptual distortion arising from stimuli containing conflicting cues can give rise to a perceptual tension at category boundaries that could account for these phenomena. The model is not only the first quantitative explanation of the uncanny valley effect, but it may also provide a mathematical explanation for a range of social situations in which conflicting cues give rise to negative, fearful or even violent reactions.
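
For a feel for how such a model works, here is a minimal, illustrative Python sketch of Bayesian categorical perception over a one-dimensional ‘human-likeness’ continuum. To be clear, this is not the model from the paper: the prototype locations, spreads, and the simple ‘tension’ measure below are our own assumptions, chosen only to show how conflict between two competing categories concentrates at the category boundary.

# Toy sketch (not the paper's actual equations): two perceptual
# categories ('artifact' and 'human') with Gaussian likelihoods along
# a single human-likeness dimension x in [0, 1].  All parameter
# values are illustrative assumptions.
import numpy as np

mu    = np.array([0.3, 0.7])   # assumed category prototypes: artifact, human
sigma = np.array([0.1, 0.1])   # assumed category spreads
prior = np.array([0.5, 0.5])   # equal prior probability for each category

def posteriors(x):
    """P(category | stimulus x) via Bayes' rule with Gaussian likelihoods."""
    likelihood = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    joint = prior * likelihood
    return joint / joint.sum()

for x in np.linspace(0.0, 1.0, 11):
    p = posteriors(x)
    tension = p[0] * p[1]  # crude conflict measure: maximal when categories compete
    print(f"x={x:.1f}  P(artifact)={p[0]:.2f}  P(human)={p[1]:.2f}  tension={tension:.2f}")

Running this, the tension term peaks at x = 0.5, precisely at the category boundary, which is the intuition behind locating the uncanny valley where ‘artifact’ and ‘human’ cues conflict.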

And here is a brief reminder of the phenomena he addresses:

The link to the paper again: http://www.nature.com/srep/2012/121115/srep00864/full/srep00864.html

Conference Announcement: Winter Intelligence Conference

The Winter Intelligence Conference: AGI-12 (the Fifth Conference on Artificial General Intelligence) and AGI-Impacts.
This year marks the centenary of the birth of Alan Turing, whose intellectual contribution revolutionised the field of computing, and whose ideas forecast many advances that are only now being realised. In honour of his work, Oxford University and the Future of Humanity Institute will be hosting the Winter Intelligence Conference, encompassing AGI-12 (the fifth international conference on Artificial General Intelligence) and AGI-Impacts, which focuses on the opportunities and risks posed by the development of Artificial General Intelligence.
This landmark conference will feature leading international experts in this emerging and potentially radically transformative field, including:
Bruce Schneier: World-leading expert on computer security, cryptography and their implications for real-world engineering.
Steve Omohundro: Founder of Self-Aware Systems, leading expert in machine learning, machine vision, and programming languages.
Margaret Boden OBE: Author of Mind as Machine, founding Dean of the University of Sussex’s School of Cognitive and Computing Sciences.
David Hanson: President and Founder of Hanson Robotics, creators of Hanson Robokind.
Angelo Cangelosi: Professor of Artificial Intelligence and Cognition at the University of Plymouth.
Nick Bostrom: Director of the Future of Humanity Institute and the Programme on the Impacts of Future Technology at the University of Oxford, 2009 winner of the Gannon Award for the Continued Pursuit of Human Advancement.
The Conference, organised in conjunction with the AGI Society, will be held at St. Anne’s College, Oxford, from the 8th to the 11th of December. It will be divided into:
a) AGI-12: Technical talks covering theoretical aspects of AGI, and reports on current developments towards AGI systems
b) AGI-Impacts: Rigorous discussion of the potential future risks, benefits and societal impacts of this emerging transformative technology.
For further details and registration, please see:
For any queries please contact:
“We can only see a short distance ahead, but we can see plenty there that needs to be done.” – Alan Turing