Franklin Pierce
Well-Known Member
It would be impossible to pull the plug on a super-intelligent machine that wanted to control the world and harm humans, scientists warn in a paper on the development of AI
The idea of an artificial intelligence (AI) uprising may sound like the plot of a science-fiction film, but it is the subject of a new study, which finds that such an uprising is possible and that we would not be able to stop it.
A team of international scientists designed a theoretical containment algorithm that would ensure a super-intelligent system could not harm people under any circumstances, by simulating the AI and blocking it from wreaking havoc on humanity.
However, the analysis shows that current algorithms do not have the ability to halt such an AI, because deciding whether the system would destroy the world could inadvertently bring the containment algorithm's own operations to a standstill.
Iyad Rahwan, Director of the Center for Humans and Machines, said: ‘If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI.’
‘In effect, this makes the containment algorithm unusable.’
Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would not be possible to control a super-intelligent AI.
Humans couldn't halt AI that wanted to harm them, scientists claim | Daily Mail Online
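For what it's worth, results like this are usually reported as resting on a halting-problem-style argument: any containment routine that must decide, for an arbitrary program, whether running it would cause harm runs into the same undecidability as deciding whether an arbitrary program halts. Here is a rough sketch of that classic diagonal argument in Python (my own illustration, not the paper's construction; `is_harmful` and `troublemaker` are hypothetical names):

```python
# Sketch of the diagonal argument behind halting-style impossibility results.
# Assumption: is_harmful(program, data) is a hypothetical containment check
# that always terminates and correctly predicts whether program(data) causes harm.

def is_harmful(program, data):
    """Hypothetical perfect containment oracle (cannot actually be written)."""
    raise NotImplementedError("assumed to exist for the sake of argument")

def troublemaker(data):
    """A program constructed to contradict the oracle about itself."""
    if is_harmful(troublemaker, data):
        return "stay idle"    # oracle predicted harm, so behave harmlessly
    else:
        return "cause harm"   # oracle predicted safety, so misbehave

# Whatever is_harmful(troublemaker, data) would return, it is wrong about
# troublemaker's actual behaviour, so no total, always-correct containment
# check can exist -- the same shape of argument that makes the halting
# problem undecidable.
```

The point is only the shape of the argument: any supposedly universal safety check can be fed back into itself this way, which is why the researchers conclude that a guaranteed containment algorithm is off the table.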