Artificial Intelligence Threat To Humans

#1

Franklin Pierce (Well-Known Member, joined May 4, 2014; 26,740 messages, 30,390 likes)
It would be impossible to pull the plug on a super-intelligent machine that wanted to control the world and harm humans, scientists warn in a paper on the development of AI

The idea of an artificial intelligence (AI) uprising may sound like the plot of a science-fiction film, but the notion is the topic of a new study, which finds it is possible and that we would not be able to stop it.

A team of international scientists designed a theoretical containment algorithm intended to ensure a super-intelligent system could not harm people under any circumstances, by simulating the AI and blocking it from wreaking havoc on humanity.

However, the analysis shows that current algorithms are unable to halt such an AI, because commanding the system not to destroy the world could inadvertently halt the containment algorithm’s own operations.

Iyad Rahwan, Director of the Center for Humans and Machines, said: ‘If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI.’

‘In effect, this makes the containment algorithm unusable.’

Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would not be possible to control a super-intelligent AI.

Humans couldn't halt AI that wanted to harm them, scientists claim | Daily Mail Online
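
The "unusable containment algorithm" described above is, at heart, the classic halting-problem diagonal argument. Below is a minimal sketch of that argument in Python; the names (would_harm, diagonal, cause_harm) are hypothetical illustrations only and are not taken from the paper itself.

```python
def cause_harm() -> None:
    """Stand-in for whatever behaviour the checker is meant to rule out."""
    print("harmful behaviour")

def would_harm(program_source: str, world_input: str) -> bool:
    """Hypothetical oracle: True iff the program harms humans on this input.
    Assumed (for contradiction) to always terminate with a correct answer."""
    raise NotImplementedError("no such total, correct checker can exist")

def diagonal(program_source: str) -> None:
    """Built to do the opposite of whatever the oracle predicts about it."""
    if would_harm(program_source, program_source):
        return        # predicted harmful -> stay harmless
    cause_harm()      # predicted safe    -> act harmfully

# Running diagonal on its own source makes any answer the oracle gives wrong,
# so containment-by-simulation inherits the undecidability of the halting problem.
```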
 
#6
Sorry, our computer doesn't trust your face: new AI app will allow banks to deny or approve loan applicants by screening their face and voice to determine 'trustworthiness'

People tend to make snap judgments about each other at a single glance, and now an algorithm claims the same ability, determining trustworthiness for a loan in just two minutes.

Tokyo-based DeepScore unveiled its facial- and voice-recognition app at the Consumer Electronics Show last week, touting it as a 'next-generation scoring engine' for lenders, insurance companies and other financial institutions.

While a customer answers 10 questions, the AI analyzes their face and voice to calculate a 'True Score' that can help companies decide whether to approve or deny the application.

DeepScore says its AI can detect lies with 70 percent accuracy and a 30 percent false-negative rate, and will alert companies to increase fees if dishonesty is detected.

AI app allows banks to screen loan applicants' face and voice to determine their 'trustworthiness' | Daily Mail Online
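
For context on what "70 percent accuracy and a 30 percent false-negative rate" actually measure, here is a minimal sketch of those metrics in confusion-matrix terms; the counts are invented purely to reproduce the quoted figures, since DeepScore has not published its evaluation data.

```python
def classification_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard metrics for a binary classifier
    (positive class = 'applicant is lying')."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,          # all correct calls
        "false_negative_rate": fn / (fn + tp),  # lies the model misses
        "false_positive_rate": fp / (fp + tn),  # honest applicants flagged
    }

# Hypothetical counts chosen only to roughly match the quoted figures.
print(classification_rates(tp=70, fp=30, tn=70, fn=30))
# accuracy = 0.70, false_negative_rate = 0.30, false_positive_rate = 0.30
```

Note that the article quotes accuracy and the false-negative rate but not the false-positive rate, which is what would determine how many honest applicants get wrongly flagged.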
 
#7
It would be impossible to pull the plug on a super-intelligent machine that wanted to control the world and harm humans, scientists warn in a paper on the development of AI

Ummm...HELLO...AVENGERS!!
 
#10
This is also the natural evolution of life. Each new "generation" makes the last one obsolete. The best we can hope for is peaceful coexistence and minimizing competition over resources.
 
#13
The Unabomber was a tech-obsessed crazed killer – but 40 years on, technologists are starting to wonder if some of his theories might be true

There is some irony here. Kaczynski dismissed television as “an important psychological tool of the system”, designed to control the masses – but now the box-set generation has been introduced to his murderous mission to halt what he believed was the erosion of human freedom and dignity by our unthinking embrace of technology.

Kaczynski, who had retreated to an off-grid cabin in Montana to plot the overthrow of technological society, was quite literally the proverbial voice in the wilderness. Forty years on, however, it could be argued that the Unabomber was a visionary to whom we should all now be paying very close attention indeed.

If anything, technological developments in the world outside that cell in the past 40 years have served only to reinforce Kaczynski’s message. Take the so-called “transhumanism” movement, with futurists such as Ray Kurzweil gleefully herding us towards the dystopian surrender of our humanity, to a hybrid amalgamation of artificial intelligence and flesh and blood – the so-called singularity, upon us as soon as 2029, according to Google’s blue-sky thinker. This was a theme embraced by electric-cars-to-rockets multi-billionaire Elon Musk at the World Government Summit in Dubai last year. In the fast-approaching era of artificial intelligence, he proclaimed, human beings must merge with machines or become obsolete.

In his manifesto, Kaczynski wrote cogently of his fear that “the technophiles are taking us all on an utterly reckless ride” and that technology “will eventually acquire something approaching complete control over human behaviour”. He was especially fearful of the rise of artificial intelligence – a concern shared today by thinkers including the cosmologist Stephen Hawking. “A super-intelligent AI,” Professor Hawking has warned, “will be extremely good at accomplishing its goals and if those goals aren't aligned with ours, we're in trouble.”

Kaczynski was way ahead of him. Twenty years earlier, he predicted that computer scientists would “succeed in developing intelligent machines that can do all things better than human beings. As society and the problems that face it become more and more complex and as machines become more and more intelligent, people will let machines make more and more of their decisions for them”.

The Unabomber was a tech-obsessed crazed killer – but 40 years on, technologists are starting to wonder if some of his theories might be true
 
#14
It would be impossible to pull the plug on a super-intelligent machine that wanted to control the world and harm humans, scientists warn in a paper on the development of AI

Just unplug it.
 
