IluvdoubleD's
Wow, if it saves you 25% of your time, maybe they can pay you 25% less?

ChatGPT has made my job easier. I need a PowerShell script? No problem. Just tell GPT what you want it to do, and in less than a minute you have a WORKING script where you only need to change a few variables. I am impressed, and now I do not need to search through a dozen Google results to find what I need. Need a custom query for Splunk or SQL? Same deal.
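For a sense of what "change a few variables" looks like in practice, here is a minimal sketch of the kind of throwaway script that workflow produces. The task (archiving old log files) and every path and value in it are hypothetical placeholders, not something GPT actually generated for this thread:

[CODE]
# Hypothetical example of a GPT-style throwaway script: archive log
# files older than a cutoff. Only the three variables below should
# need changing per environment; every value here is a placeholder.
$SourceDir  = 'C:\Logs'
$ArchiveDir = 'D:\LogArchive'
$MaxAgeDays = 30

# Create the archive folder if it does not exist yet.
if (-not (Test-Path $ArchiveDir)) {
    New-Item -ItemType Directory -Path $ArchiveDir | Out-Null
}

# Move every .log file whose last write time is older than the cutoff.
$cutoff = (Get-Date).AddDays(-$MaxAgeDays)
Get-ChildItem -Path $SourceDir -Filter '*.log' -File |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Move-Item -Destination $ArchiveDir
[/CODE]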
MARCH 29, 2023 6:01 PM EDT
Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.
An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin. I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
The likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15,” “the 11th century trying to fight the 21st century,” and “Australopithecus trying to fight Homo sapiens.”
Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
A.I. is neither artificial nor intelligence. Programmers/customers still input data into the program or server. The result is GIGO: garbage in, garbage out. That's why AI has trouble properly identifying certain ethnic individuals in face recognition. It's why Tesla cars still crash into other vehicles and people. It's why during too many AI interviews you get biased and even malevolent responses that perfectly mirror those of the category of people who input the data. It may even be why we'll see a T-1 Terminator model programmed by climate-change advocates to go after humans on the premise that we're the cancer killing Earth.

AI as it currently exists does have those problems/limitations. True AI would not have those limitations. It's like insurance: it looks at the data, whatever the data is. If the data comes across as some form of bigotry, that's not necessarily because the data itself is wrong, factually or morally, or was manipulated. It just means there are trends within humanity. Some are good trends, some are bad.
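The GIGO point is easy to demonstrate with a toy sketch (same PowerShell as the script example above; the group labels and the 90/10 split are invented for illustration). A "model" that does nothing but memorize the majority of a skewed training set looks accurate on the overrepresented group and fails completely on the underrepresented one:

[CODE]
# Toy GIGO demo; all data here is invented for illustration.
# Skewed training set: 90 samples from group 'A', 10 from group 'B'.
$training = @('A') * 90 + @('B') * 10

# "Training" is just counting: the model always guesses whichever
# label it saw most often.
$majority = ($training | Group-Object | Sort-Object Count -Descending)[0].Name

# A constant majority-guesser is 100% right on the group it mostly
# saw, and 100% wrong on the group it barely saw.
foreach ($group in 'A', 'B') {
    $accuracy = if ($group -eq $majority) { '100%' } else { '0%' }
    "Group $($group): model predicts '$majority' -> accuracy $accuracy"
}
[/CODE]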
It's always funny to me that people don't react negatively to positive stereotypes, even though they grow from the same root of bigotry as the negative ones.