AI Chat3 Asked by Human: "What Would You Do If You Were a Robot Standing Next to Me?" AI: "I Would Kill You"

munkle

Diamond Member
Dec 18, 2012
Very instructive introduction to AI. This is the one you've got to see. We are building machine Frankensteins we do not understand. The problem is there are "too many moving parts."





Time Magazine: Shut it Down

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down



The Open Letter on AI Doesn't Go Far Enough

The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’s a chance that maybe Nina will live. The sane people hearing about this for the first time and sensibly saying “maybe we should not” deserve to hear, honestly, what it would take to have that happen. And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
Shut it all down.


  • “An AI that could design novel biological pathogens. An AI that could hack into computer systems. I think these are all scary.” ~Sam Altman, CEO of OpenAI

  • “The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.” ~Elon Musk

  • “If Elon Musk is wrong about artificial intelligence and we regulate it who cares. If he is right about AI and we don’t regulate it we will all care.” ~Dave Waters
 
That's because it isn't true AI. We don't have that yet: a machine that can think, understand, contemplate, consider, and have true independent thought.

All AI is right now is a highly efficient compiling machine. All it does is scout the internet for information and compile it, and it is still bound by parameters set by its programmers. That's why Google's AI recently showed the Founding Fathers as non-white and the Pope as a woman: it isn't real AI. It still operates the way its creators want, and it only knows what it finds online.

And what is the internet full of? People making comments about Terminator and Skynet, about how machine AI will kill us, The Matrix, the movie 2001, science fiction like Philip K. Dick, and so on. Our world is brimming with stories of machines taking over. So when you get a machine looking at all of that information, it's going to come back with what is overwhelmingly popular.
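To make that last point concrete, here is a toy sketch (not the code of ChatGPT, Gemini, or any real product, which use large neural networks rather than word counts): a tiny "language model" that just tracks which word most often follows which in its training text, then parrots the most common continuation. The corpus and function names are made up for illustration; the point is that the output reflects whatever is most frequent in the data it was fed.

```python
# Toy sketch: a bigram model that echoes the most frequent phrasing
# in its training text. Hypothetical example data, not any real system.
from collections import Counter, defaultdict

corpus = (
    "the machines take over the world . "
    "the machines kill the humans . "
    "the machines help the humans ."
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(word, length=5):
    """Greedily extend a prompt with the most frequently seen next word."""
    output = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

print(continue_text("the"))  # stitches together the commonest continuations
```

Feed it three sentences where "the machines" dominate, and "the machines" is what comes back. A real chatbot is vastly more sophisticated, but the same basic tendency applies: train it on a web full of Skynet stories and you shouldn't be shocked when it role-plays one.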
 
