40 State Attorneys General Write to Congress Opposing Attempt to Ban State Safety Regulations on AI in Trump's "Big Ugly Bill"

munkle

First off, the nasty little sleeper provision snuck into the Big Ugly Bill is unconstitutional. The Tenth Amendment reserves to the states, or to the people, all powers not specifically delegated to the federal government. States routinely regulate areas such as car emissions and food safety. The federal government has no authority to usurp how the people of a state wish to handle an industry, whether it is AI or prostitution.

StateScoop: Attorneys general urge Congress to reject ‘irresponsible’ state AI law moratorium

"A large group of state attorneys general said a proposed moratorium on state AI laws would be "sweeping and wholly destructive."

"A letter signed by a group of 40 state attorneys general on Friday called on Congress to reject an “irresponsible” federal measure that would bar states from enforcing their own laws and regulations governing the use of artificial intelligence systems for the next 10 years.

The letter from the National Association of Attorneys General said the “broad” state AI moratorium measure rolled into the federal budget reconciliation bill would be “sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI.”

The AGs, who addressed the letter to majority and minority leaders in the Senate and House of Representatives, along with House Speaker Mike Johnson, said the moratorium would disrupt hundreds of measures, both those being considered by state legislatures and those already passed in states led by Republicans and Democrats.

They noted that in the absence of a federal law codifying consumer protections against the duplicitous use of AI systems, states have been positioned to protect their residents from harms following the introduction of new technologies, citing data privacy laws and social media harms as past examples. The group said the historical lack of federal action has made state legislatures the default forum for addressing AI risks, with enforcement in most cases left to state attorneys general. Stripping away that authority, the group said, would harm consumers.


“This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI,” the letter read. “Moreover, this bill purports to wipe away any state-level frameworks already in place. Imposing a broad moratorium on all state action while Congress fails to act in this area is irresponsible and deprives consumers of reasonable protections.”

The federal measure follows action-packed state legislative sessions this year. According to an analysis by the National Conference of State Legislatures, 48 states and Puerto Rico introduced AI legislation, and 26 states adopted or enacted at least 75 new AI measures.

Analysts predicted 2025 would bring a wave of new AI laws after a year in which state lawmakers introduced nearly 700 pieces of AI legislation. AI laws have over the last several years sought to protect personal identities from use in AI-generated explicit content, prevent the creation and sharing of deepfakes for political campaigns and bar the use of AI to send spam phone calls or texts. Other measures mandated disclosures when consumers are interacting with AI."


LETTER PDF from National Association of Attorneys General

https://www.doj.nh.gov/sites/g/file...congress-re-proposed-ai-preemption-_final.pdf

RELATED: "Humanity Marches Merrily into Extinction, After AI Inventor and Thousands of AI Experts Say 90% Chance of AI Attack, But Trump’s “Big Beautiful Bill” Unconstitutionally Bans State Safety Regulation"

Elon Musk: “AI is much more dangerous than nuclear weapons”
 
AI done right is a boon to mankind.

AI is like a gun: it's neither good nor bad on its own; it depends on who's firing it and for what purpose.

In a good world humans have a symbiotic relationship with their AI. After all, we program it.

Programming an AI is a position of trust. We can't have mad scientists doing that. There has to be oversight. It's something that's a bit missing right now, because the technology is very young.

The question is, WHO put that provision in the bill? I doubt it was invented by our elected representatives. I smell a Google and a Microsoft in the equation.

The Chamber of Commerce has its fingers in the pot, and those are the last people on earth you want making the law. Policy wonks don't know enough to address this subject.


Look at who they interviewed. Not experts. Policy wonks. Those people are dangerous. It's amazing our representatives are dumb enough to listen to them.
 
AI done right is a boon to mankind.
When is the last time you've seen mankind handle something so big and so dangerous "the right way?"

AI is like a gun: it's neither good nor bad on its own; it depends on who's firing it and for what purpose.
And look what mankind did with black powder. We turned it into a weapon to exterminate millions with.
Some (many?) will see AI as a tool for near unlimited wealth and power.

In a good world humans have a symbiotic relationship with their AI.
But the world is not a good place. It is filled with evil intentions.

Programming an AI is a position of trust.
Yep, and we've never trusted anyone with an important job to do without getting burned in the ass.

We can't have mad scientists doing that. There has to be oversight.
Like oversight of USAID? Biden's auto-pen? Spending at the DOE? If we cannot even guarantee oversight of the most important things closest to home, how will we ever guarantee the properly restricted, limited use of AI only for good throughout the world?

AI, no matter what other good it may serve, will eventually be used to centralize wealth and power for a few while reducing the rest of the human race to machine servitude; the problem is that many just will not see or admit it until far too late, while those with evil designs will never admit to it.
 
Wow, suddenly the 10th Amendment is important.

If these issues are best resolved by the States, then they can resolve them using their own resources and monies.
 
When is the last time you've seen mankind handle something so big and so dangerous "the right way?"

Probably NASA in the 60's.

And look what mankind did with black powder. We turned it into a weapon to exterminate millions with.
Some (many?) will see AI as a tool for near unlimited wealth and power.

That always happens. To a certain extent it drives advances. But it has to be controlled, regulated. AI is too powerful to leave unattended.

But the world is not a good place. It is filled with evil intentions.


Yep, and we've never trusted anyone with an important job to do without getting burned in the ass.

Geoffrey Hinton is one of the pioneers of modern AI and a Nobel Prize winner. How come our Congress isn't asking him what he thinks, and why?

Like oversight of USAID? Biden's auto-pen? Spending at the DOE? If we cannot even guarantee oversight of the most important things closest to home, how will we ever guarantee the properly restricted, limited use of AI only for good throughout the world?

I'm programming an AI on my Raspberry Pi right now. It's going to help a friend who got paralyzed from spinal surgery. It's not for commercial purposes, it's just "home hobbying" you might say. If it gets out of hand he'll just turn it off.
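For illustration only, here is a minimal sketch of what a hobby voice-command helper on a Raspberry Pi might look like. The libraries (speech_recognition, gpiozero), the GPIO pin, and the spoken commands are assumptions made for the sketch, not details from this post.

# Hypothetical sketch (not from the post): a simple voice-command loop on a
# Raspberry Pi that toggles one GPIO-driven device, e.g. a lamp relay.
# Assumes the speech_recognition and gpiozero packages and a USB microphone.
import speech_recognition as sr
from gpiozero import OutputDevice

relay = OutputDevice(17)        # assumed GPIO pin driving the relay
recognizer = sr.Recognizer()

def listen_once():
    """Capture one utterance from the default microphone and return lowercase text."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""               # speech was unintelligible; ignore it

while True:
    command = listen_once()
    if "light on" in command:
        relay.on()
    elif "light off" in command:
        relay.off()
    elif "stop listening" in command:
        break                   # the "just turn it off" case

The "turn it off" safety valve here is nothing fancier than stopping the script or cutting power to the Pi.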

But Joe Gangster down the street has access to all the same technology, and who knows what he'll use it for. Sell more drugs faster, or whatever. Find more people to rob. It would be very hard to control that kind of thing.

AI, no matter what other good it may serve, will eventually be used to centralize wealth and power for a few while reducing the rest of the human race to machine servitude; the problem is that many just will not see or admit it until far too late, while those with evil designs will never admit to it.

It's more the really big companies with financial obligations we need to control. Google has done a lot of great work with AI, but their public implementation is biased and slanted - and it's only dangerous to the extent that so many people use it. It's almost at the level of a public utility, that way.

Maybe that's a useful way to look at it. Same as the power or gas company, or even the insurance industry, or banking or something
 
Probably NASA in the 60's.
Gus Grissom might disagree with that.

That always happens. To a certain extent it drives advances. But it has to be controlled, regulated. AI is too powerful to leave unattended.
My point exactly.

Geoffrey Hinton is one of the pioneers of modern AI and a Nobel Prize winner. How come our Congress isn't asking him what he thinks, and why?
That is a very good question.

I'm programming an AI on my Raspberry Pi right now. It's going to help a friend who got paralyzed from spinal surgery. It's not for commercial purposes, it's just "home hobbying" you might say. If it gets out of hand he'll just turn it off.
My sympathies to your friend; hope he is well.

But Joe Gangster down the street has access to all the same technology, and who knows what he'll use it for. Sell more drugs faster, or whatever. Find more people to rob. It would be very hard to control that kind of thing.
Certainly.

It's more the really big companies with financial obligations we need to control. Google has done a lot of great work with AI, but their public implementation is biased and slanted
Wow, now there is a shocker. :smoke:

- and it's only dangerous to the extent that so many people use it. It's almost at the level of a public utility, that way.
Maybe that's a useful way to look at it. Same as the power or gas company, or even the insurance industry, or banking or something
I can accept that, which then raises the question of whether, and why, AI should already be in the hands of the general public.
 
The fake "Chinese AI robots going crazy" videos are specifically designed to slow down American AI development.
 
