YouTube case at Supreme Court could shape protections for ChatGPT and AI

Tom Paine 1949

WASHINGTON, April 24 (Reuters) - When the U.S. Supreme Court decides in the coming months whether to weaken a powerful shield protecting internet companies, the ruling also could have implications for rapidly developing technologies like artificial intelligence chatbot ChatGPT ….

What the court decides … is relevant beyond social media platforms. Its ruling could influence the emerging debate over whether companies that develop generative AI chatbots like ChatGPT from OpenAI … should be protected from legal claims like defamation or privacy violations …

Some experts forecast that courts may take a middle ground, examining the context in which the AI model generated a potentially harmful response.

In cases in which the AI model appears to paraphrase existing sources, the shield may still apply. But chatbots like ChatGPT have been known to create fictional responses that appear to have no connection to information found elsewhere online, a situation experts said would likely not be protected.

Hany Farid, a technologist and professor at the University of California, Berkeley, said that it stretches the imagination to argue that AI developers should be immune from lawsuits over models that they "programmed, trained and deployed."

"When companies are held responsible in civil litigation for harms from the products they produce, they produce safer products," Farid said. "And when they're not held liable, they produce less safe products."

 
So....
An AI developer cannot be held liable for its chatbot telling a child to drink poison, or for copyright infringement?

Yeah...if I were SCOTUS I'd nix that in a heartbeat.
 
The actual case seems to be about videos “recommended” by a YouTube algorithm, in connection with a terrorist attack in Paris. The larger developing issue will likely be whether media corporations that are usually protected by Section 230 can be sued both for damaging “created content” and for “suggested viewing” pushed by algorithmic / AI programs.

I am inclined to believe that such corporations should be held liable, at least where specific harm and reckless disregard can be shown.

Anyone want to share their own thoughts on these complicated but increasingly important matters?
 
Information bubbles created by an AI are indeed harmful to a population. Many nations have already experienced something like this with social media and its suggestion algorithms. Most of that is an inadvertent by-product of platforms trying to get used more by the population at large. (No different than a restaurant trying to gain more customers.)

But is it exploitative? Is it addictive? Do people (reasonable and prudent) have the willpower or cognitive ability to say "no more" and set limits themselves?

Or is it another case of doctors writing too many prescriptions for OxyContin? Or of Mexican cartels importing Percocet? Does it rise to that level of harm?

Certainly, in the case of searching for information about vaccine side effects, an information bubble created by anti-vaxxers is harmful. As a result, a good many people bought into that obviously harmful rhetoric and have paid the price with ongoing serious harm from the new virus. (By real numbers, not inflated ones, the virus is doing far more damage than the vaccines.)

Pet Rocks, Rubik's Cube, and now social media driven by AI... is it a fad or something worse?
 
ChatGPT is an interesting site. It is more of a novelty though, IMHO. It is fun to play around with if you want a rap song in Gaelic about a six-legged turtle named Stan.

The meatier issue will be who owns the copyright to the content you use it to create. I have used it to generate new lease agreements, which is about as close as you can get to it being LegalZoom 2.0.
 
I think ultimately we need to sort out how to live outside the "territory" of AI, and computers in general. I'm reminded of an interesting premise in the Battlestar Galactica remake: they refused to network the computers that controlled their ships and communications. The risk of interference from the Cylons was too high.

In short, we need to keep one finger on the off button, or be ready to pull the cord when necessary.
 
