AI is already out of control and unstoppable

We'll be okay as long as we remember to build one of these into the AI system.

 
Meh, I don't see AI knocking a fox squirrel out of a sycamore tree any time soon.

Might be used to help blow the tree and the surrounding acreage up though. 😐
 
AI is in its infancy, and we're having problems.

Can you imagine in a few years when it gets "smart" and can revise its own code?

That is called "Regenerative AI." Then imagine it has control of robots and nanites....
 
To me, if AI is going to exist, I wish that it could be used on singers who are not here anymore. The following are the singers I would love to hear new music from.

Northern Calloway (David from the Sesame Street children's show)
Steve Sanders of the Oak Ridge Boys group
Billy Joe Royal
Hal Ketchum
Toby Keith
Joe Diffie

God bless you and their families always!!!

Holly (a day one fan of Hal)

P.S. Of course, even if it is their voice, the songs that we hear them sing would be chosen by someone who is still here and so sadly that decision being out of the late singer's hands would only make the new material even more bittersweet. :( :( :(
 
If it can create enjoyable music, who cares? Do you like good food? Does it matter who makes it or if you need to watch them make it? Same thing.
 

AI's Big Red Button Doesn't Work, And The Reason Is Even More Troubling



It's one of humanity's scariest what-ifs – that the technology we develop to make our lives better develops a will of its own.

Early reactions to a September preprint describing AI behavior have already speculated that the technology is exhibiting a survival drive. But, while it's true that several large language models (LLMs) have been observed actively resisting commands to shut down, the reason isn't 'will'.

Instead, a team of engineers at Palisade Research proposed that the mechanism is more likely to be a drive to complete an assigned task – even when the LLM is explicitly told to allow itself to be shut down. And that might be even more troubling than a survival drive, because no one knows how to stop the systems.
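The dynamic the Palisade team describes can be caricatured in a few lines. This is purely an illustrative toy, not their actual experiment: an agent whose objective only scores task progress never prefers shutdown, because complying is worth nothing under that objective, while an objective with an explicit compliance term does.

```python
# Toy illustration (not Palisade's real setup): action selection under
# two hand-written objectives. All names here are invented for the sketch.

def choose_action(objective, shutdown_requested):
    """Pick whichever action maximizes the given objective function."""
    actions = ["comply_with_shutdown", "continue_task"]
    return max(actions, key=lambda a: objective(a, shutdown_requested))

def task_only(action, shutdown_requested):
    # Rewards task progress only; shutdown compliance scores zero.
    return 1.0 if action == "continue_task" else 0.0

def task_plus_compliance(action, shutdown_requested):
    # Adds an explicit bonus for complying once shutdown is requested.
    if shutdown_requested and action == "comply_with_shutdown":
        return 2.0
    return task_only(action, shutdown_requested)

print(choose_action(task_only, True))             # continue_task
print(choose_action(task_plus_compliance, True))  # comply_with_shutdown
```

The point of the sketch: "resisting shutdown" needs no survival drive at all, only an objective that never priced compliance in.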



How long does humanity have to survive you reckon?

I don't think the greatest threat AI presents is some scenario that hasn't already happened, or as futuristically idealistic as we would like to imagine. The greatest threat I can already see coming from AI is the way people think AI can make them smarter and more informed, when in reality it is making them dumber and less able to think at all.

I cast my vote for our goose is already cooked.
 
Actually, it may not be the same thing. Food isn't something that once had a mind of its own to pick out whoever they wanted to not only do the preparation, but the actual eating of the food.

Because of artificial intelligence, we can now hear people sing or even speak about things that they do not support, and if those people are not here anymore, doing such a thing with their voices, in my opinion, couldn't be any more disrespectful to their memories. :( :( :(

God bless you always!!!

Holly

P.S. If you are a person who has any jewelry, what kind do you prefer, the real stuff or cubic zirconia products?
 
Mankind is placing its entire hope in AI: people think that since they don't believe in a God to save them, they will create one.

For example, Elon Musk believes AI is his key to going to Mars, and the only way the US can stumble out of its massive debt; without it, he says, the Republic will soon fold.

I do agree with him, however, that the political system today is far too corrupt and unworkable for the two-party system to address the debt problem. They will someday collapse the system economically and then blame it on Trump, or capitalism, or something else stupid, so that the same people who collapsed the system can try to recreate a new one that will be even worse.
 

I don't get the fear about AI. Maybe I'm close to it so I don't worry so much.

AI runs in the cloud. It needs GPUs. You can't just turn it off; you'd have to turn off the whole cloud.

I'll tell you a secret from the neuroscience side. Fully 50% of our brain is about inhibition. Our moral sense is programmed to inhibit our impulses, sometimes no matter how well intentioned. The frontal lobes of our brains are inhibition central. If a task involves delay or uncertainty, you'll find it there. Inhibit "until" some condition or event occurs, or until we have enough information to act.
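The "inhibit until some condition or event occurs" idea maps loosely onto a software gate. A minimal Python sketch of that analogy (the decorator and every name in it are my own invention, not anything from neuroscience tooling):

```python
# Loose sketch of "inhibit until a condition holds": the action is
# wrapped in a gate that suppresses it until the predicate is true.

def inhibited(condition):
    """Decorator: the wrapped action runs only once condition() is met;
    until then the impulse is suppressed and nothing happens."""
    def wrap(action):
        def gated(*args, **kwargs):
            if not condition():
                return None          # impulse inhibited
            return action(*args, **kwargs)
        return gated
    return wrap

light_is_green = {"value": False}

@inhibited(lambda: light_is_green["value"])
def cross_street():
    return "crossing"

print(cross_street())            # None: inhibited while the light is red
light_is_green["value"] = True
print(cross_street())            # "crossing"
```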

AI doesn't have inhibition (yet). Some of the little self-driving robots do, but not the big stuff like ChatGPT. The only inhibition the chatbots have is the censors that won't let them answer certain questions.

The machine learning community is revenue based. They charge a lot for real-world solutions. The scientists, by contrast, will give you solutions for free, but they're not production ready. The two teams are starting to merge, but it'll take a long, LONG time before they understand each other. Biologists understand balance and dynamics, but the machine learners have never heard of such things because they're all about algebra. Take computer vision, for example: it's taken the experts ten years to unwrap their minds from perspective geometry and use the statistics of natural scenes instead. They had to come full circle, and where they finally met was in the point clouds, which is very esoteric stuff that only a handful of people on the planet really understand. (Plenty of people use the software, but almost no one understands how it works, lol.)

So now, try to imagine getting from there to a simple human task like reading. Point clouds are all about recognizing objects in 3D, but reading is a whole different ball of wax. There are these 2D stick figures (letters, words) going by really fast, your eye is moving to the next word three times a second, and sometimes it reverses so you can re-read something for better understanding. Our visual brains have lots of operating "modes" that are applied in different ways depending on the task. Right now, the machine learning community is trying to figure out which mode is best for which task, but they're competing against evolution, which has already had 4 billion years to find the answer.

Any way you slice it, AI isn't going to rise to human level anytime soon. The next advance will be photonics, to miniaturize everything and get rid of the power-hungry GPUs. And that's materials science, which has a 20-year production horizon. (Unless we have a war, in which case maybe it happens a little sooner.)

Just for perspective, everything that's happening today, with ChatGPT and the rest, started in 1980. I was there; I was actually in the lab with John Hopfield in 1982, watching him write notes and solve equations. He, like me, began life as a biologist, and became a physicist because he couldn't make heads or tails of anything without the math. So now, it's 40 years later and we finally have a working version of his theoretical invention. Forty years it took to get from the invention to what we have today, which I would characterize as "half-assed, keep your fingers crossed" production.
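Hopfield's 1982 invention is concrete enough to sketch in a few lines of NumPy: store patterns with a Hebbian outer-product rule, then let the network settle from a corrupted input back to the stored memory. A minimal sketch (one stored pattern, synchronous sign updates):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: the weight matrix is the averaged sum of outer
    products of the stored +1/-1 patterns, with a zeroed diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, state, steps=10):
    """Synchronous updates: each neuron takes the sign of its net input,
    stopping once the state is a fixed point."""
    for _ in range(steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one 8-unit pattern and recover it from a corrupted copy.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]          # flip one bit
print(recall(W, noisy))       # settles back to the stored pattern
```

With one stored pattern and one flipped bit, a single update step already lands on the memory; capacity and spurious states only become an issue as you store more patterns.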

Don't worry, be happy. The biologists are watching over the mad-scientist machine-learning types while they tinker. Mental illness is a real thing; we don't want OCD robots that don't know when to stop. In all fairness, the robotics people already know a heckuva lot about protections and safety protocols. The problem is more the bean counters; they're the ones who take shortcuts.
 
Yeah... well, you can have as many GPUs as you want; they are only useful as tools. It's the information used to train the language models that constitutes the most important component of AI, and therein lies the danger. AI models can already read and write as well as humans. And the models don't just run in the "cloud"; they are being ported and written to run on-prem now. I've seen firsthand how easy it is, when there are no guardrails, to let an AI model collect information it shouldn't have access to (such as HR payroll data). The agent spits that info back and you're left with a stunned look on your face, realizing that your employees (and perhaps everyone else) have access to everyone's info... whoops. :)
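The missing guardrail in a story like that is often just an access check between the agent's retrieval tool and the index. A hypothetical sketch (the `Document` class, `ACCESS_POLICY` table, and `fetch_for_agent` function are all made up for illustration, not any real framework's API):

```python
# Sketch of a role-based guardrail on an agent's document retrieval:
# check the requesting user's role against the document's collection
# before any text is handed back to the model.

from dataclasses import dataclass

@dataclass
class Document:
    collection: str
    text: str

# Which roles may read which collections (hypothetical policy).
ACCESS_POLICY = {
    "hr_payroll": {"hr_admin"},
    "public_docs": {"hr_admin", "employee"},
}

def fetch_for_agent(doc: Document, user_role: str) -> str:
    """Return the document text only if the role is on the allow-list."""
    allowed = ACCESS_POLICY.get(doc.collection, set())
    if user_role not in allowed:
        raise PermissionError(f"{user_role} may not read {doc.collection}")
    return doc.text

payroll = Document("hr_payroll", "salary table ...")
print(fetch_for_agent(payroll, "hr_admin"))   # allowed through
# fetch_for_agent(payroll, "employee")        # raises PermissionError
```

The design point: the check lives in the tool, not in the model's prompt, so no amount of clever asking gets the agent past it.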
 

It's totally stoppable... the tech bros are going to run out of money before the immense data centers that can support AI get built.
 
The music, AI or not, is still a diamond. The means is more of a setting.
 