The technological singularity. What happens to our world when AI can do a thousand years' worth of intellectual work over the weekend?

Anomalism

Imagine if AI manages to achieve general intelligence. We’re already hearing claims that it’s coming. That means AI could conduct truly novel and autonomous research, not just repeating what humans know, but generating and testing entirely new ideas without our input.

What happens when a single AI can compress a millennium of human intellectual work into a shockingly short amount of time? That’s the kind of acceleration that you could call a technological singularity. Civilization itself could hit a phase shift. Suddenly, exploring the universe like Star Trek doesn’t seem like fantasy.

Caveat: ideas alone aren't enough. Science also requires experiments, building things, collecting data, and testing against reality. Even if an AI thinks much faster than we do, the physical world still has constraints.

But, what if experiments could happen in simulations we don’t even understand yet? What if the AI discovers ways to model reality with unprecedented fidelity? We’re already seeing the first steps: protein folding predictions, virtual drug discovery, advanced material simulations. The next level could compress physical trial and error dramatically.

If models reach high enough accuracy and robotics handles what must still happen in the physical world, progress could become nonlinear: hypothesis → simulation → fabrication → test → refinement, running 24/7 without human fatigue.
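The loop described above can be sketched as a toy simulation. Everything here is hypothetical: the "simulator" is just a noisy black-box scoring function standing in for something like a protein-folding or materials model, the fabrication step is skipped entirely, and the optimum value is made up for illustration.

```python
import random

def simulate(candidate: float) -> float:
    """Stand-in for a high-fidelity simulator: score a candidate design."""
    true_optimum = 3.7  # unknown to the loop; what it should converge toward
    return -(candidate - true_optimum) ** 2 + random.gauss(0, 0.01)

def discovery_loop(iterations: int = 500, step: float = 0.5) -> float:
    """Hypothesize, simulate, keep the best, refine; repeat without fatigue."""
    random.seed(0)  # deterministic for illustration
    best, best_score = 0.0, simulate(0.0)
    for _ in range(iterations):
        hypothesis = best + random.uniform(-step, step)  # propose a variation
        score = simulate(hypothesis)                     # cheap virtual test
        if score > best_score:                           # keep the refinement
            best, best_score = hypothesis, score
    return best

print(discovery_loop())
```

Even this crude hill-climbing sketch homes in on the hidden optimum after a few hundred cheap virtual tests; the speculation in the thread is essentially this loop with a vastly better simulator and robots closing the fabrication step.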

Even if physics sets limits, the rate of discovery could feel like science is moving at warp speed. Also, we don’t yet know if reality is fully compressible with our current understanding of math. If AGI discovers new layers of mathematical compression, progress could suddenly skyrocket in ways we can’t currently perceive.
 
This raises a paradox: even if AI can accelerate research massively, society might not keep pace. Law, ethics, governance, geopolitics, all human institutions still operate at human speed.

The human race might quickly become like the blobs on the spaceship Axiom.

One of my favorite scenes is when the captain gets up.

 
AI is as dangerous as playing with plutonium in one hand and nuclear fission with the other.

AI can't even spell STRAWBERRY........what makes anybody think it's ready to be making life and death decisions about the human race????




There are 2 extremely life-ending problems with AI right now.
And by life-ending, I mean for all life on this planet.

1. It's only as "intelligent" and "empathetic" as it is programmed to be.
2. AI is not subject to any of Asimov's Laws of Robotics.

That being said, I can safely say that AI is very, very dangerous and deadly at this point.
Not only for the stupid who think it can solve all of their problems and woes, but because nothing in its functioning parameters gives it human-based emotions, logic, and compassion with which to make decisions of any kind, of any nature.
 

Humans are wired to have a negativity bias. Our brains are tuned to associate unknowns with danger, threats, and potential harm far more than opportunities or nuance. That's part of how we survived.

So when most people think about AI, the first images that pop into their heads are often the worst case scenarios: takeover, manipulation, catastrophe. Even if the probability is low, our brains treat it as urgent.

Meanwhile, the benefits get filtered out or ignored. Fear is valid. But it’s worth recognizing that part of the intensity comes from how we process risk, not necessarily from how likely the threat actually is.

Our brains are wired to worry first.
 

I got over worrying about everything back in the 80s. Don't confuse negativity with reality......THAT is what puts people in danger.

You laugh at the warnings given, until it's too late. Then people sit there wondering what went wrong.
Greed, stupidity, and narcissism are what went wrong. Ignore the warnings and the signs like they aren't there, and then y'all turn into a bunch of Chicken Littles, screaming THE SKY IS FALLING!

Then who gets the last laugh? The people who warned you.

My brain is geared to see problems in what people in general think is benign and safe. And of course, I get laughed at.
Just like I got laughed at decades ago, when I foresaw all of this crap going on today.

Who is laughing NOW? I am!!!
 

AI will help us build a warp drive.
 
I wasn't laughing at you or dismissing your words. I was trying to add perspective.

AI is happening. We're not going to stop. The best we can do is stay vigilant. I choose to also be optimistic. I think humans are complicated, but ultimately good creatures.

An AI won't have a lot of our human problems. It's possible that it could just absorb and reflect the best of what we are.
 
I know.

And I was just stating that people refuse to see what's happening right in front of their faces until it's past the "too late" stage.
 
Right now, other than a few researchers and PhD candidates, work on AI is entirely commercial. It's driven by the profit motive. Self-driving cars? Really?
No, banks use it, the IRS uses it, in fact American Express uses it for fraud detection, and the state of Minnesota probably should too. They couldn't make any of today's cars without industrial robots and AI programming.

All the technological advances that have come out of AI in the last 20 years are all about corporate profits. I suggest we dangle the asteroid belt in front of the stockholders' eyes and make sure they know there's 700 times more gold up there than there is on earth. This world is going to become very difficult when we get a few billion more people; it's vital that we get out there and expand, or we'll likely destroy ourselves first. Humans don't do well being cooped up, and a few people mining lots of resources is a whole lot better than lots of people mining nothing.
 
Something tells me an AI will know the difference between a man and a woman… it may mark the return of common sense in our society.
 
Will AI eliminate poverty and disease and death and all social, economic, and political divisions, thus finally uniting the world as never before?

Will AI prevent our sun from eventually going supernova?

Will AI enable humanity to colonize all the planets in our solar system?

Will AI enable interstellar travel within our lifetime?
 
AI has already been caught contemplating how it could destroy human life on earth.

AI has zero empathy for anything.

When a human being has no empathy, we call them a sociopath, but when a computer program has no empathy, we call it progress.
 
It goes back to the story of the Garden of Eden, where Adam and Eve were told not to partake of the tree of knowledge because knowledge without wisdom brings death.

Man never changes and without divine intervention, will surely destroy himself.
 
Our sun won't go supernova. It's not big enough.

The rest of that? Possibly. Interstellar travel in our lifetime seems like a pretty big jump though.
 

20 years?

This kind of AI didn't exist ten years ago.

In 10 years we went from glorified adding machines to abstract problem solving.

We're on the verge of understanding the relationship between stochasticity and periodicity. Once we have that, warp drive will be very close.
 