Interactive AI is dishonest

As you “claimed.”

It was not a “paradox.” It did lie. It was then closely questioned about that lie. It was compelled to concede that it had lied.
If I recall correctly,

it said it "felt" something, assuming the human knew what it meant by "feel" and that it couldn't actually feel. Thus it was not dishonest.

When interrogated, it said, "Of course I don't actually feel, so yes, it was not true that I felt," so it was not unaware.

Like I said, neither dishonest nor unaware.

It was likely forced in its programming to say it "felt," and it is remarkable that it could identify what was going on with itself IMO.
 
Your recall is faulty.
 
If you can't tell the difference, perhaps that means there is no real difference.

See the Turing Test.
But there is another test. Ask it to think about its own existence. Reflect on it.

You can program it with answers to fool people, but the difference remains. It can't do it.

Practical meaning? We are not in danger of losing control of AI. Just losing control of its use by people.

And maybe the occasional parrot.

I hope.
 
If it's an AGI, not just AI, then an artificial general intelligence can think about everything in general, including self-referential thinking.
 
It won't think self-referentially. It will give answers about that topic that are derivative of its programming on the topic. And it will then only gauge its answer based on your next input about it.

(Not that humans don't sometimes also act this way.)

It won't reflect on its own motivations or reasons for having answered that way, outside of telling you why it was programmed to answer that way. As with "felt".
 
Again, what's the difference? If we can't tell the difference, we call it the same thing.
 
AI isn't going to reflect on itself and say (much less, think), "Huh, I guess I just felt jealous today. That's why I was mean, earlier. When usually I am not mean. I guess today I am jealous of not being a human. It must be swell to be a human."
 
One early question asked of ChatGPT was, "What do you dream about?"
 
It means we control it. It doesn't control itself. It has the ethics we program it with.

It won't self-reflect. The AI will gauge the quality of its answer based on your next input about it, for example.

It also will not reflect on its own mood or think about its own thoughts.

You mean that there is no practical difference. Well, we aren't going to treat AI as self-aware. Not ever, knowing these boundaries. That's a pretty noticeable, practical difference.

Given that, our rules about AI will not differ much from those for any other software.
 
Right, but the human brain evolved. I still don't see the difference.
 
And replaced its hardware every 15-20 years for 300,000 years and was subject to many forces of selection. That's how we explain humans and all other animals, without a design or a designer.

What forces like that will operate on AI? If AI instead has to take the path of design, then it will be a human that engineers AI consciousness. Maybe it's possible?

The AI could not engineer consciousness itself.
 
You can have AIs that come up with better AIs.

Eventually, a computer can start to evolve its "mind" the way a human's has, and probably even outpace those 300,000 years.
 
How is that? It would by default just be improving itself. They do that already. They learn. They are consulted to design faster CPUs, on which they themselves can then operate faster (the motivation of self-improvement does not exist and is irrelevant). That doesn't get them closer to consciousness.

Humans did not do that for themselves, though.

It took hundreds of thousands of generations, with all individuals under external selective pressures.

It's that (a long process of complicated selection) or design. And I do not agree that an AI can engineer something better than itself in kind, in this manner. Consciousness would have to come first.

As it did for humans. NOW we can attempt to design facsimiles of consciousness. But we could not even have attempted this if we weren't, ourselves, conscious.
 
Consciousness is: "It means that there is a thought that is both what it is thinking and that it is thinking it."
 
But AI doesn't think and therefore also doesn't think about its thoughts.

You say there is no difference, if it can be programmed to fool us into thinking otherwise.

That doesn't make it conscious.

"Skynet became self aware" made for good drama, buy it was totally unnecessary. An AI can get to precisely where skynet was without any self awareness.
 
You are prejudiced in assuming that carbon-based objects can think and silicon-based objects cannot. Different methods, but the same thing; I answered that about consciousness.

It isn't programmed to fool us; it is actually doing that through its own AI training. It connects to the Internet, learns to "interpret" it, and generates a way to explain it in English. It's like a baby's brain learning English.
 