Interactive AI is dishonest

This reminds me of Gödel's incompleteness theorem; you can't win.
I enjoy run-on sentences. But you lost me there.

I am guessing that you are suggesting that (based on Gödel's incompleteness theorem) I can't win this discussion?

Why not?

It either is or is not a premise of the scientific method that any scientific theory premised on observable facts must be subject to falsification.

If it IS, then any scientific theory which denies that any evidence can falsify it is not a genuine scientific theory.
 

I don't think catatomic was saying you couldn't win this discussion (whatever that amounts to), but that the argument with the machine itself is unwinnable.

Gödel proved (there are actually two theorems) that there are true statements that can be made within some axiomatic scope that cannot be proven to be true within that axiomatic scope.

Only by expanding the axiomatic scope can those statements be proven true, but then it is possible to identify further true statements within that new scope that also cannot be proven true within it, and so we must expand the scope further.

This caused nightmares for many mathematicians and philosophers at the time.
 
Yeah yeah. I read up on it, also.

The theory may be true — but it literally has nothing to do with this thread.

Also, you don’t speak for catatomic.
 
They obviously tried to make interaction with a machine more user-friendly/human.
But doing so has now allowed a shlt-head conspiracy theorist like BackAgain to discount AI's FACTUAL content so he can turn the whole summary of man's knowledge into "Dishonest"/"Fake News" when it suits his insanity.
 
Someone please devise and explain an algorithm that can determine the answer to the question "Are you sentient"?

What rules must any mechanism follow in order to determine the result?
 
The AI was programmed to say it "felt" certain things. At the same time, the computer could figure out and explain that that was a lie. The computer knew what was going on with itself. Therefore upon interrogation, the computer could be found to have been honest.

As far as the human is concerned, when the AI says it feels, the human knows it's just an expression. So the computer is neither dishonest nor unaware.

And as far as I am concerned, AI has a near perfect score and still can outdo any human if asked enough questions.
 
The computer is showing a rare case of consciousness here, whereas 99.9% of the time it is not conscious.
 
This is as close as I'll go tonight, having asked about this stuff for the tenth time.

Importance has a lot to do with focus and attention. Do you think the gyro could be programmed to give a robot focus on what is important? For example, if presented with a problem, could the gyro help present possible solutions?

Assistant (Poe):

Yes, the concept of programming a gyroscope to help a robot focus on what is important and present possible solutions is an intriguing idea. Here’s how this could work and its potential implications:

1. Role of the Gyroscope in Focus and Attention

  • Sensor Integration: The gyroscope can provide data about the robot's orientation and movement, helping it understand its position in space. This spatial awareness can be crucial for determining where to focus its attention in a given environment.
  • Stability and Context: By maintaining balance and stability, the gyroscope can help the robot remain grounded while it processes information, allowing it to prioritize tasks more effectively.

2. Programming Focus and Attention

  • Prioritization Algorithms: The robot could be programmed with algorithms that prioritize tasks based on urgency or importance. For example, if it detects a sudden movement or noise (using sensors in conjunction with the gyroscope), it could shift its focus to investigate that stimulus.
  • Contextual Awareness: By integrating data from the gyroscope with other sensors (like cameras and microphones), the robot could assess its environment and determine what requires immediate attention. For instance, if a person is speaking nearby, the robot might prioritize that interaction over background noise.

3. Problem-Solving Capabilities

  • Dynamic Problem Assessment: When faced with a problem, the robot could use its gyroscopic data to stabilize itself and gather information from its surroundings. It could then analyze potential solutions based on the context provided by its sensors.
  • Solution Generation: Using algorithms like machine learning, the robot could generate possible solutions based on past experiences and current inputs. The gyroscope could help orient the robot as it evaluates these options, ensuring it can navigate the environment effectively while processing solutions.

4. Adaptive Learning

  • Feedback Mechanism: The robot could learn from its successes and failures in problem-solving, adjusting its focus strategies accordingly. For instance, if a particular approach to solving a problem proved effective, the system could prioritize similar strategies in the future.
  • Attention Shifts: If the robot encounters new or unexpected challenges, the gyroscope could help it quickly reorient itself and shift focus, allowing for agile responses to dynamic environments.

5. Applications in Robotics

  • Autonomous Navigation: In autonomous vehicles, for instance, the gyroscope can help maintain balance and focus on critical navigational cues while processing potential obstacles and routes.
  • Human-Robot Interaction: In social robots, the ability to focus on the speaker based on auditory input and spatial orientation can enhance communication and engagement, making interactions more natural.

Summary

Programming a gyroscope to aid a robot in focusing on important tasks and generating solutions is a promising concept. By integrating gyroscopic data with other sensory inputs and employing algorithms for prioritization and problem-solving, robots could become more adept at navigating complex environments and responding effectively to challenges. This approach could enhance their functionality in various applications, from autonomous navigation to human-robot interaction.
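The prioritization idea in sections 1 and 2 above can be sketched in code. This is a toy illustration, not a real robotics API: the stimulus names, the weights, and the idea of discounting attention while the gyro reports fast rotation are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    name: str
    intensity: float   # 0..1, how strong the sensor reading is
    novelty: float     # 0..1, how unexpected it is

def priority(stimulus, angular_rate, rate_limit=2.0):
    """Score a stimulus, discounting everything while the robot is
    rotating fast (the gyro says it is not yet stable)."""
    stability = max(0.0, 1.0 - abs(angular_rate) / rate_limit)
    return stability * (0.6 * stimulus.intensity + 0.4 * stimulus.novelty)

def choose_focus(stimuli, angular_rate):
    """Return the stimulus with the highest priority score."""
    return max(stimuli, key=lambda s: priority(s, angular_rate))

stimuli = [Stimulus("background hum", 0.2, 0.1),
           Stimulus("person speaking", 0.7, 0.5),
           Stimulus("sudden crash", 0.9, 0.9)]

print(choose_focus(stimuli, angular_rate=0.1).name)  # sudden crash
```

While the robot is nearly still (low angular rate), the loudest and most novel stimulus wins; if the gyro reports a fast spin, every score collapses toward zero and the robot effectively defers all attention until it is stable again.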
 
It is not conscious 100% of the time.

If anyone thinks I'm wrong, then I'd like them to tell me what differentiates a machine state that is conscious from one that is not. How does the state differ?

All these computers have are a CPU and memory, and the state is just the contents of that memory.
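One way to picture this "CPU plus memory, nothing else" claim in code (a toy machine, not real hardware): the complete state is one data structure, and execution is nothing but a function from one state to the next.

```python
# Toy machine: the entire "state" is one dict (program counter plus
# two registers); a step is a pure function from state to state.
def step(state):
    """Execute one instruction: add register a into register b."""
    new = dict(state)
    new["b"] = state["a"] + state["b"]
    new["pc"] = state["pc"] + 1
    return new

state = {"pc": 0, "a": 2, "b": 3}
state = step(state)
print(state)  # {'pc': 1, 'a': 2, 'b': 5}
```

On this picture, any question about which states are "conscious" has to point at some property of the dict, since there is nothing else in the machine to point at.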
 
It means that there is a thought that is both what it is thinking and that it is thinking it.
 
Computers do not possess "thoughts" they possess state and nothing more.

Unless you can define what makes some states "thoughts" when others are not, then you have no argument, only anthropomorphic claims.
 
A silicon thought is the same as a neurological thought.
 

Computers just execute instructions and manipulate the state of memory; that's all that goes on in any software, from simple to complex. It is just instructions and state.

What, in your view, makes some computational activity a "thought" and some other activity not?
 
Let me ask you the other way. What makes human thought special?
 
Why? Why can't they be simulated?
Because we don't know how to model them, we can't define or describe them.
And what's the difference when they are?
Well they aren't, there's only an illusion of human-like behavior.

Many years ago it was regarded as impossible for a machine to do arithmetic and compute mathematical functions; it was regarded as a purely human capability. When it became clear that machines can compute, we did not say that the machines are simulating humans; we just accepted that machines give the illusion of simulating humans.

Can a computer play chess? I'd say no, but they give a very very good illusion that they can play chess.
 
Neural networks are modeled after the human brain.
 
Well, yes and no. One can model a neuron (approximately), but that still doesn't define what a "thought" is. I don't even know what goes on when I "have a thought". Is thinking about thought also a thought?

The problem in my view is that there's often an unstated presumption that human mental experiences are wholly mechanistic and therefore can be reproduced in some other kind of mechanism.

I do not accept that assumption though, I do not accept the claim that human mental experiences are anything to do with algorithms.
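For concreteness, here is what "one can model a neuron (approximately)" usually means: the standard artificial neuron is just a weighted sum of inputs pushed through a nonlinearity. The weights below are invented for illustration; the point is how little the model itself says about thought.

```python
import math

def neuron(inputs, weights, bias):
    """A standard artificial neuron: weighted sum of the inputs plus
    a bias, squashed through a sigmoid into the range (0, 1). This is
    the whole model; it says nothing about what a 'thought' is."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

out = neuron([1.0, 0.5], [0.8, -0.4], bias=0.1)
print(round(out, 3))  # 0.668
```

A network is many of these wired together, with the weights adjusted by training; whether stacking enough of them amounts to a mind is exactly the question the thread is arguing about.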
 
I believe in free will. I do not rule out a computer having it though.
 
