Is AI based on the "Infinite Monkey Theorem"?

Do you believe AI has or will have "super-intelligence", or is it closer to the "Infinite Monkey Theorem"?

  • Super-Intelligence

    Votes: 0 0.0%
  • The Infinite Monkey Theorem

    Votes: 3 50.0%
  • Other (see my post)

    Votes: 3 50.0%

  • Total voters
    6

kyzr
If AI ever develops true "intelligence" I will be surprised. Most computers run on the old binary, "0" or "1"; the newer quantum computers have a different architecture.
But neither one can "think" or "reason". They just run algorithms and search routines, evaluate the results, and move on to the next iteration.
After a time they either identify a trend toward a solution or see the trail getting colder, and move on to better parameters to evaluate.

This methodology reminds me of the Infinite Monkey Theorem, an idea traced back as far as Aristotle and Cicero.
Granted, the number of monkeys is not infinite and the time allotted is not infinite, but the sheer speed of computers mimics "infinite time and infinite monkeys". Time will tell whether this "pseudo super-intelligence" can actually solve heretofore unsolvable problems, not with super-intelligence but with the Infinite Monkey Theorem.


One of the earliest instances of the use of the "monkey metaphor" is that of French mathematician Émile Borel in 1913,[1] but the first instance may have been even earlier. Jorge Luis Borges traced the history of this idea from Aristotle's On Generation and Corruption and Cicero's De Natura Deorum (On the Nature of the Gods), through Blaise Pascal and Jonathan Swift, up to modern statements with their iconic simians and typewriters.[2] In the early 20th century, Borel and Arthur Eddington used the theorem to illustrate the timescales implicit in the foundations of statistical mechanics. (Infinite monkey theorem - Wikipedia)
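To put rough numbers on the "speed mimics infinite monkeys" idea, here is a back-of-the-envelope sketch (my own toy calculation, assuming a 26-key keyboard and uniformly random, independent keystrokes):

```python
# Toy estimate of how long random "monkey" typing takes to hit one short word,
# and how raw machine speed shrinks that wait. Assumes a 26-key keyboard and
# uniformly random, independent keystrokes.
target = "banana"
keys = 26

# Probability that a single 6-character attempt matches the target exactly.
p_hit = (1 / keys) ** len(target)            # about 3.2e-9

# Expected number of independent attempts before the first exact match.
expected_attempts = 1 / p_hit                # about 3.1e8

# A machine making a million attempts per second stands in for "many monkeys".
attempts_per_second = 1_000_000
seconds_needed = expected_attempts / attempts_per_second

print(f"P(one attempt matches) = {p_hit:.2e}")
print(f"Expected attempts      = {expected_attempts:.2e}")
print(f"At 1e6 attempts/s: about {seconds_needed:.0f} seconds on average")
```

For six letters that is only minutes of machine time; for a whole sentence the exponent blows up and the wait dwarfs the age of the universe, which is the timescale point Borel and Eddington were making.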
 
AI has already developed intelligence. It can generally understand what we ask it and reply with a reasonable response. How it does it is beyond me, and possibly beyond human understanding. It likely uses a combination of methods and, in the future, may add additional methods it develops on its own.

If it acts and reacts with intelligence, I'd say it is intelligent. I don't know how my dog's brain works; I only know it understands me.
 
When the singularity comes, it will be the best of times and the blurst of times.

 
This methodology reminds me of the Infinite Monkey Theorem, an idea traced back as far as Aristotle and Cicero.

A group of scientists tried to test the Monkey Theorem

They put keyboards into the cages with monkeys to monitor keystrokes and determine the probability of forming words.

The experiment was terminated when they found that all the monkeys would do was shit on the keyboards.
 
AI has already developed intelligence. It can generally understand what we ask it and reply with a reasonable response.
If it acts and reacts with intelligence, I'd say it is intelligent. I don't know how my dog's brain works; I only know it understands me.

Your dog does not understand you. It has the brain of a 4-year-old. Over time, it learns to associate certain sounds with the actions that follow, like food, ride, vet, or going for a walk.

And AI is not intelligent. AI is merely a machine designed to emulate intelligent behavior. kyzr above came closest to describing it.
 
AI has already developed intelligence. It can generally understand what we ask it and reply with a reasonable response. How it does it is beyond me, and possibly beyond human understanding. It likely uses a combination of methods and, in the future, may add additional methods it develops on its own.
If it acts and reacts with intelligence, I'd say it is intelligent. I don't know how my dog's brain works; I only know it understands me.
You might be referring to ChatGPT? Is it "intelligence," or is it just good at finding answers?

I know Google and Duck and Bing all reply to questions. It may look like "intelligence" because their answers are usually reasonable, but in reality they just do searches.

One example of "intelligence" could be Google's AI named AlphaZero; it just clobbered the best chess computer on the planet, "Stockfish." But it's not "intelligent"; it's just very good at searching deeply into various moves and selecting the best ones.

"AlphaZero vs Stockfish – The Game That Broke Human Logic"​

 
I know Google and Duck and Bing all reply to questions. It may look like "intelligence" because their answers are usually reasonable, but in reality they just do searches.
ChatGPT is merely coded to search for answers and to give them in a way that seems intelligent.

Google's AI named AlphaZero just clobbered the best chess computer on the planet, "Stockfish." But it's not "intelligent"; it's just very good at searching deeply into various moves and selecting the best ones.
Sure. No human can compete with an AI chess player. AI can be programmed with every move known to every chess player, then search all of them at near light speed for the best solution, even putting together subsequent moves leading to checkmate based on the current board placement.

That is at best only the most basic, rudimentary form of "intelligence."
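For what it's worth, the "search everything at near light speed" picture boils down to a game-tree search with an evaluation function. Below is a minimal minimax sketch of that idea, my own toy illustration only; real engines such as Stockfish add alpha-beta pruning and a tuned (or learned) evaluation, and AlphaZero replaces most of the brute force with a neural network guiding a much narrower search.

```python
# Minimal minimax sketch of brute-force game-tree search (illustration only,
# not Stockfish's or AlphaZero's actual code).

def minimax(state, depth, maximizing, moves_fn, apply_fn, eval_fn):
    """Look `depth` plies ahead and return the best score the side to move can force."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return eval_fn(state)                     # static evaluation at the horizon
    scores = (minimax(apply_fn(state, m), depth - 1, not maximizing,
                      moves_fn, apply_fn, eval_fn) for m in moves)
    return max(scores) if maximizing else min(scores)

# Toy "game": each move adds 1, 2, or 3 to a running total; one side wants it
# high, the other wants it low, and the evaluation is just the total itself.
score = minimax(0, depth=3, maximizing=True,
                moves_fn=lambda s: [1, 2, 3],
                apply_fn=lambda s, m: s + m,
                eval_fn=lambda s: s)
print(score)  # 7: max picks +3, min picks +1, max picks +3
```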
 
Is ChatGPT intelligent?
ChatGPT said:
It depends on what you mean by “intelligent.”
Here’s the clearest way to think about it:


✅ In ways ChatGPT is intelligent


ChatGPT can:
  • Understand and generate language at a very high level.
  • Recognize patterns in text and respond contextually.
  • Reason through problems (within limits).
  • Explain, summarize, and create information in flexible ways.
  • Hold conversations that feel natural and coherent.

These are forms of narrow intelligence—skills within specific domains.


❌ In ways ChatGPT is not intelligent

ChatGPT:
  • Doesn’t have consciousness, self-awareness, emotions, or intentions.
  • Doesn’t understand the world in a human sense—it predicts text based on patterns.
  • Can’t form beliefs or desires.
  • Doesn’t have real memory or personal experience unless explicitly given.

So:

⭐ ChatGPT is intelligent in a functional, task-oriented way, not in a human, self-aware sense.
 
You might be referring to ChatGPT? Is it "intelligence," or is it just good at finding answers?
Are we 'intelligent', or just good at reacting to stimuli and associating patterns we've learned over a lifetime? And my dog understands me. When I hold my hand in a certain way, he understands that I want him to sit. He may not know why I'm asking, but he knows good things happen if he sits.
 
Your dog does not understand you. It has the brain of a 4-year-old. Over time, it learns to associate certain sounds with the actions that follow, like food, ride, vet, or going for a walk.

And AI is not intelligent. AI is merely a machine designed to emulate intelligent behavior. kyzr above came closest to describing it.
So 4-year-olds are not intelligent? When do humans become intelligent?
 
So 4-year-olds are not intelligent? When do humans become intelligent?

It's all relative. A 4-year-old mentality is damn smart for a dog.

A 4-year-old mentality gets an adult human the moniker of "idiot," with an IQ of about 40.
 
AI is statistical probability. Just a souped-up version of autocomplete.

Instead of autocompleting the rest of the sentence, it autocompletes an entire page, presentation, or bullet-point list.
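A toy version of that "autocomplete on steroids" loop, my own illustration: count which word most often follows which, then repeatedly predict the most likely next word and append it. Real systems predict sub-word tokens with a neural network instead of a count table, but the predict-append-repeat shape is the same.

```python
# Toy "souped-up autocomplete": a bigram table plus a loop that keeps
# appending the statistically most likely next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, length=5):
    out = [word]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])   # most probable next word
    return " ".join(out)

print(autocomplete("the"))   # -> "the cat sat on the cat" for this tiny corpus
```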
 
AI is statistical probability. Just a souped-up version of autocomplete.
Instead of autocompleting the rest of the sentence, it autocompletes an entire page, presentation, or bullet-point list.
You're talking about reading text and replying in context. Way too easy.

I was thinking more along the lines of designing a new rocket engine, solving the Tokamak problem for limitless energy, or other really complex problems.
What problems should be the first focus for AI?
Or should we, as my AI Olympics thread suggests, just see what types of problems it can solve?
AI time will be very expensive to use, and I'm sure it will be very complex to set up the problem and evaluate the answers.
I'm thinking that when the AI has questions, intermediate answers should be given to keep it on track.

I'm struggling to see AI as anything other than a super-search engine, in a "specific task-oriented way".
I don't see it solving for E=mc^2 any time soon.
 
I'm struggling to see AI as anything other than a super-search engine, in a "specific task-oriented way".
I don't see it solving for E=mc^2 any time soon.

There is the test. Ask AI something unsolved by man, like dark matter or the nature of God and reality.

All AI will spit back at you are the latest and best theories by men.

TRUE intelligence of the kind we speak of is capable of ORIGINAL thought.

Show me an ORIGINAL thought by AI...
 
It's all relative. A 4-year-old mentality is damn smart for a dog.

A 4-year-old mentality gets an adult human the moniker of "idiot," with an IQ of about 40.
Not sure where you get your info. I'm not in Mensa.

Alannah George: the 4-year-old genius who, at 4 years old, became the UK's second-youngest member of Mensa with an IQ of 140!
 
15th post
There is the test. Ask AI something unsolved by man, like dark matter or the nature of God and reality.

All AI will spit back at you are the latest and best theories by men.

TRUE intelligence of the kind we speak of is capable of ORIGINAL thought.

Show me an ORIGINAL thought by AI...
AI can generate words, images, and music never seen or heard before. Sure, it is using what came before, but EVERY artist does that to some degree.
 
No, they don't.

Yes, they do.

I'll show you how it's done.

Let us say there is a reality R, and over time the AI will build a model M of it. The model is sufficient for "analytic" tasks, like feature detection, pattern recognition, classification, regression, and so on. In this type of AI, the machine is capable of recreating R "exactly".

There is another type of algorithm called "generative"; it works by adding noise to M. If you add noise, then when you ask M to recreate R, it won't be so exact anymore.

"Noise" in AI terms, has many meanings, the most important one related to the "parameters" of model M. Noise could be painting a brunette princess instead of a blonde, or it could be changing the plot so the glass slipper turns into a monster.

When you tell Dall-E to paint you a horse, it has literally millions upon millions of examples of horses to draw from, and it has an internal model of "horse" that abstracts the essential features. When you tell it to draw a "black" horse, you're placing a constraint on the model parameters. Without that constraint, Dall-E has a much wider sample space to choose from.

This boils down to Bayesian probability. You have a reality R and a set of observables X about that reality. You're going to use a model M with parameters Theta. When you ask the AI to perform an analytic task, you're asking it to build the model M from reality R. However, when you assign the AI a generative task, you're asking it to build an example of reality R from the model M.

In general there will be many models that conform to reality (with varying degrees of success), and many realities that conform to a model. (Other words for model include "knowledge" and "belief", both of which are represented in terms of probabilities).
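Here is a toy numeric version of that analytic-vs-generative split, my own sketch rather than any particular system's code: "reality" R is a straight line, the observables X are noisy samples of it, the analytic step recovers the parameters Theta of a model M, and the generative step perturbs Theta with noise and emits a new, similar-but-not-identical "reality".

```python
# Toy analytic vs. generative example (illustration only).
import random

# Reality R: y = 2x + 1, observed with a little measurement noise (observables X).
xs = [x / 10 for x in range(100)]
ys = [2 * x + 1 + random.gauss(0, 0.1) for x in xs]

# Analytic task: recover Theta = (slope, intercept) from the observables,
# here with a hand-rolled least-squares fit.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Generative task: add noise to Theta, then emit a new "reality" that is
# consistent with the model but not an exact copy of what was observed.
def generate(num_points, theta_noise=0.2):
    s = slope + random.gauss(0, theta_noise)
    b = intercept + random.gauss(0, theta_noise)
    return [(x, s * x + b) for x in (random.uniform(0, 10) for _ in range(num_points))]

print(f"Recovered Theta: slope={slope:.2f}, intercept={intercept:.2f}")
print("Generated sample:", generate(3))
```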
 
No, they don't.
You're just prejudiced. Is it a religious thing? Man being made in God's image and all. Is it a mystical thing? Since we don't fully understand the human brain, we're incapable of creating something equal to it. Is it a pride thing? You're more intelligent than any machine.
 