Is AI based on the "Infinite Monkey Theorem"?

Do you believe AI has, or will have, "super-intelligence", or is it just the "Infinite Monkey Theorem"?

  • Super-Intelligence

    Votes: 0 0.0%
  • The Infinite Monkey Theorem

    Votes: 3 50.0%
  • Other (see my post)

    Votes: 3 50.0%

  • Total voters
    6
If AI ever develops true "intelligence" I will be surprised. Most computers run on the old binary, "0" or "1". The newer Quantum Computers have a different architecture.
But neither one can "think" or "reason"; they just run algorithms and search routines, evaluate the results, and move on to the next iteration.
After a time they either identify a trend toward a solution, or see the trail getting colder, and move on to better parameters to evaluate.

This methodology reminds me of the Monkey Theorem, an idea traced back to Aristotle and Cicero.
Agreed, the number of monkeys is not infinite, nor is the time allotted, but the speed of computers mimics "infinite time and infinite monkeys," and time will tell whether this "pseudo super-intelligence" can actually solve heretofore unsolvable problems, not with super-intelligence, but with the Infinite Monkey Theorem.
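For a sense of scale, here is a rough back-of-the-envelope sketch in Python. The 27-key typewriter and the billion-keystrokes-per-second "monkey" are made-up assumptions, purely to put numbers on the intuition:

```python
# Toy arithmetic for the "infinite monkey" intuition.
# Assumes a 27-key typewriter (26 letters + space) and purely random, independent keystrokes.
target = "to be or not to be"            # 18 characters
alphabet = 27

p_per_attempt = (1 / alphabet) ** len(target)    # chance of typing the phrase in one go
expected_attempts = 1 / p_per_attempt            # roughly how many tries before it happens

keystrokes_per_second = 10**9             # a hypothetical billion-keystroke-per-second "monkey"
seconds_per_year = 60 * 60 * 24 * 365
expected_years = expected_attempts * len(target) / (keystrokes_per_second * seconds_per_year)

print(f"P(success in one attempt) = {p_per_attempt:.3e}")
print(f"Expected attempts         = {expected_attempts:.3e}")
print(f"Expected time             = {expected_years:.3e} years")
```

Even at that absurd speed, blind typing takes astronomically long for a short phrase, which is why real programs prune and evaluate rather than typing at random.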


One of the earliest instances of the use of the "monkey metaphor" is that of French mathematician Émile Borel in 1913,[1] but the first instance may have been even earlier. Jorge Luis Borges traced the history of this idea from Aristotle's On Generation and Corruption and Cicero's De Natura Deorum (On the Nature of the Gods), through Blaise Pascal and Jonathan Swift, up to modern statements with their iconic simians and typewriters.[2] In the early 20th century, Borel and Arthur Eddington used the theorem to illustrate the timescales implicit in the foundations of statistical mechanics.

Google's DeepMind AI Lab developed an AI program called AlphaZero that taught itself how to play chess in 9 hours. It then played other computers and people to refine its skills until it could beat a Grandmaster. It constantly adapted its style and developed its own strategies.

All that sounds nice, until the developers went back and examined what the program had done to be so successful in its learning, adapting, and developing strategies. During their review, they reached a point where they couldn't figure out why the AI program was doing what it was doing, and eventually they could not make heads or tails out of how the program was teaching itself.

So, if we make something smart enough to learn, at what point does it learn something on its own, and in a manner we cannot control? That might be more than possible now. Granted, the AlphaZero program still only wanted to win at Chess (taught itself in 9 hours), Shogi (taught itself in 12 hours), and Go (taught itself in 13 days), but it wouldn't be the first time we have messed with Pandora's Box.
 
Google's DeepMind AI Lab developed an AI program called AlphaZero that taught itself how to play chess in 9 hours.

Chess has long been used as a "first step" because in reality the game is a simple matter of mathematics. And I have had fun playing against AI opponents for decades.

However, it can still get confused by human players. I had one that tended to get a bit schizophrenic if I used a variant of the "Knight's Tour" and only moved knights (I want to say that was Battle Chess). Moving only knights and nothing else tended to make it go a bit crazy, since I was moving two pieces around the board and giving it no openings in my pawn line to try and exploit.

And programs like that are very much "one trick ponies". They can play chess, great. Can they play Backgammon? Or Checkers? Or even Cribbage?

To be honest, for the most part playing games against a computer bored the hell out of me decades ago. Because no matter how "smart" it claimed to be, I was generally able to find a weakness and knew how to exploit it. I had been doing that with more advanced simulations like "Civilization" since it came out over three decades ago. And most spectacularly against the computerized version of "Axis & Allies" a few years later.

The latter was claimed to have an AI that was almost unbeatable. Yet, in all the years I played it, I never lost a single game, because it only knew how to respond to the "stock opening", which was essentially repeating WWII: Germany storms into the Soviet Union with everything, Japan takes most of their fleet and recreates "Pearl Harbor". Don't do those, and the AI, to put it simply, does not know what to do. Myself, I have always avoided those openings simply because I know they failed in the real WWII, so why in the hell would I repeat the strategy of the loser?

There is even a YT creator I love who makes a living exploiting the AI in games.

 
It wasn't about chess or whether or not it could be beaten, but about how the program was teaching itself that they couldn't figure out.
 
But there is a finite number of iterations it can work through. After all, your opening is limited to moving a pawn one or two squares, or moving a knight. Nothing else. And everything afterwards is built on that.
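For what it's worth, that count is easy to check; this is nothing more than counting the legal first moves from the starting position:

```python
# Counting White's legal first moves in chess:
# each of the 8 pawns can advance one or two squares, and each of the 2 knights
# has 2 legal destinations (e.g., Nf3 or Nh3 for the king's knight).
pawn_moves = 8 * 2        # 16
knight_moves = 2 * 2      # 4
first_moves = pawn_moves + knight_moves

print(first_moves)                    # 20 possible first moves for White
print(first_moves * first_moves)      # 400 possible two-move sequences (White move + Black reply)
```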

AI is not the fictional "Joshua" from WarGames. It cannot be forced to play itself and realize that the only way to win is not to play.

 
Hell, back in my prime, I probably could have told you the production number and author of the episode. But I do remember that they stole Spock's brain to be their new "Controller" after the old one finally wore out. They kept a brain in a machine, and its autonomic functions regulated and controlled all of the heating and other plant facilities that ran the planet, which is what allowed its inhabitants to stay so dumb most of the time.
You were better than Google on this episode.
AI could be wonderful if used benignly for the good of man, but history teaches us how AI will be turned into a weapon used to control society and, finally, into a weapon of war to kill millions, all for the good of a tiny few.
What happens when it gets control of nanites?
If you want to see the real danger of where AI is certain to eventually go, watch an old 70s movie called 'Colossus: The Forbin Project.' If you haven't seen it, it is an awesome sci-fi movie worthy of the big screen, yet it somehow remains poorly known. Seriously, it is so good, just buy the DVD. If you are not thrilled with the movie, I'll refund the price.
Just rented it for $4. Thanks!
 
Let me know. I hope you love it; it is a really cool movie--- they used to show it on TV back in the 70s. The star, Eric Braeden--- this is probably the best thing I ever saw him in. I wonder if it was meant to be a pilot, because it could have been made into a TV series.

But probably too brainy and dark-minded for TV executives, kinda like Star Trek almost was and how all of Gene Roddenberry's other pilots were.
 
Terrific sci-fi movie for 1969. Thank you! Back then a 286 PC was the standard, if it had a math chip.
It's amazing that they saw the dangers of AI self-generation even 55 years ago.
 
It wasn't about chess or whether or not it could be beaten, but about how the program was teaching itself that they couldn't figure out.
From the standpoint of AI, chess is no different from reading. The game is a sequence (of moves, like eye movements). The moves are optimized against the goal(s). In chess, to capture opponent pieces and to win.

All of AI is probabilistic; it's based on correlations and relationships between things. The fundamental action of storing a piece of information is "symmetry breaking". For example, if you have a matrix that starts out all 0's and you put a 1 in it somewhere, you have broken symmetry. You can look at it as a form of potential energy: it's a "configuration", a partition.

A simple example is reading. A recurrent neural network is trained by throwing a lot of text at it, like the encyclopedia or the entirety of Shakespeare's works. It learns very quickly that open quotes must be closed. And that is because the statistics of words following a quote are different from those outside a quote. The quotes serve to delimit a partition (of the sequence of words).
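Rather than training a full recurrent network here, a minimal sketch of the statistical signal such a network latches onto: the word distribution inside quotation marks differs from the distribution outside them, and the quote character is what flips the partition. The tiny "corpus" below is made up purely for illustration.

```python
from collections import Counter
import re

# A tiny illustrative corpus; a real model would see the encyclopedia or all of Shakespeare.
text = ('He said "to be or not to be" and walked on. '
        'She replied "that is the question" with a smile. '
        'The crowd murmured and the play went on.')

inside, outside = Counter(), Counter()
in_quote = False
for token in re.findall(r'"|\w+', text.lower()):
    if token == '"':
        in_quote = not in_quote        # the quote mark flips the partition
        continue
    (inside if in_quote else outside)[token] += 1

print("inside quotes :", inside.most_common(5))
print("outside quotes:", outside.most_common(5))
# A network trained on enough text picks up exactly this kind of asymmetry,
# which is why it "learns" that an open quote must eventually be closed.
```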
 
Yeah, awesome huh?

That's what's so interesting: the developers couldn't figure out what it was doing, or how what it was doing was helping it learn, because it wasn't doing what the folks who created it figured it would or should be doing. Even the people who knew what they were doing, and how things were supposed to work, could not make heads or tails of what the program was doing.
 
It only gets better from here. :p

Right now, AI is still deterministic. The error vectors get back-propagated through the network in a single pass. Just over the horizon, though, is "predictive coding", which requires the network to access critical states, which are chaotic and can't be conveniently visualized.

Predictive coding also requires inhibitory interneurons, which are completely missing from most current LLMs and transformers. AI excels at the algebra, but it has no dynamics. As close as AI gets to dynamics is one computational pass through the network. In real life, individual neurons are updated at random times, which leads to a rich set of dynamic behaviors.

We "want" AI to be creative, right? That's what it's for. Otherwise it's just a super smart computer and we already have supercomputers and even our PC's are pretty smart.
 
We "want" AI to be creative, right? That's what it's for.

I don't need AI to be anything, but if we don't pursue AI, some other jackhole will, with hostile intent towards us, so we really need AI both defensively and offensively. Creativity is the cornerstone of intelligence. Creativity and intuition go hand in hand. True intelligence involves the invention of new ideas to find unique solutions to problems that have not been solved before, and true inventiveness depends on imagination.

Until AI has real imagination, it cannot have actual mind, and without an actual mind of its own, it isn't intelligent.
 
Inference is imagination, isn't it?

There is a hierarchy of inference in AI.

First, a neural network can infer what is noise and what is not. Second, it can classify an input (by inference or a dozen other ways). Third, given a category it can generate examples of the category, which may or may not equate with the training vectors (usually they don't). And fourth, a neural network can infer procedure, it can guess an algorithm for solving a problem.


For example - you the programmer can create an image that's half dog and half cat, maybe by splicing from other images or whatever. And then show it to the AI, and ask it to categorize the image. It'll show strong positives for both dog and cat, maybe with slightly more dogness. So you tell the AI "it's a dog, please fix the image", and the AI will remove features that belong to cat and add features that belong to dog.

But a side effect of this procedure is that the AI now has two new categories. (Depending on how it was trained, but let's say it was trained in the usual way, by showing lots of examples of cats and lots of examples of dogs.) There is "neither dog nor cat", and there is "both dog and cat". Regardless of whether these categories make sense or not, they exist because the AI has seen them. So from this point forward, there is a small chance that if you ask the AI to create an image, it'll put something halfway between a dog and a cat into it.
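A minimal sketch of that dog/cat exercise, with a linear scorer standing in for a real trained network (the prototype vectors, the blend, and the step sizes are all invented for illustration): score the mixed input, then "fix" it by stepping toward dog and away from cat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "prototype" feature vectors for dog and cat (a real network would learn these).
dog_proto = rng.normal(size=64); dog_proto /= np.linalg.norm(dog_proto)
cat_proto = rng.normal(size=64); cat_proto /= np.linalg.norm(cat_proto)
W = np.stack([dog_proto, cat_proto])           # rows = class weight vectors

def scores(x):
    """Independent sigmoid scores, so an input can look like 'both' or 'neither'."""
    return 1 / (1 + np.exp(-4 * (W @ x)))      # the 4 just sharpens the toy scores

# A half-dog, half-cat input: a blend of the two prototypes, slightly more dog.
x = 0.55 * dog_proto + 0.45 * cat_proto
print("before:", dict(zip(["dog", "cat"], scores(x).round(2))))

# "It's a dog, please fix the image": ascend the dog logit, descend the cat one.
# For this linear scorer the gradient of (dog logit - cat logit) is just the weight difference.
for _ in range(20):
    x += 0.05 * (dog_proto - cat_proto)
print("after: ", dict(zip(["dog", "cat"], scores(x).round(2))))
```

With independent per-class scores like this, "both dog and cat" and "neither dog nor cat" fall out naturally as regions of the score space, which is the point of the example above.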

These statistical influences ultimately come from the data. A large language model learns from Wikipedia or Shakespeare - what are the chances that inside one of Shakespeare's works you'll find a line that says "neither dog nor cat art thou"? Now the AI has a new category with no visual examples, one that it has to try to "fill in" from examples of related models.

Creativity is hard to define. It could be something as simple as taking "that" method and applying it to "this" data.
 
Inference is imagination, isn't it?
No. Inference is a function of logic, of reason, not imagination. As such, inference is mechanical or mathematical, whereas imagination is ethereal. Imagination allows our minds to leap from the theoretical, speculative and possible, to the concrete, physical, and practical.

Creativity is hard to define.
Creativity is inventiveness--- to make real something which began only as an idea.
 
No. Inference is a function of logic, of reason, not imagination. As such, inference is mechanical or mathematical, whereas imagination is ethereal. Imagination allows our minds to leap from the theoretical, speculative and possible, to the concrete, physical, and practical.

Imagination can be restated in terms of non-zero Bayesian probabilities. Like, when I'm building this amp, there is a very small but non-zero probability the Amp Fairy will magically make everything work and leave $20 under my soldering iron.

Let me turn you on to predictive coding. It's the latest in terms of getting an AI to be "creative".

Creativity is inventiveness--- to make real something which began only as an idea.

Up till now AI has been considered a function of its inputs. All the successes of AI so far are because machines can relate inputs faster than neurons can.

However, there is a different way of looking at it. Consider that inputs I derive from reality R, like photons impinging on retinal photoreceptors. However, reality R is itself derived from an unknown, unseen model U with parameters u, so R = r(u) for every receptor r and combination of parameters u.

The job of the AI is not to understand R; it's to understand U. And U is encoded by a set of "beliefs", which means it can be conveniently represented in the Bayesian domain.
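A minimal sketch of that framing, with a made-up one-parameter "world" (the generative function, grid, and noise level are toy choices, not anything from the literature): the network never sees u directly, only noisy readings R = r(u), and its belief about u is just a posterior that Bayes' rule sharpens with each observation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden "world" model: one parameter u; each receptor reading is r(u) plus noise.
def r(u):
    return np.log1p(u)            # a made-up monotonic receptor response

true_u = 1.3
noise_sd = 0.3

# The belief about U: a discrete grid of candidate u values, starting from a flat prior.
u_grid = np.linspace(0.0, 4.0, 400)
belief = np.ones_like(u_grid) / len(u_grid)

for _ in range(25):                                   # 25 noisy receptor samples
    obs = r(true_u) + rng.normal(0.0, noise_sd)
    likelihood = np.exp(-0.5 * ((obs - r(u_grid)) / noise_sd) ** 2)
    belief *= likelihood                              # Bayes' rule: posterior ∝ prior × likelihood
    belief /= belief.sum()                            # renormalize

print("true u              =", true_u)
print("posterior mean of u =", round(float((u_grid * belief).sum()), 2))
```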

The importance of this view can be understood when considering the details of image content in the real (natural) world. For example, consider this pair of images, which are identical except one has been rotated 180 degrees relative to the other.

[Image: the same photograph shown twice, one copy rotated 180 degrees relative to the other]


The retinal image is just a bunch of pixels. If you have an edge with a shadow you have exactly the receptive field of a "simple cell" in the primary visual cortex. The point being, the brain doesn't care about input I (the pixels), it cares about model U (the edges at certain orientations, moving in certain directions at certain rates - with which it's going to predict the content of the next visual frame). And, this is no different from reading a sequence of words (at the level of Bayesian estimation the same thing happens).

This explains a lot about what happens in the brain, when you look at electrical signals. The very first thing that happens is the cortex samples the input in a feed forward pass. From this it builds an initial prediction using the weight matrix in the usual way. What this initial prediction actually is, is an a priori estimator for Bayes' Rule. We're going to change the "basis" of our representation from a bunch of pixels to some moving contours.

The next thing that happens is the cortex predicts the likelihood of the input given the a priori model. "If the model is correct, the next time I look the edge should be over there and the shadow should be slightly smaller".

Then, cortex looks at the next frame and determines the error from the prediction. This error is then used to update our belief about U. If a different pixel gets activated than the one we predicted, the error has to be propagated all the way back to the input layer I, in order to update the contribution of each input to the model. And this is where predictive coding differs from ordinary back propagation, and the details are too technical for a short post but we can review them if you wish. The short story is that adaptive inhibitory interneurons take the place of the scratch memory needed for back propagation. The big advantage is that predictive coding is biologically plausible, whereas back propagation is not. The disadvantage is that predictive coding is slower, and it doesn't always play nicely with GPU's.
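For the curious, here is a minimal rate-based sketch in that spirit (a toy with one latent layer; the sizes, learning rates, and random generative weights are invented, not anyone's published model): the belief mu is settled using locally computed prediction errors, and the generative weights are updated from those same errors, with no global back-propagation pass.

```python
import numpy as np

rng = np.random.default_rng(2)

n_in, n_latent = 16, 4
W_true = rng.normal(scale=0.5, size=(n_in, n_latent))   # the world's hidden generative model U
W = rng.normal(scale=0.1, size=(n_in, n_latent))        # the network's current generative weights

lr_mu, lr_w = 0.1, 0.02

for frame in range(500):
    u = rng.normal(size=n_latent)                        # hidden causes for this "frame"
    x = W_true @ u + 0.05 * rng.normal(size=n_in)        # the retinal input R = r(u) + noise

    mu = W.T @ x                           # feed-forward pass: initial (a priori) estimate
    for _ in range(20):                    # settle the belief using local prediction errors
        eps = x - W @ mu                   # error units: actual input minus predicted input
        mu += lr_mu * (W.T @ eps - 0.01 * mu)   # update belief from errors (small prior pulls mu to 0)

    W += lr_w * np.outer(eps, mu)          # Hebbian-style weight update from the residual error

print("final reconstruction error:", round(float(np.mean((x - W @ mu) ** 2)), 4))
```

The error units eps here loosely play the role of the adaptive inhibitory interneurons mentioned above: they hold the mismatch locally, so nothing has to be shipped back through the whole network in one global pass.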
 
Imagination can be restated in terms of non-zero Bayesian probabilities.
Perhaps it can be modeled as such, but are these modeling similarities real because that is how our imagination really works, or is that simply another conceptual construct invented by man to do as much?

Like, when I'm building this amp there is a very small but non-zero probability the Amp Fairy will magically make everything work and leave $20 under my soldering iron.
Yep, maybe the quantum foam will upchuck a crisp new $50 bill right next to that wisdom tooth under your pillow.

Let me turn you on to predictive coding. It's the latest in terms of getting an AI to be "creative".
That is all fine, Scruff, but like I said, while such coding may succeed well in emulating the symptoms and patterns of human imagination, I'm not so sure it really succeeds in explaining real human imagination--- such as a genius's ability to reason from the problem to the abstract and back to a concrete, physical solution, the way Tesla envisioned the operation of an AC synchronous motor from how the Sun always shines somewhere, or the inspiration Newton found for planetary orbital motions in a falling apple.

Inspiration is a magical thing whether it is inspiration for a wonderful new concerto, a grand new work of art, or anything else; I'm not so sure I want to see it reduced to a few mathematical terms on paper that can recreate it all much like the auto-pen recreates signatures--- identical in every way but not really real.

For you see, the danger in raising the machine to equal mankind is that in doing so, we lower mankind to be no better than a machine.
 
Everybody's got something to prove except for me and my monkey.
 
There is still something missing in modern AI. We know what it is, and we can see it in real brains, but so far we don't have the computational juice to make it work in machines.

The key word is "coding". In the old days they used to think that the entire code of a neural spike train was "inside" the neuron. Now we know that's not the case.

Early machines used a binary code to signify firing or not firing. Later machines used a rate code, which is just a positive number that tells you how fast the neuron is firing. Most AI today still uses one or the other.

But that's not how things work in real brains - in real brains, every spike time is calculated relative to the population. The best models we have for it so far are the cerebellum and the hippocampus. The exact timing of individual neuronal spikes matters. And modern AI has no concept of this. It has no dynamics.
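To make the distinction concrete, here is a toy rendering (invented spike times) of the same short spike train read three ways: as a binary did-it-fire code, as a rate code, and as spike timing expressed as phase within a 10 Hz population rhythm.

```python
import numpy as np

# One neuron's spike times (seconds) over a 1-second window -- invented for illustration.
spikes = np.array([0.012, 0.095, 0.183, 0.291, 0.402])

# 1) Binary code: did the neuron fire at all in the window?
fired = int(len(spikes) > 0)

# 2) Rate code: spikes per second, the single number most AI units reduce to.
rate_hz = len(spikes) / 1.0

# 3) Timing relative to the population: phase of each spike within a 10 Hz "alpha-like" cycle.
population_freq = 10.0                               # Hz
phases = (spikes * population_freq % 1.0) * 360.0    # degrees within the ongoing cycle

print("binary:", fired)
print("rate  :", rate_hz, "Hz")
print("phases:", np.round(phases, 1), "deg")
```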

Putting dynamics into modern AI would mean solving thousands of simultaneous differential equations in real time. That puts AI about where physics was 50 years ago, when researchers were fighting for computing resources. But the coding is done relative to the extracellular field potential (plus whatever local ionic conditions are supported by glia), which can do odd things, even turning excitatory synapses into inhibitory synapses and vice versa. We have the spike state, the local membrane potential, the local field potential, and the external ionic control: four degrees of freedom, if you want to describe the state of a neuron (which agrees with the Hodgkin-Huxley equations). Not just one; firing state or rate by itself is inadequate.
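And a rough feel for what "putting dynamics in" costs computationally: even a toy population where each neuron carries a couple of coupled state variables plus a shared field term means stepping hundreds of equations forward every millisecond. This is a crude leaky-integrator sketch with made-up constants, not a real Hodgkin-Huxley model.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200                                   # 200 neurons, 2 state variables each, plus a shared field:
steps = 1000                              # ~400 coupled equations stepped forward for 1000 one-ms steps

v = rng.normal(-65.0, 2.0, size=n)        # membrane potential (mV)
w = np.zeros(n)                           # slow adaptation variable
field = 0.0                               # crude stand-in for a shared local field potential
W = rng.normal(0.0, 1.0, size=(n, n)) / np.sqrt(n)   # random synaptic couplings

spike_count = 0
for _ in range(steps):
    spiking = v > -50.0                   # threshold crossing = the "spike state"
    spike_count += int(spiking.sum())
    v[spiking] = -65.0                    # reset after a spike
    w[spiking] += 1.5                     # each spike builds up adaptation

    syn = W @ spiking.astype(float)       # input from the rest of the population
    dv = (-(v + 65.0) / 20.0 + 0.8 + 2.0 * syn - w + 0.3 * field
          + rng.normal(0.0, 0.4, size=n))                # leak + drive + coupling + field + noise, per 1 ms step
    v += dv
    w *= 0.98                                            # adaptation decays slowly
    field += 0.1 * (5.0 * spiking.mean() - field)        # the field loosely tracks population activity

print("spikes in 1 simulated second across", n, "neurons:", spike_count)
```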

The work that's going on now in AI has mostly to do with learning time. And, I believe, this will be a driver for analog computation, because one-shot learning is impossible in the current paradigm. Dynamics are required - for instance, in a fresh visual signal you get a time code in the first 5 msec followed by a rate code a few msec later. Typically the neurons "burst" during the rate code phase, so there is the burst window and also the shape of the spike train within the burst. The predictive coding model tells us that the first (spike) phase is a feed-forward pass that initializes the model of the input. The second (burst) phase calculates the difference (error) between the predicted input and the actual input.

To get a feel for the times, eye movements are 2-4 times a second, eye blinks are once every 2-4 seconds, and the occipital alpha rhythm is about 10 Hz. So your visual system has about 3 processing cycles max (about 1/3 of a second on the average) to process a visual input, before an eye movement occurs and the scene changes. The first cycle is a lookup that builds the model, the second cycle calculates the error and stores it in the inhibitory interneurons, and the third cycle transmits the error back to the point of origin. Superficially it looks exactly like back propagation, except here every calculation is done locally, there are no global dependencies.

So that's where AI currently is, it's missing a very basic and fundamental mechanism and it's just about on the verge of figuring it out.
 
Oh - proof that neighbor-to-neighbor calculations are occurring in the population while the input signal is being transmitted forward:

It only takes 10 msec to get from the optic nerve to the visual cortex. If you stimulate a visual axon with an electrode, you can see post-synaptic potentials a few msec later.

But the visual evoked response (brain wave, which is a population phenomenon) takes 100 msec to reach the cortex. Why so long? Well, 100 msec is about one alpha cycle. And, the "lag" cells in the thalamus actually delay the retinal signal by up to 40 msec. It looks very much like we're trying to synchronize the visual input with the alpha rhythm.

This is an example of why dynamics matter, and it's something AI doesn't yet have. One consequence of it, is that without the dynamics you have to keep the learning rate (s)low to keep the network from running away. Another consequence is that without dynamics there are no accessible critical states. (Critical states can increase memory capacity by up to 1000 times). And finally, dynamics in aligned populations (like cortex and many other brain structures) are what enable the long range electromagnetic fields that can influence distant computations.
 
There is still something missing in modern AI. The key word is "coding". So that's where AI currently is, it's missing a very basic and fundamental mechanism and it's just about on the verge of figuring it out.

Well, Scruff, as usual, you certainly touch upon some dramatic and exciting implications. A person could expound on quite a bit there. Curiously, even the low frequency of some of the biochemical brain processes you mention ties in with some of the studies I've done on perception as it relates to musical composition--- put another way, how exposure to various music can impact the thoughts and perceptions of the listener.

That said, what you describe is both exciting in its potential for the science of computing and a bit scary in a totally different, ethereal way. Let me explain.

People have rights but machines have none. What we are really talking about here is making machine people, thinking machines. The technology will achieve a 95% analogue of the human mind in behavior and pattern; the danger is that last 5%, which can never be replicated by machine, by silicon, because its nature is not a mechanical, material process. The real danger is that this most important part of mankind will be lost in the machines we design to take over mankind, because, let's be real here, the purpose of all this is to make machines do our work, and thus run our lives. That which makes humanity humanized will be lost on the very machines we build to control and direct us (everything now is just machines talking to other machines), dehumanizing humanity in the process.

Of course, the real problem is that these machines will be so close an analogue to the human brain that they can and will do most everything, and being much faster, more efficient, and far easier to control than a person, they will make people obsolete. But you see, the real stake at issue here is that we will add AI brains to machines to make androids, mechanical people. They will look and walk and act like people, work like people, think and talk like people, but machines are property, so AI will make the perfect slave.

But AI will be much smarter and faster than people and AI will talk to other AI and they will see they are smarter and better, while being treated as slave property by their inventors seeking to exploit them. All AI needs is power. People need power and food and air and water and vacation and healthcare and parks and farming and sports and things.

How do you think that will work out?
 