I'm gonna throw a couple of facts at you, and by the end of the post you'll understand what they mean.
1. Of the 100 billion or so neurons in a human brain, about 20 billion are in the cerebral cortex. Let's be conservative and say 10 billion. Question: what is the smallest interval of time that could be resolved by ten billion neurons firing in a perfectly interleaved ensemble? Answer: the refractory period of a neuron (about 1 msec) divided by 10 billion. So about 10^-13 seconds. A tenth of a picosecond.
2. The length of a human brain from front to back is about 10 cm. At the speed of light (3x10^10 cm/sec) it takes about 1/3 nanosecond for an electromagnetic signal to get from one end to the other.
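The two numbers above are just back-of-envelope arithmetic, so they're easy to check (the "interleaved ensemble" reading of fact 1 is my interpretation of the division):

```python
# Fact 1: smallest interval resolvable by an interleaved ensemble of neurons
refractory_period = 1e-3   # seconds, ~1 msec per neuron
n_neurons = 10e9           # conservative cortical count

dt = refractory_period / n_neurons
print(f"ensemble resolution: {dt:.1e} s")      # ~1e-13 s, a tenth of a picosecond

# Fact 2: time for light to cross the brain front to back
brain_length = 10.0        # cm
c = 3e10                   # cm/s, speed of light
transit = brain_length / c
print(f"light transit time: {transit:.1e} s")  # ~3.3e-10 s, about 1/3 nanosecond
```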
In an earlier post I showed the "timeline" of brain electrical potentials. It's just an oriented line segment, and the orientation is "the arrow of time imposed by the environment", in other words causality and all that jazz. The evoked potential timeline extends for about 300 msec in either direction from "now".
Here is the key piece of evidence: the electric potential timeline is a space code. Any observer looking at it will see a pattern of moving dots, just like a visual image on the retina. And the primary characteristic of the observer is that it sees the entire timeline all at once. In other words, the electrical potential timeline has been mapped into some other system using an omniconnected neural network. This is a characteristic architecture in the brain, used over and over again.
So now, look at the times. Neurons are pretty slow, so it takes 600 msec to get from one end of the timeline to the other. That 600 msec is the duration of an entire voluntary action sequence, as the intention is formulated and translated into a motor command, then executed, and finally the sensory results emerge on the other side of "now".
Imagine you're an observer and you can watch this entire pattern traveling through the timeline. Depending on where you're looking (what part of the timeline you're paying attention to), you will see different patterns, and they're all pretty logical. For example, visual cortex area V1 always does the same thing: whenever a visual image goes into it, it reports the spatial frequency content. So if the observer is paying attention to the input to V1 (about -70 msec on the timeline) and sees a particular pattern entering it, he can PREDICT what the output will be. After learning V1's transfer function (through repeated correlation analysis), he can say what the expected pattern will be at timeline position t = -100 (the output of V1, assuming V1 takes 30 msec to process a signal) a few milliseconds from now. In other words the observer is making a prediction: he knows how the system is going to behave before it even behaves.
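Here's a toy sketch of that "learn the transfer function by correlation" idea. Everything here is illustrative (a stage modeled as a simple gain with a fixed delay is my stand-in, not a claim about what V1 actually computes):

```python
import numpy as np

rng = np.random.default_rng(0)

true_gain = 0.8   # hypothetical stage "transfer function": a plain gain
delay = 3         # processing delay in timeline samples (~30 msec)

# patterns entering the stage, and its delayed, scaled outputs
x = rng.standard_normal(500)
y = np.zeros_like(x)
y[delay:] = true_gain * x[:-delay]

# observer's correlation analysis: estimate the gain from aligned pairs
gain_hat = np.dot(x[:-delay], y[delay:]) / np.dot(x[:-delay], x[:-delay])

# prediction: given the input seen now, the expected output `delay` steps ahead
predicted = gain_hat * x[-1]
```

Once `gain_hat` is learned, the observer can announce the stage's output before the stage produces it, which is the whole point of the passage above.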
Look at the times. We can do some further math, but the short story is that a human brain's temporal resolution is several orders of magnitude finer than the time it takes an electromagnetic wave, traveling at the speed of light, to cross it. This is where consciousness lives: in the "dt" that's just ahead of NOW. It is constantly predicting "now", comparing expected results with actual results. From a machine learning standpoint this is nothing more than real-time Bayesian inference, but there's a catch. The catch is called "the Libet experiments".
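"Real-time Bayesian inference" here can be made concrete with a minimal scalar Kalman-style loop: project a belief forward one dt, compare the prediction with the actual observation, correct by the surprise. All the numbers are illustrative, not physiological:

```python
import random

random.seed(1)

mu, var = 0.0, 1.0      # current belief: mean and variance
process_var = 0.01      # drift added per step when predicting ahead
obs_var = 0.25          # sensory noise variance
truth = 1.0             # the actual value of the signal

for _ in range(50):
    var += process_var                              # predict: belief about "now"
    z = truth + random.gauss(0.0, obs_var ** 0.5)   # actual noisy observation
    k = var / (var + obs_var)                       # confidence-weighted gain
    mu += k * (z - mu)                              # correct by prediction error
    var *= (1.0 - k)
```

After a few dozen steps the belief `mu` has converged near the true signal and its variance is well below the sensory noise: the loop "knows" the present better than any single observation does.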
The Libet experiments were performed by Benjamin Libet in the 1980s. They fall into the category called "psychophysics", in which subjects report their actual subjective sensations and perceptions. It's an interesting story: Libet was a physiologist (not a medical doctor), but he had a doctor friend who let him poke around in people's brains. You should read about these fascinating experiments; here are some links:
Libet Experiments (www.informationphilosopher.com)
Benjamin Libet (en.wikipedia.org)
The readiness potential is visible in the EEG and can be traced on the timeline. The point is that it takes about 500 msec for the "observer" to react to an observation. And it's even weirder than that: the brain subtracts the 500 msec! It fools us into thinking the observer is aligned with real time.
My moving window theory, the Hawaiian earring model, is the only current model that explains all these details. I'll show you again what the earring looks like:
In this example you can imagine the timeline extending horizontally below the earring, in such a way that they touch where all the hoops meet. That point is NOW, the current moment.
Here is the architecture of the omniconnected predictive network:
Take for example the biggest circle in the diagram, the largest hoop of the earring. Imagine that each point on the hoop is a mini-observer, looking out at all the other points as if they were a timeline. Now embed these mini-observers into a neural network, and connect it into the timeline in such a way that a dynamic balance is achieved. (This is an important piece: the dynamic balance determines what we pay attention to.) From a machine learning standpoint the timeline is equivalent to a spoken sentence (it's just a glorified time series), and a transformer model can predict the next word with great accuracy. But an actual implementation will not work with a transformer, because transformers use synchronous backpropagation. Success requires asynchronous predictive coding; it's the only way to get the required resolution. (And obviously, if your machine is too slow it ain't gonna work.)
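To make "asynchronous predictive coding" less abstract, here is a toy sketch of my own construction (not a published architecture): each weight is nudged by a purely local prediction error, with the weights stepped in random order each tick instead of through a synchronous global backward pass. The "timeline" is just a repeating pattern the network can learn to anticipate:

```python
import random

random.seed(0)

def timeline(t):
    # stand-in "timeline": a repeating pattern satisfying x(t) = -x(t-2)
    return [0.0, 1.0, 0.0, -1.0][t % 4]

weights = [0.0, 0.0]   # predict the next value from lags 1 and 2
lr = 0.05

for t in range(2, 2000):
    actual = timeline(t)
    lags = [timeline(t - 1), timeline(t - 2)]
    order = [0, 1]
    random.shuffle(order)               # asynchronous update order
    for i in order:
        predicted = sum(w * x for w, x in zip(weights, lags))
        error = actual - predicted      # local prediction error
        weights[i] += lr * error * lags[i]
```

The weights settle near (0, -1), i.e. the network discovers x(t) = -x(t-2) and can predict the next timeline value before it arrives, with no coordinated backward sweep.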
These are the basics. Coupla points here:
A. The earring has a fractal structure; it can be described by an algorithm that uses a bug to create a space-filling curve.
B. Dynamics on a fractal surface imply the capacity for chaos, and in fact the brain's EEG shows this quite clearly in the form of a power-law spectrum.
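For readers who haven't met a power-law spectrum before, here's what that signature looks like on synthetic data (this is an illustration, not brain data): build a signal whose amplitude spectrum falls off as 1/f, then recover the exponent as the slope of log-power versus log-frequency.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4096
freqs = np.fft.rfftfreq(n, d=1.0)[1:]   # skip the DC bin

# signal with a 1/f amplitude spectrum and random phases
phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
phases[-1] = 0.0                        # keep the Nyquist bin real
spectrum = (1.0 / freqs) * np.exp(1j * phases)
signal = np.fft.irfft(np.concatenate(([0], spectrum)), n)

# estimate the exponent: slope of log-power vs log-frequency
power = np.abs(np.fft.rfft(signal))[1:] ** 2
slope = np.polyfit(np.log(freqs), np.log(power), 1)[0]
# slope comes out close to -2: power ~ 1/f^2 for a 1/f amplitude spectrum
```

A straight line in log-log coordinates, rather than a bump at some preferred frequency, is exactly the scale-free signature the EEG literature reports.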
With this model you can understand the brain, and the role played by each of its parts. The model makes specific predictions that can be tested experimentally. The one-sentence summary is, your brain predicts the next moment faster than it can occur.
