predictive coding explains consciousness

scruffy

I'm gonna throw a couple of facts at you, and by the end of the post you'll understand what they mean.

1. Of the 100 billion or so neurons in a human brain, about 20 billion are in the cerebral cortex. Let's be conservative and say 10 billion. Question: what is the smallest interval of time that can be resolved by ten billion neurons? Answer: the refractory period of a neuron (about 1 msec) divided by 10 billion. So about 10^-13 seconds - a tenth of a picosecond.

2. The length of a human brain from front to back is about 10 cm. At the speed of light (3x10^10 cm/sec) it takes about 1/3 nanosecond for an electromagnetic signal to get from one end to the other.
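The arithmetic behind both facts can be checked in a few lines (a sketch; the neuron count, refractory period, and brain length are just the round numbers quoted above):

```python
# Back-of-the-envelope numbers from the two "facts" above.
N_NEURONS = 10e9        # conservative cortical neuron count
REFRACTORY_S = 1e-3     # ~1 msec refractory period per neuron

# If N neurons can interleave their firing, the finest resolvable
# interval is the refractory period divided by the number of neurons.
dt = REFRACTORY_S / N_NEURONS
print(f"finest interval: {dt:.1e} s")          # ~1e-13 s, a tenth of a picosecond

BRAIN_LENGTH_CM = 10.0
SPEED_OF_LIGHT_CM_S = 3e10
transit = BRAIN_LENGTH_CM / SPEED_OF_LIGHT_CM_S
print(f"light transit time: {transit:.2e} s")  # ~3.3e-10 s, about 1/3 ns
```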

In an earlier post I showed the "timeline" of brain electrical potentials. It's just an oriented line segment, and the orientation is "the arrow of time imposed by the environment", in other words causality and all that jazz. The evoked potential timeline extends for about 300 msec in either direction from "now".

Here is the key piece of evidence: the electric potential timeline is a space code. Any observer looking at it will see a pattern of moving dots, just like a visual image in the retina. And, the primary characteristic of the observer is that it sees the entire timeline all at once. In other words, the electrical potential timeline has been mapped into some other system using an omniconnected neural network. This is a characteristic and repeated architecture in the brain, it's used over and over again.

So now, look at the times. Neurons are pretty slow, so it takes 600 msec to get from one end of the timeline to the other. That 600 msec is the duration of an entire voluntary action sequence, as the intention is formulated and translated into a motor command, then executed, and finally the sensory results emerge on the other side of "now".

Imagine you're an observer and you can watch this entire pattern traveling through the timeline. Depending on where you're looking (what part of the timeline you're paying attention to), you will see different patterns, and they're all pretty logical. For example, visual cortex area V1 always does the same thing: whenever a visual image goes into it, it reports the spatial frequency content. So if the observer is paying attention to the input to V1 (about -70 msec on the timeline) and sees a particular pattern entering it, he can PREDICT what the output will be. After learning V1's transfer function (through repeated correlation analysis), he can say what the expected pattern will be at timeline position t = -100 (the output of V1, assuming V1 takes 30 msec to process a signal) a few milliseconds before it appears. In other words the observer is making a prediction - he knows how the system is going to behave before it even behaves.
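Here's a toy sketch of that idea. Everything in it is hypothetical - a random linear map stands in for V1's transfer function, and the observer learns it by correlation (least squares) from observed input/output pairs, then predicts the output for a new input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for V1: an unknown, fixed linear transfer function
# mapping an input pattern to an output pattern 30 "msec" later.
true_W = rng.normal(size=(8, 8))

# The observer watches many (input, delayed output) pairs on the timeline...
X = rng.normal(size=(500, 8))     # patterns entering V1 at t = -70
Y = X @ true_W.T                  # patterns leaving V1 after processing

# ...and learns the transfer function by correlation (least squares).
W_est, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Now it can predict the output for a brand-new input before V1 produces it.
x_new = rng.normal(size=8)
prediction = x_new @ W_est
actual = true_W @ x_new
print(np.allclose(prediction, actual, atol=1e-6))
```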

Look at the times. We can do some further math, but the short story is that a human brain has several orders of magnitude finer temporal resolution than an electromagnetic wave traveling at the speed of light. This is where consciousness lives - it lives in the "dt" that's just ahead of NOW. It is constantly predicting "now", comparing expected results with actual results. From a machine learning standpoint this is nothing more than real-time Bayesian inference - but there's a catch. The catch is called "the Libet experiments".

The Libet experiments were performed by Benjamin Libet in the 80's. They fall into the category called "psychophysics", in which subjects report their actual subjective sensations and perceptions. It's an interesting story: Libet was a physiologist (not a surgeon), but his neurosurgeon colleague let him record from patients' brains during surgery. You should read about these fascinating experiments; here are some links:




The readiness potential is visible in the EEG and can be traced on the timeline. The point being that it takes about 500 msec for the "observer" to react to an observation. And it's even weirder than that - the brain subtracts the 500 msec! It fools us into thinking the observer is aligned with real time.

My moving window theory, the Hawaiian earring model, is the only current model that explains all these details. I'll show you again what the earring looks like:

[attached image: the Hawaiian earring]


In this example you can imagine the timeline extending horizontally below the earring, in such a way that they touch where all the hoops meet. That point is NOW, the current moment.

Here is the architecture of the omniconnected predictive network:

Take for example the biggest circle in the diagram, the largest hoop of the earring. Imagine that each point on the hoop is a mini-observer, looking out at all the other points as if they were a timeline. Now embed these mini-observers into a neural network, and connect it into the timeline in such a way that a dynamic balance is achieved. (This is an important piece: the dynamic balance lets us determine what we're going to pay attention to.) From a machine learning standpoint the timeline is equivalent to a spoken sentence (it's just a glorified time series), and a transformer model can predict the next word with great accuracy. But an actual implementation will not work with a transformer, because transformers are trained with synchronous backpropagation. Success requires asynchronous predictive coding; it's the only way to get the required resolution. (And obviously, if your machine is too slow it ain't gonna work.)
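A minimal sketch of the asynchronous flavor of predictive coding. This is a toy illustration, not the earring network itself: a single latent layer whose units are updated one at a time, in random order, by their prediction errors - rather than all at once as in synchronous backpropagation:

```python
import numpy as np

rng = np.random.default_rng(1)

# A latent vector z predicts an observation x through weights W;
# prediction errors drive the updates. One randomly chosen unit is
# updated per step (asynchronous), never the whole layer at once.
W = rng.normal(size=(4, 3)) * 0.5
x = rng.normal(size=4)        # the "timeline" signal to be predicted
z = np.zeros(3)               # latent causes / expectations

lr = 0.1
for step in range(2000):
    error = x - W @ z                        # prediction error at the input
    i = rng.integers(3)                      # pick ONE latent unit
    z[i] += lr * (W[:, i] @ error - z[i])    # error-driven, leaky update

residual = np.linalg.norm(x - W @ z)
print(residual)   # much smaller than the initial error norm(x)
```

The leaky term acts as a simple prior on the latents, so the loop settles into a ridge-regression-like solution rather than diverging.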

These are the basics. Coupla points here:

A. The earring has fractal structure; it can be described by an algorithm that uses a "bug" (a simple moving agent) to trace out a space-filling curve.

B. Dynamics on a fractal surface imply chaotic behavior, and in fact the brain EEG shows this quite clearly in the form of a power-law spectrum.
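Point B can be illustrated with synthetic data: shape white noise into a 1/f power spectrum, then recover the power-law exponent with a straight-line fit in log-log coordinates (the signature of a power law). The signal length and exponent here are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "EEG-like" signal with a 1/f power spectrum, built by
# shaping white noise in the frequency domain.
n = 4096
freqs = np.fft.rfftfreq(n, d=1.0)
shape = np.zeros_like(freqs)
shape[1:] = freqs[1:] ** -0.5        # amplitude ~ f^-1/2  ->  power ~ 1/f
noise = np.fft.rfft(rng.normal(size=n))
signal = np.fft.irfft(noise * shape, n=n)

# Recover the exponent: a line fit in log-log coordinates.
power = np.abs(np.fft.rfft(signal)) ** 2
mask = freqs > 0
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
print(f"fitted spectral slope: {slope:.2f}")   # close to -1 for 1/f power
```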

With this model you can understand the brain, and the role played by each of its parts. The model makes specific predictions that can be tested experimentally. The one-sentence summary is, your brain predicts the next moment faster than it can occur.
 
good lord Scruff!........ :oops: ~S~
Machine learning types will understand what I just said.

If you think about the physics of doing this in an inorganic machine, it seems nearly impossible without photonics.

What I said is very specific and can be tested experimentally. Anyway... back to work. :p
 
Read the link about Hawaiian Earrings.

The math around this is fascinating.

In math-speak the earring is a one-dimensional Peano continuum.

Here are some key points:

Any neighborhood of the base point contains a set homeomorphic to the whole space.

The base point is the only point where the space fails to be semi-locally simply connected. In fact, it is the only point which fails to have a simply connected neighborhood.

Think about this last one. It is in fact a restatement of my observation of the singularity around "now" which I detailed in an earlier thread. "Now" is the only point in the continuum that cannot be fully predicted.

The group structure is at the heart of this model. It details the allowable transformations, and lets us calculate them. If I'm not too old and foggy yet, I should be able to figure this out in six months or so. :p But frankly I'm much more interested in the fractal structure. This is a space filling process, it "covers" the singularity at NOW, allowing us to be "conscious" of it. The fractal structure guarantees that "now" looks the same at any radius.

The trick is, the REST of the circle doesn't look the same. If each circle were a Kalman filter, they would come up with different adaptations at each radius. So what you're doing by dropping all these radii into a neural network is asking how to match them, and what overall configuration they belong to.
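For concreteness, here is what one such "circle" might look like as a Kalman filter - a one-dimensional toy with assumed noise settings, keeping a running expectation of a signal and correcting it with each observation. Different radii would just be different noise settings, hence different adaptations:

```python
import numpy as np

rng = np.random.default_rng(5)

# One-dimensional Kalman filter tracking a constant hidden state.
# q and r are assumed process/observation noise variances; varying
# them is the "different adaptation at each radius".
q, r = 1e-3, 0.1
x_est, p_est = 0.0, 1.0     # initial estimate and its variance

true_x = 1.5                # hidden state to be tracked
for _ in range(200):
    z = true_x + rng.normal(scale=np.sqrt(r))   # noisy observation
    p_pred = p_est + q                          # predict
    k = p_pred / (p_pred + r)                   # Kalman gain
    x_est = x_est + k * (z - x_est)             # correct the expectation
    p_est = (1 - k) * p_pred

print(x_est)   # converges near the true value 1.5
```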

It gets even more interesting. You can model each neuron as a Poisson process, to look at the times between spikes. When you do this you can calculate the "expectation" of a particular signal at t=0. The sum total of all such expectations is your prediction for NOW(). Therefore this model is directly accessible from the standpoint of Bayesian inference. Consciousness is, in essence, an earring wrapped around the current moment. It unfolds "NOW" into a continuum of expectations, with a resolution that's better than real time. Using the Poisson approximation you can calculate how many neurons are required to support this process. It seems to be somewhere in the 100 million range, which means some brains are not conscious. Cats and dogs meet the requirement, most insects and fish do not.
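The Poisson bookkeeping is easy to sketch. Assuming a population of independent Poisson neurons with made-up firing rates, the per-neuron "expectation" in a small window dt is rate x dt, and the population-level prediction for NOW is just the sum:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each neuron modeled as a Poisson process with its own firing rate.
# The expectation at NOW for one neuron is rate * dt: the expected
# spike count in the small window dt just ahead of the current moment.
rates_hz = rng.uniform(1.0, 20.0, size=1000)   # hypothetical population
dt = 0.01                                      # 10 ms window ahead of "now"

expected_now = rates_hz * dt                   # per-neuron expectations
prediction = expected_now.sum()                # population-level prediction

# Compare the prediction with one simulated realization of "now".
observed = rng.poisson(rates_hz * dt).sum()
print(prediction, observed)
```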
 
i did, it's a tad heavy for someone like myself who never made it past ohms law Scruff....
Consciousness is, in essence, an earring wrapped around the current moment. It unfolds "NOW" into a continuum of expectations, with a resolution that's better than real time.

well it would appear very conceptual, at first sniff consciousness seems where time, space, physical elements (light, electricity....) and thought collide, or maybe comingle would be a better term?

apologies to divert Scruff....., but what comes to mind for me now is i'm possibly looking up into the mind of God >>>>
[attached image]

~S~
 
Man was created in the image of God, yes? :)

Here's something that helps to understand the earring.


Menger was an Austrian mathematician, basically a simple fellow like me who wants to put 2 and 2 together. He did some interesting stuff, like the Menger Cube (which is a 3d generalization of the fractal called the Sierpinski carpet).

[attached image: the Menger Cube]



The thing about these fractals is that they are all procedural. It requires a "process" to build one of these - basically, an algorithm.

Such an algorithm can be clearly seen, for example, in the construction of the Cantor Dust. It's a very simple algorithm with two steps.

1. Remove the middle third of every line segment
2. Repeat step 1

The procedure goes on forever, or until something makes it stop.
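The two-step algorithm, run literally (five repetitions of step 1 on the unit interval):

```python
# The Cantor Dust algorithm above, applied to the unit interval.
# Segments are (start, end) pairs.
def remove_middle_thirds(segments):
    out = []
    for a, b in segments:
        third = (b - a) / 3.0
        out.append((a, a + third))      # keep the left third
        out.append((b - third, b))      # keep the right third
    return out

segments = [(0.0, 1.0)]
for _ in range(5):                      # "repeat step 1" five times
    segments = remove_middle_thirds(segments)

total_length = sum(b - a for a, b in segments)
print(len(segments), total_length)      # 32 segments, total length (2/3)^5
```

Each pass doubles the number of segments and multiplies the total length by 2/3, so the length shrinks toward zero while the number of pieces explodes.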
 
at first sniff consciousness seems where time, space, physical elements (light, electricity....) and thought collide, or maybe comingle would be a better term?

Excellent observation! At the end of the day, where they comingle is at the level of uncertainty.

Heisenberg says when things get small enough you can't pin them down anymore, you have to describe them statistically in terms of expectations.

The actual physics of this is a bit nasty, it's wave propagation in an anisotropic medium (the brain). But conceptually as you say, any kind of "awareness" requires the ability to predict the future, which mathematically is more or less the same as solving Schrodinger's equation, EXCEPT we're looking at billions of neurons at once instead of a single atom.

This is where the neural network comes in. Doing this on a supercomputer would be "computationally intractable", just the sheer amount of random numbers you'd have to generate would be overwhelming. But in a neural network, every neuron is stochastic, it's its own random number generator to begin with.

From a neural network standpoint: every input generates a "universe of possibilities". Only one of them (at most) will actually materialize, and if you can match it with expectation then you are "conscious of" the materialization.
 
Pigs might be conscious. They have 300-400 million cortical neurons.

Could you imagine being conscious and all you can do is grunt? lol :p
methinks you've effectively put me in my place Mr Scruff.......... :oops: ~S~
 
There is a simple graphical analogy. It involves a tiny bit of topology. A circle is the boundary of a disc, and the Hawaiian earring fills the space inside the circle, turning it back into a disc.

If you start with a piece of string, you can first put a dot in the middle of it with a Sharpie, then you can glue the ends together to get a circle. What's inside the circle? Nothing. It's a hole. You can poke your hand through it.

Now repeat this process with lots of pieces of string of different lengths. When you're done, put them together so all the dots are in the same place. Now you'll notice the space inside the biggest circle is dense, it's filled with other circles and you can no longer put your hand through it.

But here's the trick: on every circle, directly opposite the dot, is the point at infinity. It's the place where we glued the two ends of the timeline together. We can take a ruler and draw a straight line through all the points at infinity, and the result is a new coordinate axis that is perpendicular to the timeline. We acquired this dimension when we compactified the timeline. If we now project all these points at infinity back onto the original timeline, they all end up at the same place - the origin!

This is why modern AI kinda sucks, it can't do any of this stuff, because it doesn't have any dynamics. The only way it can determine "when" something happened is with a clock. And there is no clock in a brain.
 
I'll simplify it for you, since nothing in your post relates to human behavior or personality development. Brain system size, or number of neurons, also has no correlation to the brain power of that system.
The brain is composed of 5 systems. They work separately and together. The most powerful system is the limbic system, which creates emotion and holds memory. It's the oldest system and can take over control of the entire brain in a crisis, real or imagined.
Every thought begins as an emotional message from the LS that goes up to the prefrontal cortex, which must understand correctly what that means. Then goals and actions are set.
The human mind is driven by emotion, not rational thought. Emotion can determine what you think you know, and right or wrong you will act on it. Here's the problem: the LS can't express itself in words, while the PFC understands explicit verbal input. These two systems don't understand the same language. The ability to accurately interpret the limbic emotional message is called coherence. Some have high coherence, others low, and some none. That's why people do stupid things and don't know why. The good news is you can be taught to increase coherence. We can change the actual wiring in the LS by creating experiences, but that's another story. That's how we heal trauma.
 
I'll simplify it for you, since nothing in your post relates to human behavior or personality development.

Thank you. I respect your experience in psychology.

Brain system size, or number of neurons, also has no correlation to the brain power of that system.

That is a provably false statement.

When you're talking mechanisms you're in my domain. Psychologists know very little about synaptic plasticity.

The brain is composed of 5 systems. They work separately and together. The most powerful system is the limbic system, which creates emotion and holds memory. It's the oldest system and can take over control of the entire brain in a crisis, real or imagined.

What are the other four?

Every thought begins as an emotional message from the LS that goes up to the prefrontal cortex, which must understand correctly what that means. Then goals and actions are set.

I can't define "thought". Can you?

The human mind is driven by emotion, not rational thought. Emotion can determine what you think you know, and right or wrong you will act on it. Here's the problem: the LS can't express itself in words, while the PFC understands explicit verbal input. These two systems don't understand the same language. The ability to accurately interpret the limbic emotional message is called coherence. Some have high coherence, others low, and some none. That's why people do stupid things and don't know why. The good news is you can be taught to increase coherence. We can change the actual wiring in the LS by creating experiences, but that's another story. That's how we heal trauma.

That is a very high level view.

In the interest of landing the plane, my response would be something along the lines of:

Bidirectional synapses in the hippocampus cause plateau potentials that rapidly relocate the receptive fields of place cells.

I try not to use words like "thought", all I know about a thought is I experience it. Beyond that, what is it? I can't say. It's certainly not a data pattern, because machines have those and they don't have "thoughts".

As a psychologist you surely know of the Libet experiments. Psychophysics treats a human being like a black box; it seeks to directly measure experience through reporting. However, the reporting of adjacent finger pricks is something different from a verbal report of the stream of consciousness ("free association", let's say).

The point being, all thoughts are not the same. Pain entering your consciousness is not the same as reading a book; they have distinctly different time courses, and different brain areas are activated. Did you ever work with fMRI? You can see it quite clearly that way.

Here is a challenge for you: prove that thoughts originate in the limbic system.
 
As a counterpoint to Libet I'll offer this perspective:


We know that criticality (in the physical sense, like phase transitions) is important for thought. We know this because we can see the power law spectra in the EEG and ECoG.

So, to be more precise with the definition of "thought", we can split the concept into two parts: how thoughts come and go (the "regulation" of thoughts), and what happens inside a thought (the physical character of a thought).

I'd suggest that these are pretty hard to tease apart experimentally.
 
Here is an example, showing the brain areas that activate with two different kinds of thought.

[attached image: activation maps for two kinds of thought]


We would agree that subjective "thoughts" occur in both cases, yes?

And the top images show the brain regions creating them. (Neither is the limbic system).

One of the interesting things about the hippocampus is that it is not required for consciousness or awareness. Hippocampal lesions result in memory deficits, and only tiny cognitive deficits.

My opinion is, the limbic system is something different from what Papez said it was. Mammillary bodies handle head position for scene processing. Anterior cingulate cortex handles attention. None of this has anything to do with emotion.

What the limbic system DOES do is attach value to identified objects. That is not "emotion" though; goal-directed behavior is not emotional. It is more along the lines of what Damasio calls "qualia".
 
Isn't 10 centimeters only 4 inches?
As the crow flies. Let's ask AI.

Gemini says:

The human brain is about the size of two fists, weighing around 3 pounds (1.3-1.4 kg) and comprising about 2% of body weight, with roughly 86 billion neurons. Its average dimensions are about 140mm wide, 167mm long, and 93mm high, and it's roughly 1300 cubic centimeters in volume, though sizes vary, with men generally having slightly larger brains than women

AI says 16.7 cm.

I must have given you a woman's brain. ;)
 
AI says 16.7 cm.

I tend to catch such stuff, as I freely exchange/estimate many units mentally (converting between the two systems, metric and English).
16.7 cm = 167mm = about 6.5 inches.

Easy mnemonic: 100mm = 4 inches.

Also pretty good at freely converting between various numerical bases (base 2, base 10, base 8 and 16).

Just in case you ever need to rent anyone out for a party. :smoke:
 
This is why AI, as it exists today, can never be "conscious".


In four words: it has no dynamics.

It has no brain "waves". It has no critical states. All it has, is a bunch of matrix multiplication.

Matrix multiplication means $ and entertainment but it's not science.

Read the link, it's pretty entertaining. (Especially the bit about LSD lol :p )

Realize that the 0-1 chaos test works off a single time series. That means you can put a single electrode somewhere and apply the test. EEG won't work so well because the high frequencies are heavily attenuated by the skull and skin, but ECoG works great and there's no shortage of open brains.
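The 0-1 test is simple enough to sketch from a single time series. This version follows the Gottwald-Melbourne construction (the driving constant c and the series lengths here are arbitrary choices), fed with a chaotic logistic map and a plain sinusoid as the two test inputs:

```python
import numpy as np

def zero_one_test(phi, c=1.7):
    """0-1 chaos test on a single time series (a sketch).

    Drives two auxiliary coordinates (p, q) with the series; their mean
    square displacement M grows linearly for chaotic input (K ~ 1) and
    stays bounded for regular input (K ~ 0).
    """
    n = len(phi)
    j = np.arange(1, n + 1)
    p = np.cumsum(phi * np.cos(j * c))
    q = np.cumsum(phi * np.sin(j * c))
    ncut = n // 10
    M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                  for k in range(1, ncut)])
    ks = np.arange(1, ncut)
    return np.corrcoef(ks, M)[0, 1]

# Chaotic series: logistic map at r = 4.
x = 0.3
chaotic = []
for _ in range(3000):
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)

# Regular series: a plain sinusoid.
t = np.arange(3000)
regular = np.sin(0.37 * t)

K_chaotic = zero_one_test(np.array(chaotic))
K_regular = zero_one_test(regular)
print(K_chaotic, K_regular)   # K near 1 for chaos, near 0 for regular
```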

The requirement for consciousness, as I've stated it, is that the embedding network has to have finer temporal resolution than a propagating electromagnetic wave. If you understand how the (original) Hopfield network works, it depends entirely on asynchronous updates. It will not work at all with synchronous updates. Hopfield used a Monte Carlo method to ensure that only one neuron at a time was updated.

All of modern AI uses synchronous updates (it's called backpropagation). The only version that "can" use asynchronous updates is predictive coding, and it rarely does, because it's so computationally intensive. It would be very hard to do real-time predictive coding on a computer. Much better dedicated hardware is required: memristors, photonics, and such.
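For reference, here's how small the "asynchronous" ingredient is in the original Hopfield network - a toy with two stored patterns, Hebbian weights, and one unit updated at a time in random order:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two orthogonal +/-1 patterns to store.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]

# Hebbian weights, no self-connections.
W = (patterns.T @ patterns).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted version of the first pattern.
state = patterns[0].copy()
state[0] *= -1
state[3] *= -1

# Asynchronous dynamics: ONE randomly chosen unit updated at a time.
# (Updating one unit at a time guarantees the energy never increases.)
for _ in range(10):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, patterns[0]))   # the stored pattern is recalled
```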
 
Here is a ridiculously primitive example of how important dynamics are.

This example comes to us from the sea snail Aplysia, which has a super-primitive nervous system consisting of 20,000 identified neurons.

Turns out, Aplysia couldn't move at all without neural dynamics. The neurons in its pedal ganglion self-organize into a spiral attractor.


The synapses in the pedal ganglion are adaptive. They form circuits with neurons in the cerebral ganglion. The A, B, and CC cells self organize in complex ways based on differences in the ion channels attached to the neurotransmitter receptors.

The "behavior" is simple, the snail alternates between swimming and walking. "When" it's walking, sometimes it's just walking and other times it's engaged in food seeking behavior. The food seeking behavior is created by plateau potentials in the B63 interneurons in the buccal ganglion, which have their own very unique way of regulating intracellular calcium to get the plateaus.


Significantly, buccal neuron oscillations are transmitted via gap junctions.

So, if we're looking for primitive examples of brain waves, we probably start way back in evolution like this. Sea snails. I've spent a lotta lotta time studying marine organisms. I used to work at SIO where we studied the lateral line organs of sharks and skates. Volterra kernels up the ying yang but no "thoughts". Because we didn't know where to look yet.

But I'll just come right out and say (even though I can't prove it), that you can't have a thought without the dynamics. They're vital for both regulation and content.
 