Predictive coding explains consciousness

What my model can do in real time that others can't do even on a supercomputer

Check this out. We're going to extract causality from multiple time series in real time. Faster than real time. So fast, you're actually "conscious" of it.


Full article: Discovering the Network Granger Causality in ...

Multi-window Granger causality is a technique for analyzing dynamic causal links in complex, multi-channel time series such as brain signals or financial markets. Instead of assuming constant causality, as static methods do, it applies traditional Granger causality within sliding time windows, or across different time scales (multi-scale), to capture changing relationships. Techniques like rolling windows, window-level dynamic analysis, and spectral-density methods detect how causal influences evolve, identifying when and where relationships strengthen or weaken, typically using statistical tests such as the F-test on forecasting errors within each window.
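
To make the windowing concrete, here's a minimal sketch (a toy construction of mine, not the linked article's method) that slides a fixed window along two synthetic series and runs the standard statsmodels Granger test inside each one. The causal link is switched on halfway through the record, and the per-window F-statistic picks up the change:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 600
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    # y is driven by lagged x only in the second half of the record,
    # so the causal link "switches on" partway through.
    gain = 0.8 if t > n // 2 else 0.0
    y[t] = 0.3 * y[t - 1] + gain * x[t - 2] + 0.5 * rng.standard_normal()

window, step, maxlag = 100, 20, 2
for start in range(0, n - window, step):
    seg = np.column_stack([y[start:start + window], x[start:start + window]])
    res = grangercausalitytests(seg, maxlag=maxlag, verbose=False)
    f_stat, p_val = res[maxlag][0]["ssr_ftest"][:2]
    print(f"window {start:4d}-{start + window:4d}: F={f_stat:7.2f}, p={p_val:.4f}")
```

The F-statistic stays near 1 in early windows and jumps once the window covers the causal regime, which is the whole point of going multi-window instead of static.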

-------

This orientation of information flow from future to past is impressed on the network by the environment. The network is not inherently directional, and in fact we're very interested in the information that flows in the opposite direction, from past to future - because that's how sensory events become motor behavior.

So we compactify the processing window and add a point at infinity, and let's say that point is a device: an observer, looking out on the entire processing interval. That sentence is equivalent to saying we embed the processing window into a 2-D neural network. And if we do the embedding correctly, we gain the ability to rotate the circle, which means we end up with lots of little observers predicting every view all at once. The result is an unfolding of the singularity (the neighborhood of "now") into the embedding space.
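
One toy way to picture that geometry (an illustration only, not a claim about the actual network): wrap the window onto a circle, treat the one extra point as the observer that closes the loop, and note that every rotation hands you another observer's view of the same interval.

```python
import numpy as np

T = 256                                          # samples in the processing window
signal = np.sin(np.linspace(0, 8 * np.pi, T))    # toy contents of the window

# Map sample t to an angle: the open interval closes into a circle, and
# the single extra point "at infinity" is the observer closing the loop.
theta = 2 * np.pi * np.arange(T) / T
embedding = np.column_stack([np.cos(theta), np.sin(theta)])  # 2-D coordinates

def rotate(values, shift):
    # Rotating the circle: re-anchor "now" at a different sample.
    return np.roll(values, shift)

# Eight rotations = eight little observers, each with its own view of
# the same processing interval, all available at once.
views = np.stack([rotate(signal, k) for k in range(0, T, T // 8)])
print(views.shape)  # (8, 256)
```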

"Embedding" is a machine learning terms, but it's also a topological term. The idea of the earring is the same as using a continuum of window widths to analyze a time series. You end up with a spectrum plotted against window width, any it has a shape, and to a neural network that shape is just like any other shape. You can convolve one micro filter with another, to get "micro causality".

In reality the earring is finite and discrete. It cannot be infinite and continuous; that's mathematically forbidden. If it's discrete it has the desired fractal properties; if it's continuous it doesn't. (That's why they call it wild topology.) The information we have from the ramp cells in the entorhinal cortex suggests there are about 7 diameters (+/- 2, ha ha) that constitute the outer loops.
 
Ta-da! Granger Causality Using Neural Networks.

This is the same David Tank from Hopfield and Tank. He's on the right track. To use the Granger method you have to invert a matrix, which neural networks can do in half a dozen ways. Equation 11 is your loss function, and Equation 12 shows the penalties.
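
The paper's loss and penalty terms are its own; as a stand-in, here's the classical closed-form test, which makes the matrix inversion explicit (a sketch under textbook assumptions, not the paper's method):

```python
import numpy as np

def granger_f(y, x, p=2):
    """F-test: do lags of x improve prediction of y beyond y's own lags?"""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k : n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k : n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))

    def rss(X):
        # Normal equations: the matrix inversion at the heart of the method.
        beta = np.linalg.solve(X.T @ X, X.T @ Y)
        r = Y - X @ beta
        return r @ r

    rss_r = rss(np.hstack([ones, lags_y]))           # restricted: y's own past
    rss_f = rss(np.hstack([ones, lags_y, lags_x]))   # full: plus x's past
    df = (n - p) - (2 * p + 1)
    return ((rss_r - rss_f) / p) / (rss_f / df)

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 0.7 * np.roll(x, 1) + 0.5 * rng.standard_normal(500)
print(f"F = {granger_f(y, x):.2f}")   # large F: x Granger-causes y
```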

When you embed a compactified timeline into a two-dimensional Hopfield network and ask it to do a multi-factor Granger analysis, the output is a map like this:

[Image: Granger causality map of the timeline, computed from fMRI time series]


Pretty cool, eh? The brain analyzing the brain. All the correlations along the timeline are shown. This is actually a bunch of time series from fMRI data, but it doesn't matter what the input is as long as you can vectorize it and embed it. The same thing has been done with stock tickers, with seismic analysis, even with oil exploration. Same principle: time series around an event. Exactly like an event-related potential in the brain.

So, for example, if this were a timeline for visual attention, we might have a saccade candidate. The presence and location of the object of interest are exceedingly clear in this map. The left and right sides are approximately symmetrical because the conduction delays are similar.

Farthest of outs. I calculated the resolution of the cerebral cortex under the generous assumption that the minicolumn is the elemental processing structure. It turns out to be around 5 picoseconds. Five. Faster than an Augenblick.
 
Here's another example. This one deals with the localization of auditory cues in space. Which is exactly like the timeline, insofar as low-frequency localization uses delay lines (conduction delays) to pinpoint neural activity.
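
A minimal sketch of the delay-line idea (the sample rate and ITD here are numbers I picked for illustration): cross-correlate the left and right channels, treat each lag as one delay line, and read the location off the coincidence peak.

```python
import numpy as np

fs = 44100                               # sample rate (Hz) - an assumption
rng = np.random.default_rng(2)
src = rng.standard_normal(fs // 10)      # 100 ms of broadband sound

itd = 13                                 # ~0.3 ms: source is off to the left
left = src
right = np.concatenate([np.zeros(itd), src[:-itd]])

max_lag = 40
lags = np.arange(-max_lag, max_lag + 1)
mid = left[max_lag:-max_lag]
xcorr = [np.dot(mid, right[max_lag + L : len(right) - max_lag + L])
         for L in lags]                  # each lag L is one "delay line"

est = lags[int(np.argmax(xcorr))]
print(f"true ITD {itd} samples, estimated {est} ({est / fs * 1e3:.2f} ms)")
```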

And, just like the timeline, the original resolution can be magnified by a factor of at least 10 with a simple calculation of the overlap of Gaussians (addition and subtraction, nothing more).
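
Here's the Gaussian-overlap trick in miniature (toy numbers, assumed tuning width): two broad curves straddle the target, and a readout built from nothing but their difference resolves shifts far smaller than either curve's width.

```python
import numpy as np

width = 10.0                         # tuning-curve sigma (assumed units)
centers = (-5.0, 5.0)                # two detectors straddling the target

def response(center, pos):
    # Broad Gaussian tuning curve centered on `center`.
    return np.exp(-0.5 * ((pos - center) / width) ** 2)

for target in np.linspace(-1.0, 1.0, 5):
    a = response(centers[0], target)
    b = response(centers[1], target)
    readout = a - b                  # addition and subtraction, nothing more
    print(f"target {target:+.2f} -> readout {readout:+.4f}")
```

The readout changes monotonically for target shifts an order of magnitude smaller than the 10-unit tuning width, which is the magnification being claimed.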

This is a lengthy video, but it's worth the watch if you're interested in the subject matter (like maybe if you're a musician or a gamer - Apple pays people huge money to generate accurate sounds in virtual reality).

 
Still looking for a counterexample. Haven't found one yet.

You'll notice that consciousness is associated with all the areas that project in the opposite direction from ERPs. Ultimately, in the limit of uncertainty, the destination of all sensory information is memory, and the source of all motor activity is memory.

The plastic molecules inside synapses, the ones that change shape and effectiveness, operate on a time scale of microseconds or so. Chemical reactions and conformational changes are often in that range. Every time one of them changes state, the state of the whole network changes along with it, since the synaptic matrix, and therefore the energy surface, is being updated.
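
A toy illustration of that last point, with a made-up 8-unit network: nudge one synaptic weight, and the energy assigned to every state shifts with it.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0                  # symmetric synaptic matrix
np.fill_diagonal(W, 0.0)

def energy(s, W):
    return -0.5 * s @ W @ s          # Hopfield energy of state s

s = rng.choice([-1.0, 1.0], size=n)  # an arbitrary network state
before = energy(s, W)
W[2, 5] += 0.1                       # one synapse changes conformation
W[5, 2] += 0.1                       # (keep the matrix symmetric)
after = energy(s, W)
print(f"energy of the same state: {before:.3f} -> {after:.3f}")
```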

With a big enough neural network, connected in the right way, we can self-organize Kalman filters that actually predict these changes. In other words, they know when the brain is "about to" change state. This is where consciousness lives: in the dt just ahead of "now". Somewhere in the range of 0.05 picoseconds to 0.5 nanoseconds ahead.
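
For concreteness, here's a minimal one-step-ahead Kalman predictor (a standard textbook filter standing in for the self-organized version, with made-up noise parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
A, Q, R = 0.95, 0.01, 0.1          # dynamics, process noise, sensor noise (assumed)

x_true, x_hat, P = 0.0, 0.0, 1.0
for t in range(200):
    x_true = A * x_true + rng.normal(0.0, np.sqrt(Q))   # hidden state evolves
    z = x_true + rng.normal(0.0, np.sqrt(R))            # noisy observation
    x_pred = A * x_hat                                  # predict: the state "about to" occur
    P_pred = A * P * A + Q
    K = P_pred / (P_pred + R)                           # Kalman gain
    x_hat = x_pred + K * (z - x_pred)                   # correct with the measurement
    P = (1.0 - K) * P_pred
print(f"estimate {x_hat:+.3f} vs. truth {x_true:+.3f}")
```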

This is experimentally testable and provable. It's easier in a computer simulation, though. Of course the simulation won't be conscious; consciousness has to happen in real time. But it will show us a Kalman filter self-organizing to the right place along the timeline. And we can show the resolution achieved by the basic methods, like Hebbian plasticity (simple correlation), along with some lateral inhibition to focus the target.
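
A sketch of what that simulation might look like (my guess at a setup, not a tested protocol): Hebbian updates from simple correlation, with subtractive lateral inhibition sharpening the estimate of a noisy target.

```python
import numpy as np

rng = np.random.default_rng(5)
n_units, n_steps, lr = 50, 500, 0.01
w = np.full(n_units, 1.0 / n_units)

for _ in range(n_steps):
    target = 25 + rng.integers(-2, 3)                 # noisy target near unit 25
    x = np.exp(-0.5 * ((np.arange(n_units) - target) / 3.0) ** 2)
    y = w * x                                         # unit activations
    y = np.clip(y - 0.5 * y.mean(), 0.0, None)        # subtractive lateral inhibition
    w += lr * y * x                                   # Hebbian: input-output correlation
    w /= w.sum()                                      # normalize to prevent runaway growth

print("weights peak at unit", int(np.argmax(w)))
```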

This is still not the whole story, though. It doesn't account for the phase transitions and critical (chaotic) behavior. At best we can phase-encode a space-mapped time series to a spike train. Which is a significant, huge accomplishment. But the critical behavior adds something, and it isn't totally clear what that is yet. It might be related to memory, like a search or a query or something (under the assumption that chaotic communication is somehow a "separate channel", and in keeping with the concept that plasticity is an integral part of the limit as dt -> 0; without it there's a big chunk of the earring missing, in the microsecond range in the neighborhood of 0).
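
Phase encoding itself is easy to sketch (the 40 Hz reference cycle and the sign convention are my assumptions): map each sample's amplitude to a spike time within the cycle of a reference oscillation.

```python
import numpy as np

rng = np.random.default_rng(6)
signal = rng.random(16)               # amplitudes normalized to [0, 1)

cycle = 25e-3                         # 25 ms reference cycle (~40 Hz), assumed
# One spike per cycle; amplitude sets the phase within that cycle.
spike_times = [k * cycle + a * cycle for k, a in enumerate(signal)]
print([f"{t * 1e3:.1f} ms" for t in spike_times])
```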
 
More evidence:

"…for very brief exposures to the stimulus, we perceive color 40 ms before we perceive form and 80 ms before we perceive motion. Thus, in the visual system color, form and motion are processed independently, resulting in an asynchronous behavioral output from each."

This is fully explained by the timeline model, and by no other model that I know of.
 
So this "binding" of activities occurring at different times, depends on having the signals available for "long enough".

Note that this is the same type of binding we see in the Libet experiments, where the conscious mechanism subtracts half a second from the perceptions.

In the timeline model, this subtraction equates with rotating the circle. It's that simple. All we're doing is choosing one of the reference frames and rotating it.

You can see intuitively that this is necessary. And there is no other model that describes it. Surprisingly enough, after 150 years of research...

You can see the relationship between consciousness and memory very clearly in the Friston video, in the second half where he's showing you the handwriting bit. He shows you five points in the ring attractor, and consciousness, memory, and motor action all align while the handwriting is occurring.
 
AI is a f*cking great research tool. I can't even imagine how long it would take me to find this information in a library.

Hans Berger discovered the EEG in 1924, and here we are a hundred years later finally figuring out how to use it.

 