consciousness precedes real time

scruffy

In most time series analysis, past performance is used to predict the future.

Statistics is based on the idea of "stationary" generators (Markov processes being an example).

In probability theory, the Chapman-Kolmogorov equation describes two types of behavior: one is smooth and continuous, like Brownian motion; the other is discontinuous and is called a "jump" process for that reason.

The economist Robert Merton proposed hybrid behavior for a stock ticker, where prices show large jumps but are approximately continuous in between.

But there is an entirely different way of looking at time series. Instead of using past behavior to predict the future, we use actual outcomes (models of future behavior) to interpret the recent past. This is equivalent, under translation, to "selecting from possible futures", which is something quite different from determining historical mean and variance.

When we ask the system to generate possible outcomes we are looking for orthogonal results. Instead of predicting the most likely outcome we are looking for a set of distinctly different outcomes, where the differences are orthogonal. In essence we are asking which "factors" are relevant for future outcomes.

We can thus create a "space" of possible outcomes, where each factor is like a dimension. This way, predicting the most likely outcome is like placing a point in the space, whereas assessing the range of outcomes is like drawing the coordinate axes.

In the brain there are two sources of information about sensory events, one is from the receptors and the other is from motor activity (usually modeled as efference copy). At some point the motor output becomes irreversible, it can no longer be stopped. At this point we can predict the future with near 100% accuracy, if everything else remains stable.

The salient feature of the irreversibility is it occurs "before" now. It actually occurs in the future, so to speak.

The only way to model this is with a system that continually generates the future, "in real time". It has to do the same thing as a room full of analysts at Goldman Sachs.

Is it achievable with AI? Absolutely. It can be done with the $50 AI hat from Raspberry Pi.

The interesting and important piece is the relationship between awareness and irreversibility. This is an entirely new concept, and we'll have to see how it plays out in the next few years.

 

thanks scruffy. does that raspberry have enough power to crunch what must be enormous numbers?

here is amazon today for the "starter kit." add $50 for the "hat". is it programmed in lisp or is there a gui? i think i'll get a starter kit and see what it can do with my portfolio.

GeeekPi Raspberry Pi 5 (4GB) Starter Kit: Pi 5 board, case with active cooler, 64GB card and card readers, HDMI cables, 27W USB-C power supply. 4.6 out of 5 stars (87 ratings), $125.99, free delivery.
 
You can't even get a decent Pi formula

What is the modern way to calculate pi?


The methodology can be extended to what is now a popular and fun way to estimate pi. This involves randomly placing points inside a square and counting how many lie within a circle inscribed in the square. The ratio of points inside the circle to the total number of points can be used to approximate pi. (May 24, 2024)

Estimating Pi Using the Monte Carlo Method and Particle Tracing​
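The method in the quoted snippet is easy to try yourself. A minimal sketch in Python: sample points uniformly in the unit square and count how many land inside the inscribed quarter circle.

```python
import random

def estimate_pi(n_points, seed=0):
    """Estimate pi by sampling points uniformly in the unit square
    and counting how many fall inside the inscribed quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # area ratio: (pi / 4) = inside / total
    return 4.0 * inside / n_points

print(estimate_pi(100_000))  # typically within ~0.01 of 3.14159
```

The error shrinks only like 1/sqrt(n), so this is a fun demonstration of Monte Carlo sampling rather than a serious way to compute pi.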

 
An R-Pi can run Linux. It will run a 32-bit version of Docker. And, it will run Python.

The industry-standard AI learning tool is called TensorFlow (which now includes the Keras API). It's mostly Python.

You can download the Python libraries for free. And most of the training data is also free; for instance there is the MNIST training set (derived from NIST data) for image recognition, and free speech corpora as well.

Google "how do I train an AI", you'll learn a lot. Raspberry Pi has a bunch of educational videos that start with Ubuntu Linux and go all the way through how to program TensorFlow for moving images.
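For a flavor of what a TensorFlow training loop actually does under the hood, here is the same idea boiled down to plain NumPy: a tiny logistic classifier trained by gradient descent on made-up two-class data. The data, learning rate, and epoch count are all hypothetical choices for illustration, not from any MNIST pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: class 0 clustered near (-1,-1), class 1 near (+1,+1).
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(+1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
lr = 0.5

for epoch in range(200):
    p = sigmoid(X @ w + b)           # forward pass
    grad_w = X.T @ (p - y) / len(y)  # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                 # gradient descent step
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(accuracy)
```

TensorFlow automates exactly these steps (forward pass, gradient, update) at scale, with an accelerator like the AI hat doing the heavy lifting.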
 

Here, ...

1721897672349.webp


Off the cuff, what would you say about this data?

This is why you need multiple windows.

A good pattern classifier will pick up 4 peaks and 4 tails. A bad one will get confused.

If you look at the major peak in the middle, you'll see inverse peaks underneath it, and if you follow those to either side you'll see they lead to the side lobes.

The point made in the OP is we are "not" trying to predict time series, we're trying to classify the behavior.

Try this exercise with the $50 AI hat (the $129 version comes bundled with an M.2 interface so you can add lots and lots of memory).

Step 1: train the network by shifting static time series through it. In this step, what will happen is the network will learn "position invariance". The goal being, to get it to respond to the "shape" of the time series rather than its time coordinates.

Step 2: Train again, but run the time series backwards.

Step 3: Now watch what happens when the network has to respond to a real time series in real time.
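Steps 1 and 2 amount to presenting every time shift of the same pattern, forward and then reversed. A sketch of generating that training set in NumPy (the window size and the sine-wave series are arbitrary choices for illustration):

```python
import numpy as np

def shifted_windows(series, window):
    """All contiguous windows of `series`, one per time shift.
    Training on every shift teaches the network to respond to the
    shape of the pattern rather than its absolute time coordinates."""
    n = len(series) - window + 1
    return np.stack([series[i:i + window] for i in range(n)])

series = np.sin(np.linspace(0, 4 * np.pi, 100))
fwd = shifted_windows(series, 20)        # Step 1: every forward shift
bwd = shifted_windows(series[::-1], 20)  # Step 2: same series, reversed
print(fwd.shape, bwd.shape)  # (81, 20) (81, 20)
```

Feeding both `fwd` and `bwd` to the classifier is the augmentation; Step 3 is then just streaming a live series through the same window.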

The easiest way to confuse an AI is with recursive data. For example recursion can be used to generate fractals, and it can also be used to generate a Cantor dust. The AI has a hard time with this UNLESS you can get it to understand both forward and backward processes. For instance, you can run a Cantor dust backwards into the original line segment with a very simple algorithm. An AI that is trained on both dust generation and line reconstruction will properly classify intermediate datasets. An AI that's only trained on one or the other will become confused.
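The forward/backward Cantor construction is simple to write down. A sketch in Python: `cantor_step` generates the dust, and `cantor_unstep` is the "very simple algorithm" that runs it backwards into the original line segment (the function names are mine):

```python
def cantor_step(intervals):
    """One forward step: remove the open middle third of each interval."""
    out = []
    for a, b in intervals:
        t = (b - a) / 3.0
        out += [(a, a + t), (b - t, b)]
    return out

def cantor_unstep(intervals):
    """One backward step: merge each sibling pair back into its parent."""
    return [(intervals[i][0], intervals[i + 1][1])
            for i in range(0, len(intervals), 2)]

dust = [(0.0, 1.0)]
for _ in range(4):
    dust = cantor_step(dust)      # 16 tiny intervals after 4 steps

line = dust
for _ in range(4):
    line = cantor_unstep(line)    # reconstruct the original segment
print(line)  # [(0.0, 1.0)]
```

An AI trained only on `cantor_step` outputs sees a one-way process; showing it `cantor_unstep` as well gives it both directions, which is the point being made above.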

The point is TIME SEQUENCES running through the trained AI. Like, reading. What happens in reading? It's exactly analogous to the shift register model. A word is presented, then the eyes move, rinse and repeat. The "meaning" only becomes apparent after the whole sentence is ingested. Meanwhile, the AI keeps "snapshots" of the previous words. To arrive at the meaning, it has to stack the words in the order they were presented. Yes, it can predict the next word. But that's not what we want it to do - and if we train it that way it won't ever learn to extract meaning from the whole sentence.
 
To see how effective this is, use information geometry.

First map window size to position, so small windows to the left and big ones to the right. Done this way, you're basically doing a spatial Fourier transform, with frequency mapped to the X axis. What you will then see, as a time series shifts through the network, is an estimate of the frequency spectrum at any given time. What we're interested in is how the spectrum changes over time.

This ends up being exactly like a visual scene moving across the retina - and sure enough, the first thing our brains do with a visual image is a spatial frequency decomposition. (These are the well known "orientation columns" in the primary visual cortex.)

Turns out, you don't need a precise mapping to accomplish this. All you need is a little topology, and the self-organizing network will take care of the rest, because the AI will automatically learn the organization it needs. How this happens is clever - an example is provided by a complex log mapping. The Riemann surface ends up looking like this:

1721901445521.webp


You can see the orientations, and the columns. Easy peasy.
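The window-to-frequency mapping described above is, in effect, a sliding spectral decomposition. A rough NumPy sketch, with fixed-size windows and NumPy's FFT standing in for the network's learned decomposition (all parameters are illustrative):

```python
import numpy as np

def sliding_spectrum(series, window, hop):
    """Magnitude spectrum of each sliding window: frequency is mapped
    to one axis, and the sequence of windows shows how the spectrum
    changes over time (a simple spectrogram)."""
    frames = [series[i:i + window]
              for i in range(0, len(series) - window + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

# A chirp-like signal: low frequency early, higher frequency later.
t = np.linspace(0, 1, 512)
sig = np.sin(2 * np.pi * (5 + 20 * t) * t)
spec = sliding_spectrum(sig, window=64, hop=32)
print(spec.shape, spec[0].argmax(), spec[-1].argmax())
```

With a chirp input, the dominant frequency bin drifts upward from early windows to late ones - the spectrum changing over time, which is exactly the quantity of interest here.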
 


pi = 4
You can have the proof
 
i've been learning a lot since last i visited this thread. i've ordered the pi5 "starter kit" but there seem to be a few of these things out there in the "internet of things" that should be coming up surplus soon.

are you familiar with an "arduino?" i'm actually thinking of using the pi to manage a little herd of them, collect their data and send it to my spreadsheet.

anyway, thanks much.
 
You like to tinker? Yes, I believe you could create a network with multiple Arduinos and an R-Pi. You have only two native choices on the Arduino side, either USB or SPI (the pins), and the USB is busy whenever it's connected to your PC. I'd use the pins; you need the SPI driver on the R-Pi, which I believe is native or you can download it. Arduinos are only 5 bucks these days.

Beware though, if you add the AI hat you're chewing up some pins. Do research up front!
 
You are still forgetting one important point. Consciousness has to factor in random unpredictable events that can change how the mind constructs the consciousness that is being experienced in real time. As the quantum psychologists would point out, everything we experience is always changing. Nothing really stays the same at the subatomic level, even though we may not observe it with our human eyes.
 
This is why consciousness requires criticality, which means irreversibility. The Hawaiian earring is basically a fractal, in this case a continuous fractal like a Weierstrass function. If you create it discretely, you can assign any fractal dimension you want - that's why it's so powerful.

The topology is such that the rings go all the way down to 0 - in other words, "dt". This is a requirement for true consciousness. The system needs to be able to cover an infinitesimal interval of time. If you have models at different scales you can extrapolate their features all the way down to the tiniest imaginable interval.

There is another version of this worth considering too. It's the "light cone", from physics. The idea is, at infinite window size the number of possible futures is basically infinite, and as you move in closer to "now" the number of possibilities keeps reducing. At zero, there is only one possibility, it's the one that actually occurs.

Here is a light cone. Consider the future part of it.

1721994744583.png


Far away from now, the number of possibilities is large, so you have a big diameter, corresponding with a large loop of the earring. Here once again is the earring.

1721995027165.webp



If you take the loops and stack them on top of each other you get a light cone.
 
A quick primer about the AI:

AI involves analog computing. Instead of bits that can only be in two states (0 and 1), analog computing uses numbers and functions. The numbers can be anything, positive, negative, integers or floating point.

The weapon of choice for AI is the (artificial) neural network, or ANN. It has neurons and synapses. The neurons fire at a certain rate, which is usually characterized by a number. And, the synapses have "weights" (transmission strengths), which are also numbers. Synapses can be excitatory (positive numbers) or inhibitory (negative numbers). If you have two neurons A and B connected by a synapse S, you get an equation that looks something like this:

B = w(S) * f(A)

where w is the synaptic weight, and f is usually a sigmoidal function of the activity level in A. w is written as a function because the synaptic weight changes through learning. There are all kinds of learning rules - one of the simplest is called Hebbian, after the psychologist D.O. Hebb; in this rule the synaptic weight is made stronger whenever A and B fire at the same time (i.e., correlation).
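The equation above, together with the Hebbian rule, can be sketched in a few lines of NumPy (the learning rate and initial values are arbitrary illustrations):

```python
import numpy as np

def f(x):
    """Sigmoidal activation: firing rate of the presynaptic neuron."""
    return 1.0 / (1.0 + np.exp(-x))

def hebbian_update(w, a_rate, b_rate, lr=0.1):
    """Hebb's rule: strengthen the synapse in proportion to the
    correlation of pre- and postsynaptic activity."""
    return w + lr * a_rate * b_rate

w = 0.5       # synaptic weight w(S)
a = 2.0       # activity level in neuron A

# Repeated co-activation of A and B strengthens the synapse.
for _ in range(10):
    b = w * f(a)                     # B = w(S) * f(A)
    w = hebbian_update(w, f(a), b)
print(w)
```

Note that pure Hebbian growth is unbounded - the weight keeps increasing as long as A and B co-fire - which is why practical learning rules add decay or normalization terms.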

Even though neural networks are analog, they are not necessarily quantum (although they could be). In quantum computers the activity is a probability between 0 and 1, whereas in ordinary ANNs it's just a number.
One of the first ANNs was the Perceptron, invented by Frank Rosenblatt in the late 50's as an "artificial retina". He was trying to get a machine to read, but he only got as far as getting it to memorize the alphabet. That happened in two steps: first the training phase, where the letters of the alphabet are "burned in" to the synapses by repeated presentation, and then the readout phase, where the network is expected to correctly identify the letter presented to it.
A Perceptron is a very elementary network, and suffers from many problems. It's feed-forward only, so it gets confused by similar letters, like A and R. Also its memory is position dependent, so if you train it by showing it the letter A in the middle of the screen, it won't recognize the same letter when it's presented in the corner.
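Here is Rosenblatt's train-then-readout scheme in miniature, sketched in NumPy with two made-up 3x3 "letters" standing in for the alphabet (the patterns and labels are hypothetical):

```python
import numpy as np

# Two tiny 3x3 "letters" flattened to vectors (toy stand-ins).
patterns = {
    "T": np.array([1, 1, 1,  0, 1, 0,  0, 1, 0]),
    "L": np.array([1, 0, 0,  1, 0, 0,  1, 1, 1]),
}
X = np.stack(list(patterns.values()))
y = np.array([0, 1])  # class labels for T and L

# Training phase: the classic perceptron rule "burns in" the weights.
w = np.zeros(9)
b = 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += (yi - pred) * xi   # update only on mistakes
        b += (yi - pred)

# Readout phase: identify each trained letter.
for name, xi in patterns.items():
    print(name, "->", int(w @ xi + b > 0))  # prints T -> 0, L -> 1
```

Notice the position dependence: shift either pattern one pixel sideways and the dot product with `w` changes, so recognition can fail - exactly the weakness described above.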
To solve some of these problems, scientists invented "recurrent" neural networks, which are connected to themselves in feedback loops. And, they gave the network convergent and divergent synaptic connections (many-to-one and one-to-many). This way, the network can memorize sequences, and it also exhibits invariances (so the memory is no longer position dependent). One of the early invariant networks was the Neocognitron, invented by Kunihiko Fukushima in the late 70's. It is important in several ways: first, it opened the study of "layering" in cascaded networks, and second, it was an early form of a convolutional ANN.
This is what it looks like:
1722058390268.png

The biggest advance that led to modern ANNs was the recurrent network designed by the physicist John Hopfield in the early 80's. His model was very simple: every neuron is connected to every other neuron, and the update times are selected randomly using the Monte Carlo method. This is a Hopfield network:
1722058720286.png


The random update times are very important; the network doesn't work when all the neurons are updated at the same time. A further advance was introduced in the form of a Hamiltonian that represents the total energy in the network at any given time. This led to the invention of the Boltzmann machine by Geoffrey Hinton and Terrence Sejnowski, which was the first ANN that could learn to read and talk.
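A Hopfield network with Monte Carlo (asynchronous, randomly ordered) updates fits in a few lines of NumPy. This sketch stores one pattern via a Hebbian outer product, corrupts it, and lets the dynamics roll downhill on the energy surface (the pattern and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Store one pattern via the Hebbian outer-product rule.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
n = len(pattern)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def energy(state):
    """Hopfield Hamiltonian: total energy of the network."""
    return -0.5 * state @ W @ state

# Start from a corrupted copy and update neurons one at a time,
# in random (Monte Carlo) order.
state = pattern.copy()
state[:3] *= -1  # flip three bits
for _ in range(200):
    i = rng.integers(n)
    state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, pattern), energy(state))
```

Updating one randomly chosen neuron at a time guarantees the energy never increases; a synchronous update of all neurons at once can oscillate instead of settling, which is why the random update order matters.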

Now we've come a long way in a very short time; there are multi-headed transformers with selective self-attention and so on. One of the key elements in all ANNs is associative memory, first studied in detail by Teuvo Kohonen in the early 70's. There are two aspects: one is the memorization of content and features, and the other is the associative discovery of relationships. Sequential learning is only possible with recurrent networks; they are in effect "self-associative", relating one pattern to the next.

When a network learns, it creates an energy surface in its phase space. In simple terms, every new pattern goes to a corner in a hypercube, and readout consists of dropping a ball onto the energy surface, which subsequently rolls downhill to find the (local) minimum. This process is very fast; it only takes microseconds with the right device. Lately people have experimented with optics and memristors, both of which are well within 1 msec for pattern recognition.

Why am I telling you this?

Because I want you to understand the light cone.

For instance - how is it that a network can hold all of the possible futures in memory at once?

And why are the possible futures reduced in number as the distance to "now" approaches zero?

The answer is actually pretty simple: compactification.

In a brain, the ultimate destination of all sensory information is memory.

And, memory is ultimately the source of all (voluntary) motor activity.

Memory is the "point at infinity" in a compactified network.

The important thing to understand is the entire timeline operates at "now"; the time series is just a representation. This is why asynchronous updates are so important - essentially they spread out the current moment into a narrow window within which all these operations take place. The window moves through physical time along with the information. So, when we talk about possible futures in the light cone, we're actually talking about a narrow interval on the positive side of a compactified network. And since every neuron is connected to every other neuron, this interval coexists with all the other intervals that are mapped along the earring.

This is the topological unfolding that makes our brains work. And, it requires support circuitry - a "control system" if you will. Using this model, concepts like free will can be easily explained and mapped onto the network. This is the reason for brain waves like the P300, which is a "reset" of the ongoing information flowing along the timeline. And it is also the reason for a separation between short term and long term memory - because consolidation is contextual, it requires episodes to be played forward and backward before being allowed to enter the global associative store.

To get a rough idea of how this looks, take the light cone and bend it into a circle, so the distant past and distant future becomes the same point. Now you have the plane representing "now", intersecting with all the memory on both the past and future sides. They all live at the exact same point, which is the moment in physical time we call "now".
 
Further proof that consciousness precedes real time:

In the light cone analogy, the timeline goes vertically from future to past, so the direction of information points down. Plans are on top, actualization is in the "now" plane, and sensory consequences are on the bottom.

Here's what it looks like in an actual brain. In this drawing the axes have been reversed and rotated 90 degrees, so future is on the left and past is on the right. Here we're looking primarily at Brodmann's area 10 in the prefrontal cortex.

1722077267451.webp



Pretty obvious, yes?

Now - here is a technical point and an important observation:

Traditionally in AI and in neuroscience, "attention" has been studied on the sensory side - as in, which stimuli do we pay attention to.

However that is less than half the story. Note the points labeled "INtention" in the above pic. "INtention" is a form of "ATtention". It equates with free will. When we act, we pay "ATtention" to a small subset of possible futures, out of the many we could choose from. Attention and intention are the same phenomenon; the only difference is which brain areas are engaged.

Both intention and attention are restrictive. INtention is what's responsible for the conical shape on the future side of the timeline. During intention we are restricting the transmission of futures down the timeline. The ones we don't want are being actively inhibited. We know which brain areas are responsible; in particular, there is an area in the anterior cingulate cortex that is a very reliable indicator of intention.

The way to understand the light cone is as follows: draw a vertical line along the Y axis to represent the flow of the time series. Each point along that line, represents a LAYER of neurons, just like in the schematic of the Neo-Cognitron shown earlier. The conical shape represents divergence and convergence. For example, on the sensory side of vision, a single retinal image becomes a dozen images in the cerebral cortex. You have a linear flow from V1 to V2 that splits into a what pathway and a where pathway. The what pathway has areas that process color, orientation, movement, shape, and so on.

On the motor side it's the inverse. You have a dozen areas feeding M1. So when we compactify the timeline, the point at infinity covers the point at 0 with a dozen overlapping multi-mono representations of the retinal image, each responsible for handling a small subset of the detail.
 

AI, as it exists today, does not possess consciousness in the same way humans do. While AI can simulate human-like behavior and intelligence, it lacks self-awareness, emotions, and subjective experiences that are essential components of consciousness.

The ability of AI to generate future possibilities, such as predicting how your kids might look in the next 10 years, is rooted in its capacity to analyze data and patterns to make educated guesses. This process is based on algorithms and statistical models rather than consciousness or understanding of the future.

AI can analyze vast amounts of data and extrapolate trends to make predictions, but these are based on statistical probabilities and patterns rather than genuine foresight or consciousness.

In the case of generating images of your kids in the future, AI algorithms would analyze existing data on growth patterns and facial features to generate a prediction, but it does not involve any form of conscious awareness or understanding.

Here is an example of an AI-generated schoolgirl in the future. Yes, she could be your great-grandchild, but she does not exist now! lol. :)

Exampleofaigirl.webp
 

Thank you, you're making my point.

"Possible futures" in the sense being discussed absolutely do NOT equate with predictions. They're an entirely different animal. I thought I was being quite clear on this point in the discussion of time series. Predictions are static, and the space of impending futures is not. Predictions are unitary, and the radius of the light cone and the earring are not.

Yes, this is not the easiest concept to wrap your mind around; that's why it merits discussion. We're talking about a topological unfolding, in the limit as dt => 0. Its character is such that it embeds the entire timeline into what is basically a single point (a "vanishingly small interval", in the same sense as a Cantor dust).

The compactification has some interesting and unusual properties. One of them is torsion. For example, a Mobius strip has no torsion; it only embeds one way, because it's two dimensional. However the compactified light cone is more like a Klein bottle: it's three dimensional, and when you join the ends you can rotate one or both to achieve torsion.

You can describe the torsion algebraically, with modified Lie groups. Technically the result falls into the category called "orbifolds", they are LOCALLY manifolds but globally they don't qualify because of orientability. You can read more here:



It's not a prediction, it's a geometry.
 
Your brain is a democratic and republican machine. The parts vote about who gets to be in charge, and there are also governors that can regulate the process.

There are two main areas of importance in cerebral voting: the cerebral cortex itself, and an area called the thalamus, which is like an egg that sits on top of the brain stem.

Both the cortex and the thalamus are specialized by function, for example there is an area that only recognizes faces and nothing else. In the visual system we have specific areas for color, orientation, movement, and so on. When these are all engaged on the same stimulus, there is communication between the different areas.

But there is an area that regulates the communication and helps decide which area gets to be in charge at any given time. It is called the reticular nucleus; it sits between the egg and the shell, like a thin layer of richly connected neurons in a ring around the egg.

This is what it looks like:

1722217513577.webp


The reticular nucleus mediates between all the cortical areas that have their hands raised. It points to one of them and says "you get to be in charge for the next few seconds". Then, the designated area of cortex is granted access to the global memory store, and can make use of it to guide the other areas.

This has nothing to do with "attention" per se, which is stimulus driven. It is an internal regulatory and control mechanism. There is another nearby area with similar function; it's called the intralaminar nucleus, and it was the subject of Francis Crick's "searchlight hypothesis". Same basic idea.

1722218006996.webp




This is one of the methods used to control the light cone. Basically it determines which of the cortical areas gets to live at T=0; in other words, it ROTATES the compactified timeline so the chosen area is at the origin.
 
Phase coding is what makes this whole thing work.

It works the same way it does in the hippocampus, where there are place cells and grid cells.

Only, in the reticular nucleus, the places and grid are internal instead of external - they map locations in the cerebral cortex.

You can think of the alpha rhythm in the cortex as working exactly the same way the theta rhythm works in the hippocampus. However in the cortex, it goes around the loops in the earring - which means, it's linear along the timeline just like it's linear along the longitudinal axis of the hippocampus. The hippocampus is not compactified to our knowledge, but the cerebral cortex is, because of the rich intracortical connections. The pyramidal cells there are "almost" omniconnected.

So what we end up with in global memory, is that a single cell can phase-encode the sequence of activity in the entire rest of the cerebral cortex. This is why our global memory is so efficient. It only takes one cell to encode an entire episode.

The trick is, the reticular systems have to know "where" that cell is located, and this programming cannot occur at the same time the timeline is active, because the reticular layer is busy directing traffic. It has to occur during quiescent moments, which is when you see the alpha rhythm strongly synchronized in the EEG because nothing else is going on. You have quiescent moments during the day, but the biggest quiescent moment of all occurs during sleep. So that's when a lot of memory gets consolidated.
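As a purely illustrative sketch of phase coding (the items and mapping here are hypothetical, not actual cortical data): assign each event in an episode a phase offset within one cycle of a carrier rhythm, and the episode's order can then be recovered from phase alone.

```python
import numpy as np

# Hypothetical episode: four cortical "areas" active in this order.
episode = ["motion", "faces", "shape", "color"]

# Phase code: one cell assigns each item a phase offset within a single
# carrier cycle (the way theta-phase coding is modeled in the hippocampus).
phase_of = {item: 2 * np.pi * i / len(episode)
            for i, item in enumerate(episode)}

# The stored code is just {item: phase}, with no explicit order kept.
stored = dict(sorted(phase_of.items()))  # alphabetical; order thrown away

# Decoding: sorting on phase alone recovers the original episode.
decoded = sorted(stored, key=stored.get)
print(decoded == episode)  # True
```

The point of the sketch: a single set of phases is enough to carry an entire sequence, which is the efficiency claim being made for the global memory above.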

I'm fully satisfied this model explains all the "computational" aspects of human consciousness. It still doesn't explain why red is red, or why pain and orgasms feel the way they do, but it's a huge step forward in understanding how we work.

Feelings seem to be processed in the areas around the amygdala, at the tip of the temporal lobe and the adjoining frontal areas like Brodmann 25 (which for example is involved in "major depression"). Social cues and social feelings are handled in the areas right in front of it, in vmPFC the ventromedial prefrontal cortex. These are specifically the areas that are "in front of" T=0 along the timeline. They "precede" real time.
 
This is also why babies sleep a lot. Their brains need an enormous amount of offline consolidation.
 