encoding of episodic memories in the brain

scruffy

I found a sleeper.

This is huge, and no one even knows about it.


You have to read through this in detail. There's a lot of math, but it's not too hard and it's intuitive. Mostly you have to understand how they build the "tag vector".

What they're showing us here is pretty astounding. It answers a lot of questions all at the same time.

This model is showing us how memories are stored and retrieved in the brain.

And why "brain waves" are important for that.

STDP stands for "spike timing dependent plasticity", and is only slightly different from a Hebbian learning rule. The idea is that synapses get reinforced on the basis of correlations between input and output. In this particular case the time windows are deliberately asymmetrical.
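
Here's a rough sketch of what a pair-based STDP rule looks like in code. The amplitudes and time constants are just illustrative values, not anything from the paper; the point is the two asymmetric exponential windows, where pre-before-post strengthens the synapse and post-before-pre weakens it.

```python
import numpy as np

# Pair-based STDP sketch (illustrative constants, not the paper's values).
A_PLUS, TAU_PLUS = 0.01, 20.0     # potentiation amplitude / time constant (ms)
A_MINUS, TAU_MINUS = 0.012, 30.0  # depression amplitude / time constant (ms)

def stdp_dw(dt_ms: float) -> float:
    """Weight change for one pre/post spike pair, dt = t_post - t_pre."""
    if dt_ms > 0:
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)    # pre before post: strengthen
    else:
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)  # post before pre: weaken

# Pre fires 5 ms before post -> potentiation; reverse order -> depression.
print(stdp_dw(+5.0), stdp_dw(-5.0))
```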

This model accounts for a plethora of observations in real brains. In the rat hippocampus, memory sequences are played forward before navigation and backward afterwards. And every encoded sequence can be played back at different speeds, both forward and backward. This model does all of that, and provides a precise mathematical definition of the components of an episode and the form in which they're stored.
 
You people don't get excited about this.

Maybe I can explain.

Your average neural network looks something like this:
[Image: a recurrent neural network, with outputs fed back to the inputs]


This is called "recurrent" because the outputs feed back to the inputs. The connection strengths are represented by a matrix, W, which connects every neuron i to every other neuron j, so you have Wij.

When this thing starts out, the weights are random, and there is a "learning rule" that changes them. One of the simplest learning rules is correlation, so the weight increases whenever the input and output are active at the same time.
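
As a toy version of that setup, here's a small recurrent network in numpy: a random weight matrix Wij, a recurrent step where the output feeds back in, and a bare-bones correlation (Hebbian) rule that strengthens a weight whenever its input and output are co-active. The network size, learning rate, and tanh nonlinearity are my assumptions for the sketch, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # number of neurons
W = 0.01 * rng.standard_normal((N, N))   # random initial weights W[i, j]
np.fill_diagonal(W, 0.0)                 # no self-connections

def step(x, W):
    """One recurrent update: outputs feed back in as the next input."""
    return np.tanh(W @ x)

def hebbian_update(W, x_pre, x_post, lr=0.01):
    """Correlation rule: a weight grows when its input and output are co-active."""
    W = W + lr * np.outer(x_post, x_pre)
    np.fill_diagonal(W, 0.0)
    return W

# Present a pattern and strengthen whatever correlations it produces.
pattern = np.sign(rng.standard_normal(N))
out = step(pattern, W)
W = hebbian_update(W, pattern, out)
```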

What happens when a memory is formed is that you get an "attractor basin", which could be as simple as an energy surface with a minimum. Like this:

[Image: a ball rolling down an energy surface into a single valley (attractor basin)]


No matter where you start the ball, it rolls down the hill and ends up in the valley. So, to find (recall) the memory, you just do a "gradient descent" till you can't get any lower.
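
To make the "ball rolling downhill" idea concrete, here's a minimal Hopfield-style sketch (my own illustration, not the model in the linked paper): one pattern is stored with an outer-product rule, the energy function defines the surface, and asynchronous updates only ever lower the energy, so a noisy cue settles into the nearest minimum, which is the recalled memory.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
memory = np.sign(rng.standard_normal(N))   # one stored pattern of +1/-1
W = np.outer(memory, memory) / N           # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)

def energy(x, W):
    """Hopfield energy: stored memories sit at minima of this surface."""
    return -0.5 * x @ W @ x

def recall(x, W, steps=10):
    """Asynchronous sign updates never raise the energy ('rolling downhill')."""
    x = x.copy()
    for _ in range(steps):
        for i in rng.permutation(len(x)):
            x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    return x

# Start from a corrupted cue and let the network settle into the nearest minimum.
cue = memory.copy()
flip = rng.choice(N, size=20, replace=False)
cue[flip] *= -1
settled = recall(cue, W)
print(energy(cue, W), energy(settled, W))   # energy goes down
print(np.mean(settled == memory))           # overlap with the stored memory
```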

Over time then, as more and more memories are stored, the energy surface gets more complicated, like this:

[Image: a more complicated energy surface with many minima, one per stored memory]




The above picture works both ways, so for instance, you can start at the bottom, learn, and then arrive at the top. Or, you can start at the top, encode, and then arrive at the bottom.

The link in the OP is describing the encoding process. With a complicated energy surface, the system will find the "nearest" minimum, so, depending on where you start it, it'll find the memory closest to the input.

But, what happens when you get multiple words with the same meaning, or one word with multiple meanings?

In that case, the learning process generates "subspaces" (which are the planes they're talking about), and clusters similar memories together along a coordinate system that allows them to be distinguished with ease.

The organization of clusters into readable subspaces occurs "only" with phase coding; it doesn't happen in ordinary nonlinear networks. Brain waves are essential for learning and recall. They allow one-shot learning and other things humans are very good at.
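
I can't reproduce their tag-vector math here, but here's a toy illustration of why phase matters (my own construction, with made-up phase values): two items with identical firing-rate patterns look the same to a rate-only readout, yet they're trivially separable once you keep track of the phase of each spike relative to the ongoing oscillation.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
rate_pattern = np.abs(rng.standard_normal(N))   # shared firing-rate pattern

def phase_code(rates, phase):
    """Represent an item as a complex vector: amplitude = rate,
    angle = spike phase relative to the ongoing oscillation."""
    return rates * np.exp(1j * phase)

# Two items with identical rates but different phases within the cycle
# (the phase values 0.3 and 2.1 rad are arbitrary, for illustration only).
item_a = phase_code(rate_pattern, phase=0.3)
item_b = phase_code(rate_pattern, phase=2.1)

# A rate-only readout cannot tell them apart...
print(np.allclose(np.abs(item_a), np.abs(item_b)))   # True
# ...but a phase-sensitive readout separates them cleanly.
print(np.angle(np.vdot(item_a, item_b)))             # phase difference ~ 1.8 rad
```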
 
