this is how a neuron really works

scruffy

This is part of the recent discussion about why you shouldn't be scared of AI: it's going to be a LONG time before it grows a brain, so to speak.
I'll assume you're already familiar with two different versions of a neuron: the machine learning version, and the textbook biological version.

The machine learning version is really simple: it takes a weighted sum of its inputs and passes the result through a (nonlinear) threshold element. It's very algebraic; learning and recognition end up being matrix multiplications.
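For concreteness, here's a minimal sketch of that neuron in Python. The weights and inputs are made-up numbers, purely to show the mechanics:

```python
# A minimal sketch of the machine-learning "neuron": a weighted sum
# of inputs passed through a hard nonlinear threshold. All names and
# numbers here are illustrative, not from any particular library.

def ml_neuron(inputs, weights, bias=0.0):
    """Weighted sum followed by a hard threshold nonlinearity."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Recognition as matrix multiplication: a "layer" is just one
# ml_neuron per row of the weight matrix.
def layer(inputs, weight_matrix, biases):
    return [ml_neuron(inputs, row, b) for row, b in zip(weight_matrix, biases)]

print(ml_neuron([1.0, 0.5], [0.6, -0.4]))   # fires: 0.6 - 0.2 = 0.4 > 0
print(layer([1.0, 0.5], [[0.6, -0.4], [-1.0, 1.0]], [0.0, 0.0]))
```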

The machine learning neuron is supposed to be a simplified version of a biological neuron. You've heard of the Hodgkin-Huxley equations, which describe membrane dynamics in terms of voltage-dependent sodium and potassium conductances. That's a system of four coupled nonlinear differential equations, which gets expensive to integrate in real time once you scale up to large networks. So people just use the simplified version, and what gets lost is anything to do with dynamics, and anything to do with the positioning of the synapses on the dendrites and cell body.
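To give a sense of what the full model involves, here's a bare-bones forward-Euler integration of the Hodgkin-Huxley equations with the standard textbook squid-axon parameters. The time step and injected current are illustrative choices, not anything from this post:

```python
import math

# Forward-Euler integration of the classic Hodgkin-Huxley equations,
# standard squid-axon parameters. With a sustained injected current
# the model fires repetitively.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4              # mV
dt, T, I_inj = 0.01, 50.0, 10.0               # ms, ms, uA/cm^2

def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

V, m, h, n = -65.0, 0.05, 0.60, 0.32          # rough resting values
trace = []
for _ in range(int(T / dt)):
    I_Na = gNa * m**3 * h * (V - ENa)
    I_K  = gK  * n**4     * (V - EK)
    I_L  = gL             * (V - EL)
    V += dt * (I_inj - I_Na - I_K - I_L) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    trace.append(V)

spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0 <= b)
print(spikes)  # repetitive firing under sustained current
```

Note that this is only one neuron with one compartment; the "where are the synapses on the dendrite" question multiplies the state even further.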

The first area is huge. The two things I'd like you to take away from this post are:

1. Sub-Threshold Oscillations (STO's) based on voltage-dependent calcium channels, and

2. Phase Modulation of spike trains relative to the STO's, with phase reset accomplished by hyperpolarization (inhibition)

Point number 1: there are at least half a dozen different kinds of voltage-gated calcium channels that can be inserted into an ordinary Hodgkin-Huxley membrane to make it behave in different ways.

In its simplest form, an STO is simply a sine wave. The frequency can range from 1 to 100 Hz.


The STO's in neighboring neurons are often linked together by gap junctions. You can look at a gap junction as a low-value resistor: it links the internal potentials of the two neurons (and also allows them to exchange small molecules), thereby SYNCHRONIZING their electrical activity - the oscillations in one will tend to be in phase with the oscillations in the other.
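That resistive-coupling idea can be illustrated with a toy model: two oscillators nudged toward each other the way a gap junction nudges two membrane potentials. This is an abstraction (Kuramoto-style phase coupling), not a biophysical model, and every number in it is invented:

```python
import math

# Two sub-threshold oscillators at the same frequency, started out
# of phase, coupled the way a small resistance couples two membrane
# potentials. The phase difference decays toward zero (synchrony).
dt, steps = 0.001, 5000
omega = 2 * math.pi * 10.0     # both STOs at 10 Hz
K = 5.0                        # coupling strength ("1/R", loosely)

th1, th2 = 0.0, 2.0            # start ~2 radians apart
diff_start = abs(th2 - th1)
for _ in range(steps):
    th1 += dt * (omega + K * math.sin(th2 - th1))
    th2 += dt * (omega + K * math.sin(th1 - th2))

diff_end = abs((th2 - th1 + math.pi) % (2 * math.pi) - math.pi)
print(diff_start, diff_end)    # the phase difference shrinks toward zero
```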

A special and important spiking behavior is called BURSTING, which occurs on the rising slope of the STO, and can be elicited by a sufficiently long hyperpolarization (100 msec usually does it). Here's some bursting behavior; you can see the bursts of spikes riding on the membrane potential:

[attached image: bursts of spikes riding on the slow membrane potential]


So what happens is you have a large population of these cells that are electrically coupled, with a membrane potential that signals one thing, a spike frequency that signals another, and a spike time that signals yet another. Network engineers call this multiplexing: using different codes to represent different forms of information traveling in the same wire.

Point 2: there are two special types of inhibitory neurons; one affects only the cell body, the other affects only the dendrites. A hyperpolarization (inhibitory input) of sufficient magnitude will cause the cell body to start bursting afterwards, and it resets the phase of the STO. So there is the time to the start of the burst, the duration of the burst, and the rate of the burst, as well as the specific spike times.

When the cell body is not being hyperpolarized, it tends not to burst. In this mode a synaptic input that coincides with the rising edge of an STO will elicit a spike. So the population will tend to sync to the STO's, while individual neurons may be hyperpolarized and then burst afterwards. Meaning, there are multiple channels of information within a single neuron, and there are multiple channels in the population.
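Here's a toy sketch of that rising-edge gating, with invented voltages and thresholds. The same synaptic input succeeds or fails depending on where it lands on the STO:

```python
import math

# A toy sketch (invented numbers) of rising-edge gating: the same
# synaptic input (EPSP) either does or doesn't trigger a spike,
# depending on where it lands on the sub-threshold oscillation.
def spikes_at(input_time_ms, sto_freq_hz=10.0, sto_amp=2.0,
              epsp_mv=3.0, threshold_mv=4.0):
    t = input_time_ms / 1000.0
    phase = 2 * math.pi * sto_freq_hz * t
    sto = sto_amp * math.sin(phase)           # where the STO sits
    rising = math.cos(phase) > 0              # slope of the STO
    return rising and (sto + epsp_mv > threshold_mv)

# 10 Hz STO -> 100 ms period: the rising crest is near 25 ms,
# the falling side and trough come later in the cycle.
print(spikes_at(20.0))   # near the rising crest: the input fires the cell
print(spikes_at(70.0))   # on the falling side: the same input fails
print(spikes_at(5.0))    # rising, but the STO is still too low: fails
```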

All of which renders the "machine learning neuron" entirely obsolete.

The reset of the STO's phase by the inhibitory input is a form of phase modulation. In the digital domain it would be very close to phase-shift keying (PSK).

Phase modulation is what gives you things like place cells in the hippocampus. There is an enormous literature on place-cell "phase precession"; you can Google it.

Phase modulation is particularly resistant to noise, which is important in a biological context. It confers other advantages too, like compressibility, and in the context of self-organizing neural networks it enables a "content by key" functionality so multiple hot spots can be processed at once.

Modern AI is nowhere near this level of sophistication. Not even close. It can't handle carrier modulation at all. The closest approach is approximating deviations from a Poisson firing rate (in other words, faking modulation using statistics).
 
One very important use of phase modulation is in data compression.

The easiest example is a place cell in the hippocampus.

On the retina, everything is topographic. Each photoreceptor covers only a tiny dot in the visual field, maybe a minute of arc or something. So when an object moves from left to right, you have thousands of photoreceptors being activated in a row, one after the other.

But in a place cell the receptive field is large, and "where" the object is gets represented by the phase of a spike burst relative to the theta rhythm. So you have compressed the information from thousands of neurons into a single neuron.
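A toy version of that compression, with an invented mapping between position and theta phase:

```python
# A toy version (invented mapping) of writing position into phase:
# where the animal is inside the place field sets WHEN the cell
# bursts within one cycle of an 8 Hz theta rhythm.
THETA_HZ = 8.0
PERIOD_MS = 1000.0 / THETA_HZ        # one theta cycle = 125 ms

def encode(position):
    """Position in [0, 1) across the field -> burst time (ms) in the cycle."""
    return position * PERIOD_MS

def decode(burst_time_ms):
    """Demodulate: phase within the cycle -> position in the field."""
    return (burst_time_ms % PERIOD_MS) / PERIOD_MS

for pos in (0.1, 0.5, 0.9):
    t = encode(pos)
    print(pos, round(t, 1), round(decode(t), 3))
```

One variable (burst phase) now carries what the topographic retinal code spread across thousands of cells.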

Here is an example of a place cell. Every time the rat enters the area, the cell fires. The exact phase of firing depends on the specific location.

You can recover the relationship between the place-phase and the retinal topography, to the extent that you can "de-modulate" the phase code and link it to its sources.

 
Here's a super cool picture. This is a monkey retina, showing cones in green and cone bipolar cells in red.

You can see that each bipolar cell contacts two, maybe three cones.

[attached image: cones (green) and cone bipolar cells (red) in a monkey retina]


The horizontal red line at the bottom is where the bipolar cells synapse with the ganglion cells, which then feed out through the optic nerve.

There are two kinds of bipolar cells, ON and OFF. The cones release glutamate as their neurotransmitter; the ON receptors are sign-inverting (effectively inhibitory) whereas the OFF receptors are sign-conserving (excitatory). Then at the ganglion cell level there is a further subdivision into transient (Y) and sustained (X) and a few other kinds, so you get ON-sustained and ON-transient, that kind of thing.
 
One of the more mind-boggling aspects of all this is how little energy the biological "machine" uses compared to artificial intelligence.

Some people like to argue about how poorly the human body is designed. I think those people are idiots.
 
scruffy, do you agree that chains of atoms are magnetically aligned according to spin? Parkinson's Disease
Generally yes, that relationship seems to be carefully controlled in the brain.

There has been some research on spin alignment in microtubules, it's very difficult work.

There has also been some work on spin exchange in ligand binding.

Every changing electric current generates a magnetic field, yes? So in wetware the currents are carried by ions rather than electrons. Calcium seems to be especially important.
 
scruffy, do you agree that chains of atoms are magnetically aligned according to spin? Parkinson's Disease

One of the theories of quantum consciousness revolves around the Posner molecule, which is a cluster of calcium phosphate. It's closely related to the calcium phosphate of bone (hydroxyapatite), and its electrical properties are vital for growth and healing.


Spin alignment is important in brain function. The shape of a neuron is maintained by a "cytoskeleton" consisting mostly of microtubules and actin. In dendrites, there are clusters of microtubules that send branches out into the dendritic spines, where the synapses are. These microtubules have an electrical resonance around 39 Hz; they generate electrical oscillations inside the cell which you can pick up in the neuron's membrane potential.

 
Neurons work on ion channels, which provide the membrane conductances. To get "behavior" out of a neuron, you first select the conductances you want across the membrane. These can be voltage-sensitive or not.

The classic neuron is merely an integrative surface, and it generates a spike whenever the voltage goes above threshold. But most neurons don't behave that way.

Most of the pyramidal cells in the cerebral cortex, hippocampus, and elsewhere use glutamate as an excitatory neurotransmitter, and they have two types of glutamate receptors: a fast one and a slow one. The slow one depolarizes the cell for about 100-200 msec, and the spike bursts ride on top of that depolarization.

The long lasting depolarization from the metabotropic glutamate receptors causes gamma frequency oscillations (very small, about 4 mV peak to peak) to appear on the dendrites of the cell, and the spikes during the spike bursts are timed to coincide with these oscillations. The excitation driving the spike bursts can come from the fast receptors or from elsewhere.

So overall, there is a slow population oscillation at alpha or theta frequency (10 Hz or so, maybe a little less), and then there is a much faster modulated spike train riding on top of that. The circuitry driving most of these systems (from the thalamus) uses feed-forward inhibition, so the network will let through exactly one spike before being inhibited. After emerging from the inhibition the cells become depolarized and generate plateau potentials coupled with gamma-range oscillations.

So you have TTFS ("time to first spike"), and then you have a phase modulated encoding of regional activity.

The conditions for generating an action potential in this encoding are quite different from those in the classical neurons, and the spikes mean something quite different. With TTFS you get a map of where the hot spots are (the loudest or brightest signals, as it were). This map is regional: it's not strictly local, but it's not network-wide either. You can think of each hot spot as encoding the information around it - handling the "major features" of its neighborhood.
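A toy version of a TTFS code, with invented drive values. The strongest input (the "hot spot") crosses threshold first:

```python
# A toy time-to-first-spike (TTFS) code with invented numbers: each
# "neuron" integrates a constant drive, and the strongest drives
# (the hot spots) reach threshold first.
def first_spike_time(drive, threshold=1.0, dt=0.001, t_max=1.0):
    v, t = 0.0, 0.0
    while t < t_max:
        v += drive * dt              # simple integrator, no leak
        t += dt
        if v >= threshold:
            return t
    return None                      # never reached threshold

drives = {"hot spot": 10.0, "medium": 4.0, "weak": 1.5}
times = {name: first_spike_time(d) for name, d in drives.items()}
print(times)   # the strongest input fires first
```

Reading out the first spike in each region gives you the hot-spot map directly, with no rate averaging needed.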

Neurons turn out to be very powerful. Not like the dumb, simple neurons the Perceptrons used.
 
One interesting method that hasn't gotten much play, is white noise analysis.

In a linear system, if you have the impulse response and the steady-state response you can predict the state of the system at any time in the future - ASSUMING the system itself is time-invariant and its coupling with the environment doesn't change, and most of all, assuming the system has no hidden memory, meaning the next state depends only on the present state and not on any earlier states.

But there is math that looks at previous states, based on Norbert Wiener's model of Brownian motion. What you do is treat your system like a black box, where all you can see are the inputs and outputs. Then you shoot white noise through it - Gaussian noise with a flat frequency spectrum. The advantage of this type of noise is that it's guaranteed to be uncorrelated with itself at any two points in time. So if in addition to f(t) you start looking at f(t-x), where x ranges into the past from now back to the beginning of time, you get the Wiener/Volterra kernels, which describe how the system's behavior depends on past information.
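Here's a small numerical sketch of the first-order version of this: drive an unknown "black box" with Gaussian white noise and recover its linear kernel by cross-correlating output against past input. The hidden kernel is invented for the demo:

```python
import random

# First-order white-noise analysis: the hidden linear kernel of a
# black-box system is recovered by cross-correlating its output
# with past inputs, exploiting the fact that white noise is
# uncorrelated with itself across time.
random.seed(1)
N = 20000
true_kernel = [0.0, 0.5, 1.0, 0.5, 0.2]      # the "unknown" system

x = [random.gauss(0.0, 1.0) for _ in range(N)]
offset = len(true_kernel)
y = [sum(true_kernel[k] * x[t - k] for k in range(offset))
     for t in range(offset, N)]

# First-order kernel estimate: h(tau) ~ <y(t) x(t - tau)> / var(x)
var_x = sum(v * v for v in x) / N
est = []
for tau in range(offset):
    c = sum(y[i] * x[offset + i - tau] for i in range(len(y))) / len(y)
    est.append(c / var_x)

print([round(h, 2) for h in est])   # close to the hidden kernel
```

The higher-order (nonlinear) kernels come from the same idea with products of past inputs; this sketch stops at first order.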

This method only works for "weakly" nonlinear systems, that is, systems that can be approximated by polynomials (which is most of them, probably not the whole brain though, although parts of it will certainly qualify).

So, in the spiky plateaus I showed you, the strange behavior where spikes seem to wait for the crest of the sub-threshold oscillation is the kind of thing you can quantify with Volterra kernels. Most of the membrane conductances will play nicely with this, because they're oligomers whose kinetics can be described polynomially.

This is what you want when you're teasing apart complex synapses. Let's say you have a bidirectional dendro-dendritic synapse and each side has a fast and a slow glutamate receptor. You put a signal into it and you get a complex response: a long-lasting plateau voltage with a bunch of spikes riding on top of it. What's going on?

The Volterra kernels will tell you, you have underlying events with time constants of X and Y, and then you can match those with the molecular kinetics, and determine what types of channels are in play.

We successfully applied this method to shark retinas in the 80's, but now it's making a comeback because neurons in the hippocampus are very complicated and we need the high tech analysis to figure them out.

I'm working on a Volterra model for gap junctions. Gap junctions are "very fast" synapses, 100 to 1000 times faster than ordinary vesicular synapses. They couple neighboring neurons, keeping them synchronized. The gap junctions can be turned on or off chemically, and it looks like astrocytes may be involved in some of that.
 
They haven't figured it out yet.

Google AI leaves a lot to be desired. So do some of these "scientists".

Check this out - this is what AI has to say about rhythmic activity in the superior colliculus.


They haven't figured it out yet. But you and I know what it's for, we've been talking all about it.

The visual input from the retina is topographic, meaning it's a space code. The location of the firing neuron determines where the eye should move.

But the muscles moving the eye use a time code, specifying the intensity and duration of muscle contraction.

So, how do you get from a space code to a time code? Answer: phase coding!

The same thing happens in the cognitive map in the hippocampus. Locations in space get changed into the precise firing times of neurons.

This is what the oscillations are for. And BECAUSE the brain chooses to use a phase code, we'd like to know what's so good about it, how it contributes to the speed and efficiency of information storage.
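A toy sketch of that space-to-time conversion, with an invented map size and cycle length:

```python
# A toy sketch of the space-to-time conversion: the position of the
# active neuron on a topographic map (a space code) is translated
# into a firing time within one oscillation cycle (a time/phase
# code). The map size and cycle length are invented numbers.
MAP_SIZE = 100          # neurons across the topographic map
CYCLE_MS = 50.0         # one carrier-oscillation cycle

def space_to_time(active_index):
    """Which neuron fired (where) -> when it fires in the cycle."""
    return (active_index / MAP_SIZE) * CYCLE_MS

def time_to_space(spike_time_ms):
    """Demodulate: firing time in the cycle -> position on the map."""
    return round((spike_time_ms / CYCLE_MS) * MAP_SIZE)

for where in (5, 50, 95):
    when = space_to_time(where)
    print(where, when, time_to_space(when))
```

The motor side can then read "when in the cycle" directly as "how long and how hard to contract," which is the conversion the text describes.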
 