This is part of the recent discussion about why you shouldn't be scared of AI. Because it's going to be a LONG time before it grows a brain, so to speak.
I'll assume you may already be familiar with two different versions of a neuron: the machine learning version, and the textbook biological version.
The machine learning version is really simple: it adds up all its inputs and passes the result through a (nonlinear) threshold element. It's very algebraic; learning and recognition end up being matrix multiplications.
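As a concrete reference point, the whole "neuron" fits in a couple of lines (plain Python; the names are mine):

```python
def ml_neuron(inputs, weights, bias):
    """The machine-learning 'neuron': a weighted sum of the inputs plus
    a bias, pushed through a nonlinear threshold element (here a ReLU)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, total)
```

Stack a layer of these and the weighted sums become one matrix multiplication, which is why GPUs eat this for breakfast.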
The machine learning neuron is supposed to be a simplified version of a biological neuron. You've probably heard of the Hodgkin-Huxley equations, which describe membrane dynamics in terms of voltage-dependent sodium and potassium conductances. They form a four-dimensional system of coupled differential equations that is expensive to integrate in real time for large networks, even on a supercomputer. So people just use the simplified version, and what gets lost is anything to do with dynamics, and anything to do with the positioning of the synapses on the dendrites and cell body.
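To give a feel for why this is heavier than a dot product, here's a rough numerical sketch of the four coupled Hodgkin-Huxley equations (classic squid-axon parameters, naive Euler stepping; illustrative only, not a production-quality integration):

```python
import math

def vtrap(u):
    # u / (1 - exp(-u)), with the removable singularity at u = 0 handled
    return 1.0 if abs(u) < 1e-9 else u / (1.0 - math.exp(-u))

def simulate_hh(I_ext=10.0, t_ms=20.0, dt=0.01):
    """Euler integration of the four Hodgkin-Huxley state variables
    (V, m, h, n). Classic squid-axon parameters; units mV, ms, uA/cm^2."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for _ in range(int(t_ms / dt)):
        # voltage-dependent opening/closing rates for the three gates
        am = vtrap((V + 40.0) / 10.0)
        bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
        ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
        bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
        an = 0.1 * vtrap((V + 55.0) / 10.0)
        bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
        # ionic currents
        I_Na = gNa * m**3 * h * (V - ENa)
        I_K = gK * n**4 * (V - EK)
        I_L = gL * (V - EL)
        # explicit Euler step for all four state variables
        V += dt * (I_ext - I_Na - I_K - I_L) / C
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        trace.append(V)
    return trace
```

With a sustained current around 10 µA/cm² this produces repetitive spiking; now multiply that cost by every compartment of a realistic cell, and you see why the field reached for the dot product.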
The first of those losses, the dynamics, is huge. The two things I'd like you to take away from this post are:
1. Sub-Threshold Oscillations (STOs) based on voltage-dependent calcium channels, and
2. Phase modulation of spike trains relative to the STOs, with phase reset accomplished by hyperpolarization (inhibition).
On point 1: there are at least half a dozen different kinds of voltage-gated calcium channels that can be inserted into an ordinary Hodgkin-Huxley membrane to make it behave in different ways.
In its simplest form, an STO is just a sine wave riding on the membrane potential. The frequency can range from roughly 1 to 100 Hz.
Subthreshold membrane potential oscillations - Wikipedia
The STOs in neighboring neurons are often linked together by gap junctions. You can think of a gap junction as a low-value resistor: it links the internal potentials of the two neurons (and also lets them exchange small molecules), thereby SYNCHRONIZING their electrical activity. The oscillations in one will tend to be in phase with the oscillations in the other.
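A toy way to see the synchronization (my own simplification, not a biophysical model): reduce each STO to a bare phase, and let the gap junction become a term that pulls the two phases together, the classic Kuramoto form of coupled oscillators:

```python
import math

def couple_oscillators(k=2.0, dphi0=2.0, f_hz=10.0, t=3.0, dt=0.001):
    """Two phase oscillators (stand-ins for STOs) with a coupling strength
    k playing the role of the gap junction. Returns the final phase
    difference; with k > 0 it shrinks toward zero (synchronization)."""
    w = 2.0 * math.pi * f_hz
    p1, p2 = 0.0, dphi0
    for _ in range(int(t / dt)):
        d1 = w + k * math.sin(p2 - p1)  # each oscillator is pulled
        d2 = w + k * math.sin(p1 - p2)  # toward the other's phase
        p1 += dt * d1
        p2 += dt * d2
    return (p2 - p1) % (2.0 * math.pi)
```

With k = 0 (no gap junction) the initial phase difference persists forever; with any positive k the two oscillations lock.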
A special and important spiking behavior is called BURSTING. It occurs on the rising slope of the STO, and can be elicited by a sufficiently long hyperpolarization (100 msec usually does it). In recordings of bursting you can see the bursts of spikes riding on the underlying membrane potential.
So what happens is you have a large population of these cells, electrically coupled, with a membrane potential that signals one thing, a spike frequency that signals another, and spike timing that signals yet another. In networking this is called multiplexing: using different codes to carry different streams of information over the same wire.
On point 2: there are two special types of inhibitory neurons; one targets only the cell body, the other only the dendrites. A hyperpolarization (inhibitory input) of sufficient magnitude will cause the cell body to start bursting afterwards, and it resets the phase of the STO. That gives you several signals at once: the time to the start of the burst, the duration of the burst, the rate within the burst, and the specific spike times.
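To make the "several signals at once" point concrete, here's a trivial sketch (the feature names and choices are my illustration, not standard):

```python
def burst_features(reset_time, spike_times):
    """Extract three of the values a single post-inhibition burst can
    carry: latency from the reset to burst onset, burst duration, and
    within-burst firing rate. The raw spike times are a fourth channel."""
    latency = spike_times[0] - reset_time
    duration = spike_times[-1] - spike_times[0]
    rate = (len(spike_times) - 1) / duration if duration > 0 else 0.0
    return latency, duration, rate
```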
When the cell body is not being hyperpolarized, it tends not to burst. In this mode a synaptic input that coincides with the rising edge of an STO will elicit a spike. So the population will tend to sync to the STOs, while individual neurons may be hyperpolarized and then burst afterwards. Meaning there are multiple channels of information within a single neuron, and multiple channels across the population.
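Here's a cartoon of that gating, where an input only elicits a spike if it lands on the rising edge of the oscillation (the frequency and the rising-edge test are my simplifications):

```python
import math

def sto_gated_spikes(input_times, f_sto=8.0):
    """An input elicits a spike only when it coincides with the rising
    edge of the STO (modeled as sin(phase); rising means cos(phase) > 0).
    Each spike's phase within the STO cycle carries extra information."""
    spikes = []
    for t in input_times:
        phase = 2.0 * math.pi * f_sto * t
        if math.cos(phase) > 0.0:  # rising edge of the oscillation
            spikes.append((t, phase % (2.0 * math.pi)))
    return spikes
```

Two identical inputs can thus produce different outputs depending purely on where they land in the STO cycle, something the machine learning neuron cannot express at all.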
Which renders the "machine learning neuron" entirely obsolete.
The reset of the STO's phase by the inhibitory input is a form of phase modulation. In the digital domain the closest analogue is pulse-position modulation (PPM), where information rides on when a pulse occurs within a frame.
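One way to picture the digital analogy (a toy, not biophysics): each inhibitory reset opens a fixed time slot, and the value is carried by where within the slot the spike lands, rather than by how many spikes occur:

```python
def encode_ppm(values, slot=0.010):
    """Each value in [0, 1) becomes one spike per slot; the information
    is in the spike's position within its slot (its 'phase')."""
    return [i * slot + v * slot for i, v in enumerate(values)]

def decode_ppm(spike_times, slot=0.010):
    """Recover each value from where the spike fell in its slot."""
    return [(t - i * slot) / slot for i, t in enumerate(spike_times)]
```

A spike train then encodes a list of analog values with one spike each; timing, not count, carries the message.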
Phase modulation is what gives you things like place cells in the hippocampus. There is an enormous literature on hippocampal "phase precession" (place cell spikes shifting phase against the theta rhythm); you can Google it.
Phase modulation is also particularly resistant to noise, which matters in a biological context. It confers other advantages too, like compressibility, and in the context of self-organizing neural networks it enables a "content by key" functionality so multiple hot spots can be processed at once.
Modern AI is nowhere near this level of sophistication. Not even close. It can't handle carrier modulation at all. The closest it comes is approximating deviations of a Poisson firing rate. (In other words, faking modulation using statistics.)