how neurons really work

scruffy

Diamond Member
Mar 9, 2022
During WW2 there was a lot of interest in analog computing, for missiles and radar and stuff like that. People started looking at neurons too, and in 1943 a neurophysiologist (Warren McCulloch) and a logician (Walter Pitts) came up with the first artificial neuron. Called the McCulloch-Pitts neuron after its inventors, it was a simple linear integrator with a threshold.


A lot of people still have this idea about neurons: that they sum their synapses and either fire or don't fire. And a lot can be done with that. Beginning with Rosenblatt's Perceptron in 1958, and proceeding through the popular Hopfield model in 1982, artificial neural networks built on this idea are very good at linear classification and estimation.
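That "sum the synapses, then threshold" picture fits in a few lines of code. Here's a minimal sketch (my own toy illustration, not from any of the papers in this thread) of the classic perceptron learning rule converging on the linearly separable AND function:

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Rosenblatt's perceptron rule on a linear threshold unit."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0  # sum the synapses, threshold
            w += lr * (yi - pred) * xi          # nudge weights only on mistakes
            b += lr * (yi - pred)
    return w, b

# AND is linearly separable, so the perceptron convergence theorem applies.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

That's the whole model: one weighted sum, one threshold. Everything below is about why real neurons don't stop there.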


Unfortunately, this is not really how a neuron works, or what it is.

A neuron is an unstable oscillator, operating on the edge of criticality.

It works on different kinds of currents (ionic, mostly), that cross the membrane under controlled conditions. But even the well known Hodgkin-Huxley model is far too simple.
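The oscillator behavior shows up even in a two-variable reduction of Hodgkin-Huxley. Here's a sketch using the FitzHugh-Nagumo model with standard textbook parameters (my illustration, not the full HH equations): under a constant drive current the fixed point is unstable, so the cell settles onto a limit cycle and spikes rhythmically instead of making a single threshold decision.

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, T=200.0):
    """Forward-Euler integration of FitzHugh-Nagumo:
       dv/dt = v - v^3/3 - w + I,   dw/dt = eps*(v + a - b*w)."""
    n = int(T / dt)
    v, w = -1.0, 1.0
    vs = np.empty(n)
    for k in range(n):
        dv = v - v**3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        vs[k] = v
    return vs

vs = fitzhugh_nagumo()
# Count upward crossings of v = 0: a limit cycle produces repeated "spikes".
crossings = int(np.sum((vs[:-1] < 0) & (vs[1:] >= 0)))
print(crossings)
```

With these parameters the trace oscillates for as long as you integrate it; the neuron is an oscillator, not a one-shot switch.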

Here is a modern analysis of what a neuron actually looks like, and how it behaves.


They're looking at "post-inhibitory rebound", which turns out to be a damped oscillation.

The paper shows how an inhibitory synapse can suddenly and magically turn into an excitatory synapse, depending on what's going on in the postsynaptic cell.

Synapses, it turns out, implement the first several orders of Laguerre filtering. This allows populations of neurons to model the nonlinear relationships between inputs, with what amounts to a Volterra decomposition.
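A hedged sketch of what that means in practice. The recursion below builds the standard orthonormal discrete Laguerre filter bank with pole `a` (the pole value, the filter order, and the toy quadratic readout at the end are my own illustrative choices, not taken from the paper):

```python
import numpy as np

def laguerre_bank(x, order=4, a=0.6):
    """Filter signal x through the first `order` discrete Laguerre filters.
    L0(z) = sqrt(1-a^2)/(1 - a z^-1); each later stage multiplies by the
    all-pass (z^-1 - a)/(1 - a z^-1), which keeps the basis orthonormal."""
    n = len(x)
    v = np.zeros((order, n))
    for k in range(n):
        v[0, k] = a * (v[0, k-1] if k else 0.0) + np.sqrt(1 - a*a) * x[k]
        for j in range(1, order):
            prev = v[j, k-1] if k else 0.0
            v[j, k] = a * prev + (v[j-1, k-1] if k else 0.0) - a * v[j-1, k]
    return v

# Sanity check: the bank's impulse responses are (numerically) orthonormal.
imp = np.zeros(400); imp[0] = 1.0
L = laguerre_bank(imp)
G = L @ L.T                                # Gram matrix, should be ~identity
print(np.allclose(G, np.eye(4), atol=1e-6))

# A second-order Volterra readout is then just a polynomial in the outputs:
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
V = laguerre_bank(x)
c1 = np.array([0.5, -0.2, 0.1, 0.0])       # toy first-order kernel weights
y = c1 @ V + 0.3 * V[0] * V[1]             # plus one toy second-order term
```

The point of the Laguerre basis is that a handful of these filters, plus low-order polynomial combinations of their outputs, can approximate a wide class of nonlinear input-output maps with fading memory.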


This portion of the neural population behavior is basically the "impulse response". The Volterra method works fine with fading memory (which includes things like adaptation and potentiation and other forms of plasticity), but it doesn't work as well in chaotic contexts. Coupled oscillators obey the Kuramoto dynamics, which looks a lot like the spatial patterns in a Belousov-Zhabotinsky reaction, or the "domains" in an Ising model - so the Volterra method can be adjusted to analyze locally around "hot spots" in the input.
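The Kuramoto model itself is only a few lines. A minimal simulation (all-to-all coupling; the coupling strength K and the frequency spread are illustrative choices, not values from any paper here) tracks the order parameter r, which measures synchrony: r near 0 is incoherent, r near 1 is phase-locked.

```python
import numpy as np

def kuramoto(N=200, K=4.0, dt=0.01, steps=3000, seed=1):
    """dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i),
    integrated with forward Euler using the mean-field identity."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)          # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)     # random initial phases
    for _ in range(steps):
        z = np.exp(1j * theta).mean()        # complex order parameter r*e^(i*psi)
        # (K/N) sum_j sin(theta_j - theta_i) == K*r*sin(psi - theta_i)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.exp(1j * theta).mean()))

r = kuramoto()
print(r)   # well above the incoherent value when coupling is strong
```

Below the critical coupling the oscillators drift independently; above it, a synchronized cluster nucleates, which is exactly the domain formation described above.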


This perspective has forced a reconsideration of almost every aspect of modern neuroscience. For example the "on center" and "off center" receptive fields no longer make sense, there are only bipolar cells that are either depolarized or hyperpolarized by the photoreceptors, and they are nonlinear and have differing time constants. What was once thought to be a simple retina with only 5 cell types, is now known to be highly complex, with 10 layers of neural connections and more than 60 genetically identifiable cell types with specific branching patterns.

Some of the connections in the retina will cause a ganglion cell to change from an ordinary integrative mode to a "bursting" mode. Previously these were identified as inhibitory synapses, but now we know they're doing something different. Yes, they inhibit in the ordinary integrative mode. But in bursting mode, they're responsible for the timing of the inter-spike intervals.
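The tonic-versus-bursting switch can be sketched with the Izhikevich model, a reduced phenomenological neuron (my illustration, not the actual retinal circuitry): keeping the membrane equation fixed and changing only the reset parameters c and d moves the same cell between regular spiking and chattering/bursting regimes.

```python
import numpy as np

def izhikevich(c, d, a=0.02, b=0.2, I=10.0, dt=0.25, T=1000.0):
    """Izhikevich (2003) model: v' = 0.04v^2 + 5v + 140 - u + I,
    u' = a(bv - u); when v >= 30 mV, reset v <- c and u <- u + d."""
    v, u = -65.0, -65.0 * b
    spikes = []
    for k in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spikes.append(k * dt)
            v, u = c, u + d
    return np.array(spikes)

tonic = izhikevich(c=-65.0, d=8.0)   # "regular spiking" reset parameters
burst = izhikevich(c=-50.0, d=2.0)   # "chattering"/bursting reset parameters
# Bursting shows up as clusters of short inter-spike intervals
# separated by long silences, rather than an even spike train.
print(len(tonic), len(burst))
```

Same inputs, same membrane equation; only the reset dynamics differ. That's the sense in which one synaptic mechanism can control timing in one mode and inhibition in another.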

Neurons turn out to be pretty smart. They carry several different types of codes at the same time, and they respond to the time varying properties of the input signal. If you're interested, check out the sections called Dynamical and Functional:

 
Would it be absurd to say, in summary, that the on-off synapse reaction and description is far more complex due to multi-chemical signalling, rather than just a simple on-off function/jump/switch?
And that it can carry many different instructions / chemical reactions at one point (a synapse) simultaneously?

What have I missed ?
 

Neurons are oscillators.

The synapses couple the oscillators.

So, in the brain, there are lots and lots of coupled oscillators.

This is what the activity pattern of lots and lots of coupled oscillators looks like:


This is called the Kuramoto dynamic. It's been studied in many contexts; the classic aqueous chemical demonstration of this kind of self-organizing pattern is the Belousov-Zhabotinsky reaction, and the theory behind such patterns (dissipative structures) is what earned Prigogine the 1977 Nobel Prize in Chemistry.

The regions of light and dark are called "domains". They're dynamic, they're constantly shifting. If you look at the third picture in the link, you can see some interesting things at the boundary between domains.

There, at the boundaries, we get "fractal basins". They happen automagically because of the coupling of the oscillators. This is what they look like:
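Fractal basin boundaries are easiest to compute in a classic toy system, Newton's method on z^3 = 1 (my illustration, not a neural model): every starting point converges to one of three attractors, and near the boundary, arbitrarily small perturbations flip which one.

```python
import numpy as np

ROOTS = np.exp(2j * np.pi * np.arange(3) / 3)   # the three cube roots of 1

def basin(z, iters=60):
    """Which root of z^3 = 1 does Newton's method converge to from z?"""
    for _ in range(iters):
        z = z - (z**3 - 1) / (3 * z**2)
    return int(np.argmin(np.abs(ROOTS - z)))

# Starting near a root stays in that root's basin...
print([basin(r + 0.01) for r in ROOTS])          # [0, 1, 2]

# ...but a straight line through the boundary region hits several basins,
# switching back and forth through interleaved fractal bands.
labels = [basin(complex(-0.7, y)) for y in np.linspace(-1, 1, 201)]
print(len(set(labels)))
```

That interleaving is the point: near the boundary, "which attractor do I end up in" is exquisitely sensitive to the starting state, which is exactly the behavior being described for the coupled-oscillator state space.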


This happens in the state space of the system, so, it can start from within a domain where things are nice and smooth, and cross into a boundary where things are chaotic and turbulent. It can also start from the chaotic turbulent point, in which case it probably stays there for a while, bouncing around in the chaos.

Any time a subsystem has to transition from a light domain to a dark domain, or vice versa, it has to cross a boundary. If it has "enough" momentum it can traverse the boundary, if not it'll get sucked into the chaos.

The important thing in how this relates to both behavior and medicine is that neurons in these systems do not display (nor have to display) nicely behaved Gaussian probability distributions and stationary Markov-like dynamics. In fact, neurons in these systems do not undergo random walks in the sense of Brownian motion.

Instead, the distributions are multimodal (frequently bimodal), and the resulting behavior is called Lévy motion rather than Brownian motion. It's named after the mathematician Paul Lévy, but beware: it's not the same thing as a Lévy process. A Lévy motion consists of many small random jumps combined with infrequent but much larger jumps. Here is a pretty good description of how it pertains to neural networks:
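A hedged numerical sketch of the difference (the step distributions are my illustrative choices, not from that description): Gaussian increments give uniformly small steps, while a heavy-tailed step distribution gives mostly small steps punctuated by rare giant jumps.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Brownian-style steps: Gaussian increments, thin tails.
gauss_steps = np.abs(rng.standard_normal(n))

# Levy-flight-style steps: Pareto with tail index 1.5, so the variance
# is infinite and the occasional enormous jump dominates the walk.
levy_steps = rng.pareto(1.5, n) + 1.0

# Compare how extreme the largest step is relative to a typical one.
gauss_ratio = gauss_steps.max() / np.median(gauss_steps)
levy_ratio = levy_steps.max() / np.median(levy_steps)
print(gauss_ratio, levy_ratio)   # the heavy-tailed ratio dwarfs the Gaussian one
```

For the Gaussian walk the biggest step over ten thousand draws is only a few times the median; for the heavy-tailed walk it's orders of magnitude larger. That small-jumps-plus-rare-huge-jumps signature is what "Lévy motion" refers to above.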

 
Follow all of that, but it seems I was right: I have expressed the overall description. You have added the process details, important though they are.
But as a layperson operator I do not need them.
I employ others to use them in detail, as and when required.
 

You are interested in this if, for example, you take antidepressants. Or benzodiazepines (Valium, etc.). Or if you have Parkinson's.

As a lay person operator you're probably interested in the proper function and health of your system.

So, y'know... nonlinear dynamics are a step up from leeches and phrenology. :p

See, when you retire and you have nothing better to do, you can confound the kiddies with questions like, why is red red? Why isn't it blue? It has nothing to do with photoreceptors, because you can electrically stimulate the cortex and "see" red. And, why is the pinprick felt "in" the finger? Why isn't it felt in the brain, where the neurons are, where the cortex is?

The whole point of having a brain is awareness, experience. Otherwise we just have a dumbass artificial neural network, which is nothing more than a computer.
 
