Scientists discover link between neuron electrical activity and molecules

scruffy

John Hopfield recently won a Nobel Prize for his work on "annealing" in artificial neural networks.

In short, the system Hamiltonian defines an energy surface and the network settles into a local energy minimum. The basic mechanism is a form of "gradient descent": every time a neuron changes state, the energy can only go down (or stay the same).
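
Here's a minimal sketch of what that looks like in code: a toy Hopfield network where each asynchronous neuron update can only keep or lower the energy, so the state slides downhill into a local minimum. (The network size, random weights, and variable names are mine, purely for illustration, not from the paper.)

```python
import numpy as np

# Toy Hopfield network: symmetric couplings, no self-connections.
rng = np.random.default_rng(0)
n = 16
W = rng.standard_normal((n, n))
W = (W + W.T) / 2               # symmetric weight matrix
np.fill_diagonal(W, 0.0)        # zero diagonal
s = rng.choice([-1, 1], size=n) # binary neuron states

def energy(s, W):
    # The Hopfield "Hamiltonian": E = -1/2 * s^T W s
    return -0.5 * s @ W @ s

# Asynchronous updates: each single-neuron state change can only keep or
# lower the energy, so the network descends to a local minimum.
for step in range(200):
    i = rng.integers(n)
    s[i] = 1 if W[i] @ s >= 0 else -1

print("final energy:", energy(s, W))
```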

Now, scientists have discovered a direct link between the energy of a neuron and the configuration of its molecules.


mRNA plays a key role in the link.
 
 
This is a tad over my head Scruff.......can you do a Cliff note or two? ~S~
Well, you have this stuff called "machine learning" which supports most of today's AI.

But then you have stuff like Alzheimer's, which has to do with clumps of proteins and oddball molecular behavior.

So, what is the relationship between molecules and "cognitive decline"?

How do molecules get involved with information processing?

The theory of machine learning has a lot of handwaving in it, from a biology standpoint. It involves "changes in connection strength" - but connections are synapses. No one really knows how the brain changes "synapse strength", much less how it makes such changes permanent. In machines, the connection strength is just a number in a matrix. In a human brain, though, synapses are incredibly complicated. They don't usually resolve down to "a number".

But specifically, synapses are "local". One of the mysteries of the brain is that any two neurons can have 100 or more synapses between them. So not "a" number, but 100 different numbers. Why? Machine learning hasn't explored this yet, because its algorithms don't (and can't) differentiate one synapse from another.

Biological modeling has explored this "a little bit". Like, they put synapses at various locations on a neuron, so each synapse has a slightly different effect. But they haven't figured out how to tie this in with cognitive learning, or what it actually means in terms of the information geometry.
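
To make the contrast concrete: in machine learning the whole connection between two neurons is one matrix entry, while biologically the same pair may share ~100 contacts, each with its own local strength and its own spot on the dendrite. A rough sketch (the numbers and the attenuation rule are invented for illustration; this is not a real cable model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Machine-learning view: the whole connection from neuron A to neuron B
# is a single scalar weight.
w_AB = 0.37

# Biological view (toy version): the same pair of neurons shares many
# synapses, each with its own local strength and dendritic location.
n_contacts = 100
local_strength = rng.uniform(0.0, 1.0, n_contacts)  # per-synapse "numbers"
distance_um = rng.uniform(5.0, 300.0, n_contacts)    # distance from the soma

# Crude distance-dependent attenuation: contacts farther out count less.
attenuation = np.exp(-distance_um / 150.0)
effective = local_strength * attenuation

# Collapsing 100 different contributions into one matrix entry throws away
# everything about where each contact sits and how it behaves locally.
print("single ML weight:        ", w_AB)
print("sum of per-synapse terms:", effective.sum())
```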

So the research in the OP is actually a pretty big deal, because it looks at how global phenomena like neuron firing affect local behavior at the level of individual connections.

In a real brain, the connections are isolated. Each connection attaches to the neuron by a thin stalk; the configuration is called a "spine". This is what it looks like:

[attached images: dendritic spines, each synapse sitting at the end of a thin stalk off the dendrite]


Molecules move mostly one way in the stalk, from the neuron into the synapse. Why? Why do synapses need to be chemically isolated in this way?

If all it is, is "connection strength", this wouldn't make sense. There must be something more going on. So, what if the OP is correct and little bits of RNA enter the stalk in response to neuron firing? RNA determines protein synthesis, so each synapse could have its own protein configuration. That is a big deal, because now, instead of just changing the synapse strength, we have a way to permanently change the TIMING of the synaptic response.

Obviously, machine learning works, and therefore connection strengths are "sufficient" at that level. But AI is still a long way from true intelligence. The idea of TIMING is interesting because it means brains can be "clever" and combine information in new ways.
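
As a rough illustration of the timing idea, here's a toy sketch where each synaptic response is modelled as an alpha function with its own time constant. A pure "connection strength" change rescales the amplitude; a change in the local protein machinery could instead move the time constant, shifting *when* the synapse contributes. (All parameters are made up, not from the OP.)

```python
import numpy as np

# Alpha-function synaptic response:
#   r(t) = w * (t/tau) * exp(1 - t/tau)   (peaks at t = tau with amplitude w)
def alpha_response(t, w, tau):
    t = np.maximum(t, 0.0)
    return w * (t / tau) * np.exp(1.0 - t / tau)

t = np.linspace(0.0, 50.0, 501)           # time in ms

fast = alpha_response(t, w=1.0, tau=2.0)   # same weight, fast kinetics
slow = alpha_response(t, w=1.0, tau=15.0)  # same weight, slow kinetics

# Two inputs with identical "strength" but different kinetics sum very
# differently depending on when the second one arrives.
print("peak of fast synapse at t =", t[np.argmax(fast)], "ms")
print("peak of slow synapse at t =", t[np.argmax(slow)], "ms")
```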

One of the interesting things about DeepSeek is you can see it work. I asked it to prove the Riemann hypothesis by resolving the zeta function to 88% or greater. In the sidebar I could see its "logic" as it attempted the task. It is not "clever", it's very logical and straightforward. It came up with 87.8%, which is very good but not good enough for the Millennium Prize. It is incapable of being "clever", but even a mouse is capable of being clever.

So what does a mouse have that an artificial neural network doesn't? Well, one thing is "spines", and the ability to isolate sequences of timing.
 
I appreciate your efforts Scruff, I've read it, read it, and will read it again....~S~
Here's another take.

From a biological standpoint.

We know of several ways that synapse strength can be changed. Generally they're called "potentiation" and "depression".

There are short term and long term versions of each. So you have STP and LTP (short- and long-term potentiation), and STD and LTD (short- and long-term depression).

The problem with these (from a machine learning and cognitive standpoint) is that they're ALL temporary. The short term modifications reset themselves after about half an hour, and the long term versions fade over 2-3 days.

Real learning requires a PERMANENT modification. And there is no biological basis for such a thing, as of yet.

However, modifiable protein kinetics (as in the OP) provide a possible mechanism.

We mostly know how STP/LTP works. It has to do with short term changes in the numbers of neurotransmitter molecules and receptor-linked ion channels, stuff like that. But to make any of it PERMANENT, there has to be a change in the equilibrium point of the protein translation system that determines where the baseline is.
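
A toy way to see the difference: treat synaptic strength as a protein copy number with a synthesis rate and a degradation rate. STP and LTP are a transient bump that decays back to baseline; the only way to make the change permanent is to move the baseline itself, i.e. the translation equilibrium. (The time constants below are just the half-hour and few-day figures mentioned above; everything else is invented for illustration.)

```python
import numpy as np

# Synaptic "strength" as protein copy number:
#   dn/dt = k_syn - k_deg * n   ->   steady state n* = k_syn / k_deg
# STP/LTP add a transient component on top of n* that decays away;
# a permanent change requires moving k_syn (the translation equilibrium).
def strength(t_hours, k_syn, k_deg, bump, tau_hours):
    baseline = k_syn / k_deg
    transient = bump * np.exp(-t_hours / tau_hours)
    return baseline + transient

t = np.linspace(0, 100, 5)   # hours

# Short-term potentiation: decays on the order of half an hour.
stp = strength(t, k_syn=10.0, k_deg=0.1, bump=50.0, tau_hours=0.5)
# Long-term potentiation: decays over a couple of days.
ltp = strength(t, k_syn=10.0, k_deg=0.1, bump=50.0, tau_hours=60.0)
# "Real" learning in this picture: the synthesis rate itself is reset,
# so the new baseline never decays.
permanent = strength(t, k_syn=15.0, k_deg=0.1, bump=0.0, tau_hours=1.0)

print("STP      :", np.round(stp, 1))
print("LTP      :", np.round(ltp, 1))
print("permanent:", np.round(permanent, 1))
```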

Lately I've posted a couple of threads about the Hox genes that regulate body segmentation. These genes produce a very low number of proteins per cell, like 8 to 10 copies at any given time. But during development the numbers change by a factor of 1000, and this change is brought about by "transcription factors": proteins encoded by other genes that get expressed at the right time during development.

It makes perfect sense that nerve cells would use this same mechanism to program synaptic strength, because nature re-uses what works. There is precedent in the immune system, where the same thing happens when antibodies suddenly get amplified thousands of times in response to a cold or infection.

Only, in the immune system and in embryology, the amplification stops after the cold goes away, or after proper body segmentation has been achieved. Here in the nerve cells, all that's needed is that the effect "doesn't stop": it's permanent, it goes on for a lifetime.
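
The amplification argument is really just steady-state bookkeeping: if copy number settles at synthesis rate divided by degradation rate, then a transcription factor that boosts synthesis 1000-fold boosts the steady-state count 1000-fold, and the new level lasts exactly as long as the factor stays switched on. A back-of-envelope version (the rates are invented; only the ~8-10 copy figure comes from the post above):

```python
# Steady-state copy number = synthesis_rate / degradation_rate
k_deg = 1.0                 # degradation rate (per hour), illustrative

k_syn_basal = 9.0           # gives ~9 copies per cell, like the Hox example
k_syn_induced = 9000.0      # transcription factor raises synthesis ~1000x

basal_copies = k_syn_basal / k_deg
induced_copies = k_syn_induced / k_deg

print("basal copies:  ", basal_copies)     # ~9
print("induced copies:", induced_copies)   # ~9000
# In development or an immune response, the factor eventually switches off
# and the count falls back. The conjecture here: at a synapse the local
# factor simply stays on, so the amplified level becomes the new permanent
# setting.
```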

So this explains why you still have your memories after a period of unconsciousness. In machine learning, if you remove the power, all the learning goes away and you have to start all over again. But in a human being the memory is permanent because it's genetic: if you stop all electrical activity, the underlying chemical equilibria keep running at the same rate even if nothing is making use of them.

This explains why synapses need isolation: each synapse has its own genetic programming, its own level of RNA that determines how many proteins are needed for the proper synaptic strength. (In other words, each synapse has its own local transcription factors.)

PROVING this is going to be very difficult. First we'll need to know what the RNA looks like, then we'll need to attach visible markers to it, then we'll need to take pictures of the markers and determine their concentrations. That's 20 years' worth of work.
 
So this explains why you still have your memories after a period of unconsciousness. In machine learning, if you remove the power, all the learning goes away and you have to start all over again. But in a human being the memory is permanent because it's genetic: if you stop all electrical activity, the underlying chemical equilibria keep running at the same rate even if nothing is making use of them.
Good one Scruff......


from the article>>>

By integrating the main processes driving mRNA and protein distributions in neurons into a single mathematical model we combined biological plausibility with mathematical tractability

~S~
 
