more evidence for Scruffy's theory of consciousness

scruffy

We talked before about the timeline in the brain.

Well, lookie here:

[attached image: dopamine neuron projection maps]



Notice the yellow in figures a, c, and d.

These are maps of the anatomical projections of dopamine neurons in the substantia nigra and ventral tegmental area of the human brain.

You can see the organization along the timeline; it's clear as day.

The substantia nigra (area A8) and ventral tegmentum (areas A9 and A10) are important parts of the egocentric reference frame.

A8 is the one involved in Parkinson's; it's called the substantia nigra pars compacta. It is involved in the initiation of voluntary motor movements (and therefore in the selection of behavior, which is egocentric in nature). A9 and A10 are the ones involved in the rewarding effects of stimulant drugs like amphetamines and cocaine. Also very egocentric.

A8 maps directly to the superior colliculus, which has an egocentric map of eye position ("gaze"). What happens in the SC is that when a neuron fires, it directs the eyes to move to the corresponding point in the visual field. The brain says, I want to look at "that", and then it directs the eyes to move to "that" location, which downstream neurons translate into combinations of activity in the oculomotor muscles (lateral, up and down, and lens accommodation).

The egocentric reference frame ("I") is generated by a compactified timeline. It starts with a time mapping based on evoked potentials and premotor potentials -- you can think of it as a linear window of time centered on "now", the current moment. Mathematically it's a line segment whose origin moves along with physical time This construction translates brain activity into time series relative to t=0, which in turns allows the definition of t=0 as the center of the egocentric reference frame.

The egocentric reference frame is then constructed by "compactifying" the timeline interval, which means joining the two ends so the line segment becomes a circle. Mathematically this is called an Alexandroff one-point compactification, because it involves adding a single point to the interval which is designated as "the point at infinity".
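For anyone who wants to play with the idea, here's a minimal numpy sketch of that compactification. The 2-second window, the sample count, and the variable names are all just illustrative choices of mine, not part of the model itself.

```python
import numpy as np

# Toy sketch of the one-point compactification: a linear window of time
# centered on "now" (t = 0), with its two ends glued together so the
# segment becomes a circle. Window width and sample count are arbitrary.
t = np.linspace(-1.0, 1.0, 201)              # seconds relative to "now"

theta = np.pi * t                            # t = -1 and t = +1 both land at the glue point
circle = np.stack([np.cos(theta), np.sin(theta)], axis=-1)

now = circle[np.argmin(np.abs(t))]           # t = 0 lands at (1, 0)
glued = circle[0], circle[-1]                # both ends land at (-1, 0): the "point at infinity"
print(now, glued)
```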

Physicists and computer graphics people will immediately recognize this construction as a projective map -- more specifically, an orthographic projection, since the camera lives at the point at infinity and looks out at the entire timeline, with the greatest resolution at "now".

We previously talked about the ensuing Hawaiian earring, which follows logically from the use of homogeneous coordinates to map the change of basis. In this way, stimulus and response in a neural reflex become before and after in the egocentric reference frame.

"I" is a superimposition of all the radii of the Hawaiian earring. Basically what happens is if you're a bug walking along the timeline and you follow an evoked potential when it starts at t=0 and moves off to the left (t<0 direction), eventually you reach the point at infinity, which means the next step you take is going to teleport you to the far right of the timeline (t>0 direction).

In the limit as dt => 0, the point at infinity is defined as memory (the global associative store), because memory is the ultimate destination of all sensory activity, and the ultimate source of all motor activity.

The machine learning types will immediately realize that some support infrastructure is needed to make such a system work in practice, and the good and amazing news is this is exactly what the brain has! If you look for it, it will leap off the page at you. The relationship between the hippocampus and episodic memory is the best studied and best known example. The example in the above pic comes from the opposite side of the timeline, from a brain area called the striatum. It is the part of the brain involved with behavioral selection (and also cognitive selection). You can see very clearly that it's regulating the timeline. Specifically, it is filtering the incoming information down to the parts it wants to respond to. Part of the response is based on the ongoing sensorimotor activity (that's A8), and part of it is affective, based on the expectation of reward and the determination of which behaviors are most "desirable" (A9 and A10).

The cerebral cortex is an overlay of the timeline with the global associative store. Essentially it "unfolds" the moment called now. In doing so, it uses a section through the plane of Hawaiian earrings of various diameters, from the point at infinity to the point at t=0. To visualize this geometrically we can look at this Hawaiian earring where the point t=0 is on the left and the point at infinity is on the right.

[attached image: Hawaiian earring diagram]


Each diameter of the earring is a neural reflex arc, which equates with a compactified timeline. So for example a big diameter might be the reflex arc from your big toe to your primary sensorimotor cortex, that's a long pathway that defines a large interval along the timeline. Whereas, moving leftward, a smaller diameter might be the reflex arc between your eye and your brain, which is a much shorter pathway defining a smaller interval along the timeline. In the limit as dt=>0, the opening and closing of ion channels based on the configuration of local receptor proteins is a very small interval. If we take a section from the point at infinity to the point at t=0, all intervals are represented, and these points can then be expanded into fiber bundles and math can be done on them - not the least of which is the same kind of stochastic optimization that John Hopfield's neural Ising model accomplishes, which he just got the Nobel prize for.
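Since the argument leans on Hopfield's model, here's a bare-bones sketch of a classic binary Hopfield network with asynchronous updates. This is the textbook construction, not anything specific to the earring; the sizes and patterns are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns with the Hebbian outer-product rule.
n, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                    # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

# Start from a noisy copy of pattern 0 and update neurons asynchronously,
# one at a time, which only ever lowers (or keeps) the energy.
s = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
s[flip] *= -1
for _ in range(5 * n):
    i = rng.integers(n)
    s[i] = 1 if W[i] @ s >= 0 else -1

print("recovered pattern 0:", np.array_equal(s, patterns[0]), "energy:", energy(s))
```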

This is how the brain creates "I", the egocentric reference frame.
 
So, for the physicists and mathematicians -

We can work with Riemannian geometry this way.

In the Hawaiian earring pic, each timeline is a vertical line segment centered at the point t=0, which is where all the diameters intersect at the far left of the pic.

When we compactify (each timeline, which is to say each neural reflex arc), we get the apposed points at infinity, on the opposite side of each circle.

The thing to realize is that the actual point t=0 is a singularity; the brain can never know what's happening at "exactly" t=0 because of conduction delays. So what the brain does is optimize all its maps (all its compactified timelines) to model this point. This process constitutes our "experience" of "now".

So when we take a horizontal section through all the points at infinity for each diameter, we can actually drop the point "at" t=0, which mathematically is equivalent to "un-compactifying" the earring.

Done this way, we get a NEW set of timelines, except that they're centered on the points at infinity instead of the point at t=0. Essentially what we're doing is rotating each circle by 180 degrees, and in so doing the point at infinity becomes the point at "now".

Doing this, we can place our camera at t=0 and recompactify, and therefore arrive at the "dual" of the physical world, which I suggest equates with our inner subjective world.

This is one of the clever ways nature does something enormous with very simple parts. We have two sides of the brain; they started as parallel nerve trunks in primitive bilaterian organisms (like earthworms). But here, in this context, they become two different cameras. One side always looks at the other side's t=0. The trick is, the two sides alternate, about 10 times a second (approximately the alpha brainwave frequency). So while one side is handling the physical timeline that's anchored to the real world, the other side is modeling it.

The two sides are both embedded into a higher dimensional space called "the global associative store", which you can envision as a big Hopfield network. From a machine learning standpoint, the processing "along" the timeline consists of a bunch of convolutional layers in series, whereas the processing "between" timelines overlays these layers with a single Hopfield network in an orthogonal axis. This way, all timelines are always embedded into the global associative store. (Who has access to the store at a given point in time is regulated; that's what the support infrastructure is for.)

So what we can do mathematically is build what amounts to a Bloch sphere. If we take our horizontal section and build a fiber bundle from all the vertical line segments along the section, the angles become the lengths of each timeline, which means the window size of the time series. We then use information geometry to build optimizations along each fiber, and then we overlay the optimizations with the embedding space. Ordinarily this would be a horrendous effort from a computational standpoint, but here in the brain it all happens in parallel, in the blink of an eye. It is an asynchronous and continuous process; it is entirely physical and there's nothing mysterious about it. It is a natural and inevitable consequence of the evolution of bilaterian brains, which began from the simple need to coordinate the body movements on the left and right sides of an earthworm.

The asynchronous updates are what make the whole thing work; when you get "enough" neurons the process becomes approximately continuous, which results in the continuity of our consciousness. This model explains "just about everything" about our consciousness. The only thing it doesn't explain is why red is red, but at least it provides a mathematical framework for studying the issue.

From our Riemannian projection we can use interior and exterior derivatives to easily explain the optimization process (the "gradient" in Hopfield's gradient descent), and what we're seeing in the picture in the OP is part of the support infrastructure used to make decisions on the information space. From a physics standpoint the information geometry gives us the Kullback-Leibler divergence between neighboring fibers, and from this we can derive expressions for the entropy between neighboring hoops of the earring. (Because one hoop might give us a different optimization from another, and we want to know - that is, our brains want to know - why they're different.) So a vital part of the control infrastructure is selecting which diameters we want to pay attention to at any given time, and that's why we see behavioral selection neurons being mapped along the timeline axis.
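To make the divergence part concrete, here's a tiny numpy sketch of the Kullback-Leibler divergence between two discrete distributions. Treating normalized activity histograms on two neighboring hoops as those distributions is my own toy illustration, not something taken from the data in the OP.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) for two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Made-up "activity histograms" on two neighboring hoops of the earring:
hoop_small = np.array([5, 9, 20, 40, 20, 9, 5], dtype=float)   # short reflex arc
hoop_large = np.array([2, 6, 15, 50, 15, 6, 2], dtype=float)   # longer reflex arc

print(kl_divergence(hoop_small, hoop_large))   # asymmetric: D(p||q) != D(q||p)
```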

It all makes perfect sense if you're trying to build a brain. In the striatum we have a patchwork matrix of small collections of neurons, each of which receives input from all over the cerebral cortex. Half of this input will be from the timeline or mappings thereof, and the other half will be from the global associative store. The patches that are selected are the ones with the highest rating from the store, meaning that "this" behavior will be allowed to pass through to the spinal cord. For example if a rewarding stimulus is dangled in front of the organism and there are various strategies for obtaining the reward, the selected strategy should be the one with the greatest chance of success. The chances of success can only be determined by matching the information along the timeline with the information in the global associative memory store. For a great example you can think of real time games like ping pong, where the selection between timeline intervals is of primary significance. Another great example is when the ping pong player has to take a pee in the middle of the game :p
 
You can prove that the timeline is in fact projective using the Riemann curvature tensor, and other tools from general relativity. (Ricci tensor, etc).

One neat trick is to deploy a graph neural network horizontally across the timeline, and then use Pauli matrices to connect it into the global system Hamiltonian. The machine learning types are working on this even as we speak.

What you find when you do this is your connection isn't torsion free, nor is it metrically static. So your Levi-Civita connection won't work, and you need to find a new set of Christoffel coefficients.

The thing is, the networks work, even if no one knows why. In Florida they're running this on a hypercube GPU, and here's what they get:

[attached image: output grid, one GNN per qubit]



Each square is a GNN attached to a qubit via a Pauli matrix. (Which basically equates with several hoops of the Hawaiian earring, except that the input comes from an MNIST training set instead of a live timeline).
 
Admirable, Scruffy. A little too technical for Sheeple.

But enlightened ones already know that the Universe is Consciousness.
Nothing more or less.
They have written about it for thousands of years, and those lost ideas have re-emerged as we enter a period of enlightenment and see that the mainstream explanations of the Universe are wrong and are dead ends.
imho
 
As you say, this construction shows there is a deep connection between consciousness, relativity, and quantum mechanics.

The basics are in the concept of "spinors" (pronounced spinners).

Spinors describe things like the polarization of light, and the spins associated with quantum states.

They are geometric entities, but also algebraic and topological. A basic understanding involves the idea of a "double cover" of a topological space.

Algebraically it takes us into the world of Clifford algebras, which are "geometric algebras".

There is an excellent and very accessible introduction to these concepts on YouTube, here:



Video #9 in the series ties a lot of it together.

If you're interested in this, I suggest viewing the other informative videos from eigenchris too, including his series on tensors.

Only basic linear algebra and a little bit of calculus are needed to understand this. Two viewings of the videos should be sufficient to get the basics to sink in.

In the model I'm proposing, a straight line timeline is approximately equivalent to the "screen" in a projection space. The eigenchris videos have lots of pretty pictures to help with understanding.

The idea here is that this construction is more than just an ordinary linear map; it's actually a mutual embedding of the two topological spaces R^n and RP^n.

If you'll give me another week I'll be able to show you most of the algebra for how this works. Just like quantum mechanics, a lot of the magic is in i, the sqrt(-1).
 
To show the basic construction of the projective space underlying consciousness, we need 3 things:

1. A linear timeline. This part is easy, it's just treating brain evoked potentials and premotor potentials as time series. The coordinate system of the timeline is defined relative to T=0, which is "now", the current moment. (Using big T to distinguish it from little t which is physical time). Big T is always centered at T=0, which means it's a window moving through physical time. Your brain (your consciousness) lives at T=0, which is to say, "now". Note that the orientation of big T is opposite from physical time. In big T, "the future" is T > 0, and it's on the right of the timeline, whereas "the past" is T < 0, which is on the left in the timeline. Neural signals start with premotor potentials that begin on the far right (in the relative future, before they're actualized), that then travel leftward to T=0 where they're actualized at "now", then continue traveling leftward to T < 0 as they become part of the past. Roughly speaking you can align your head with the timeline, so the future is in front (in the direction of the eyes and the frontal lobe), and the past is in back, in the direction of the occipital lobe and the back of your head. This way, the point T=0 is in the middle of your head, and the middle of your brain. Signals flow leftward along the timeline, which is opposite from the sense of physical time, because little t started at 0 some 14 billion years ago and increments with every passing moment. It is important to understand that big T is a representation space for neural signals, and the reason we do it this way is so we can define an egocentric reference frame that always lives at T = 0.

2. Compactification. This is what actually generates the projective map. We take the timeline interval (which is maybe 2 seconds wide, however long it takes a premotor potential to turn into an evoked potential) and join the ends, so the line becomes a circle. In doing so we are modeling the "monosynaptic reflex", also called reflex arc or reflex loop. When we join the ends, we are mathematically creating a "point at infinity" which doesn't really exist in the original linear timeline, but we're justified in doing so because 99% of brain wiring is circular (feedback loops like a reflex), very little of it is purely feed-forward. When we compactify the timeline, we can then use the same trick the computer graphics people use, to get a "projection" of what the eye might see when it's watching a movie. We place the eye at the point at infinity, and the linear timeline is our movie screen. When we do this, we get a projective map whose transformations can be represented as homogeneous coordinates and calculated using simple matrix multiplication (exactly the same way the computer graphics people do it).

3. The ability to rotate the circle of our compactified timeline. In other words, we can put the camera anywhere we want, along the circumference of the circle, which is the same as translating the timeline by sliding it left or right. The zero point (T = 0, the "origin") is quite arbitrary, for the purposes of centering the timeline at "now" we DEFINE the origin to be the time of the environmental interface. For example if you have a classic monosynaptic reflex consisting of a muscle, a sensory neuron, and a motor neuron, we DEFINE T = 0 to be in the middle of the muscle where the spindle is, this way we're looking at environmental events (signals) occurring "now". But really, we could just as easily rotate the circle and designate the firing rate of the motoneuron as "now". But there's a good reason to equate "now" with the environmental interface, as we'll see when we build the reciprocal relationship between the egocentric reference frame (which is where big T lives), and the allocentric reference frame (which is where little t lives).

Those three things are the essential prerequisites. Timeline, compactified, and rotated using simple matrix multiplication. When we use homogeneous coordinates we see that each ray traced out from the location of the camera is a fiber in a fiber bundle. Every point along the fiber is mapped to the same point in the base space, dovetailing with the way the homogeneous coordinates work. Every point along the ray is mapped to the same point on the screen. "Fiber" equates with "ray" this way.
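Here's a minimal numpy sketch of the "simple matrix multiplication" part: a timeline point T written in homogeneous coordinates as [T, 1], the point at infinity as [1, 0], one matrix that slides the window, and one matrix that rotates the compactified circle. The conventions (row vectors, the half-second shift, the 90-degree rotation) are my own choices for illustration, not anything from the figures.

```python
import numpy as np

# Homogeneous coordinates on the projective line: a timeline point T becomes
# [T, 1]; the point at infinity is [1, 0].
def homogeneous(T):
    T = np.asarray(T, dtype=float)
    return np.stack([T, np.ones_like(T)], axis=-1)

def dehomogenize(v):
    """Back to an ordinary time coordinate (blows up where the second component is 0)."""
    with np.errstate(divide="ignore"):
        return v[..., 0] / v[..., 1]

# Translating the timeline ("sliding the window") is one matrix multiply...
def translate(dT):
    return np.array([[1.0, dT],
                     [0.0, 1.0]])

# ...and rotating the compactified circle is another.
def rotate(angle):
    return np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])

T = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
v = homogeneous(T)
print(dehomogenize(v @ translate(0.5).T))   # the whole window slides by +0.5
# Rotating "now" by 90 degrees sends it toward the point at infinity
# (numerically this prints an enormous number rather than a literal inf):
print(dehomogenize(homogeneous(0.0) @ rotate(np.pi / 2).T))
```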

There is only a TINY difference between this model and a classic description of a projective map. If we want to be nit picky and technical, this compactified timeline is halfway between a movie screen and a pinhole camera. The difference is in the placement of the camera. In a pinhole model the camera is designated as the origin and the film is on the opposite side of the origin from the environment. In a classic projection model the camera is the origin but the screen is between the camera and the environment. In this timeline model the screen is the origin and the camera is the point at infinity. Actually all three models are the same with just a change in the coordinate system. Which is why the fiber bundle construction is useful, it allows us to transition seamlessly from one coordinate system to another.

So now we simply realize that this entire construction is an "unfolding" (in the topological sense) of the point T = 0. Because EVERYTHING in this model exists at "now", the current physical moment. Mathematically we're looking at "roots of now", which in this model can be represented as unitary transformations on the compactified timeline. To get from T to t you reflect and add a scalar. To move the camera you simply rotate.

Next we'll get into how spinors fit into this picture. So far we have a simple 1-dimensional system where a line segment R1 is mapped to a circle S1. In real life it's a little more complicated because the timeline is actually a 2x2 scaffold of layers of artificial neurons. For now we're keeping it simple by representing the entire information content of a timeline slice as an analog value (a scalar, just a number).
 
What theory of consciousness? That everything - even inanimate objects - is conscious? How is this evidence of that?
 
Einstein and Quantum for starters
All the particle stuff .
Well... this is meaningful for machine learning. One of the big issues with AI is aligning local and global segmentation. For example, if you have 5 objects in a visual field, neural networks are real good at reading their features. But if those same 5 objects suddenly start moving away from each other at high velocity, the network freaks out and gives a lot of wrong answers. The two biggest problems are called "the curse of dimensionality" and "vanishing gradients". They both affect the segmentation issue. The issue has to do with local analysis vs global analysis, or local structure vs global structure. The traditional answer to this has been "lots of layers", but our brains don't work that way. Our visual systems have between 5 and 10 layers, whereas the smallest convolutional ANN with geometric invariance has 32 layers. Some computationally intensive solutions have been implemented, like for instance wavelet transforms, but they're conspicuously non-biological.

The idea is we're trying to extract both local structure and global structure from any input. We want to say "that's a table surrounded by 4 chairs" at the same time we're describing the brocade pattern in the fabric of one of the chairs. So, our brains do something clever with the visual input. We build what amounts to "stick figures", where each stick then serves as a hot spot for the gathering of details. We use a "patchwork basis", which if you study topology is one of the best methods for connecting local smoothness with global smoothness. In geometry the relationship between the tangent space at one point of a manifold and the tangent space at another is called the "connection". In relativity the most famous example is the Levi-Civita connection. It tells us how we have to transform our vectors so they still look the same "over there".

But again, brains don't do it this way, it's too computationally intensive. This is where the Hawaiian earring comes in. It is essentially a "patchwork in time", it allows us to look at "now" in all different scales, simultaneously. It's a recurrent neural network on steroids, without any of the global dependency on sampling time or sample size.
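Here's roughly what I mean by looking at "now" at several scales at once: a hypothetical numpy sketch where each half-width plays the role of one hoop of the earring. The signal, the scales, and the function name are all made up for illustration.

```python
import numpy as np

def nested_windows(signal, center, half_widths):
    """Return slices of `signal` centered on `center`, one per scale.

    A toy "patchwork in time": each half-width is one hoop of the earring,
    all of them centered on the same moment. Sizes are illustrative only.
    """
    out = {}
    for hw in half_widths:
        lo, hi = max(0, center - hw), min(len(signal), center + hw + 1)
        out[hw] = signal[lo:hi]
    return out

rng = np.random.default_rng(1)
signal = rng.standard_normal(1000)            # stand-in for an evoked-potential trace
patches = nested_windows(signal, center=500, half_widths=[2, 8, 32, 128])
for hw, patch in patches.items():
    print(f"half-width {hw:>3}: {patch.size} samples, mean {patch.mean():+.3f}")
```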

Links are forthcoming.
 

This link here has an excellent discussion of the problem space:


(download the PDF)

On page 88 of this link it shows you what a patch basis looks like for a "geodesic convolutional network". (Fig 18)

Such a basis requires a circular source, it won't work in a linear context.

Here is the source paper:


This is what happens when the network looks at a Klein bottle:

[attached image: network output on a Klein bottle]


You'll notice the local connections work fine (up to a point), but the network can't handle the global topology.

We want it to say something like "this is an oddly shaped vase" and ideally it would say "this is an impossible embedding in 3 dimensions".

Instead, it just stops - it goes out to lunch.

Various solutions have been proposed, including applying the highly touted "Graph Neural Networks" to the problem. So far, they don't work. Because, they don't scale.

[attached image]



The general issue is that "basis vectors" have a limited range. This is why for example, in brains, we see "hypercomplex" cells with very large receptive fields. The Hawaiian earring construction takes care of this problem in a neat and intuitive way. What I'm proposing is a scaled basis that looks at many differently sized intervals at the same time - therefore when the network does optimization it can decide for itself which scales are relevant, and it can do so in a self-learned and self-organized manner.

In real brains, we find that the timeline's most powerful computations occur at the ENDS of the interval. For example in a human brain, the hippocampus is at the far left of the timeline (T << 0). This is where we find grid cells and time cells. The next step after the hippocampus teleports you to the prefrontal cortex, on the right side of the timeline. So this connection would occur on a small interval of the compactified circle, near the point at infinity. Therefore when you're making decisions, and you take a slice through the earring, you're getting "views" of the same point at multiple levels of resolution. This structure allows you to control which levels you want and which you don't. When we're looking at a Klein bottle the local patchwork is fine "up to a point", but eventually we want to turn it off because it confuses the optimization. Yet we don't want to lose the fine grained solutions we generated, we just want to place them "in context".

In human brains a lot of this is associated with and controlled by the striosomes in the striatum. They provide switch-on/switch-off control over the earring answers coming in from the cerebral cortex. So for instance in the Klein bottle analysis, V1 (area 17) is generating your orientation vectors while V2 (area 18) is generating your global topology, so the striosomes can simply turn off the input from V1 when it's no longer useful. This is why we see the linear pattern of dopamine circuitry in the OP. In the striatum the D1 cells feed the substantia nigra directly, whereas the D2 cells use an indirect pathway through the striosomes. The D1 cells are asking the rest of the brain "who has input for me?" and turning those areas on. The D2 cells are saying "I don't want that input" and turning those areas off. The result is a selection of the radii along a horizontal slice through the hoops of the earring.
 
Aha - found the image. Here:

[attached image: basis-vector patches for three networks]


The red lines show you what the basis vectors look like in 3 different networks. The one labeled GCNN on the left is a Geodesic Convolutional Neural Network. You can see how these "patches" operate in the projection space. In the one in the middle, labeled ACNN, the patches go all the way through the earring vertically, which brings us very close to a spinor space because it's equating one side of the circle with the other.

Note that this is one way to handle "i" (sqrt -1) in two real dimensions. Compactifying maps imaginary numbers to angles, which was the original idea behind "radial basis vectors" in self organizing neural networks. In terms of a double covering, we have solutions of the form x +/- iy, where the +/- are just the opposing points on the opposite side of the circle. When x is nonzero we're just doing a reflection through the Y axis, which is why the matrix multiplication works. Otherwise, the mapping of i to Y wouldn't work, because real numbers in R^n don't obey the same symmetries as their complex partners.
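For concreteness, here's the standard trick of representing x + iy as a 2x2 real matrix, where i becomes a 90-degree rotation and conjugation (the +/- pair across the circle) becomes a reflection. The numbers are arbitrary; none of this is specific to the earring model.

```python
import numpy as np

def as_matrix(x, y):
    """Represent x + iy as a 2x2 real matrix: i maps to a 90-degree rotation."""
    return np.array([[x, -y],
                     [y,  x]])

i = as_matrix(0, 1)
print(i @ i)                       # equals -I, i.e. i^2 = -1

a = as_matrix(3, 4)                # 3 + 4i
b = as_matrix(1, 2)                # 1 + 2i
print(a @ b)                       # matches (3 + 4i)(1 + 2i) = -5 + 10i

# Conjugation (x + iy -> x - iy) is conjugating by a reflection through the x-axis:
reflect = np.diag([1, -1])
print(reflect @ a @ reflect)       # 3 - 4i in matrix form
```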
 
So then, to develop this further, we make use of the Pauli vectors, which are described very nicely in video 5 of the spinors for beginners series by eigenchris.

Pauli vectors map ordinary 3-d vectors to 2x2 complex matrices (and vice versa). In a compactified setting an ordinary 3-d vector becomes a 4-d homogeneous vector, which again maps to 2x2 matrices. The thing is, the math only works at one radius because the matrices have to be unitary (determinant = 1), and in the Hawaiian earring model this particular radius will light up like a Christmas tree, making it trivially easy to select for the solution.
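A quick numpy sketch of that map, just to show the bookkeeping: an ordinary 3-d vector goes to a 2x2 complex matrix via the Pauli matrices, and you get the vector back from traces. This is the standard construction from the videos, nothing specific to the earring or to any particular radius.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_vector(v):
    """Map an ordinary 3-d vector to a 2x2 complex (Hermitian) matrix."""
    x, y, z = v
    return x * sx + y * sy + z * sz

def recover(M):
    """Invert the map: v_i = (1/2) tr(sigma_i M)."""
    return np.real([np.trace(s @ M) / 2 for s in (sx, sy, sz)])

v = np.array([1.0, 2.0, 3.0])
M = pauli_vector(v)
print(M)                                   # [[ 3, 1-2j], [1+2j, -3]]
print(recover(M))                          # [1. 2. 3.]
print(np.allclose(M, M.conj().T))          # Hermitian, as expected
```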

You can also see how this model allows the winning radius to be communicated downwards using centrifugal synaptic connections.
 
What's your theory of consciousness?
 
So here's what I have so far, I'll share it with you.

I'm working on a "posture and motion control" system to test this hypothesis.

Human voluntary movements in a postural context can be either in phase or antiphase. In phase means the two sides of the body move together, antiphase means they move in opposite directions. For example for posture maintenance, upright balancing is mostly in phase, whereas when you're walking, arm movements are mostly antiphase. In addition to posture maintenance there can be tasks "while" the posture is being maintained, for example if you're asked to vocalize "ba-ba-ba" while you're moving, those vocalizations tend to occur in sync with antiphase movements, whereas they have little or no time relationship to in phase movements.

We want the system to be entirely self organized, and the easiest way to do that is to force negative reinforcement in the "falls down" situation (or the equivalent of pain). Human movements have many singularities, for example wrist flexion or the opening and closing of the hand - and for that reason, movements are organized in terms of "primitives", so for example, the rhythmic opening and closing of the hand is organized neurally by alternation of the two primitives "opening" and "closing".

So our goal space has a hierarchy, the most important goal is maintaining posture "while" the sub-goals of wrist flexion and vocalization are being achieved.

So, this scenario tests our timeline. The robot has to stay upright while it's performing some tasks. The tasks begin as "voluntary motions" at the far right of the timeline, at T >> 0. They begin as the equivalent of neural "premotor potentials". Then, they are instantiated at T=0, and subsequently the results of the movement become "evoked potentials" that travel leftward toward T << 0. Since the movements are continuous, they involve learning "at" (through) the point at infinity. Another way of saying it is the point at infinity is "generative" for voluntary movements.

So we have a "large" neural matrix at infinity, connected globally across the much smaller networks comprising the timeline. In my simple Python model using keras, the points along the timeline are 16x16 matrices, whereas the point at infinity is 256x256.

So far, I have the robot maintaining an upright posture, and it is able to execute simple goal oriented tasks while maintaining posture. What I'm looking for is mathematical evidence of the autonomous development of an egocentric reference frame, and what exactly that looks like, both at the point at infinity and along the timeline.
 
I'm going to patent this idea and then license it to others. My brother is a patent attorney so it should be a snap.
 
Compactification makes the joining of the endpoints a smooth manifold. That means you can do math and logic on it and draw graphs on it. You can't do that across T=0 because it's singular. The only way to fix it is to put the camera at infinity, that way you get a smooth flow you can do math on. Physicists will understand what SINGULAR means. Go ahead and try to extract causality across a singularity lol. Go ahead and try to figure out which causes generated which effects. Here's a big clue: back propagation will not allow you to play your sequences in both directions. You need a projective map to do that.
 
The functional requirement for actualizing this model can be stated in another way, that might make more sense to the machine learning types.

Think of the timeline as a shift register, with a discrete conduction delay between stages. Each stage is an encoded representation of the previous stage. The entire sequence is like a stack turned on its side. For example - in the visual system, the distance between layers might be 20 msec or so. The signal from the retina reaches the LGN 20 msec later, and V1 20 msec after that, etc. Let's say for purposes of illustration there are 5 stages, 20 msec apart.

Across this timeline there is a horizontal Hopfield network with 1000 times the resolution of a layer in the timeline. So, if the retina has 1 million outputs, the orthogonal Hopfield layer has 1 billion neurons firing (and therefore learning) asynchronously. Remember that the timeline is time locked to the stimulus, whereas the Hopfield network is not. What will happen in such an architecture is that the larger network will learn the gradient descents of the smaller networks, in real time as they happen.
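A hedged numpy sketch of that layout, scaled way down: 5 stages of 100 units and a 1,000-unit spanning layer instead of millions and a billion. The fixed random encoders, the random projection, and the plain Hebbian update are stand-ins of my own; the real spanning network would presumably learn something smarter.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stages, stage_size, span_size = 5, 100, 1000   # scaled-down stand-ins

# The "timeline": a shift register of encoded stages, each one a (random,
# fixed) encoding of the previous stage, notionally 20 msec apart.
encoders = [rng.standard_normal((stage_size, stage_size)) / np.sqrt(stage_size)
            for _ in range(n_stages - 1)]

def step(stages, new_input):
    """Shift every stage one slot down the timeline and insert a new frame."""
    return [new_input] + [np.tanh(E @ s) for E, s in zip(encoders, stages[:-1])]

# The spanning layer sees all stages at once and accumulates a Hopfield-style
# Hebbian outer-product weight matrix over their joint state (done here as one
# batch update per time step rather than truly asynchronously).
proj = rng.standard_normal((span_size, n_stages * stage_size)) / np.sqrt(stage_size)
W_span = np.zeros((span_size, span_size))

stages = [np.zeros(stage_size) for _ in range(n_stages)]
for t in range(50):
    stages = step(stages, rng.standard_normal(stage_size))
    span_state = np.sign(proj @ np.concatenate(stages))
    W_span += np.outer(span_state, span_state) / span_size
np.fill_diagonal(W_span, 0)

print("spanning-layer weights:", W_span.shape, float(np.abs(W_span).mean()))
```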

The necessary and sufficient condition is that there are "enough" subdivisions of time in the spanning layer. "Enough" is hard to define; the more there are, the smoother the process becomes. Since the activity is asynchronous, coverage is loosely defined. A smooth process results in approximately continuous coverage, but if there aren't "enough", the process becomes jerky. There is no fixed minimum, as near as I can tell so far. "Enough" can in theory provide stochastic coverage with very few neurons.
 