The machine learning types haven't figured this out yet.
They try to build brain-like machines, but they're too stupid to build them like the brain.
Today's example is feedback. The ML crowd's best take on this is "recurrent neural networks". Which just means sequence learning.
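To be concrete about "just sequence learning": here's a toy vanilla RNN cell in NumPy (my own throwaway names, nobody's production code). The only "feedback" it has is the layer's own hidden state looping back into the same layer at the next time step.

```python
import numpy as np

# A vanilla RNN cell: the only "feedback" is the hidden state h
# looping back into the SAME layer at the next time step.
rng = np.random.default_rng(0)
n_in, n_hid = 8, 16
W_x = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
W_h = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden (the recurrence)
b = np.zeros(n_hid)

def rnn_step(x_t, h_prev):
    # h_t depends on the current input AND this layer's own previous output.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# Run it over a sequence - that's all "recurrent" means here.
h = np.zeros(n_hid)
for x_t in rng.normal(size=(5, n_in)):   # a toy 5-step sequence
    h = rnn_step(x_t, h)
print(h.shape)   # (16,)
```

That's sequence learning. Useful, but it's feedback confined to one layer, nothing more.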
But the human brain is a lot more clever. It uses feedback connections for a higher purpose, and they don't all have to be in the same layer.
A quick look at the wiring diagram of the visual cortex provides a clue.
The outputs from layer 6 feed back to the thalamus, whose inputs arrive in layer 4. Why is that? It's not "recurrent", because there are three other layers between input and feedback.
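Here's a cartoon of that topology, with invented names (a sketch of the wiring idea, not a model of the thalamus): the deepest stage sends a signal back that gates the relay sitting in front of the input stage, so the feedback skips right over the intermediate layers.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32
# Four feedforward stages standing in for the cortical layers,
# plus a "relay" in front of them standing in for the thalamus.
L4, L3, L2, L6 = (rng.normal(scale=0.1, size=(d, d)) for _ in range(4))
W_fb = rng.normal(scale=0.1, size=(d, d))   # "layer 6" -> relay feedback weights

def relay(x, fb):
    # The relay's output is gated by top-down feedback from the deepest stage,
    # not by its own previous output - so this is NOT same-layer recurrence.
    gain = 1.0 / (1.0 + np.exp(-(W_fb @ fb)))   # multiplicative gain in (0, 1)
    return x * gain

def forward(x, fb):
    a4 = np.tanh(L4 @ relay(x, fb))   # "layer 4" receives the gated relay output
    a3 = np.tanh(L3 @ a4)
    a2 = np.tanh(L2 @ a3)
    a6 = np.tanh(L6 @ a2)             # "layer 6" is the source of the feedback
    return a6

# Iterate a few times so layer 6's output shapes what layer 4 sees next.
fb = np.zeros(d)
x = rng.normal(size=d)
for _ in range(3):
    fb = forward(x, fb)
```

The point isn't the math, it's the topology: the feedback's target sits several stages upstream of its source.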
Here's the answer. The feedback tracks which features make up an object.
Here's an example. Let's say there's a house in the visual scene on your retina. First you get pixel-level intensity, contrast, and color. Then you get lines of varying orientations and lengths, "edges", corners, and grids (like windows and doors). Eventually there will be a neuron in your convolutional network that says "aha, that's a house". All of this happens over feedforward connections only.
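If you want that hierarchy spelled out, here's a feedforward-only toy (hand-made filters, nothing trained - just to show the bottom-up pass from pixels to lines to corners to a single "house" unit):

```python
import numpy as np
from scipy.signal import convolve2d

# Hand-made toy filters - just to illustrate the feedforward-only hierarchy:
# pixels -> oriented lines -> corners -> one "house" unit. Not a trained net.
vert = np.array([[-1.0, 2.0, -1.0]] * 3)   # vertical-line detector
horiz = vert.T                             # horizontal-line detector

def relu(x):
    return np.maximum(x, 0.0)

def hierarchy(image):
    # Stage 1: pixel-level contrast (the raw image plays that role here).
    # Stage 2: oriented line/edge maps.
    v = relu(convolve2d(image, vert, mode="valid"))
    h = relu(convolve2d(image, horiz, mode="valid"))
    # Stage 3: "corner" evidence = vertical and horizontal energy co-occurring.
    corners = v * h
    # Stage 4: a single "house" unit pooling over the corner map.
    return corners.sum()

img = np.zeros((12, 12))
img[2:10, 2:10] = 1.0          # a crude box standing in for the house outline
print(hierarchy(img) > 0)      # True: the pure bottom-up pass says "house"
```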
But now you have eye movements. Your gaze stays on the house, but your eyes pick out various details with micro-saccades, moving from one window to another, then to the door, then maybe to the front lawn - you're "studying" the scene in front of you. The house is still a house - it's just that its features have changed position on the retina. As your eye moves, the door is now where the window used to be.
So, as a good computer scientist, are you going to recalculate "house" every time your focus shifts by a few degrees? No! What you're going to do is leave "house" running for as long as you're looking at the house. All you need to know is that "this" door and "these" windows constitute the house - and as long as they're in view, you're still looking at the "house".
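In code, the lazy-but-correct strategy looks roughly like this (every helper name here is made up; classify_scene stands in for the whole expensive feedforward pass):

```python
# Made-up sketch: run the expensive classifier only when there is no live
# object hypothesis; on every later fixation, just check that some bound
# feature is still in view and keep the label alive.

def classify_scene(features):
    # Placeholder for the full feedforward hierarchy (the expensive part).
    return "house" if {"door", "window"} <= set(features) else "unknown"

def run_fixations(fixations):
    label, bound = None, set()
    for features in fixations:                 # features visible at this fixation
        if label is None:
            label = classify_scene(features)   # pay the full cost once
            bound = {"door", "window"} & set(features)   # features bound to the label
        elif not (bound & set(features)):
            label, bound = None, set()         # object left the view: drop it
        yield label

# The door and windows swap retinal positions across fixations, but the
# label "house" just stays up - no per-fixation reclassification.
gaze = [{"window", "door"}, {"door", "window"}, {"window"}, set()]
print(list(run_fixations(gaze)))   # ['house', 'house', 'house', None]
```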
But OTHER parts of your brain need the precise feature locations - for targeting, for example. Your cognitive brain doesn't need them; it just needs to know "house", so it can do logic. (Like maybe "hm, I wonder if it has a pool", or "gee, that's a lovely house, I wonder who lives there".)
So what the feedback connections do is PERSIST the house while letting the exact feature locations keep updating. This function cannot be performed with memory, because then the first feature locations would persist and the updated ones would never be processed.
The feedback connections in the human brain say "this" object consists of "that" set of features, and then track the features as they move around. The process only stops when the object disappears from view.
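Here's a toy sketch of that binding (every name invented; a caricature, not a cortical model): one persistent object record holds its set of features, and the top-down loop keeps updating where each bound feature currently is, until nothing is left in view. A frozen memory snapshot would keep the first positions forever; this keeps the binding and lets the positions move.

```python
# One persistent object record binds "house" to its feature IDs; the
# feedback loop updates each bound feature's CURRENT position every fixation.
house = {
    "label": "house",
    "features": {"door": None, "window_L": None, "window_R": None},  # id -> position
}

def track(obj, detections):
    """Update bound feature positions from this fixation's detections.

    Returns True while the object is still in view, False once none of
    its bound features can be found - which is when tracking stops."""
    visible = False
    for fid in obj["features"]:
        if fid in detections:
            obj["features"][fid] = detections[fid]   # top-down update: new position
            visible = True
    return visible

# Two fixations: the same features, shifted because the eye moved.
fixation_1 = {"door": (10, 4), "window_L": (6, 2), "window_R": (6, 7)}
fixation_2 = {"door": (8, 1), "window_L": (4, -1), "window_R": (4, 4)}

for detections in (fixation_1, fixation_2, {}):
    in_view = track(house, detections)
    print(house["label"], in_view, house["features"])
# "house" persists with updated positions, then drops out of view at the end.
```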
So, going back to that wiring diagram of V1: the feedback tracks which retinal receptors make up each line segment in the visual field. The line segments themselves may flutter around a little with micro-saccades, but by and large their relative positions, lengths, and angles stay the same. You need TOP-DOWN processing to track all this. That's what the "centrifugal" feedback pathways are for.