I've gathered quite a bit of information on the vestibular system myself over the years. I even invented a gyroscope mic/speaker. It was very lo-fi, but when I listened with a headache it did play in my vestibular system and relieved the pain slightly.
Please go on with what you were going to say.....
Very interesting! Gyroscope mic/speaker? Please tell me more!
The vestibule is "somewhat" like the cochlea insofar as it has hair cells being bent in fluid. But every other sense is "partial" when it comes to 3-d orientation. The eyes can't see in back, and olfaction is forward-facing too, although there is diffuse localization in back; proprioception is limited to the body surface and interior, and taste is restricted to the tongue and facial area.
The question is, how does the brain align all these diverse inputs so they're in the right place, with respect to the 3-d egocentric world map (where the organism is always the "center")?
The answer is surprising - it doesn't! Precise topographic alignment would be much too difficult, especially around the boundaries. Instead, a clever "phase coding" scheme is used, and alignment occurs implicitly by correlation.
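To make "alignment by correlation" concrete, here's a toy numpy sketch. This is entirely my illustration with made-up signals, not a model of actual neural phase codes: two channels carry the same underlying rhythm with an unknown offset between them, and instead of aligning them point-by-point (topographically), the offset falls out implicitly by finding the lag that maximizes their correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 100)  # shared underlying rhythm

# Two noisy "sensory channels" carrying the same signal, offset by an
# unknown shift (the misalignment we want to discover).
true_shift = 17
chan_a = signal + 0.05 * rng.standard_normal(t.size)
chan_b = np.roll(signal, true_shift) + 0.05 * rng.standard_normal(t.size)

# No explicit point-to-point mapping: just correlate channel A against
# shifted copies of channel B and take the best-matching lag.
lags = list(range(-50, 51))
corrs = [np.dot(chan_a, np.roll(chan_b, -lag)) for lag in lags]
best_lag = lags[int(np.argmax(corrs))]

print(best_lag)  # should land at (or very near) true_shift
```

The point of the sketch: neither channel needs to "know" the other's layout; the alignment is implicit in the correlation structure.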
The vestibular sense is richly connected with the cerebellum, "because" tiny head, neck, and body movements are essential for navigation. The navigation itself, specifically the building of the egocentric map of the universe, is done in a different brain area, called the hippocampus. But if you look, for example, at the very first visual processing station after the retina (before the information enters the cerebral cortex), called the LGN (lateral geniculate nucleus), you discover it connects with the cerebellum. Why? What do head angle and body orientation have to do with vision?
The answer has to do with how the brain handles "invariances". Invariances are basically symmetries: for example, you can take an object and rotate it 90 degrees; the active areas on the retina will change, but it's still the same object. There's a part of the brain that processes "rotational invariance" for visual objects, there's another that processes scale invariance, and so on.
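Here's a tiny toy version of that idea (just my illustration, not a model of cortex): the raw "retinal" activation pattern changes completely under a 90-degree rotation, but a rotation-invariant feature computed from it does not.

```python
import numpy as np

# A toy 3x3 "retinal image" of an object.
image = np.array([[0, 1, 0],
                  [2, 3, 0],
                  [0, 0, 4]])

rotated = np.rot90(image)  # same object, rotated 90 degrees

# The raw activation patterns differ...
print(np.array_equal(image, rotated))  # False

# ...but a feature like the sorted list of intensities is unchanged
# by any 90-degree rotation.
def invariant(img):
    return np.sort(img.ravel())

print(np.array_equal(invariant(image), invariant(rotated)))  # True
```

A sorted-intensity feature is a deliberately crude invariant; the brain's invariant representations are far richer, but the logic is the same: discard the part of the signal that the symmetry changes, keep the part it doesn't.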
Weird stuff happens when the senses no longer line up. Seasickness is an example. Disorientation, migraines, and autonomic disruption are common. Frequently there is also "ringing in the ears" and such. It turns out the cerebellar pathway and the hippocampal pathway are two completely different systems, and they're joined at the hip, but not in the usual way. The usual way is topographic alignment; this way is phase-coded dynamic optimization.
There's a reason it happens this way, and it has to do with working memory. Consider a mouse navigating a maze. Every time the body, head, or neck angle/position changes, the sensory input has to be "re-mapped" into the egocentric universe. If you can imagine a brain process that says "I am now here, and my goal is over there", and then the head position changes - well, the definition of "over there" changes too. Exactly how much and how far is what the optimization is for.
So in other words, your brain is doing a lot of work every time your orientation changes. But the "egocentric map of the universe" stays the same: everything "out there" is still in the same place with the same configuration; it's just that the "viewing angle" has changed. And when the viewing angle changes, the brain has to figure out: did "I" change position, or did the universe suddenly change?
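The bookkeeping here can be made concrete with a hypothetical sketch (mine, with made-up coordinates): the map stores the goal in world coordinates, which never change; each time the head turns, only the rotation into head-centered coordinates is updated. "Over there" gets a new bearing, not a new place.

```python
import numpy as np

def world_to_ego(point_world, position, heading_rad):
    """Rotate a world-frame point into the head-centered (egocentric) frame."""
    dx = point_world[0] - position[0]
    dy = point_world[1] - position[1]
    c, s = np.cos(-heading_rad), np.sin(-heading_rad)
    return np.array([c * dx - s * dy, s * dx + c * dy])

goal_world = np.array([3.0, 4.0])  # fixed in the map - this never changes
me = np.array([0.0, 0.0])

before = world_to_ego(goal_world, me, 0.0)        # original heading
after = world_to_ego(goal_world, me, np.pi / 2)   # after a 90-degree head turn

print(before)  # [3. 4.]  - goal's bearing at the original heading
print(after)   # [4. -3.] - same goal, same place, new egocentric bearing
```

The stored representation (`goal_world`) stays put; only the cheap viewing transform is recomputed on each orientation change.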
So what the cerebellum does is "notify the other sensory systems in advance" whenever the organism's own orientation changes, and this information is encoded "with" the sensory signal, before it even hits the cerebral cortex. This helps the visual system, for example, interpret parts of objects that are hidden behind other objects. If, in the mouse maze, the reward is "hidden" so the mouse has to reach "around" another object to get to it (kind of "guess" exactly where it is), this behavior can be completely disrupted by disabling the pathway from the cerebellum to the LGN. In other words, the organism is relying on a "representation" of the universe, rather than direct sensory input, to reach the goal. This representation is being continually updated with every small change in head, neck, or body orientation.
Furthermore - there are two vestibular organs. They're in stereo. Why? It turns out each one is optimized separately, and then the results are merged. Even more work! Just like the eyes, eventually the stereo input has to result in a single unified image. Because what is stored in memory is an invariant abstraction, not a topography-specific snapshot.
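One standard way to sketch "optimize each side separately, then merge" is inverse-variance weighting - a textbook cue-combination scheme, which I'm using here purely as an illustration, not as a claim about the actual vestibular circuit. Each organ reports its own estimate with its own reliability, and the merged estimate weights each by 1/variance:

```python
def merge(est_left, var_left, est_right, var_right):
    """Combine two independent estimates by inverse-variance weighting."""
    w_l, w_r = 1.0 / var_left, 1.0 / var_right
    fused = (w_l * est_left + w_r * est_right) / (w_l + w_r)
    fused_var = 1.0 / (w_l + w_r)  # always smaller than either input variance
    return fused, fused_var

# Hypothetical numbers: left organ says 10 (reliable), right says 14 (noisy).
fused, fused_var = merge(10.0, 1.0, 14.0, 3.0)
print(fused)      # ~11.0 - pulled toward the more reliable side
print(fused_var)  # 0.75  - the merged estimate beats either one alone
```

The nice property is the last line: merging two noisy estimates yields something more reliable than either input, which is one reason stereo is worth the extra work.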
Next topic, then, would be eye movements. One type is attracted to sensory features; another type is directed to interesting locations.