Oh - the point is, forward and inverse kinematics are two sides of the same coin, and it's the same coin used in generative AI.
Think of a robot engaged in a motion task. Roughly speaking, "forward kinematics" is predicting where the hand will end up after a command to move the joints, whereas "inverse kinematics" is figuring out which joint movements will carry out the command "place hand here". So like, the robot has to push a button. It has to move its hand into position, extend its finger, and then apply a small amount of force while moving the finger forward. Inverse kinematics is the "how do I get my hand into position" part: maybe I have to rotate the shoulder, bend the elbow, y'know...
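To make the two directions concrete, here's a minimal sketch in Python for a two-link planar arm. The link lengths and target point are made-up example values, and the inverse solver uses the standard closed-form "elbow-down" solution for this toy geometry:

```python
import math

L1, L2 = 1.0, 1.0  # link lengths (arbitrary example values)

def forward(theta1, theta2):
    """Forward kinematics: joint angles -> where the hand ends up."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Inverse kinematics: desired hand position -> joint angles
    (elbow-down solution, assuming the target is reachable)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp for numerical safety
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2
```

Running the inverse solver and then the forward map should land the hand right back on the target, which is exactly the "two sides of the same coin" point.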
In generative AI, the analogous forward and inverse calculations are "classification" and "maximum likelihood estimation": classification runs the model forward from an input to a prediction, while maximum likelihood estimation works backward, adjusting the model's parameters until the observed data becomes likely. The math works the same way as geometry in 3 dimensions, except that in AI there are thousands (maybe millions) of dimensions, and nothing is hard-wired.
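As a rough illustration of the analogy (a toy sketch, not how a real generative model is trained): the forward direction maps parameters plus an input to a prediction, and the inverse direction adjusts the parameters to maximize the likelihood of some observed data. A one-parameter logistic model on made-up 1-D data is enough to show the shape of it:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy made-up data: label tends to be 1 when x is positive
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

def classify(w, x):
    """Forward direction: parameters + input -> probability of class 1."""
    return sigmoid(w * x)

def fit(data, steps=2000, lr=0.1):
    """Inverse direction: gradient ascent on the log-likelihood,
    nudging w until the observed labels become probable."""
    w = 0.0
    for _ in range(steps):
        grad = sum((y - classify(w, x)) * x for x, y in data)
        w += lr * grad
    return w
```

In a real network there are millions of parameters instead of one, but the loop is the same idea: run the model forward, measure how wrong it is, push the parameters the other way.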
If you're familiar with the Traveling Salesman Problem: it's easy to solve on a computer if you have 6 or 7 cities, but with 50 cities finding the single best route takes practically forever, because the number of possible routes explodes. Often, though, you don't need the single best answer, you just need an answer that works "reasonably well". In other words, you eventually reach a point in the optimization process where the remaining errors are so small that one solution is as good as another. This is how motor targeting works. Imagine all the gazillions of different paths you could take to position your arm so your hand is in front of the push button. At some point "it doesn't matter how you got there", the only thing that matters is getting there "reasonably quickly", which on the scale of human motor movements means a half second or so.
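A quick sketch of the "reasonably well" idea: the classic nearest-neighbor heuristic for the TSP doesn't find the best tour, but it finds a decent one almost instantly, even for 50 cities (city coordinates here are random made-up points):

```python
import math
import random

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(points):
    """Greedy heuristic: start at city 0 and always hop to the
    closest unvisited city. Not optimal, but fast and reasonable."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order
```

Comparing the greedy tour against a random ordering of the same 50 cities shows the gap: the heuristic tour is dramatically shorter, and for many purposes that's all you need.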
So for example, saccadic eye movements are inverse kinematics. They're targeted: the brain says "focus here" and the hard wiring in the oculomotor system figures out how to actually execute the movement.
Forward kinematics is often a big part of early learning. For instance, when you first learn to swim, you have to be shown how to do a doggie paddle, so you learn what your movements actually do. But later, someone can just say "swim to the other side of the pool" and your brain will figure out the details of how to do it.
We can't actually test this in a commercial AI because we can't program them. You'd have to test it in your own AI, one you can program yourself. You don't need a GPU; you can get an AI HAT for a Raspberry Pi for under 100 bucks.