ran into my first robot restaurant worker

I'm sorry. Did it hurt?

We have them at the local airport Denny's. Friendly little things, they come right up to your table and go ding ding ding, and then they wait for you to take the last bit of food.
The supermarket Stop & Shop has one large one that roams the aisles looking for spills. It can't clean anything; it just alerts the staff. It's more annoying than anything, and it often blocks traffic when it gets trapped between a floor display and the shopping carts.
 
Saw my first work robot tonight. It was a table? Like a table with wheels: the guy put drinks on it, it delivered the drinks to the patrons, then it returned to the front of the house and parked itself. It was quite amazing.
Did you tip it a squirt of WD40?
 
Saw my first work robot tonight. It was a table? Like a table with wheels: the guy put drinks on it, it delivered the drinks to the patrons, then it returned to the front of the house and parked itself. It was quite amazing.

They still need a waitress to take the order. My robot will learn to associate the face with the voice with the order. The people can then change seats and it'll still deliver the correct order to the right person.

You really should study up on this stuff, Trevor. You'd enjoy it. The paradigm is predictive coding. Everything that comes out of your robot is a prediction. Fig 1 in this paper shows how it works. It's real simple.


Do you know what a Kalman filter is? It's an adaptive filter: it adapts to the signal while keeping its own internal dynamics under control.

AI says:

The Kalman filter is an optimal, recursive algorithm that estimates the state of a linear system from a series of noisy measurements over time. It operates in two steps, predict and update (correct), to minimize mean squared error, making it ideal for tracking, navigation, and control systems.
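The two steps can be sketched in a few lines. This is just a toy sketch for the simplest scalar case (a random-walk state model), not production code; the names `q` and `r` are the usual shorthand for process-noise and measurement-noise variance.

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for a random-walk model.

    zs: noisy measurements, q: process-noise variance,
    r: measurement-noise variance. Returns the state estimates.
    """
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Predict: the model says the state stays put, but uncertainty grows.
        p = p + q
        # Update (correct): blend the prediction with the new measurement.
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # move toward the measurement by the gain
        p = (1 - k) * p          # uncertainty shrinks after the correction
        estimates.append(x)
    return estimates
```

With r large relative to q the filter smooths heavily; with r small it tracks each measurement closely.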

The two-step alternating optimization is exactly the same thing that happens in predictive coding.


Mean squared error is just the accuracy of your prediction. First you predict, then you compare, then you update the next prediction based on the error.
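As a toy sketch (fixed gain, made-up numbers), that predict/compare/update loop is just:

```python
observations = [2.0, 2.4, 1.9, 2.2, 2.1]   # made-up data
prediction, gain = 0.0, 0.5                # fixed gain for illustration
errors = []
for x in observations:
    error = x - prediction          # compare: how wrong was the prediction?
    errors.append(error)
    prediction += gain * error      # update the next prediction from the error

mse = sum(e * e for e in errors) / len(errors)  # mean squared error: the score
```

The errors shrink as the predictions lock onto the data; a Kalman filter does the same thing but adapts the gain instead of fixing it.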

"Memory", is when the update occurs not just with "this" error, but with all the errors that came before it too. This is also the principle of a Volterra series. You have f(t, t-1, ... t-n).

In real time, there's usually not enough memory to hold all the past states. So what you do is use a "Bayesian" prediction algorithm, which lets you update your predictions from just the last prediction and the most recent state (using gamma functions, or gamma distributions).
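The simplest concrete instance of that recursion (a plain running mean, so none of the gamma machinery) updates from only the last estimate and the newest sample, yet ends up equal to the batch average over the whole history:

```python
def recursive_mean(samples):
    """Batch mean computed recursively: no history is stored,
    yet the result equals the mean over all past samples."""
    estimate, n = 0.0, 0
    for x in samples:
        n += 1
        estimate += (x - estimate) / n  # last estimate + newest sample only
    return estimate
```

`recursive_mean([1, 2, 3, 4])` gives the same 2.5 as summing everything and dividing, but with O(1) memory, which is the whole point for real-time use.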

If you can do the updates fast enough, you can always keep your system tuned to the current behavior of the data. The good thing about the Kalman formalism is that it separates measurement noise from process noise. So like, if you have an electrode stuck in a neuron, your signal chain through the preamp and scope is noisy (that's measurement noise), and the neuron itself is noisy (that's process noise).
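That separation shows up directly in the filter's steady-state gain. For the scalar random-walk model there's a standard closed form that depends only on the process-noise variance q (the noisy neuron) versus the measurement-noise variance r (the noisy preamp/scope); sketched here for illustration:

```python
import math

def steady_state_gain(q, r):
    """Steady-state Kalman gain for a scalar random-walk model.

    q: process-noise variance, r: measurement-noise variance."""
    m = (q + math.sqrt(q * q + 4.0 * q * r)) / 2.0  # steady-state prior variance
    return m / (m + r)

# Noisy scope, quiet neuron (r >> q): gain near 0 -> trust the model, smooth hard.
# Quiet scope, noisy neuron (q >> r): gain near 1 -> trust each new measurement.
```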
 
scruffy scruff, are you a non-drinker? I don't have a drink to share for the last post and it's fry-day!! damnit
 