Robotics: dreaming away on the job?

but time doesn't stand still,
and the books you mention are a bit outdated (given the current pace of tech development)
Yes, but I suggested them for that reason: they give insight into what AI was initially concerned with and the kinds of problems that were tackled. Never dismiss a book because of its publication date; Einstein's The Meaning of Relativity is still the best book on the mathematical development of GR, despite being published over 100 years ago.
these days, quantum technology is implemented in fusion power generator test setups.
Well, we were talking about artificial intelligence, not quantum computing.
i think AI and other technology improvements will save us: from famine, from war, from Orwellian societies, etc.
Why do you think that? The human race is destined to self-destruct.
 
Yes, but I suggested them for that reason: they give insight into what AI was initially concerned with and the kinds of problems that were tackled. Never dismiss a book because of its publication date; Einstein's The Meaning of Relativity is still the best book on the mathematical development of GR, despite being published over 100 years ago.

Well, we were talking about artificial intelligence, not quantum computing.
quantum computing increases computation speed by large margins.
that's why it'll be very popular with all western companies producing AI-based military hardware.
Why do you think that? The human race is destined to self-destruct.
nope :) but we'll probably be left with 500 to 2,000 years of near-misses (read: (proxy-)wars) before we as a species can call ourselves 'homo sapiens anglica', via UN intervention troops and a slight re-focusing on protecting everyone's interests, instead of just our own.
 
Every generation places trust in emerging technologies; I did once. But I was naive: I did not understand how power operates in the world. It is malevolent, ruthless, rapacious. Power exploits technologies in order to control and dominate.

Look at those little drones: nice little machines, but now used to drop bombs on hapless soldiers. GPS was created for the military, AI is now used by the military. The human race cannot survive simply on the basis of technology; we must first stop hating and killing.
 
I was born in 1959, grew up immersed in the Apollo 11 moon project. I was making radios and telescopes as a kid, fascinated by the future. The "future" used to mean exciting, utopian, breaking free, saving people from hard work, it was optimistic.

Today the "future" has a different connotation: worry, crime, social breakdown, mental illness, misery. People are changing, seriously changing. They are being morphed into uncritical, unthinking consumers of gadgets.

I could walk into a pub in 1970s Liverpool and within a few minutes be chatting to people at the bar, laughing, discussing stuff. Today I cannot do that: when I walk in I see people sitting quietly staring at their phones, oblivious to others. Even couples in love sit and don't speak to one another, glued to their stupid pointless little phones. That's where technology has taken us.
 
This is a British TV program from 1997 in which an eccentric (Jonathan Meades) discusses how the future looked to us in the 60s and 70s and how that has changed: the optimism has faded, the disillusionment has increased. It was made 25 years ago, and everything he says is truer now than it was then.

It's worth watching, if nothing else at least you'll get some insights into British eccentrics!

 
Feelings don't really independently exist at all; they are a construct of our introspective self-awareness.

Otherwise it is just stimulus/response, like any plant or animal.

The concept of feelings derives from our ability to conceptualize a physical sensation and to inspect it.
 
This was a descriptive approach, to aid with intuition. We can be more formal if you wish.

Hopfield networks are fully analog computers; they are capable of universal computation in the Turing sense.

To show this, we can frame the network dynamics in terms of a Lyapunov function that is guaranteed to converge monotonically to an attractor state.

Part of the magic is an S-shaped transfer function in every neuron. To build a universal computer, we simply change the shape of the S.

Doing so gives us an associative memory with a capacity that scales exponentially with the number of neurons.

The Hopfield network has Glauber dynamics, which describe state changes in an Ising spin system at zero temperature. The difference in a Boltzmann machine is that it operates at nonzero temperature; to get it to converge, you lower the temperature over time.

You can describe the entire network and its evolution in one equation, which uses a Legendre transform of the Hamiltonian/Lagrangian.
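The energy-function argument above can be made concrete. Below is a minimal Hopfield network sketch in Python (NumPy); the network size, number of patterns, and update count are arbitrary illustrative choices, not from the post. It stores a few +/-1 patterns with the Hebbian outer-product rule and shows that asynchronous sign updates never increase the Lyapunov energy E = -1/2 s^T W s.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hebbian(patterns):
    """Store +/-1 patterns with the Hebbian outer-product rule (zero diagonal)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Lyapunov function E = -1/2 s^T W s; asynchronous sign updates can only
    keep it constant or lower it, so the state falls into an attractor."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=500):
    """Asynchronous dynamics: update one randomly chosen unit at a time."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

patterns = rng.choice([-1, 1], size=(3, 64))
W = train_hebbian(patterns)
noisy = patterns[0].copy()
noisy[:10] *= -1                               # corrupt 10 of 64 bits
recovered = recall(W, noisy)
print(energy(W, noisy), energy(W, recovered))  # energy never increases
```

Monotone convergence depends on the zero diagonal and one-unit-at-a-time updates; with synchronous updates the energy argument no longer holds.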


Even a basic search of the literature or web undermines your dogmatic claim.

No, it doesn't.

There are a lot of ignorant fools on the internet. I'm not one of them.


What do you know about them that I don't?

I have an Ivy League degree in neuroscience, for starters.

Anyone, perhaps even you if you were to try, can easily look up the definition of random number.

I also have an advanced degree in mathematics.


Sigh.

I suggest you open a book on basic decorum and communication skills. Your obvious need to descend into insults and condescension serves no purpose other than to deflect from the topic, perhaps that's why you do it.

I'm stating a simple fact.

You are completely ignorant about neural networks.

Go crack a book and learn something, instead of flaunting your ignorance.

I'm done with you. I gave you enough to get started.
 
i wonder: is this the only way to build a neural net?
There are a gazillion useful behaviors.

You can surf through Google's Deep Mind site for some ideas.

There are convolutional networks, geometric networks, topological networks, networks restricted by physics or other hard-coded "laws", attentive networks, self-attentive networks, transformers, multi-headed transformers, ... on and on.

The science is quite young, all things considered. It was set back about 20 years by that idiot Marvin Minsky, who was a blowhard negativist.

One of the promising areas right now is "causality", networks that learn outside of the traditional models of likelihood.

Another promising area is non-Abelian computing, which means non-commutative. It applies to simple matrix math, but the deeper meaning goes well beyond that.

The model I described in the other thread is brand new, based on topological compactification. It is a way of "lifting" representations into higher dimensions, operating on them, and then projecting them back down.

Another new thing is phase coding, no one's really studied that in any great detail yet.
 
The simplest neural network is a single two-input node, with an activation function perhaps like this:

[attached image: example activation function plot]


Now please explain how that activation function is non-deterministic?
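For concreteness, here is a minimal sketch in Python of the single two-input node described above, with a sigmoid activation. The weights and bias are illustrative choices (not from the post) that happen to make the node approximate logical AND; the point is that the same inputs always produce the same output.

```python
import math

def neuron(x1, x2, w1=1.0, w2=1.0, b=-1.5):
    """A single two-input node: weighted sum followed by a sigmoid activation."""
    z = w1 * x1 + w2 * x2 + b
    return 1.0 / (1.0 + math.exp(-z))   # deterministic: same inputs, same output

# With these illustrative weights the node approximates logical AND:
print(round(neuron(0, 0)), round(neuron(1, 0)), round(neuron(1, 1)))  # 0 0 1
```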
 

Okay.

The activation function you're suggesting first appeared with Rosenblatt's Perceptron, in the late 1950s. Before that there were only McCulloch-Pitts neurons, which used a bare threshold rather than a tunable activation function.

The function you're suggesting is an f(x), which as you say, is deterministic. The reason there's an activation function at all, is to make the neuron's response nonlinear. Because, linear classification is rather boring.

Sometime around 1972, Shun-Ichi Amari introduced the idea of a stochastic activation function. Instead of f(x), it is P(f(x)), where P is a probability. He did this to make the network "more biological", to align more closely with real behavior, because this is the way ion channels behave in real life: there is only a "probability" that they will let the ion through.

The genius of Hopfield in the early '80s was the realization that population behavior is much more important than single-neuron behavior. Basically, neurons classify patterns into subspaces of the total state space. If the neurons are linear, so are the subspaces. If the neurons are probabilistic, so are the subspaces.

Hopfield DELIBERATELY used a static transfer function to demonstrate the importance of asynchronous updating. His original network is highly non-biological, and simple to the point of ridiculousness, yet he was able to derive an enormous amount of computational power from it, so much so that the entire world took notice. Only a year later, physicists at UPenn had built an optical version of his network that was able to solve Traveling Salesman problems in under a second. This was astounding not only to physicists, but to mathematicians, biologists, and computer scientists alike.

Today, such power is commonplace. Amari, who introduced the stochastic transfer function, is now considered the godfather of Information Geometry. Whereas a Hopfield network can memorize and approximate any continuous function, an Amari network can memorize and approximate any probability distribution. That's quite an achievement, if you're familiar with Brownian motion and Wiener processes.

Today there is focus on extracting the Volterra kernels from discontinuous nonlinear time series. This would have been completely impossible with deterministic networks. In real life this translates into "how to confuse an AI". If you say "I take my coffee with cream and sugar", most networks can figure out what you're saying. But if you say "I take my coffee with cream and.... (delay).... (cough, stutter).... um.... dog", most networks will become horribly confused.

However an Amari network will gracefully respond with "that makes no sense, would you mind repeating that please". It has enough smarts to recognize that the probability of drinking a dog is near zero, whereas a linear classifier will try to show you a picture of a white dog.
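The deterministic-vs-stochastic contrast described above can be sketched in a few lines. This is only an illustration of the P(f(x)) idea, assuming a sigmoid f(x) and a Bernoulli firing rule; the specific functions are my choices, not taken from Amari's papers.

```python
import math, random

random.seed(1)

def f(x):
    """Deterministic sigmoid: the classic f(x) activation."""
    return 1.0 / (1.0 + math.exp(-x))

def stochastic_unit(x):
    """Fire (1) with probability f(x): the P(f(x)) idea -- like an ion
    channel that only *probably* opens for a given stimulus."""
    return 1 if random.random() < f(x) else 0

x = 0.5
print(f(x))                                # always the same value
samples = [stochastic_unit(x) for _ in range(10000)]
print(sum(samples) / len(samples))         # fluctuates around f(0.5) ~ 0.62
```

The deterministic unit maps input to output exactly; the stochastic one only reproduces f(x) as a long-run firing rate.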

Take a quick look at these slides for an intuitive understanding:

 
To orient to these slides, you should know what a "Fisher information metric" is.


Pay particular attention to the slide about "belief propagation". Beliefs are high level constructs learned from experience.

With perceptrons (f(x)), the only way to change a belief was to brute-force "more than" the original amount of learning. (Basically overwriting the original belief by overwhelming it with contradictory input).

What these slides are showing you, is a formal way of measuring the "distance between beliefs", and then moving from one belief to another along a geodesic.

For humans, such reasoning often begins with "chances are good that". For example, if it's raining, "chances are good that" it will also be humid. How do you get the AI to change that belief? You have to make it understand the concept of a "dry rain", which is very difficult with sparse examples.
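One concrete way to put a number on the "distance between beliefs" mentioned above is the KL divergence, with the Fisher information supplying the local metric on the belief manifold. The sketch below uses Bernoulli beliefs (e.g. P(humid | rain)) purely as an illustration; the rain/humidity numbers are made up.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between two Bernoulli 'beliefs' with rates p and q."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def fisher_bernoulli(p):
    """Fisher information of Bernoulli(p): the local 'ruler' used to measure
    distances along the one-parameter belief manifold."""
    return 1.0 / (p * (1 - p))

# Old belief: P(humid | rain) = 0.9; revised belief after seeing 'dry rain': 0.5
print(kl_bernoulli(0.9, 0.5))   # how 'far' the old belief is from the new one
print(fisher_bernoulli(0.5))    # 4.0: the metric is flattest at p = 0.5
```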
 
Oh - so about 2/3 of the way down you'll see a section on multi layer Perceptrons. That's the f(x) you're talking about.

Amari is showing you what happens when you try to represent "complex beliefs" in such systems. Basically an f(x) can't handle anything north of a smooth convex surface. But every day, humans deal with twisted beliefs, a great example being the relationships between stock prices in various industries.

To learn such relationships, a network "back propagates" outcomes against predictions (this is what "gradient descent" is about). If the outcomes are sufficiently divergent from the current beliefs, static classifiers fail completely, because they can't handle geometries that require oddball probability surfaces. You'll notice the S-shaped surface with a twist, which cannot be handled at all with a simple polynomial.
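A toy version of the outcomes-against-predictions loop described here: a single logistic unit trained by stochastic gradient descent. The task (logical AND), learning rate, and epoch count are arbitrary illustrative choices, not anything from the thread.

```python
import math

# One logistic unit learning AND by gradient descent on cross-entropy loss.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = b = 0.0
lr = 0.5

def predict(x1, x2):
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))

for _ in range(2000):
    for (x1, x2), y in data:
        p = predict(x1, x2)
        err = p - y                 # divergence of outcome from current belief
        w1 -= lr * err * x1         # gradient of the loss w.r.t. each weight
        w2 -= lr * err * x2
        b  -= lr * err

print([round(predict(x1, x2)) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

AND is linearly separable, so this single unit succeeds; the twisted, non-convex belief surfaces discussed above are exactly the cases where one such unit cannot.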
 
If you're following this, you understand that everything in a neural network is learned by CORRELATION. The simplest and most common learning rule ("Hebbian") states that the synapse is reinforced whenever the input and output are active at the same time.

This is where asynchronous updates come in. If all your neurons are updating at the same time, it doesn't much matter what your transfer function is, for individual neurons. Because the correlations will still behave the same way. In that case you're taking snapshot f(t) and correlating it with snapshot f(t-1).

A "belief" in simple form is an expectation based on input. The belief is learned from prior correlations. To discern between two inputs your subspaces have to be "orthogonal" in some way. So two methods are useful: 1) update "much faster than" the input changes, and 2) update "piecemeal" so you can detect bit-wise correlations.

Real input is never static, things move from one frame to the next. Hence "invariances" are important. You want to correlate on the invariances, which need to be somehow extracted and mapped into coordinates that make the orthogonalities visible and accessible. Individual neurons do this by "firing", which is an asynchronous activity that causes a microscopic correlation to occur. Enough of these, and you have a macroscopic approximation, which ends up being a "belief".
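The Hebbian rule stated above ("the synapse is reinforced whenever the input and output are active at the same time") is essentially one line of code. This is a generic sketch, not a specific model from the thread; the layer sizes and learning rate are arbitrary.

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.1):
    """Reinforce each synapse W[i, j] when presynaptic unit j and
    postsynaptic unit i are active at the same time (their correlation)."""
    return W + lr * np.outer(post, pre)

W = np.zeros((2, 3))
pre  = np.array([1, 0, 1])    # input activity
post = np.array([1, 0])       # output activity
W = hebbian_update(W, pre, post)
print(W)   # synapses strengthen only where pre AND post fired together
```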
 
Test of electronics knowledge:

[attached image: op-amp circuit]


What is the output at op amp 3?
 

Thank you for that summary; it was informative and I acknowledge your expertise. My only difficulty interacting with you is the ease with which you slip into ad hominem. One should know one's audience and not speak to them disparagingly if they appear less expert than oneself.

Insulting a person with glib suggestions that they should read this or that book, get an education, and all the other frankly rude comments will not help get your ideas across, and in my case can disincline me from interacting at all - FYI.
 
Thank you for that summary; it was informative and I acknowledge your expertise. My only difficulty interacting with you is the ease with which you slip into ad hominem. One should know one's audience and not speak to them disparagingly if they appear less expert than oneself.

You made an untrue claim.

Insulting a person with glib suggestions that they should read this or that book, get an education, and all the other frankly rude comments will not help get your ideas across, and in my case can disincline me from interacting at all - FYI.

I am not responsible for hard heads.

The same thing goes on with evolution, there's a bunch of people making ignorant claims (like "impossibility").

I'm a scientist, and I learned from scientists. When I was young I thought I knew something, then I got called ignorant by people who really knew something. How I grew a brain is by studying more and harder than they did. And spending countless hours in the lab.

I consider claims of ignorance to be a challenge, rather than an insult. If I speak on a topic it indicates interest, and if I'm interested I want to know. Sometimes if I'm really interested I want to know enough to converse with the experts, which means state of the art.

Sometimes, experts turn out to be charlatans. The way to discern, is to know a lot. When talking with people who make untrue claims, I like to cut through the BS, otherwise we could be arguing about trivialities for days.

So now you know a little more than you did before. There is a model of neurons as oscillators, which is why I posted the circuit. It's called the FitzHugh-Nagumo model. It ties in with the Kuramoto model of coupled oscillators from physics. It suggests the same kinds of hidden attractors that Chua discovered. They look something like this in the phase space:

[attached image: phase-space plot of hidden attractors]
 
You made an untrue claim.
There, that's better, that's a much better way to say it, I think you've learned something.
I am not responsible for hard heads.
I don't recall saying you were, but you are responsible for the words you choose yes?
The same thing goes on with evolution, there's a bunch of people making ignorant claims (like "impossibility").
Right, but you'd need to tell me exactly what the claim is/was for me to comment on that.
I'm a scientist, and I learned from scientists. When I was young I thought I knew something, then I got called ignorant by people who really knew something. How I grew a brain is by studying more and harder than they did. And spending countless hours in the lab.
Yes self improvement can be an effort.
I consider claims of ignorance to be a challenge, rather than an insult.
Saying "you are wrong" is not an insult, but saying "Go crack a book and learn something, instead of flaunting your ignorance" is regarded by most educated people as condescending and tactless - are you possibly ignorant of these words and their meaning? Have you ever been advised to "never alienate one's audience"?
If I speak on a topic it indicates interest, and if I'm interested I want to know. Sometimes if I'm really interested I want to know enough to converse with the experts, which means state of the art.

Sometimes, experts turn out to be charlatans. The way to discern, is to know a lot. When talking with people who make untrue claims, I like to cut through the BS, otherwise we could be arguing about trivialities for days.
An untrue claim is an error of reasoning, not of personality. Poorly chosen words can do great harm to some minds; it's always best to be objective and impersonal. If something is worth saying, it's worth saying well, in such a way that the listener is encouraged rather than discouraged.
So now you know a little more than you did before.
A little, I learn new things every day, or I hope I do.
There is a model of neurons as oscillators, which is why I posted the circuit. It's called the FitzHugh-Nagumo model. It ties in with the Kuramoto model of coupled oscillators from physics. It suggests the same kinds of hidden attractors that Chua discovered. They look something like this in the phase space:

OK, but what does this imply for human consciousness? How does a conscious system differ from an unconscious one? Are there tests we can perform on a system to determine if it is conscious or not? We have the Turing test (I suppose); is there anything else?

When you say "there's a model of neurons as oscillators", do you mean they exhibit periodicity? As you know, a periodic function can always be decomposed into a sum of trig functions (a Fourier series), so does the neuron model you refer to have that behavior?
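The claim about periodic functions is Fourier's theorem in action. A quick numerical illustration (NumPy, with made-up component frequencies) shows the FFT recovering exactly which trig components make up a periodic signal:

```python
import numpy as np

# A periodic signal built from two sines: 5 Hz and 12 Hz (illustrative values).
t = np.linspace(0, 1, 1000, endpoint=False)          # 1 second, 1000 samples
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

spectrum = np.abs(np.fft.rfft(signal))               # magnitude per frequency bin
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])       # bin -> frequency in Hz
peaks = freqs[spectrum > 100]                        # keep dominant components
print(peaks)                                         # [ 5. 12.]
```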
 