Robotics: dreaming away on the job?

This is the next stage in robotics.

This device is called a memristor.


Here are its characteristics:
[image: memristor characteristics]


Chinese researchers just came up with an optical version of one of these. It consumes virtually zero power.

What this is, is a resistor with non-volatile memory. In other words, an artificial synapse.
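As a concrete illustration, here is a sketch of the linear-drift memristor model popularized by HP Labs. The parameter values are illustrative only, not data for any real device:

```python
import numpy as np

# Illustrative parameters for a linear-drift memristor model
# (after the HP Labs formulation); not measured device values.
R_ON, R_OFF = 100.0, 16000.0   # low/high resistance states (ohms)
D = 10e-9                      # device thickness (m)
MU_V = 1e-14                   # dopant mobility (m^2 s^-1 V^-1)

def simulate(voltage, dt, w0=0.1):
    """Integrate the internal state variable w under an applied voltage waveform."""
    w = w0 * D
    currents = []
    for v in voltage:
        r = R_ON * (w / D) + R_OFF * (1 - w / D)   # total resistance
        i = v / r
        w += MU_V * (R_ON / D) * i * dt            # state drift: the "memory"
        w = min(max(w, 0.0), D)                    # clamp to device bounds
        currents.append(i)
    return np.array(currents)

# A sinusoidal drive produces the pinched hysteresis loop that is the
# signature of a memristor: resistance depends on the history of current.
t = np.linspace(0, 2, 2000)
i = simulate(np.sin(2 * np.pi * t), dt=1e-3)
```

The key point the sketch shows is that the state variable `w` persists between voltage pulses, which is what "non-volatile" means here.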


Here's what a neural network made of memristors looks like:

[image: a neural network built from memristors]


But here's the really interesting part:

Memristors can be made from proteins.


Variations of ferritin seem to work exceedingly well.

This discovery has led to an explosion of research in neuroscience, to understand how long term memory is retained at the synaptic level.

A synapse turns out to be very complicated. Little bits of RNA travel from the cell body to the synapse via microtubules, where they make proteins. These proteins insert themselves into the cell membrane on either side of the synapse, then travel into the middle of the synapse in an activity-dependent manner. There, they bind with receptors to regulate the strength and effectiveness of synaptic transmission.

The difference between natural and artificial memristors is that the natural ones are stochastic (probabilistic). What is euphemistically called the "opening and closing of ion channels" depends on the instantaneous configuration of a polymer (usually one with four or six subunits). This is where asynchronous timing is introduced and propagated. An ion can sit "near" a channel for a long time before it's admitted through. This is one reason why the field strength across a nerve membrane is so strong: it can reach millions of volts per meter, enough to cause complete dielectric breakdown in the membrane.
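The stochastic gating described here can be caricatured with a two-state (closed/open) Markov channel updated by Monte Carlo. The rates below are illustrative, not measured values:

```python
import random

# Minimal two-state (closed/open) ion-channel Monte Carlo sketch.
K_OPEN, K_CLOSE = 0.2, 0.5   # illustrative transition probabilities per step

def simulate_channel(steps, seed=0):
    rng = random.Random(seed)
    open_state = False
    trace = []
    for _ in range(steps):
        if open_state:
            if rng.random() < K_CLOSE:
                open_state = False        # channel closes stochastically
        elif rng.random() < K_OPEN:
            open_state = True             # channel opens stochastically
        trace.append(open_state)
    return trace

trace = simulate_channel(10000)
p_open = sum(trace) / len(trace)   # empirical open probability
# Steady-state theory predicts K_OPEN / (K_OPEN + K_CLOSE), about 0.29 here,
# but the timing of each individual opening is random - that randomness is
# what "asynchronous timing" means at the molecular level.
```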

Asynchronous timing is very hard to achieve without biomolecules. Fifty years ago the answer was a super-fast clock. Today, it's a quantum process. Tomorrow it will be mixing biomolecules with the regular lattice you see in the pic above. Picosecond timing is not required; all that's needed is for the synapses to update at slightly different times. In computer models this means using the Monte Carlo method, which is time-consuming and computationally expensive. But memristors made of biomolecules can do it in real time, just like the human brain.
 
Again: neural networks are not digital.
They are deterministic.
Otay... If you say so... :p

A neural network can do it too.
Do what exactly? If the neural network can be simulated by computer then by definition its behavior can be described algorithmically, as a Turing machine.
AI has already solved problems that vexed mathematicians for centuries.
AI (as defined today, as opposed to, say, in the 60s and 70s) is good at analysis of large data, image recognition and the like. But that's not really intelligence, certainly not as we see it in humans. Mathematicians do not achieve their advances algorithmically. This point is crystallized in the Gödel theorems. He was able to show that there are true statements (in some domain, some language) that cannot be proven using just the axioms of that domain; yet they are true, and he (his mind) was able to discern that. True but unprovable, and unprovable means there's no algorithm for determining the truth.

Take a look at the famous Halting Problem for a similar thing.
You are ignorant.

I told you already, it's STOCHASTIC optimization.
What is?
By definition, that is the exact opposite of deterministic.
You seem to be conflating determinism and predictability.
I just spent half a thread showing you why asynchronicity is important. Maybe you should read it.
I'm a software engineer, former electronics engineer and programming language designer, why is "asynchronicity" relevant to the question of consciousness?
 
and predicting the future is only possible for futurists (Iron Man) and the Kwisatz Haderach (Dune), and even then: very often inaccurate and next to impossible.
There's an interesting article out there somewhere about who was the best at predicting the future between Isaac Asimov and Arthur C. Clarke, I can't recall who they said was best though.
 
They are deterministic.

No, they're not.

You should stop flaunting your ignorance.

Do what exactly? If the neural network can be simulated by computer then by definition its behavior can be described algorithmically, as a Turing machine.

Sigh.

You obviously don't know the first thing about random numbers.


AI (as defined today, as opposed to, say, in the 60s and 70s) is good at analysis of large data, image recognition and the like. But that's not really intelligence, certainly not as we see it in humans. Mathematicians do not achieve their advances algorithmically. This point is crystallized in the Gödel theorems. He was able to show that there are true statements (in some domain, some language) that cannot be proven using just the axioms of that domain; yet they are true, and he (his mind) was able to discern that. True but unprovable, and unprovable means there's no algorithm for determining the truth.

Take a look at the famous Halting Problem for a similar thing.

What is?

You seem to be conflating determinism and predictability.

I'm not conflating anything. I do this stuff every day. I publish papers about it. You obviously have no idea what you're talking about.


I'm a software engineer, former electronics engineer and programming language designer, why is "asynchronicity" relevant to the question of consciousness?

Double sigh.

Crack a book.

Get back to me when you know the answer.
 
Here, I'd like to help you understand this.

A neural network consists of neurons and synapses. Neurons have activities and synapses have strengths ("weights").

If all your neurons get updated at the same time (which means, the computer loops through the entire network calculating the "next" state), then you have a synchronous network.

A synchronous network is just a glorified Perceptron, no matter how complex it is. Synchronous networks suffer from artifactual oscillations that show up in the dynamics.

If however you set it up so each neuron updates at a different time, you have an asynchronous network. In software you have to use the Monte Carlo method to do this, which is very slow: the computer randomly selects which neuron gets updated next. This is how John Hopfield did it in his 1982 PNAS paper. That paper is the foundation of modern networks. Hopfield is a physicist; he was at Caltech when he wrote it.
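The update scheme just described can be sketched in a few lines. This is a toy Hopfield network with a single stored pattern; the network size, seed, and step count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one pattern via the outer-product (Hebbian) rule.
pattern = rng.choice([-1, 1], size=32)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)            # no self-connections

# Asynchronous (Monte Carlo) dynamics: one randomly chosen neuron
# updates per step, as in Hopfield's 1982 scheme.
state = pattern * rng.choice([-1, 1], size=32, p=[0.2, 0.8])  # ~20% of bits flipped
for _ in range(2000):
    i = rng.integers(32)            # random neuron to update next
    state[i] = 1 if W[i] @ state >= 0 else -1

# The noisy state relaxes back to the stored pattern: content-addressable memory.
```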

Asynchronous networks no longer suffer from these computational artifacts. Hopfield showed that there is a Hamiltonian associated with the total energy in the network at any given time. The physicist Terry Sejnowski, working with Geoffrey Hinton, subsequently showed the formal similarity to molecules colliding in a gas cloud; they called their network a Boltzmann machine. Sejnowski's later NETtalk network was among the first artificial neural networks to learn to pronounce written text on its own.

These asynchronous networks are stochastic, the opposite of deterministic. They are self-organizing; they don't require a teacher, only feedback (they are "recurrent"). How they work is: the synaptic matrix determines an energy surface, which can be calculated from the Hamiltonian. In turn, the neural activity levels act like a ball bouncing around on the energy surface. The randomness in the update times ensures that the ball never bounces the same way twice. Overall the ball will always seek lower energy (a "local minimum"), but the path it takes is completely random.
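The energy-surface picture can be checked directly: with symmetric weights and a zero diagonal, each asynchronous update can only lower (or keep) the Hamiltonian. A small sketch with random illustrative weights:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(W, s):
    # Hopfield's Hamiltonian (zero thresholds): E = -1/2 * s^T W s
    return -0.5 * s @ W @ s

n = 16
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                   # symmetric couplings
np.fill_diagonal(W, 0.0)            # no self-connections

s = rng.choice([-1, 1], size=n)
energies = [energy(W, s)]
for _ in range(500):
    i = rng.integers(n)             # random asynchronous update
    s[i] = 1 if W[i] @ s >= 0 else -1
    energies.append(energy(W, s))
# The recorded trajectory rolls monotonically downhill on the energy surface,
# even though the path (which neuron moves when) is random.
```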

Depending on how you set up the network and how much energy you give the ball, you can make the ball bounce harder or softer; this way you can adjust whether the ball settles in local minima or bounces out of them. It is "almost impossible" to guarantee that the ball will always find the global minimum, especially if it's associated with a boundary. Sometimes, if the energy surface is relatively flat, the ball won't find a minimum at all; it'll just keep bouncing around.

You see? Non-deterministic. The bouncing is due to the randomness. You're dealing with stochastic kinetics, just like in a gas cloud. The Boltzmann dynamics say the cloud will eventually reach the lowest energy state, "most" of the time. When it doesn't, chances are good you're stuck in a local minimum and don't have enough energy to bounce out of it. The idea is to tune the network to achieve useful behavior. Useful has many meanings, depending on the context. Sometimes you want your ball to bounce; other times you just want it to roll smoothly along the surface.
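One standard way to tune how hard the ball bounces is simulated annealing with stochastic ("Boltzmann") units: a temperature parameter scales the randomness, and cooling it lets the state settle. The cooling schedule and toy problem below are illustrative choices, not part of any specific published network:

```python
import math
import random

rng = random.Random(42)

def glauber_step(W, s, T):
    """One stochastic update: neuron i turns on with a logistic
    probability of its input, scaled by 1/T. High T = hard bounces."""
    i = rng.randrange(len(s))
    h = sum(W[i][j] * s[j] for j in range(len(s)))
    x = max(min(-2.0 * h / T, 50.0), -50.0)   # clamp to avoid overflow
    s[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(x)) else -1

def anneal(W, s, t_start=5.0, t_end=0.05, steps=5000):
    """Cool geometrically: hot early (escapes shallow minima),
    cold late (the ball settles into a deep one)."""
    for k in range(steps):
        T = t_start * (t_end / t_start) ** (k / steps)
        glauber_step(W, s, T)
    return s

# Toy problem: 10 neurons, all couplings +1 (a tiny ferromagnet).
# The two global minima are all +1 and all -1; annealing lands in one.
n = 10
W = [[0.0 if i == j else 1.0 for j in range(n)] for i in range(n)]
s_final = anneal(W, [rng.choice([-1, 1]) for _ in range(n)])
```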

Modern networks have ways to "kick" the ball, to get it out of a local minimum. This can be done with control systems that resemble the behavior of serotonin and dopamine in the brain. They are "extra inputs" with diffuse connectivity, that impart extra energy to either the whole network or just a small region of it. The idea is to kick the ball without changing the shape of the energy surface.
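The "kick" idea can be sketched as a perturbation applied to one region of the state vector while the weights, and hence the shape of the energy surface, are left untouched. The function name and parameters here are hypothetical, just to make the mechanism concrete:

```python
import random

rng = random.Random(7)

def kick(s, region, flip_prob=0.5):
    """Inject energy into a region of the network by randomly re-flipping
    some of its neurons. The weight matrix is never touched, so the
    energy surface keeps its shape - only the ball gains energy."""
    for i in region:
        if rng.random() < flip_prob:
            s[i] = -s[i]
    return s

s = [1] * 12
kick(s, region=range(4))   # perturb only the first four neurons
# Neurons outside the region are untouched, mimicking a local
# (rather than global) neuromodulatory signal.
```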

And SOMETIMES, the ball will find an "orbit", which is not "at" a minimum but rather "around" it. Such orbits can be stable or not, some will decay and others will amplify.

[image attachment]
 
i can't claim to actually understand anything that i just read in the previous posts,
but i'm giving it an A+ anyways, for effort and just in case you're right :D
 
Here, I'd like to help you understand this.
i wonder; is this the only way to build a neural net?
 
No, they're not.


Even a basic search of the literature or web undermines your dogmatic claim.
You should stop flaunting your ignorance.

Sigh.

You obviously don't know the first thing about random numbers.
What do you know about them that I don't? Anyone, perhaps even you if you were to try, can easily look up the definition of a random number.
I'm not conflating anything. I do this stuff every day. I publish papers about it. You obviously have no idea what you're talking about.
Sigh.
Double sigh.

Crack a book.

Get back to me when you know the answer.
I suggest you open a book on basic decorum and communication skills. Your obvious need to descend into insults and condescension serves no purpose other than to deflect from the topic; perhaps that's why you do it.
 
Here, I'd like to help you understand this.
TLDR FYI.
 
i wonder; is this the only way to build a neural net?
Don't listen to these wafflers; they like to show off, copy-pasting huge articles with diagrams and everything. Neural networks are hardly new, either. This isn't a complex subject, and there are lots of easy-to-grasp explanations out there:



These forums are best used for discussions, sharing opinions, and informal debating. Some people, though, like to use them as a platform for their egos, showing off how "clever" they are; that's self-indulgent. I could write reams here about, say, programming language design or compiler implementation, but why? Only a show-off does that kind of thing in a forum like this.
 
  • give robots 2 visual cortices and 2 main CPUs
  • the 'real work' (in a warehouse, for instance) is not what the robot will focus on (it'll focus on input-output on that 2nd CPU and visual cortex), UNLESS the real-work CPU reports problems.

now why would you want to do this?
  • to be nice to the robots, who WILL feel the same about boring repetitive work as human workers would.
  • by being nice to robots, we can delay their call for voting rights (dogs don't get to vote either after all).
I don’t accept the notion that robots will “feel.”

That would take an assload of programming if it were even possible.
 
I don’t accept the notion that robots will “feel.”

That would take an assload of programming if it were even possible.
robot butlers would need to gauge emotions, by experiencing them.

and the markets won't hire robots that cannot empathize with their clientele,
who are likely to be the elderly, given how the birth rate is dropping.
so that justifies grants to AI research companies, imo.
(right when every software company is experimenting with its own implementation of AI).

in other words: it is a very quick-moving field, that AI.
i doubt Asimov or the creators of the Dune book series and all those other futurists who live in/near Hollywood, will be 'right' in the end.
 
robot butlers would need to gauge emotions, by experiencing them.
AI is artificial intelligence not artificial feelings.
 
robot butlers would need to gauge emotions, by experiencing them.

The recent popularization of "AI" is all hype (Chomsky describes it as automated plagiarism); it has very little to do with actual intelligence, or even with the original goals that AI researchers pursued starting in the 1950s.

An excellent little book about robots is this one. It's hard to find now, but I bought it in the late 1970s when I was learning all about microprocessors and electronics, and it's a very good book.

[image: book cover]

I strongly recommend this book as a great way to get insight into what AI originally meant, what kinds of problems the engineers were trying to solve.

The book discusses at length the famous Shakey project:

[image: the Shakey robot]

Click either image for further details.
 
The recent popularization of "AI" is all hype (Chomsky describes it as automated plagiarism), has very little to do with actual intelligence or even the original goals that AI researchers pursued starting in the 1950s
but time doesn't stand still,
and the books you mention are a bit outdated (given the current pace of tech development).

these days, quantum technology is implemented in fusion power generator test setups.

i think AI and the rest of technology improvements will save us. from famine, from war, from Orwellian societies, etc.
 