consciousness precedes real time


scruffy the troll trapper.
I'm sort of hijacking this thread to bring it back to the OP. I have been interested in consciousness as a sideline to ANNs. I would have titled the OP "Ersatz consciousness precedes real time," but I'm just being a nitpicker. Have you studied blindsight? I think the late Oliver Sacks described a few cases in his famous book. There seems to be a loss of connection between the visual cortex and whatever produces consciousness. The patient is encouraged to guess what he sees. One patient was especially good at it; he went back to work as a CEO, guessing his way through life. A strong pitcher can deliver the ball to the batter in about 400 ms, which requires a strong ability to precede consciousness.

Here are a few other misc. observations.

I read somewhere that the Hopfield net can be thought of as a set of matched filters, where the filter with the largest output is the winner. That is still consistent with the idea of an associative memory. In image processing, the various filters are trained with small image-fragment exemplars. In pattern recognition, this can be used to clean up noisy images: the Hopfield net acts as a pre-filter, and the winning pixel area is replaced by its exemplar.
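A minimal pure-Python sketch of that matched-filter view. The exemplars, pixel values, and noise pattern here are made up for illustration; each stored exemplar acts as a filter, the noisy probe is scored against every filter by dot product (correlation), and the winner replaces the input:

```python
# Sketch of the "matched filter" view of associative recall: each stored
# exemplar is a filter, the probe is scored against every filter by dot
# product, and the winning exemplar replaces the noisy input.
# Patterns are +/-1 vectors, as in a Hopfield network.

def recall(exemplars, probe):
    # A matched filter is just a correlation; the largest overlap wins.
    scores = [sum(e * p for e, p in zip(ex, probe)) for ex in exemplars]
    winner = max(range(len(exemplars)), key=lambda i: scores[i])
    return exemplars[winner]

# Two stored 8-pixel "image fragments" (hypothetical exemplars).
stripes = [1, -1, 1, -1, 1, -1, 1, -1]
block   = [1, 1, 1, 1, -1, -1, -1, -1]

# A noisy copy of `stripes` with two flipped pixels.
noisy = [1, -1, -1, -1, 1, -1, 1, 1]

print(recall([stripes, block], noisy))  # the clean `stripes` exemplar
```

This is the "pre-filter" idea in miniature: the noisy fragment is replaced wholesale by the exemplar it most resembles.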

I also used translation-invariant sparse ANNs for looking at the surface of IC chips to find their angle and location for die bonding. They had to be trained in the field by an unskilled operator within a few seconds. During a manufacturing run they were able to detect the angle and location in 10 to 20 ms. The systems used a three-layer network and were loosely modeled after an extremely lobotomized version of Hubel and Wiesel's work. We sold thousands of systems.

That was done in the '90s, when the technology was far more primitive. I have no idea what industrial computer vision looks like today.
 
I'm surprised you haven't shot yourself. And I don't believe you either.
You're a troll. You're not supposed to believe. You're not good at it. Stick to trolling, you're good at that.
 
You're a troll. You're not supposed to believe. You're not good at it. Stick to trolling, you're good at that.
I'm here to outsmart the people who are trying to outsmart the people. Why are you here?
 

consciousness precedes real time

scruffy the troll trapper.
I'm sort of hijacking this thread to bring it back to the OP. I have been interested in consciousness as a sideline to ANNs. I would have titled the OP "Ersatz consciousness precedes real time," but I'm just being a nitpicker. Have you studied blindsight?

No, not really. I'll look into it. Thanks. :)


I think the late Oliver Sacks described a few cases in his famous book. There seems to be a loss of connection between the visual cortex and whatever produces consciousness. The patient is encouraged to guess what he sees. One patient was especially good at it; he went back to work as a CEO, guessing his way through life. A strong pitcher can deliver the ball to the batter in about 400 ms, which requires a strong ability to precede consciousness.

Here are a few other misc. observations.

I read somewhere that the Hopfield net can be thought of as a set of matched filters, where the filter with the largest output is the winner.

In an abstract conceptual sense, yes. The individual memories tend to the corners of a hypercube.

The real power of Hopfield is when you combine it with an ordinary feed forward or recurrent network. That's when you get the sophisticated adaptive filtering you're alluding to.

Essentially the Hopfield portion learns the adaptation path "faster than" the filters adapt, so it's able to guide and control the filters.

That is still consistent with the idea of an associative memory. In image processing, the various filters are trained with small image-fragment exemplars. In pattern recognition, this can be used to clean up noisy images: the Hopfield net acts as a pre-filter, and the winning pixel area is replaced by its exemplar.

Yes, that is possible. I'd go the other way, though. For instance, here is Fukushima's "Neocognitron," a translation-invariant machine used for (Japanese) handwriting recognition.

[image: Neocognitron architecture diagram]


Imagine a Hopfield network sitting horizontally beneath this, with the cells connecting along the long axis: cell 1 connects at the far left, cell N connects at the far right, and the rest of the cells "tap" various points along the cascade.

You have to have "enough" connections into the cascade, because Hopfield updates asynchronously. If you have "enough", then the cascade becomes just another version of Hopfield pattern learning.
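A small pure-Python sketch of the asynchronous updating mentioned above, and of memories sitting at corners of the hypercube. One pattern is stored with Hebbian (outer-product) weights, a corrupted copy is presented, and asynchronous single-unit updates pull the state back to the stored corner of the {-1, +1}^N cube. The pattern and flip positions are arbitrary choices for illustration:

```python
import random

# Sketch of asynchronous Hopfield dynamics: Hebbian weights store one
# pattern; units update one at a time (asynchronously), and the state
# settles into the stored corner of the {-1,+1}^N hypercube.

random.seed(0)

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
N = len(pattern)

# Hebbian outer-product weights, zero diagonal.
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(N)]
     for i in range(N)]

# Start from a corrupted copy (two bits flipped).
state = pattern[:]
state[0] = -state[0]
state[3] = -state[3]

# Asynchronous updates: visit units in random order, repeat a few sweeps.
for _ in range(5):
    for i in random.sample(range(N), N):
        h = sum(W[i][j] * state[j] for j in range(N))
        state[i] = 1 if h >= 0 else -1

print(state == pattern)  # True: the net recovered the stored pattern
```

With a single stored pattern the local field always points toward the stored corner, so any asynchronous visiting order converges; that order-independence is what makes "enough" taps into a cascade behave like ordinary Hopfield pattern learning.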

I also used translation-invariant sparse ANNs for looking at the surface of IC chips to find their angle and location for die bonding. They had to be trained in the field by an unskilled operator within a few seconds. During a manufacturing run they were able to detect the angle and location in 10 to 20 ms. The systems used a three-layer network and were loosely modeled after an extremely lobotomized version of Hubel and Wiesel's work.

Spatial filtering? Sounds right, angle and location. Far out.

That was done in the '90s, when the technology was far more primitive. I have no idea what industrial computer vision looks like today.

Today we have things like the Luma Dream Machine, which generates video from stills. It uses transformer technology. In an industrial capacity, the same technology is used to "pay attention to" the details of an assembly operation. For instance, if you have a checklist of 30 items for QA, the transformer will traverse the list and make the needed adjustments at each step, even "as" the product is being assembled.

Another version involves knowledge of "production runs", so for example each run may have an idiosyncratic set of glitches, etc - the goal being to inform the technicians and increase production quality and efficiency, in addition to correcting or patching the problems during QC.

Cool stuff. Thanks for getting the thread back on track.
 
Your responses to my posts say otherwise. I'm in your head.
No, I'm just deciding whether to feed you or shoot you. You're disrupting the rest of the class, and the principal is busy.
 
No, I'm just deciding whether to feed you or shoot you. You're disrupting the rest of the class, and the principal is busy.
Thank you for proving my point.

#winning
 
Thank you for proving my point.

#winning
Nuckin futz. 🤡

Pretty full of yourself, aren't you?

I was right in the first place, the best answer is just to ignore you trolls.
 
Nuckin futz. 🤡

Pretty full of yourself, aren't you?

I was right in the first place, the best answer is just to ignore you trolls.
Nothing special about me at all. :)
 
What an asshole. ^^^

Oh look, the little troll wants to be an asshole, cause it's the only way he can get any attention.

No more attention for you. Iggy.
 
In most time series analysis, past performance is used to predict the future.

Statistics is based on the idea of "stationary" generators (Markov processes being an example).

In probability theory, the Chapman-Kolmogorov equation describes two types of behavior: one is smooth and continuous, like Brownian motion; the other is discontinuous and is called a "jump" process for that reason.
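In discrete time the Chapman-Kolmogorov equation just says that multi-step transition probabilities compose: P^(m+n) = P^m · P^n. A tiny pure-Python check with a hypothetical two-state Markov chain (the matrix entries are made up):

```python
# Illustration of the discrete Chapman-Kolmogorov equation for a
# two-state Markov chain: the (m+n)-step transition matrix factors as
# the product of the m-step and n-step matrices, P^(m+n) = P^m . P^n.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(P, n):
    R = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 identity
    for _ in range(n):
        R = matmul(R, P)
    return R

# A hypothetical two-state transition matrix (rows sum to 1).
P = [[0.9, 0.1],
     [0.4, 0.6]]

lhs = matpow(P, 5)                        # P^5
rhs = matmul(matpow(P, 2), matpow(P, 3))  # P^2 . P^3

print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # True
```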

The economist Robert Merton proposed a hybrid model for stock prices, in which prices show occasional large jumps but are approximately continuous in between.
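A hedged sketch of that jump-diffusion idea: a geometric Brownian motion for the continuous part, plus Poisson-arriving jumps with random log-sizes. All parameter values are illustrative, not calibrated to any real asset:

```python
import math, random

# Sketch of Merton-style jump-diffusion: prices move continuously
# (Brownian motion) between occasional discrete jumps (Poisson arrivals).
# All parameter values here are made up for illustration.

random.seed(42)

mu, sigma = 0.05, 0.2             # drift and volatility of the diffusion
lam = 0.5                         # expected jumps per year
jump_mu, jump_sigma = -0.1, 0.15  # log-jump size distribution
dt, steps = 1 / 252, 252          # daily steps over one year

price = 100.0
path = [price]
for _ in range(steps):
    # Continuous (diffusion) component of the log-return.
    diffusion = (mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    # Jump component: with probability ~lam*dt, a jump of random size occurs.
    jump = random.gauss(jump_mu, jump_sigma) if random.random() < lam * dt else 0.0
    price *= math.exp(diffusion + jump)
    path.append(price)

print(len(path), path[-1] > 0)  # 253 points; price stays positive
```

Between jump arrivals the path is an ordinary diffusion; the rare `jump` term is what produces the large discontinuities Merton added to the model.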

But there is an entirely different way of looking at time series. Instead of using past behavior to predict the future, we use actual outcomes (models of future behavior) to interpret the recent past. This is equivalent, under translation, to "selecting from possible futures," which is something quite different from estimating historical mean and variance.

When we ask the system to generate possible outcomes, we are looking for orthogonal results. Instead of predicting the most likely outcome, we are looking for a set of distinctly different outcomes, where the differences are orthogonal. In essence, we are asking which "factors" are relevant for future outcomes.

We can thus create a "space" of possible outcomes, where each factor is like a dimension. This way, predicting the most likely outcome is like placing a point in the space, whereas assessing the range of outcomes is like drawing the coordinate axes.
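One way to make that concrete (a sketch, with made-up scenario vectors): generate a handful of scenario outcomes as deviations from a baseline forecast, then extract mutually orthogonal directions from them with Gram-Schmidt. Each surviving direction plays the role of a "factor," i.e. one coordinate axis of the outcome space:

```python
# Sketch of the "outcome space" idea: orthogonalize a set of scenario
# vectors with Gram-Schmidt; each independent direction is a "factor"
# (a coordinate axis of the space of possible outcomes).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors, eps=1e-12):
    basis = []
    for v in vectors:
        w = v[:]
        for b in basis:
            coef = dot(w, b) / dot(b, b)
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        if dot(w, w) > eps:   # keep only independent directions
            basis.append(w)
    return basis

# Three hypothetical scenario outcomes (e.g. prices of three assets),
# expressed as deviations from the baseline forecast.
scenarios = [
    [1.0, 2.0, 0.0],
    [2.0, 4.0, 1.0],
    [0.0, 1.0, 1.0],
]

factors = gram_schmidt(scenarios)
print(len(factors))  # number of independent factors spanning the outcomes
# Any two extracted factors are orthogonal:
print(all(abs(dot(factors[i], factors[j])) < 1e-9
          for i in range(len(factors)) for j in range(i + 1, len(factors))))
```

Predicting the most likely outcome then corresponds to placing one point in this space; assessing the range of outcomes corresponds to drawing the axes themselves.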

In the brain there are two sources of information about sensory events, one is from the receptors and the other is from motor activity (usually modeled as efference copy). At some point the motor output becomes irreversible, it can no longer be stopped. At this point we can predict the future with near 100% accuracy, if everything else remains stable.

The salient feature of the irreversibility is that it occurs "before" now. It actually occurs in the future, so to speak.

The only way to model this is with a system that continually generates the future, "in real time". It has to do the same thing as a room full of analysts at Goldman Sachs.

Is it achievable with AI? Absolutely. Certainly. It can be done with the $50 AI HAT from Raspberry Pi.

The interesting and important piece is the relationship between awareness and irreversibility. This is an entirely new concept, and we'll have to see how it plays out in the next few years.

If everything is alive and everything is conscious, then whatever preceded the universe was also alive and conscious, right?
 