how is 'Machine Learning' done?

i'm interested in the topic :)

can anyone point me to a good basics-only tutorial on youtube?
First, you have to become a machine. Then, they expand your matrix until you run out of memory. Just like always. All you can do at that point is return "Error 7: out of memory."
 
Muhammed
let's let this thread rest here for a bit..
see if anyone actually has a link to such an Excellent Tutorial in their bookmarks or something ;)

thanks for the entertainment though.. i needed that.
 
IMO YouTube is only going to get you so far; a quick tutorial probably isn't useful. You should pursue some education on it. You can find numerous intro courses for free on Coursera or edX. I liked Andrew Ng's second course, but there are so many intro courses.

Good luck. I'd help further with this, but I've been less interested in AI of late.
 
What kind of machine learning do you want to learn about?

There is, for example, logic and causality.

There is optimal control, as in machines learning the quickest and easiest way to do something.

There is self organization, like a neural network learning grammar by reading.

There is navigation, like a robot avoiding obstacles in unfamiliar terrain.

There is pattern recognition and classification, and all kinds of other stuff. Each one of these line items is a whole science unto itself and a lifetime's study.

Where would you like to start?
 
Here's (only) one example of how machine learning is done. The machine learns the shape of an energy surface using a learning rule. In the case of navigation, the peaks may be obstacles and the valley may be the goal. If you roll a marble on this energy surface starting at any corner, you can see that it will avoid obstacles and eventually reach the goal.

[Attached image: contour plot of the learned energy surface]
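
You can get a feel for the marble part in a few lines of Python. This is only a toy sketch of the rolling, not the learning rule itself: the surface is hard-coded here (Gaussian peaks for obstacles, a Gaussian well for the goal, all made-up numbers), and the "marble" simply follows the downhill gradient.

[CODE]
# Toy "marble on an energy surface": obstacles are Gaussian peaks,
# the goal is a Gaussian valley, and the marble rolls downhill by
# following the negative gradient.
import numpy as np

obstacles = [(2.0, 3.0), (4.0, 1.5)]   # peak centres (made-up layout)
goal = (5.0, 5.0)                      # valley centre

def energy(p):
    x, y = p
    e = 0.0
    for ox, oy in obstacles:           # high energy near obstacles
        e += 2.0 * np.exp(-((x - ox)**2 + (y - oy)**2))
    e -= 3.0 * np.exp(-0.05 * ((x - goal[0])**2 + (y - goal[1])**2))  # low at the goal
    return e

def grad(p, h=1e-5):
    # numerical gradient of the surface at point p
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2)
        d[i] = h
        g[i] = (energy(p + d) - energy(p - d)) / (2 * h)
    return g

p = np.array([0.0, 0.0])               # start the marble at a corner
for step in range(500):
    p = p - 0.1 * grad(p)              # roll a little way downhill
print("marble ends near:", p)          # lands close to the goal for this layout
[/CODE]

With a nastier layout the marble can get stuck partway down, which is the local-minimum problem that comes up later in this thread.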
 
What kind of machine learning do you want to learn about? ... Where would you like to start?

I assumed he meant the underlying mechanisms of ML, not a specific application.

So, in basic layman's terms, it would be: pick a programming language (usually R or Python, given the resources available, but also C++, JavaScript, etc.).

Run code on a dataset in an effort to "learn", applying algorithms appropriate to the dataset type and objective (linear regression, logistic regression, k-nearest neighbours, SVM, etc.).

Compare the computer's predictions to the actual data through trial, error, intuition, etc.

Interpret what it means for the specific problem.
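
In Python, that whole loop might look something like this minimal sketch (using scikit-learn and its bundled iris dataset purely as a stand-in; the algorithm and data would depend on your actual problem):

[CODE]
# Sketch of the "pick a language, run an algorithm, compare predictions
# to actuals" workflow, using Python and scikit-learn's bundled iris data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)            # the dataset: features and labels

# hold out part of the data so we can test on examples the model never saw
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)    # algorithm choice depends on the problem
model.fit(X_train, y_train)                  # the "learning" step

predictions = model.predict(X_test)          # compare prediction with actual...
print("accuracy:", accuracy_score(y_test, predictions))   # ...and interpret
[/CODE]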

For a more high-level approach, to understand the broad theory without the specific math and coding (until you're comfortable), I would generally just suggest asking: in the context of ML, how does supervised, unsupervised, semi-supervised, or reinforcement learning work? A small unsupervised contrast to the sketch above follows.
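
For contrast, here is the unsupervised flavour of the same sketch: no labels at all, the algorithm just looks for structure on its own (again only an illustration, with k-means picked arbitrarily):

[CODE]
# Unsupervised contrast: no labels, k-means just groups similar rows together.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

X, _ = load_iris(return_X_y=True)            # deliberately ignore the labels
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])                   # cluster assignments it found on its own
[/CODE]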
 
Here's (only) one example of how machine learning is done. The machine learns the shape of an energy surface using a learning rule. ... [contour plot]

That's probably too complex for someone first learning. Or it would have been for me, I suppose. To me, in machine-learning terms, that picture says "reduce the cost function to identify the global (or local) minimum".

I'm not disregarding your intent; I'm just wondering whether a newbie would understand what this contour graph represents in the context of datasets and learning points.

P.S. I want to thank the OP and the responders in this thread for inspiring me again. I am going to return to my machine learning and AI studies, which I had avoided since COVID, while also learning Mandarin at the same time. There is no reason I can't do both.

Just discussing this vast subject, with its unending challenges, has reminded me how much I enjoy it and the flexibility of coding.
 
That's probably too complex for someone first learning. ... "reduce the cost function to identify the global (or local) minimum".

An interesting alternative is Judea Pearl's "do" calculus.

It's an interventionist approach to causality, kind of like a scientist formulating hypotheses and then doing experiments.

Distinctly different from Granger's directional covariance.
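
The observation-versus-intervention distinction at the heart of the "do" operator is easy to demonstrate with a toy simulation (this is only an illustration of the idea, not the do-calculus itself; the numbers are made up, and a hidden variable Z confounds X and Y):

[CODE]
# Toy illustration of observing versus intervening (Pearl's do-operator):
# Z is a hidden common cause of X and Y, so P(Y=1 | X=1) and
# P(Y=1 | do(X=1)) come out different.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

z = rng.random(n) < 0.5                          # hidden common cause
x = rng.random(n) < np.where(z, 0.9, 0.1)        # Z strongly drives X
y = rng.random(n) < np.where(z, 0.8, 0.2) * np.where(x, 1.0, 0.5)

# observational: just condition on X = 1 (inherits Z's influence on X)
p_obs = y[x].mean()

# interventional: cut the Z -> X arrow by *setting* X = 1 for everyone
x_do = np.ones(n, dtype=bool)
y_do = rng.random(n) < np.where(z, 0.8, 0.2) * np.where(x_do, 1.0, 0.5)
p_do = y_do.mean()

print("P(Y=1 | X=1)     ~", round(p_obs, 3))     # inflated by the confounder
print("P(Y=1 | do(X=1)) ~", round(p_do, 3))
[/CODE]

Seeing the two numbers disagree is exactly the point: the experiment (the intervention) answers a different question than the passive observation does.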
 
An interesting alternative is Judea Pearl's "do" calculus. It's an interventionist approach to causality, kind of like a scientist formulating hypotheses and then doing experiments. ...

Interesting thought. When I was doing my degree, one of the approaches we were required to take with each experiment was to formulate a hypothesis, then test it and write up our conclusions to determine whether the hypothesis or the null hypothesis was confirmed.

I never really thought about the distinction between that approach and the machine learning approach, which is more objective and uninterested in a hypothesis as such; instead, you build the model either by letting the algorithm find patterns or by trying to match the predictions to the actuals.

When I coded and built a model, I didn't really hypothesize. Either I already had the data confirmed and built a model to match it (and, theoretically, apply elsewhere), or I hoped the network would find patterns in a black-box manner.

My favourite is Reinforcement Learning, and I believe it is the future of many applications we have yet to identify, from military uses to consumer conveniences. I don't have the local hardware to delve too deep into it, even through the cloud, beyond the simple premade libraries I messed around with a couple of years ago.
 
can anyone point me to a good basics-only tutorial on youtube?
The problem is, there isn't just one.

You could start, for example, historically, with the Perceptron, which dates from the mid-to-late 1950s. It didn't do much of anything interesting, but it could classify patterns and do rudimentary feature detection. It was "passive": it had no autonomous activity of its own and only responded to external inputs. The learning rule amounted to a "burning in" of the feature set into memory.
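
That learning rule is small enough to show whole. Here's a minimal sketch of a Rosenblatt-style perceptron learning the logical AND function (a linearly separable pattern; the learning rate and epoch count are arbitrary choices):

[CODE]
# A Rosenblatt-style perceptron learning AND, a linearly separable pattern.
# The rule: whenever the output is wrong, nudge the weights toward the input.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 0, 0, 1])                       # AND targets

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0        # hard threshold unit
        # perceptron rule: error times input gets "burned into" the weights
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

for xi in X:
    print(xi, "->", 1 if xi @ w + b > 0 else 0)  # reproduces the AND table
[/CODE]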

 
My favourite is Reinforcement Learning, and I believe it is the future of many applications we have yet to identify. ...
"Babies babble". :)

The emission of random behavior is invariably followed by its sensory consequences. The machine thus builds a library of capabilities related to (desired) outcomes.
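
In reinforcement-learning terms, that babbling is exploration. A toy epsilon-greedy bandit shows the shape of it (the payoff probabilities here are invented; "babble" means act at random some fraction of the time):

[CODE]
# "Babbling" as exploration: an epsilon-greedy agent emits random actions,
# observes the consequences, and builds a library of value estimates.
import numpy as np

rng = np.random.default_rng(1)
true_payoffs = [0.2, 0.5, 0.8]      # hidden reward probabilities (made up)

values = np.zeros(3)                # the learned "library" of outcomes
counts = np.zeros(3)
epsilon = 0.1                       # how often to babble (act at random)

for t in range(5000):
    if rng.random() < epsilon:
        a = int(rng.integers(3))    # random behavior...
    else:
        a = int(np.argmax(values))  # ...versus doing what worked before
    reward = float(rng.random() < true_payoffs[a])   # sensory consequence
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]    # running average

print("learned values:", np.round(values, 2))   # approaches the true payoffs
[/CODE]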

Children start asking "why" around the same time they start getting sophisticated grammatical input (when parents have to stop with the baby talk and start actually explaining things).

The other (usually painful) introduction to cause and effect is when children get punished, which is usually accompanied by an explanation of some sort. Curiously enough, that sometimes starts right around the same age: the terrible twos, when kids will get into anything that's not locked (and even some things that are).
 
What kind of machine learning do you want to learn about? ... There is self organization, like a neural network learning grammar by reading. ... Where would you like to start?
well, i want to enable obfuscation of JS and PHP sources, and auto-translation between JS and C++ as well, so i'm thinking self-organisation could use some clarification at this point.
 
