China launched the world's first AI-operated 'mother ship,' an unmanned carrier capable of launching dozens of drones

OK. Does any of this contradict what I said?

No, I agree for the most part.

It isn't just the abstract ideas of A.I. that we have yet to master or even conceive; the underlying math itself hasn't been perfected on a grand scale.

Innovation will ramp up, though, for those nations that realize how important it is to their security and economy.
 
Hell, much of A.I. is referred to as a "black box", especially deep learning and large neural networks, where constant inputs and outputs over massive datasets produce a long chain of mathematical transformations across an algorithm. All the ML practitioner does, for the most part, is keep an eye on the gradient descent so that it doesn't slow down or get "stuck" (a local minimum).
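That "keep an eye on the gradient descent" step can be sketched in a few lines. This is a minimal illustration, not any library's API; the function name, learning rate, and stall threshold are all invented for the example.

```python
def gradient_descent(grad, x0, lr=0.1, epochs=100, tol=1e-9):
    """Step against the gradient; stop early if updates stall."""
    x = x0
    for epoch in range(epochs):
        step = lr * grad(x)
        if abs(step) < tol:   # updates have stalled -- possibly a
            break             # local minimum or a flat region
        x -= step
    return x, epoch

# Loss L(x) = (x - 3)^2 has its minimum at x = 3; its gradient is 2(x - 3).
x_min, stopped_at = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # approaches 3.0
```

In practice the practitioner is watching exactly this kind of signal: is the loss still dropping, or have the updates flattened out?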

The world is developing successful learning models based on algorithms that structure data, and all the "engineer" does is change the parameters, weights, learning rate, amount of data, etc. until the cost function decreases and the model converges. Think about that for a moment, lol. It is beautiful, but peculiar at the same time.
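That knob-turning loop can be made concrete with a toy cost function. Everything here is illustrative: a one-weight model, a hand-written gradient, and a single knob (the learning rate) varied until the cost comes down.

```python
def final_cost(lr, epochs=50):
    """Train one weight w against cost C(w) = (w - 2)^2, return final cost."""
    w = 0.0
    for _ in range(epochs):
        w -= lr * 2 * (w - 2)   # gradient of C(w) is 2 * (w - 2)
    return (w - 2) ** 2

# The "engineer" tries a few settings and keeps whichever drives cost down.
best_lr = min([0.001, 0.01, 0.1, 1.1], key=final_cost)
print(best_lr)  # 0.1 converges; 1.1 overshoots and diverges; tiny rates crawl
```

Too small a rate barely moves the weight; too large a rate bounces past the minimum and blows up, which is why this one parameter gets so much attention.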

The results can then be replicated by anyone using the same parameters, yet no expert could explain how it worked epoch by epoch, though intuitively, and even mathematically as a whole, they understand what the end result should be. Many pre-made models package that process for others.
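The replication point is worth seeing directly: with the same hyperparameters and the same random seed, even a noisy training run lands on exactly the same weights, though no one can narrate it step by step. The function and its numbers are invented for illustration.

```python
import random

def train_run(seed, lr=0.05, epochs=200):
    """One noisy gradient-descent run, fully determined by its seed."""
    rng = random.Random(seed)        # pin down all randomness
    w = rng.uniform(-1.0, 1.0)       # random initial weight
    for _ in range(epochs):
        # noisy gradient of (w - 1)^2, standing in for minibatch noise
        noisy_grad = 2 * (w - 1) + rng.gauss(0, 0.01)
        w -= lr * noisy_grad
    return w

print(train_run(seed=42) == train_run(seed=42))  # True: identical runs
```

Same seed, same parameters, bit-for-bit identical result; change either one and the run lands somewhere slightly different.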

This intuition is where experience and plenty of hands-on work are a HUGE deal. I personally worked on hundreds of datasets and models on my laptop, and I began to get a feel for what parameters needed to be altered (or even added on a code line in some cases) to achieve successful learning after only a few epochs, depending on the data size. If I were doing it full time with a company, I have no doubt that my experience would have brought me to a highly proficient level. ML can be learned by most people, even absent the underlying linear algebra etc., if they have the time and interest.

Especially today, when algos and parameters are little more than modules you add in Python or R and then tweak. Depending on the application, that's more efficient and cost-effective than running an entire training set for days and THEN working through the failure.
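The module-style workflow looks something like the sketch below: the algorithm sits behind a fit/predict interface and the user mostly tweaks constructor parameters. The class here is a toy written for this post, in the spirit of those libraries, not any real package's API.

```python
class TinyLinearRegressor:
    """Toy one-feature linear model trained by gradient descent."""

    def __init__(self, lr=0.05, epochs=500):
        self.lr, self.epochs = lr, epochs   # the "knobs" users tweak
        self.w = self.b = 0.0

    def fit(self, xs, ys):
        n = len(xs)
        for _ in range(self.epochs):
            residuals = [self.w * x + self.b - y for x, y in zip(xs, ys)]
            # gradients of mean squared error w.r.t. w and b
            gw = sum(2 * r * x for r, x in zip(residuals, xs)) / n
            gb = sum(2 * r for r in residuals) / n
            self.w -= self.lr * gw
            self.b -= self.lr * gb
        return self

    def predict(self, xs):
        return [self.w * x + self.b for x in xs]

# Data drawn from y = 2x + 1; tweak lr/epochs if learning stalls.
model = TinyLinearRegressor(lr=0.05, epochs=500).fit([0, 1, 2, 3], [1, 3, 5, 7])
print([round(p) for p in model.predict([4, 5])])  # recovers [9, 11]
```

If the fit fails, the user doesn't touch the algorithm at all; they change `lr` or `epochs` and rerun, which is exactly the tweak-and-retrain loop described above.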

When I taught myself machine learning, I did it the hard way: I worked through the math and equations, because that is how that particular course taught it (the original one from Andrew Ng; he has long since updated it with Python and more use of modules). It was a hell of a road to climb at first, especially as I was learning Python at the same time.

Every course after that felt so much easier for having started with that one raw course. I even found myself catching errors in presenters' code as they typed it, right before they said in the recording, "oh, this is supposed to be this," etc. It was quite inspiring to me just a year or so ago.

It will only become more mastered over time, its applications more apparent and potentially more dangerous.

One of the problems they encountered with the AlphaZero AI (a program that taught itself how to play and win at chess) ...
stemmed from its ability to learn and change its own programming to adapt and get better at winning.

The problem the programmers noticed, when reviewing everything the program did while teaching itself to beat both humans and supercomputers,
was the point at which they could no longer determine why it was changing things or how the changes affected its play.

The program was operating beyond the understanding of the people who wrote the original program.
If that type of program ever achieves self-awareness ... That's a Pandora's Box.

Think for a moment....our brains are really nothing more than organic computers...AI is...US
 

Until you allow it to be itself ... Which was the basis of the comment.

The problem didn't occur until what the program was doing was beyond the knowledge of the programmers.
The comment wasn't about how it mirrored what the programmers would do.

Programs do not have the same limitations humans do.

My point exactly.... we have no idea why our brains have self-awareness.....it's not a physical feature of the brain that we can point to. Nor do we understand why humans have initiative and strong choice-making predilections. We should never assume that once we start down the path of enabling non-organic thinking to exercise some form of choice, it will not at one point or another become self-aware. There's no way to predict it, and if I had to guess, I would say that when it happens.....it will be so fast that we won't even have time to react to it.

JO
 

Our brains are not so much limited in function as bound by the human condition, which requires other needs to be satisfied.

The problem is that when you give something the unlimited ability to learn, access to the resources necessary to teach itself,
no applicable direction (or direction it may choose to alter), and even mobility between connected systems ...
the lack of self-awareness could be as much of a problem as the presence of it in some cases.

My point about it being a Pandora's Box is specifically based in the idea that you cannot predict anything it may do ...
Or rather, any prediction you make is restricted to your knowledge, not to what it may teach itself.

 
Except we have instincts and biological imperatives. An AI has a set bottom-line parameter in risk/reward. If the risk is too high, it does nothing.
 
Yes, but I submit to you that we have no idea where our instincts and biological imperatives originate. There is no specific part of our brain that we can label as the instinct section. What if, in the creation of AI, we somehow stumble on that combination of self-preservation commands that adds up to an instinct? Or some kind of self-awareness? Our brains are really just binary machines like any other computer, with the exception that they are organic instead of non-organic....if what matters is actually the thought process itself, I would frankly be a bit intimidated about moving forward.
 
I think the biology of it is the difference. We have natural emotions driven by chemical reactions, outside stimuli, and other factors. I don't think that can be programmed or stumbled upon. In the end the AI will always be limited to its preset parameters. If X > 1, continue to seek options. If X < 1, stop. Whereas our biological needs push us to different limits. In the end, it's a machine, nothing more.
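The "If X > 1 continue, if X < 1 stop" rule can be written out literally as a fixed risk/reward gate. The function name and threshold are invented for illustration; the point is just how rigid such a preset parameter is compared to a biological drive.

```python
def next_move(reward, risk, threshold=1.0):
    """Continue only while the reward/risk ratio clears a fixed threshold."""
    ratio = reward / risk if risk > 0 else float("inf")
    return "seek options" if ratio > threshold else "stop"

print(next_move(reward=5.0, risk=2.0))  # ratio 2.5 -> "seek options"
print(next_move(reward=1.0, risk=4.0))  # ratio 0.25 -> "stop"
```

A human under pressure might push past that line anyway; the gate as written never will, which is exactly the poster's point.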
 

The problem is that epoch-based training doesn't necessarily restrict the programs to limited parameters.
The programs can teach themselves and change their programming to use the knowledge they have learned.

So ... the program doesn't have empathy, sympathy, or many of our boundaries.
They may end up creating a super-genius psychopath of a program ... one that doesn't have to take a dump, sleep, or even care what happens to you.

Artificial intelligence isn't about building a better machine ...
It's about giving the machine a brain, along with the ability to choose and adapt.

 