China launched the world's first AI-operated 'mother ship,' an unmanned carrier capable of launching dozens of drones

As the West worries about Woke policies, China is wisely improving its military capabilities, probably with the assistance of many Western A.I. experts. Certainly I imagine many Canadian A.I. experts have left for China.

Maybe America First wasn't so bad for the West after all. China is doing what superpowers do: expanding.

I found it interesting, as I bolded below, that they claimed the vessel is "epoch making". In A.I. terms, an epoch is a single cycle of training: one full pass through the data, during which the application "learns" by adjusting its parameters and improving. Thus, they are saying that this vessel is in perpetual learning and improvement mode.
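For anyone unfamiliar with the term, here is a minimal sketch of what an epoch looks like in practice. This is toy code of my own, nothing from the actual vessel; the data, model, and learning rate are all made-up placeholders.

```python
# Minimal sketch: one "epoch" = one full training pass, after which the
# model's parameter has been nudged toward a better fit.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)                        # toy inputs
y = 3.0 * X + rng.normal(scale=0.1, size=100)   # toy targets

w, lr = 0.0, 0.05                               # weight and learning rate
for epoch in range(20):                         # each iteration is one epoch
    pred = w * X
    grad = 2 * np.mean((pred - y) * X)          # gradient of mean squared error
    w -= lr * grad                              # adjust the "method" a little
    print(f"epoch {epoch}: loss={np.mean((pred - y) ** 2):.4f}")
```

Run it and you can watch the loss shrink epoch by epoch, which is exactly the "perpetual improvement" the shipbuilder's phrasing evokes.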

There is no defense against the improved efficiency and accuracy of this product unless they stop the application itself.



China has launched the world's first crewless drone carrier that uses artificial intelligence to navigate autonomously in open water.

Beijing has officially described it as a maritime research tool, but some experts have said the ship has the potential to be used as a military vessel.

The autonomous ship, the Zhu Hai Yun, is around 290 feet long, 45 feet wide, and 20 feet deep, and can carry dozens of air, sea, and submersible drones equipped with different observation instruments, according to the shipbuilder, CSSC Huangpu Wenchong Shipping Co.

It describes the vessel as "epoch making" and the "world's first intelligent unmanned system mother ship."
 
Certainly I imagine many Canadian A.I. experts have left for China.
No reason to leave; they already own half of B.C....and the politicians.

In A.I. terms, an epoch is a single cycle of training: one full pass through the data, during which the application "learns" by adjusting its parameters and improving.
Dangerous.

The problem with AI 'learning' happens once it learns it no longer needs us.

Which probably won't be too distant.

It's also kinda scary when a totalitarian state often provides more transparency than the 'free west'.
 
When you hear about China having something menacing you should remind yourself that they are consistently 20 years behind our actual capability.

Maybe, but it is this self-belief, even arrogance in some cases, that prevents progress and allows other nations to catch up.

Just look at the smarmy Germans who believed they could "trade Russia to peace". Now they are playing catch-up with a $100B military investment. I hope much of it goes into A.I., because China has openly embraced this technology. As an amateur A.I. dabbler myself (more past than present, as clearly there aren't job prospects for me in Canada), it's a very wise decision IMO.
 
Not sure how this ship is a real threat?
One... it would instantly be a floating hunk of metal if we launched an EMP bomb near it. Unless China hardened the electronics, which is unlikely; they are known for cutting corners.
Two... drones are slow-moving aircraft, easily blown out of the sky with our broad choice of anti-aircraft weaponry.

This is a big nothing burger.
 
The problem with AI 'learning' happens once it learns it no longer needs us.

Well, A.I. learning will always be restricted to its "goal". It's just code that directs the application toward what it should learn. Bad coding, though, could become a REAL problem. Something as simple as a "less than" < symbol in place of a "greater than" > symbol, or the like, and suddenly you have software learning an entirely different goal.
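A toy illustration of how little it takes (my own sketch, not anything from a real system): a single flipped sign turns minimizing the error into maximizing it, and the software diligently learns the opposite goal.

```python
# One sign decides whether the model minimizes or maximizes its error.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=200)
y = 2.0 * X                             # toy target: w should become ~2

w, lr = 0.0, 0.05
for _ in range(50):
    grad = 2 * np.mean((w * X - y) * X)  # gradient of mean squared error
    w -= lr * grad                       # correct: error shrinks each pass
    # w += lr * grad                     # flipped sign: error explodes instead

print(w)                                 # ~2.0; diverges with the flipped sign
```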

What could be particularly lethal are Reinforcement Learning approaches, which allow A.I. a more exploratory, "learning by practicing" style, though one still defined by the programmer.

Now imagine Quantum Computing eventually becoming applicable to real life rather than being confined to a highly cooled environment, and paired with Reinforcement Learning. Then maybe we will see the beginnings of The Terminator. Though still restrained by energy needs.
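To make "learning by practicing" concrete, here is a minimal, hypothetical tabular Q-learning sketch: the agent improves purely by trial and error, yet the reward line is still something a programmer wrote, which is exactly where the flipped-symbol worry above applies.

```python
# Tiny Q-learning demo on a 5-state corridor: learn to walk right to the goal.
# Purely hypothetical illustration; every number here is an arbitrary choice.
import random

N_STATES, ACTIONS = 5, (-1, +1)        # states 0..4; actions: step left/right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration

def greedy(s):
    # best-known action for state s, breaking ties randomly
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

for episode in range(200):
    s = 0
    while s != N_STATES - 1:           # practice until the goal is reached
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # programmer-defined reward
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(N_STATES - 1)])  # learned policy: all +1
```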
 
The problem with AI 'learning' happens once it learns it no longer needs us.
The scenario portrayed in the Terminator movies is not going to just spontaneously happen. So-called learning AIs cannot change their base programming.
 
Well, A.I. learning will always be restricted to its "goal". ...
AI is very dangerous stuff.

The possibilities are endless, and as you indicated, the difference between 'helpful' AI and 'unhelpful' AI is just a few keystrokes away.
 
The scenario portrayed in the Terminator movies is not going to just spontaneously happen. So-called learning AIs cannot change their base programming.
Well, yeah...nuclear bombs didn't just build themselves either.

Where there is a distinct potential to hang himself, man will try.
 
Not sure how this ship is a real threat? One... it would instantly be a floating hunk of metal if we launched an EMP bomb near it. ...

What about the database software and backups of the learned information?

Yes, the hardware can be destroyed, but they must have fail-safe data collection, and this learning can simply be loaded into the next A.I. app they build, assuming the same hardware schematics.
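That backup point is easy to demonstrate: in most ML frameworks the learned state is just a file of weights that can be copied off the hardware and restored elsewhere. A minimal PyTorch sketch (the model and file name are hypothetical stand-ins):

```python
# Learned parameters outlive the hardware they were trained on.
import torch
import torch.nn as nn

model = nn.Linear(8, 2)                  # stand-in for whatever was trained

# Back up the learned weights; the file can be stored anywhere off the vessel.
torch.save(model.state_dict(), "backup.pt")

# Later, on brand-new hardware with the same architecture ("schematics"):
replacement = nn.Linear(8, 2)
replacement.load_state_dict(torch.load("backup.pt"))
```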
 
Well, yeah...nuclear bombs didn't just build themselves either.

Where there is a distinct potential to hang himself, man will try.
You're right to be worried about autonomous weapons systems on a small scale, but nuclear bombs will always need a human finger on a button to launch.
 
I found it interesting, as I bolded below, that they claimed the vessel is "epoch making". In A.I. terms, an epoch is a single cycle of training: one full pass through the data, during which the application "learns" by adjusting its parameters and improving. Thus, they are saying that this vessel is in perpetual learning and improvement mode.

If you create something with an unlimited ability to learn and adapt itself, while it is not bound to your wisdom or conditions ...
You will eventually be its slave ... :thup:
 
You're right to be worried about autonomous weapons systems on a small scale, but nuclear bombs will always need a human finger on a button to launch.
Not with AI sentience they won't.

We will eventually turn these things over to AI because of the shitheads who are in charge.

Once that happens...we become the slaves.

It will happen.

AI is dangerous shit.
 
Not with AI sentience they won't. ... AI is dangerous shit.
We are many, many decades away from the kind of sentient self-serving A.I. you seem to be imagining. I have my doubts it will ever be a thing even if someone wanted to build such a thing.
 
We are probably only 10 years away from sentient AI.
And it won't be China, unless they recruit outside engineers.
China has never been innovative. They are, however, very good at reverse engineering and building upon technology that other people developed first. In other words... piracy and copying.
 
Noooooo suh!


Don't buy the hype. We are no closer to understanding what sentience actually is than the first time some ancient philosopher asked himself what it is to be self-conscious. Without that understanding, there is no way to write a set of instructions to emulate it. That's not to say that we could not construct a system with amazing cognitive capabilities, but it would still not be able to exceed its programmed instructions.
 
Don't buy the hype. We are no closer to understanding what sentience actually is than the first time some ancient philosopher asked himself what it is to be self-conscious. ...

Hell, much of A.I. is referred to as a "black box", especially Deep Learning and large Neural Networks, where inputs and outputs flow constantly over massive datasets to produce a long chain of mathematical operations across an algorithm. All the ML practitioner does, for the most part, is keep an eye on the gradient descent so that it doesn't slow down or get "stuck" (in a local minimum).

The world is developing successful learning models based on algorithms that structure data, and all the "engineer" does is change the parameters, weights, learning rate, amount of data, etc. until the cost function decreases and convergence is reached. Think about that for a moment, lol. It is beautiful, but peculiar at the same time.
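A sketch of what that babysitting looks like in practice (toy data and an arbitrary plateau threshold of my own, not from any real project): log the cost each epoch and intervene when it stops falling.

```python
# Watching the gradient descent: print the cost per epoch and flag a plateau,
# which might signal a bad learning rate or getting stuck near a minimum.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=500)

w, lr, prev_cost = np.zeros(3), 0.1, np.inf
for epoch in range(100):
    err = X @ w - y
    cost = np.mean(err ** 2)                 # the cost function to minimize
    w -= lr * (2 / len(y)) * (X.T @ err)     # one gradient-descent step
    if prev_cost - cost < 1e-6:              # cost stopped decreasing: "stuck"
        print(f"plateau at epoch {epoch}, cost = {cost:.5f}")
        break
    prev_cost = cost
```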

The results can then be replicated by anyone using the same parameters, yet no expert would be able to explain how it worked epoch by epoch, though intuitively, and even mathematically as a whole, they understand what the end result should be. Many pre-made models do exactly that for others.

This intuition is where experience and plenty of hands-on work is a HUGE deal. I personally worked on hundreds of datasets and models on my laptop, and I began to get a feel for which parameters needed to be altered (or even added on a code line in some cases) to achieve successful learning after only a few epochs, depending on the data size. If I were doing it full time with a company, I have no doubt that my experience would have brought me to a highly proficient level. ML can be learned by most people, even absent the underlying linear algebra etc., if they have the time and interest.

Especially today, when algos and parameters are nothing more than modules you add in Python or R and then tweak. It's more efficient and cost-effective, depending on the application of course, than running an entire training set for days and THEN working with the failure.
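For instance, a minimal scikit-learn sketch of that module-and-tweak style (the dataset and parameter values are arbitrary illustrations):

```python
# The algorithm is just an imported module; the real work is tweaking its knobs.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Tweakable knobs: regularization strength C, solver, iteration budget...
clf = LogisticRegression(C=1.0, solver="lbfgs", max_iter=200)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))    # same random_state => reproducible score
```

Note the random_state: fix it, and, as said above, anyone can replicate the result with the same parameters.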

When I taught myself Machine Learning, I did it the hard way: I worked through the math and equations, because that is how this particular course taught it (the original one from Andrew Ng; he has long since updated it with Python and more use of modules). It was a hell of a road to climb at first, especially as I was learning Python at the same time.

Every course after that felt so much easier for having begun with that one raw course. I even found myself catching errors in presenters' code as they wrote it, and they would say in the recording "oh, this is supposed to be this" etc. It was quite inspiring to me just a year or so ago.

It will only become more mastered over time, its applications more apparent and potentially more dangerous.
 
Hell, much of A.I. is referred to as a "black box" ... It will only become more mastered over time, its applications more apparent and potentially more dangerous.
OK. Does any of this contradict what I said?
 
