AI (Artificial Intelligence) - how soon before it enters the battlefield?


rdean

Guest


Weaponized AI Under Debate

A former US Army officer penned a response to the open letter, stating that a ban on armed AI would “cripple” the military. Such development is inevitable, he said, and any country that enacts laws against it would be at a tactical disadvantage.

-----------------

Are they ethical?
 
1) Ethical or not, you will never end the arms race until there is a quantum change in human nature.

2) Human nature does not change.
 
"After careful consideration I have come to the conclusion that your system sucks!"

-- Name the movie from whence this quote comes......
 
AI, to me, refers to a self-aware sentient being that can think on its own. There is no such thing, as all machines are created and operate based on programming and operate within the parameters of that programming.

Some of the most sophisticated weapons systems we have, for example, are ones that, once fired, look for targets and then strike them. Those systems are programmed to search for certain defined electronic or other 'signatures' or programmed inputs, then react as programmed.

I can see where a machine could be programmed and react in an unanticipated way based on the flaws written into a program. For example, a 'robot' programmed to 'stop' anyone attacking another could end up attacking a policeman if the right parameters and protocols are not specified - that doesn't mean it actually thought for itself.

IMO, we are a long way off, if ever, from seeing a 'Terminator' scenario / atmosphere. (Even in that fictional scenario the Terminator is nothing more than a machine programmed to operate within a specific set of parameters and carry out a specific programmed mission.)
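The policeman example above can be sketched in a few lines. This is a toy illustration with made-up rules and names, not any real system: the machine matches a single programmed signature ("using force against another"), so it flags the lawful officer right alongside the attacker.

```python
# Toy sketch (hypothetical rules, not a real weapons system): a machine told
# to "stop anyone attacking another" whose only parameter is physical force.

def is_attacking(actor):
    # The only programmed signature: the actor is using force on someone.
    return actor["using_force"]

def choose_targets(actors):
    # The machine "decides" purely by matching the programmed signature;
    # nothing in its parameters distinguishes lawful force from an attack.
    return [a["name"] for a in actors if is_attacking(a)]

scene = [
    {"name": "mugger",    "using_force": True},
    {"name": "policeman", "using_force": True},   # restraining the mugger
    {"name": "bystander", "using_force": False},
]

print(choose_targets(scene))  # ['mugger', 'policeman']
```

The flawed output comes straight from the under-specified rule, which is the point: no independent thought, just programming.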
 
While human thinking is sophisticated and experienced 'first hand', enhancing how sophisticated it seems to us, we also have to be programmed by others and through experience. Intelligence isn't limited to human-grade cognition. So in the case of a landmine, a simple trigger determines the presence of a presumed enemy and the mine blows itself up. The independent 'decision' here is AI, though people may expect a minefield to express guilt over an inadvertent human dismemberment before they'd call it 'intelligent'.
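The landmine "decision" above really is this small. A minimal sketch, with an assumed and purely illustrative trigger weight (no real device's figures):

```python
# Minimal sketch of a landmine's entire "intelligence": one threshold check.
# The threshold value is an assumption for illustration only.

PRESSURE_THRESHOLD_KG = 50.0  # assumed trigger weight

def should_detonate(pressure_kg):
    # The mine's whole decision procedure is a single comparison.
    return pressure_kg >= PRESSURE_THRESHOLD_KG

print(should_detonate(80.0))  # True  - presumed enemy
print(should_detonate(10.0))  # False - too light to trigger
```

Whether a lone comparison deserves the label "intelligence" is exactly the disagreement in this thread.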
 


I see your point but would argue human beings can learn on their own. Machines are limited to the programming they are given.

I would argue that there are also some twisted individuals out there who are not 'burdened' by conscience or feelings, but this lack of emotion does not make them a 'machine'.
 
Machines can be built with the ability to learn on their own. We're born with this, but what about the first 9 months? Are we similarly built with the ability to learn on our own?

We can't be machines because I think that excludes biological stuff. We're not alone in intelligence in the bio column, however. Even though emotions of our complexity don't seem to exist in a lobster, there's some complex nervous activity that would be hard to mimic with a machine that size. As you said, that's a long way off. Our machines have to be more focused on a purpose to seem impressive.
 
Machines can be built with the ability to learn on their own.
- I would agree that a machine could be programmed to obtain, collect, and apply data / instructions but do not agree that there are machines that can 'learn'...just my opinion

We're born with this, but what about the first 9 months? Are we similarly built with the ability to learn on our own?
- Some things are natural, part of instinct, and DNA. Machines do not have that ability. A piece of metal is not 'born' with such inherent abilities.

We can't be machines because I think that excludes biological stuff. We're not alone in intelligence in the bio column, however. Even though emotions of our complexity don't seem to exist in a lobster, there's some complex nervous activity that would be hard to mimic with a machine that size.
- We are talking about Intelligence, though, not 'all' of the 'biological stuff'. Again, even a lobster is born with inherent instincts and behavioral patterns due to DNA...which machines are not and do not have.

As you said, that's a long way off. Our machines have to be more focused on a purpose to seem impressive.
- In a way man's search / attempt - in some small, possibly unconscious way - to build a true AI seems to be his attempt to become equal with God, to show he can create 'life' as well. THAT feat is never going to happen.
 
Learning is using memory to determine a future decision, or to add context to how a future situation is processed and recursively remembered. Machines can be made to do this, in a way that's basic to our own ability.

To get around the illusions of instinct, hard-genetic programming, and the misunderstanding of birth vs conception, let's say machines are 'born' when they're turned on for the first time, and the way they're programmed to behave by default comprises their instincts... for the sake of argument. At conception, both are pieces of tissue / metal respectively, with no chance of intelligence at all.

'True AI' is not a scientific notion. Again, machines will be built for purpose, a reflection of our intelligence overshadowing the mad-scientist notion of competing with God.
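The definition of learning above (memory shaping future decisions) can be sketched directly. This is a rough, hypothetical toy, with invented action names and outcome values; it just records how each action turned out and prefers the action with the best remembered average:

```python
# Rough sketch of "learning as memory": record outcomes, let them shape the
# next decision. Action names and outcome numbers are invented for illustration.
from collections import defaultdict

class Learner:
    def __init__(self, actions):
        self.actions = actions
        self.memory = defaultdict(list)  # action -> remembered outcomes

    def remember(self, action, outcome):
        # Store the result of a past decision.
        self.memory[action].append(outcome)

    def decide(self):
        # Future decision determined by memory: best average remembered
        # outcome wins; untried actions score 0.
        def score(action):
            outcomes = self.memory[action]
            return sum(outcomes) / len(outcomes) if outcomes else 0.0
        return max(self.actions, key=score)

bot = Learner(["advance", "hold"])
bot.remember("advance", -1.0)  # advancing went badly
bot.remember("hold", 0.5)      # holding went well
print(bot.decide())  # 'hold'
```

It is nothing like human learning in richness, but it meets the thread's working definition: memory changing a future decision.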
 
Memory is installed into machines in the form of a chip and is limited to the amount of memory available on the chip, while human memory and the ability to learn are limitless. Yes, like a machine, our memory can become faulty and be erased.
 
Machines may give us calculated ways to kill each other, or the most likely way to achieve an objective, but things like empathy and maternal feelings and such emotions, will be forever beyond their grasp.
 
Trust me, for military applications these will be minimal memory, learning, and decision-making devices. Most of our military gadgets are semi-autonomous. Those aren't P-51s we fly now. A very nuanced decision is being made to interpret the pilot's inputs relative to their environment. Mercedes roll like that.
 
I think this question is about responsibility. Can a robot really take responsibility for what it does in war? Is it OK for one country to deploy a robotic force against another country with a conventional force? Otherwise, there's plenty of room in a battle tank for it to compute the warfare.
 
We're talking about cases where we don't want people involved 'on the front lines'. Why subject human memory to warfare when you can subject it to the development and deployment of war devices? To Ben Carsonize: "I don't advocate that a robot leads our armed forces." What about a soldier? "Now, a soldier, a pilot, OK."
 
I think AI is just the beginning; I can't imagine what kind of weapons lie ahead of us. All I know is that we need more than just AI; we need weapons and devices that can reach other planets within minutes. I think that we really need to give our military an upgrade. Weaponizing AI is just a thought, but you really need to learn how to have better control over AI first before you do that.
 
A Marine battlefield officer was charged with pissing on enemy corpses and dragged back to the U.S. to face court-martial. A Green Beret faces a dishonorable discharge for preventing the rape of an Afghan boy. Forget "artificial intelligence"; I'm waiting for some example of real intelligence from the fat asses in the Pentagon.
 
Raping little boys is so abhorrent to normal Americans it's simply unfathomable. But then, so is eating eyeballs.
This is only one of the reasons Republicans should never have invaded Afghanistan. To the Afghans, culturally, it's not so bad. They do both. If the country, and Iraq, had been studied BEFORE that unfortunate invasion, I suspect it would have been recognized for the primitive cesspool it is.
Do you think Bush or the GOP knew that was going on?
Do you think Bush or the GOP knew, or even knows the difference between Sunni and Shiite? Or even cares? Of course not. Ignorance doubles down.

Now, suddenly, Republicans are outraged. But to Afghans, it's their culture.
Look at this:
[image: women wearing neck rings]

If these women are found to be unfaithful, their neck rings are removed. They die shortly after.
Foot binding in China gave them such beautiful feet. See?
[image: bound feet]

The practice was outlawed after only a couple of thousand years.
You can bully people, something the GOP is especially good at. But they are much better at doing that to other Americans than at shipping their righteous indignation overseas.
Stop them from being molested over there, but ignore them over here. How the GOP views children.
 
"AI (Artificial Intelligence) - how soon before it enters the battlefield"

Ever hear of "smart" bombs? Or drones? Or JDAMs? Or 155 mm shells that know more about where they are and where they are going (and are able to self-guide) than the people who fired them? AI is already there and has been slowly increasing for decades now.
 

This is also not funny, it just seems that way: Chimp With AK-47 Shoots at Soldiers

 
