Google AI Imaging Algorithm Cannot Differentiate Blacks From Primates...

The ‘A’ in AI stands for ‘Artificial’.

Duh

That is the point.
The term Artificial Intelligence should not be used.
It could be valid if everyone understood it to mean clever human programmers trying to get computers to simulate intelligence.
But that is not how anyone takes it.
Instead, everyone takes it as if it really were the start of some sort of computer sentience, which is never ever going to be remotely possible.
It is all just an amusing fake.
And as such, the term AI has become a fraud.
 

Actually, it is the current pop culture oxymoron.

In time, something else will become more fashionable.
 
Despite countless attempts to resolve this issue, the only “fix” they could come up with was to disable the algorithm from identifying primates at all. Not really a fix, but apparently some people were getting rather upset at the algorithm's shortcomings.
Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech
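Mechanically, the “fix” the article describes is just a blocklist applied to the classifier's output, not a change to the model itself. A minimal sketch of the idea (the label names and scores here are hypothetical, not Google's actual code):

```python
# Hypothetical sketch of the "fix": suppress labels instead of fixing the model.
# The classifier still scores the banned categories; they are simply filtered
# out of the results before the user ever sees them.

BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}  # assumed set, per the article

def filter_labels(predictions):
    """Drop any (label, score) prediction whose label is on the blocklist."""
    return [(label, score) for label, score in predictions
            if label.lower() not in BLOCKED_LABELS]

# Example: raw classifier output as (label, confidence) pairs.
raw = [("person", 0.81), ("gorilla", 0.77), ("tree", 0.12)]
print(filter_labels(raw))  # [('person', 0.81), ('tree', 0.12)]
```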


But it wasn’t just Google’s AI that demonstrated this anomaly...
Google's solution to accidental algorithmic racism: ban gorillas

“Flickr released a similar feature, auto-tagging, which had an almost identical set of problems.”

And there are many more. Given this glaring shortcoming in AI recognition technology, is the facial recognition software currently used by many agencies really trustworthy?
You mean that AI couldn't differentiate between them when the test was done. Software improves over time. Thirty years ago computers could not beat a grandmaster at chess. Now it's simple for them.


There really is no such thing as AI, as Artificial Intelligence implies the computer is supplying the recognition process.
In reality the computer is just manipulating ones and zeros in simple math, and all the intelligence is supplied by teams of human programmers. And the appearance of recognition is totally dependent upon how clever these human programmers were in anticipating scenarios.
But computers could always beat grand masters at chess.
It was just a matter of how much time anyone wanted to invest in the programming.
Obviously a team of a thousand programmers working for years can come up with a program much better than one that was written by a single programmer in a week.
The computer had nothing to do with it.
And there has been no advancement in the basic technology in the computer or programming in the last 70 years.
Making them smaller and faster only made them cheaper, not better.
Programming languages have actually gotten worse, and now they only teach bad scripting languages like Python instead of real programming languages like C or Pascal.
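As an aside on the chess point argued above: game programs on both sides of this debate work by searching the game tree rather than retrieving stored answers. A minimal minimax-style sketch for a toy game (real chess engines add pruning, evaluation heuristics, and huge opening and endgame databases on top of this idea):

```python
# Toy minimax: players alternately take 1 or 2 stones from a pile, and
# whoever takes the last stone wins. The program "decides" by exhaustively
# searching the game tree, the same basic idea chess engines refine.

def can_force_win(pile):
    """True if the player to move can force a win from this position."""
    if pile == 0:
        return False  # nothing left to take: the opponent just won
    # If any legal move leaves the opponent in a losing position, we win.
    return any(not can_force_win(pile - take) for take in (1, 2) if take <= pile)

def best_move(pile):
    """Pick a move that leaves the opponent in a losing position, if any."""
    for take in (1, 2):
        if take <= pile and not can_force_win(pile - take):
            return take
    return 1  # no winning move exists: take one stone and hope

print(can_force_win(7), best_move(7))  # True 1 (leave a multiple of 3)
```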
You don't know a thing about AI.

I not only have multiple degrees in computer science, but have likely implemented more artificial intelligence programs than almost anyone in the world. If there is one thing I am, it is one of the most experienced artificial intelligence programmers in the world.
And anyone who thinks computers can ever have any amount of independent thought, sentience, or anything else approaching what we call intelligence is totally wrong and ignorant.
And when programmers apply their own intelligence to making a program that is intended to anticipate and appear to make decisions, that intelligence is not artificial at all.
Instead of AI, they should be calling it Human Programmer Simulation of Fake Intelligence.
 
How do you explain IBM Watson winning Jeopardy?
 
In time, something else will become more fashionable.

Yes, like jumbo shrimp that is not really large, and Machine Learning when in reality machines can't learn anything.
 
How do you explain IBM Watson winning Jeopardy?

No computer can actually win a contest, but programmers can feed enough information into a database so that a human running the right programs can get winning answers back out during a game of Jeopardy.
Are you claiming that you actually believe the computer understood the game?
 
How can you claim no computer can win a contest when Watson clearly did exactly that? There was only one program giving the answers: Watson.

Whether you can claim Watson "understood" the game is irrelevant. From the point of view of an observer, it behaved exactly as if it understood the game. If a computer shows all the signs of having general intelligence, how can you demonstrate that it doesn't?
 
The software could have been smart enough to know that "gorillas" don't graduate.
You would think it would be a simple matter to broaden the scope of the AI's view to incorporate more than just the face. Recognizing that the subject was wearing garments of any kind could help it differentiate blacks from gorillas (a sketch of this idea follows after this exchange).
It's apparently not simple. I guess if a panda wore that garment and hat, it would still be a "graduation".
I guess you're right. Still, the articles never really delve into what causes this anomaly, and apparently even Google and the others don't know, as all come up with the same result and no legitimate fix has been proposed...
They probably don't know, or don't want to talk about it. Monkeys and humans have similar faces; if the color then fits, the AI can misinterpret the image. No matter how cunning an AI is, it will always remain a stupid set of text, and it doesn't actually see the images.

Not really.
Other primates have no real nose to speak of, have hair over the entire face, no forehead, and a protruding jaw.
Humans are nothing at all like other primates in facial characteristics.
You're going into a level of detail that also applies if you compare humans with each other. Other similar species, like pigs, look almost entirely different.
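The garment idea floated in this exchange amounts to simple cue fusion: let an independent clothing detector demote animal labels before a final label is picked. A rough sketch, with all names, scores, and the penalty factor made up for illustration:

```python
# Hypothetical cue fusion: demote animal labels in proportion to independent
# evidence that the subject is wearing clothes.

ANIMAL_LABELS = {"gorilla", "chimpanzee", "monkey"}

def fuse(label_scores, clothing_score, penalty=0.5):
    """Return the best label after down-weighting animal labels by clothing evidence."""
    adjusted = {}
    for label, score in label_scores.items():
        if label in ANIMAL_LABELS:
            score *= 1.0 - penalty * clothing_score
        adjusted[label] = score
    return max(adjusted, key=adjusted.get)

scores = {"gorilla": 0.70, "person": 0.60}
print(fuse(scores, clothing_score=0.9))  # 'person' once clothing is factored in
```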
 
If a computer shows all the signs of having general intelligence, how can you demonstrate that it doesn't?

No, what you are not aware of is that the categories, questions, etc. were all entered into the computer database in special languages the programmers devised in order to get the right answers out of the computer. It was the operators who understood the game, and the computers just retrieved data at run time that the programmers had previously entered. That is no accomplishment at all. The computer showed zero signs of intelligence.
 
You're going into a level of detail that also applies if you compare humans with each other. Other similar species, like pigs, look almost entirely different.

It's pretty easy to differentiate between humans and other primates.
Humans have about as much forehead above the eyes as they do mouth and chin below the eyes, so the eyes are right about the middle.
Not at all true with monkeys and chimps. Their eyes are near the top of their head, because their brains are much smaller.
They also have no lips.
Humans have big noses that are mostly decorative.
Some other primates like baboons also have large decorative noses, but not many.
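The proportion rule described above is easy to state as a landmark heuristic: check where the eye line falls between the top and bottom of the face. A sketch with hypothetical pixel coordinates and assumed thresholds:

```python
# Sketch of the proportion rule: where does the eye line fall between the
# top and bottom of the face? Coordinates are hypothetical pixel values
# measured from the top of a face bounding box; thresholds are assumptions.

def eye_position_ratio(eye_y, face_top_y, face_bottom_y):
    """0.0 means eyes at the very top of the face, 1.0 at the very bottom."""
    return (eye_y - face_top_y) / (face_bottom_y - face_top_y)

def looks_human(eye_y, face_top_y, face_bottom_y, lo=0.4, hi=0.6):
    """Human eyes sit near the vertical middle; other primates' sit higher."""
    return lo <= eye_position_ratio(eye_y, face_top_y, face_bottom_y) <= hi

print(looks_human(eye_y=100, face_top_y=0, face_bottom_y=200))  # True (mid-face)
print(looks_human(eye_y=50, face_top_y=0, face_bottom_y=200))   # False (high eyes)
```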
 
It was the operators who understood the game, and the computers just retrieved data at run time that the programmers had previously entered. The computer showed zero signs of intelligence.
Wrong. That is not how it was done.

https://seekingalpha.com/article/4087604-much-artificial-intelligence-ibm-watson

After Watson interprets the question, it searches its myriad sources for answers, much like Google’s conventional search engine does. Watson can sift through unstructured data, such as Wikipedia and newswires, as well as structured databases and data.
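Whatever one makes of the article, the mechanism it describes (searching a corpus and ranking candidates) can be sketched with plain word-overlap scoring. A toy version, nowhere near Watson's actual pipeline:

```python
# Toy retrieval: score each document by how many question words it shares,
# then return the best match. Watson's real pipeline generated and ranked
# thousands of candidates with far more signals than this.

def best_document(question, documents):
    q_words = set(question.lower().split())

    def overlap(doc):
        return sum(1 for word in doc.lower().split() if word in q_words)

    return max(documents, key=overlap)

corpus = [
    "Toronto is the largest city in Canada.",
    "Chicago's O'Hare airport is named for a World War II hero.",
]
print(best_document("Which US city airport is named for a war hero?", corpus))
```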
 
You are VERY naive if you believed that fake marketing article.

For example, it said Deep Learning was one of the methods used, and that is ridiculous.
Deep Learning is using trial and error to recognize patterns.
And there is absolutely no way to use trial and error in something like Jeopardy because there are no patterns and the game would be over before you could begin to find them if there were.
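For reference, the "trial and error" in question is usually described as iterative error correction: nudge parameters in whatever direction shrinks the error on example data. A minimal sketch of that loop, fitting a line to three points:

```python
# Error correction in miniature: fit y = w*x + b to example points by
# repeatedly nudging w and b in the direction that shrinks the error.

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # points on the line y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05  # start ignorant; lr is the step size
for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y  # how wrong the current guess is here
        w -= lr * err * x      # adjust the weight against the error
        b -= lr * err          # adjust the bias against the error

print(round(w, 2), round(b, 2))  # converges to about 2.0 and 1.0
```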

Watson could not possibly have understood the categories or the questions.
For example, Jeopardy uses traditional names for some of its categories, such as "Name's The Same" for two or more items with the same name, "Unreal Estate" for fictional places, and both "Potpourri" and "Hodgepodge" for general knowledge. Category names that include quotation marks around part of the name indicate that correct responses will include the letters or word enclosed in the quotes.
And there just is no way to program a computer to figure that out on its own, so very specific code has to be written for it. And when it is not a traditional category, a human is going to have to skip the computer's query searches and pen the answers on the fly, which is what they actually did while hiding that fact. Otherwise a category like "Potpourri" would likely return a number of scents used in warm water to make a room smell better. Human language is just way too ambiguous for a computer to even begin to be able to deal with complexities like this.
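The quoted-letters convention described above is at least mechanically checkable; a sketch of that single rule (a real system would obviously need far more than this):

```python
# One checkable convention from the post: a quoted fragment in a category
# name must appear inside the correct response.

import re

def required_fragment(category):
    """Return the quoted part of a category name, or None if there is none."""
    match = re.search(r'"([^"]+)"', category)
    return match.group(1) if match else None

def candidate_ok(category, answer):
    """Reject candidate answers that lack the required quoted fragment."""
    fragment = required_fragment(category)
    return fragment is None or fragment.lower() in answer.lower()

print(candidate_ok('Starts With "Cat"', "Catastrophe"))  # True
print(candidate_ok('Starts With "Cat"', "Dogma"))        # False
```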
 
