Google AI Imaging Algorithm Cannot Differentiate Blacks From Primates...

Vastator
Despite countless attempts to resolve this issue, the only “fix” they could come up with was to disable the algorithm from identifying primates at all. Not really a fix, but apparently some people were getting rather upset at the algorithm’s shortcomings.
Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech



But it wasn’t just Google’s AI that demonstrated this anomaly...
Google's solution to accidental algorithmic racism: ban gorillas

“Flickr released a similar feature, auto-tagging, which had an almost identical set of problems.”

And there are many more. Given this glaring shortcoming in AI recognition technology, is the facial recognition software currently used by many agencies really trustworthy?
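For what it’s worth, the reported “fix” was not a retrained model but a filter on the output: the problem labels were simply removed from what the tagger is allowed to return. A minimal sketch of that idea (the function, label names, and scores below are hypothetical, not Google’s actual API):

```python
# Hypothetical post-hoc label filter: the model may still predict the
# blocked labels internally; they are simply never shown to the user.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey"}

def filter_labels(predictions):
    """Drop any (label, confidence) pair whose label is on the blocklist.

    `predictions` is a list of (label, confidence) tuples, standing in
    for whatever the real tagging model returns.
    """
    return [(label, score) for label, score in predictions
            if label.lower() not in BLOCKED_LABELS]

raw = [("gorilla", 0.91), ("person", 0.88), ("outdoors", 0.60)]
print(filter_labels(raw))  # the "gorilla" tag is suppressed
```

This is why it reads as a workaround rather than a fix: the underlying classifier is unchanged, it just isn’t allowed to say the word.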
 
The software could have been smart enough to know that "gorillas" don't graduate.
You would think it would be a simple matter to broaden the scope of the AI's view to incorporate more than just the face. Recognising that the subject is wearing garments of any kind could help it differentiate black people from gorillas.
 
It's apparently not simple. I guess if a panda wore that garment and hat, it would still be a "graduation".
 
I guess you're right. Still, the articles never really delve into what causes this anomaly, and apparently even Google and the others don't know: all come up with the same result, and no legitimate fix has been proposed...
 
They probably don't know, or don't want to talk about it. Monkeys and humans have similar faces, and when the colour fits, the AI can misinterpret the image. No matter how cunning an AI is, it remains a dumb set of instructions; it doesn't see images the way we do.
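One standard mitigation none of the articles mention is to make the classifier abstain when its confidence is low, rather than committing to a plausible-looking but wrong label. A toy sketch of that idea (the labels, scores, and threshold are invented for illustration):

```python
def top_label(scores, threshold=0.75):
    """Return the highest-scoring label, or None (abstain) when the
    model isn't confident enough to commit to any label.

    `scores` maps label -> probability (assumed to sum to roughly 1).
    """
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return label if score >= threshold else None

# Confident prediction: commit to the label.
print(top_label({"graduation": 0.92, "gorilla": 0.05}))  # graduation
# Ambiguous prediction: abstain instead of risking an offensive tag.
print(top_label({"gorilla": 0.48, "person": 0.45}))      # None
```

An abstaining tagger simply leaves hard images untagged, which is less drastic than deleting a label from the vocabulary outright.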
 
Isn't this similar to the reason some countries are moving to ban black plastics? Automated recycling sorters' computers don't handle them well for some reason.
 
But the software has never made a mistake like the example you post. Nor are there any references to it misidentifying, say, a coal miner or "blackface". The only real confusion reported was between black people and gorillas or chimps; other primates seemed readily identifiable by the software.
 
Not sure... Do you have a link?
 

Black plastic: what's the problem? | Greenpeace UK

"Here’s why: the method used to colour the plastic means it can’t be recognised by the sorting systems used in most recycling plants. If you’ve ever seen images of a recycling sorting system, you’ll have seen the “waste” moving along a conveyor belt, with different types of material being removed at different stages.


As black plastic can’t be recognised by optical sorting systems, products made with it usually end up reaching the end of the processing line as ‘residue’ –
which means they’re headed straight to landfill."
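The sorting line described above can be sketched as a simple routing loop: items whose polymer the optical sensor can identify are diverted to a matching bin, and anything it gets no usable reading from (the black-plastic case) falls through to residue. All the names and the toy sensor here are illustrative:

```python
def sort_items(items, detect):
    """Route each item on the belt to a bin keyed by detected polymer.

    `detect(item)` stands in for the optical sensor: it returns a
    polymer name such as "PET", or None when it gets no usable reading.
    Unrecognised items end up in the "residue" bin.
    """
    bins = {}
    for item in items:
        polymer = detect(item)
        bins.setdefault(polymer or "residue", []).append(item)
    return bins

# Toy sensor: black items return no reading, mimicking the way
# carbon-black pigment defeats the optical sorter.
detect = lambda item: None if item["colour"] == "black" else item["polymer"]
stream = [{"polymer": "PET", "colour": "clear"},
          {"polymer": "PET", "colour": "black"}]
print(sort_items(stream, detect))
```

The point the article makes drops out directly: the black PET item is chemically identical to the clear one, but it still lands in "residue" because the sensor never identifies it.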
 
The "similarity" is the same, though. There was another issue where iPhones could not distinguish between Asian faces.
 
You mean that the AI couldn't differentiate between them when the test was done. Software improves over time. Thirty years ago computers could not beat a grandmaster at chess; now it's simple for them.
 

Not really.
Other primates have no real nose to speak of, have general facial hair, no forehead, and a protruding jaw.
In facial characteristics, humans are nothing at all like other primates.
 


There really is no such thing as AI, as "artificial intelligence" implies the computer is supplying the recognition process.
In reality the computer is just manipulating ones and zeros with simple math, and all the intelligence is supplied by teams of human programmers. The appearance of recognition is totally dependent upon how clever those programmers were in anticipating scenarios.
But computers could always beat grandmasters at chess.
It was just a matter of how much time anyone wanted to invest in the programming.
Obviously a team of a thousand programmers working for years can come up with a much better program than one written by a single programmer in a week.
The computer had nothing to do with it.
And there has been no advancement in the basic technology of computers or programming in the last 70 years.
Making them smaller and faster only made them cheaper, not better.
Programming languages have actually gotten worse, and now they only teach bad scripting languages like Python instead of real programming languages like C or Pascal.
 
You don't know a thing about AI.
 
the only, "cost effective fix"?
 
