As an Uglo-American, this new technology offends me. I have attempted to use facial recognition software many times on my smartphone, only to have it burst into flames each time. Fingerprint recognition should be enough.
The PC nonsense spreads. If the computers had a harder time detecting white faces for some technical reason, the Leftist Outrage Factory would claim that was discrimination too, especially when used in “law enforcement”.
It amazes me that so-called “liberals” can’t see where this is headed: the Left is going to eat the Left in a frenzy to see who is the most Politically Correct.
(See Joe Biden as exhibit A, a man being taken apart by women who are nearly universally far-left progressives).
MARCH 20, 2019 AT 12:37 PM
Craig Cornell says:
Partisan . . . Hack. Do you even realize how boring you are? How predictable and lacking in any interesting thoughts? Anything even remotely amusing? So full of hate. Sad.
I do have to wonder why all of Craig’s posts involve race &/or party. A little sensitive Craig? Not everything is black and white (no pun intended) or a partisan issue. You may need a new pair of blinders. Maybe some with peripheral vision?
Did you read the article? Hello! The ARTICLE tied race to facial recognition. In fact, that was the only point of the story.
It is the Left that wants to find racism in everything, not me. If you knew me, you would laugh at how diverse (boring word) my friends and family are. And how tired everyone I know is of hearing race brought into every issue, in order to divide us further.
The computer can’t recognize dark faces very well? Big deal. I am sure the software has a lot of limitations beyond skin color that will never make it into an Insurance Journal article.
(And is this insurance related at all? No. More lefty propaganda from IJ).
(“Hey look, that cup is racist! And that rock! What about that tree over there!”)
What does this article have to do with racism, rather than the failings of the recognition system? It is supposed to be a facial recognition system, and it fails on a specific group of people a large percentage of the time. It’s been brought to the manufacturers’ attention and should be fixed.
Please, please show me where the article points out flaws in the system OTHER than being unable to accurately read dark faces. That is the ONLY issue cited in the article, and then the article goes on to extrapolate the horrible racial consequences (law enforcement).
The article didn’t point out any other limitations of the facial recognition software. Eye shape and color? Size of nose? Length of chin? Ear shape? Any other limitations in the software mentioned in the article?
Nope. Dark skins. Welcome to America, 2019, where everything is racist, including your computer program.
That is the exact flaw with the system… it misreads dark faces 34% of the time, white faces less than 1%… that’s an issue. What system would you want working for you that fails 34% of the time!? … “Oh, autonomous cars are dangerous and can lead to accidents because their system isn’t functioning correctly.” “Yeah, but what OTHER issues are there!? This country is mechanically prejudiced!”
The problem can negatively affect a large number of people.
There needs to be additional problems for you to care?
You are such an ignorant racist. Pull your pants up, your bias is showing.
Ah! Now I feel better. A lefty resorting to insults when his comments are exposed as false.
To quote you, “what does this article have to do with racism . . .?”
And then later, “you are such an ignorant racist . . .”
A question for the name-caller: why would anyone give a damn if the software didn’t identify black faces as accurately as white faces if it didn’t have to do with race?!!!??? And why are there no other identified flaws in the software in the article?
Please inform me, oh race-blind Wonder Liberal.
April 14, 2019 at 7:09 pm
A Computer Scientist says:
It’s probably not a flaw in the system at all but a problem with the training data that was used. An AI trained to recognize only black cats is gonna have problems when you throw in White and Stripes.
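The training-data point above can be made concrete with a toy sketch (all numbers below are made up for illustration): a face verifier typically decides two photos match if their feature distance falls under a threshold, and if that threshold is tuned only on the well-represented group, genuine matches from the under-represented group get rejected far more often.

```python
# Toy illustration with HYPOTHETICAL numbers: a verifier accepts a photo pair
# as "same person" if the feature distance is below a threshold, and the
# threshold is tuned on whichever group dominates the training data.

# Simulated feature distances for genuine same-person photo pairs.
# Group A (well represented): features are stable, so distances stay small.
group_a_genuine = [0.10, 0.12, 0.15, 0.11, 0.14]
# Group B (under-represented): features are noisier, so distances run larger.
group_b_genuine = [0.25, 0.40, 0.55, 0.30, 0.60]

# Distances for impostor (different-person) pairs, both groups combined.
impostors = [0.70, 0.80, 0.90, 0.75, 0.85]

# Tune the threshold only on Group A, as an imbalanced training set would:
# midway between Group A's worst genuine pair and the closest impostor pair.
threshold = (max(group_a_genuine) + min(impostors)) / 2  # about 0.425

def false_reject_rate(genuine_distances, threshold):
    """Fraction of genuine pairs wrongly rejected (distance >= threshold)."""
    rejects = sum(1 for d in genuine_distances if d >= threshold)
    return rejects / len(genuine_distances)

print(false_reject_rate(group_a_genuine, threshold))  # 0.0
print(false_reject_rate(group_b_genuine, threshold))  # 0.4
```

The classifier itself is identical for both groups; only the data it was tuned on differs, yet one group sees a 40% failure rate and the other none.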
April 9, 2019 at 10:33 am
CL PM says:
When my brother died a few years ago, as executor of his estate, I needed to get into his laptop. He had facial recognition software on it as part of the log in process. Even though I didn’t think we looked that much alike, we are both hair-challenged and the software recognized me as him and I was able to break into his laptop. Must have liked all that pale skin with our very high foreheads. I’ve been wary of the accuracy of this software ever since.
I think that “racially biased” is an inappropriate term for software that is less accurate for some groups of people. It’s quite appropriate to bring up any such flaws – just as it would be if the software were easily fooled by hats or changes in eyeglasses. But the term “racially biased” is emotionally charged and suggests a negative effect, even negative intention, in the public mind.
Many stories note that AI systems are created by human programmers, and humans have implicit bias, so their creations will obviously reflect those biases. That narrative is misleading – as reported to date, no case involves programmers sneaking their personal racial biases into the code, nor is that likely to have happened. But very few readers understand technology or take the time to read and parse carefully – they just pick up that AI is biased against blacks too, because it’s created by whites.
The flaws in facial recognition systems likely stem from a combination of lower brightness and contrast in some faces, and perhaps from the training sets having more people of some races/ethnicities than others. Yes, it’s good to fix those flaws, if possible; or at least reduce them. But that’s in no way intentional or due to unconscious human bias.
And the tenuous potential link to damaging outcomes is always nebulous. If it’s a real-world danger, describe the incidents where it results in real harm. Is the problem that criminals of some races are less often personally identifiable in security footage at the scene of a crime, and so go uncaught more often than criminals of other races (failure to match)? Or people being unable to enter secured areas they should be authorized for? Or are people being frequently detained, briefly or extensively, at airports or stadiums because of a false positive match with a banned person? Or are banned people of certain races going undetected more often? Or what?
Some of those might be minor or major hassles for people with darker skin tones, and some might be advantages.
I suspect that if the facial recognition software had been more accurate for darker skin tones, there would be cries of bias because it oversurveilled people of color compared to whites; or, on the flip side, one could say that the facial recognition is racially biased against whites because its higher accuracy makes it more intrusive on their privacy (again, I have little doubt this would be argued in the other direction).
Let’s skip all that misdirected politicization. Just report that the technology has a harder time with darker faces and has more false positive matches, false negative matches, and failed searches for them; it’s strongly in the interests of the vendors to continue improving their offerings in regard to any source of error.
Let’s not make it a source of racial tension just to get headlines, by labeling it with the highly emotionally charged term “racial bias” when that’s not the most accurate description in the first place.
Or else describe how and how often it causes actual harm; perhaps there really is a significant differential racial impact, which hasn’t yet been reported; in which case I may reconsider. I have no sympathy for real racial discrimination, I just don’t want us to be misled by constantly bringing in racial concepts when they aren’t needed.
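To show what “just report the error rates” could look like in practice, here is a minimal sketch with hypothetical data: count false negatives and false positives separately for each group, rather than publishing one blended accuracy figure that hides the difference.

```python
# Minimal sketch with HYPOTHETICAL data: per-group error reporting.
# Each record is (group, ground_truth_match, system_said_match).
results = [
    ("lighter", True, True), ("lighter", True, True),
    ("lighter", False, False), ("lighter", False, False),
    ("darker", True, False), ("darker", True, True),
    ("darker", False, True), ("darker", False, False),
]

def error_rates(records, group):
    """Return (false_negative_rate, false_positive_rate) for one group."""
    rows = [(truth, pred) for g, truth, pred in records if g == group]
    genuine = [(t, p) for t, p in rows if t]       # true-match pairs
    impostor = [(t, p) for t, p in rows if not t]  # non-match pairs
    false_negative = sum(1 for t, p in genuine if not p) / len(genuine)
    false_positive = sum(1 for t, p in impostor if p) / len(impostor)
    return false_negative, false_positive

print(error_rates(results, "lighter"))  # (0.0, 0.0)
print(error_rates(results, "darker"))   # (0.5, 0.5)
```

With numbers broken out this way, a reader can see exactly which kind of error falls on which group – a missed match has very different real-world consequences than a false match with a banned person.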
I hear ya, Rob. I’ve got the same issue. That’s why I don’t let people take pictures of me anymore – their lenses kept breaking! :D
Good one, Rob!
How boilerplate of you to reply with ‘Date/ Commenter Name/ Like or dislike’ as opening comment!
What are you talking about!?
Honestly such a big issue! Would never expect a big corporation such as Amazon to mess up this badly. DO BETTER BEZOS!!!!