MIT researcher reveals serious racial and gender bias in facial recognition technology
Facial recognition technology is becoming more widely used, but MIT researcher Joy Buolamwini found that even software developed by well-known technology companies has a markedly higher error rate when identifying darker-skinned faces.
According to Buolamwini's study, facial recognition software developed by technology giants such as Amazon misidentifies the gender of dark-skinned women at a far higher rate than that of light-skinned men.
Buolamwini's work has prompted companies such as Microsoft and IBM to improve their systems, but Amazon responded angrily, publicly criticizing her research methods. A group of artificial intelligence (AI) experts publicly backed Buolamwini yesterday, calling on Amazon to stop selling its facial recognition software to the police.
Beyond researchers and the corporate world, Buolamwini's research has also drawn the attention of politicians, some of whom argue that the use of facial recognition technology should be restricted.
Buolamwini has pointed out that most of these technologies are currently deployed without oversight, or even in secret, and that by the time the public becomes alert it may already be too late.
In addition, many researchers have noted that artificial intelligence systems rely on big data to find and identify patterns, but in doing so they also replicate the institutional biases embedded in that data.
For example, if an artificial intelligence system is trained mostly on images of white men, it will be best at recognizing white male faces.
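The mechanism described above can be illustrated with a deliberately simplified sketch. Everything here is hypothetical, not the method used by any of the systems discussed: synthetic 2-D points stand in for face features, two made-up demographic groups ("A" and "B") have differently oriented gender cues, and a toy nearest-centroid classifier is trained on data dominated by group A. The point is only that imbalanced training data alone can produce sharply unequal error rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature space standing in for face embeddings.
# Group A (majority): the two classes separate along the x-axis near the origin.
# Group B (minority): the two classes separate along the y-axis, far from the origin.
def sample(center, axis, label_sign, n):
    """Draw n noisy points around center, offset along the group's class axis."""
    mean = np.asarray(center, float) + label_sign * np.asarray(axis, float)
    return rng.normal(mean, 0.5, size=(n, 2))

def make_split(n_a, n_b):
    """Build features X, class labels y (0/1), and group tags g."""
    X = np.vstack([
        sample((0, 0), (1, 0), +1, n_a),   # group A, class 1
        sample((0, 0), (1, 0), -1, n_a),   # group A, class 0
        sample((5, 5), (0, 1), +1, n_b),   # group B, class 1
        sample((5, 5), (0, 1), -1, n_b),   # group B, class 0
    ])
    y = np.array([1] * n_a + [0] * n_a + [1] * n_b + [0] * n_b)
    g = np.array(["A"] * 2 * n_a + ["B"] * 2 * n_b)
    return X, y, g

# Train on heavily imbalanced data: 500 examples per class from group A,
# only 10 per class from group B.
X_tr, y_tr, _ = make_split(500, 10)

# Toy nearest-centroid classifier: one mean point per class,
# dominated by group A because of the imbalance.
centroids = {c: X_tr[y_tr == c].mean(axis=0) for c in (0, 1)}

def predict(X):
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in (0, 1)])
    return dists.argmin(axis=0)

# Evaluate on a balanced test set and report the error rate per group.
X_te, y_te, g_te = make_split(200, 200)
pred = predict(X_te)
for grp in ("A", "B"):
    mask = g_te == grp
    err = (pred[mask] != y_te[mask]).mean()
    print(f"group {grp} error rate: {err:.1%}")
```

Because the learned centroids reflect group A's geometry almost exclusively, nearly all of group B's points fall on one side of the decision boundary, so roughly half of group B is misclassified while group A's error stays low.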
Such differences can sometimes be a matter of life and death. A recently published study showed that the computer vision systems that help self-driving cars "see the road" have noticeably more difficulty detecting dark-skinned pedestrians.