Google is looking for a new skin tone palette to avoid racial biases in its products




Google is developing alternative measures to the industry-standard skin tone scale, with the aim of ensuring that its products work properly for people of all skin types.



Currently, companies use the so-called Fitzpatrick scale, a skin color classification developed in 1975 that simplifies the range of tones into six types: four for "white" skin, one for "brown" and one for "black". It was designed from a dermatological point of view, and researchers and professionals in that specialty consider it inadequate for evaluating possible biases because of its lack of attention to diversity.






The skin tone scale that technology companies usually use to evaluate their products includes four tones for "white" skin, one for "brown" and one for "black"




Hence, the Mountain View company is working on alternative measures, as it told Reuters.



A scale lacking in diversity



Researchers from the U.S. Department of Homeland Security recommended abandoning the Fitzpatrick scale for assessing facial recognition because it misrepresents the range of colors found in diverse populations, as they explained at a conference on federal technology standards held last year. The news agency asked Google about this very issue.



"We are working on alternative, more inclusive measures that could be helpful in the development of our products, and we will collaborate with scientific and medical experts, as well as groups that work with communities of color."they explained, declining to provide further details.




Researchers from the U.S. Department of Homeland Security recommended dropping the Fitzpatrick scale




The problems arising from the use of this scale and, above all, from racial and gender biases in general, go back a long way: they have been dragged along since the 1980s. In May 2019, in fact, the Californian company said it was working to fix biases in its artificial intelligence. Amazon, in 2018, had to scrap a recruiting AI because of its bias against women. And the problem, as usual, lies in the dataset used as a reference.













Google has also made headlines because of its biases. The artificial intelligence that analyzes images uploaded to Google Photos labeled two people of color as gorillas. The company apologized, but its temporary fix, while it searched for a definitive one, was to remove the "gorilla" tag from the application. Three years later, the new solution was similar: preventing Photos from identifying gorillas. Wired noted that the service simply returned no results for the terms "gorilla," "chimpanzee," and "monkey," even though there were photos of those animals.




Racial and gender biases have been dragged along since the 1980s




More recently, Google fired a researcher who attributed the decision to her complaints about bias in artificial intelligence. The employee, Timnit Gebru, is known for research carried out with Joy Buolamwini showing that commercially available facial recognition systems correctly identified the gender of light-skinned men but failed notably with dark-skinned people, reaching an error rate of 35% for dark-skinned women.



By finding an alternative measure of skin tones more effective than the scale used so far, the hope is that Google and other companies will be able to better test their artificial intelligence algorithms and products such as the health-related sensors in smartwatches.