More than 70 researchers and academics ask the Spanish Government for a moratorium on facial recognition until it is regulated

Some seventy professors and specialists in computer science, ethics, design and philosophy have signed a letter addressed to the Government of Spain asking it to set up a commission of inquiry to study the need for a moratorium on the use and commercialization of facial recognition and analysis systems by public bodies and private companies.

The signatories request this postponement "until the Cortes Generales and the European legislative institutions debate which of these systems should be allowed, in what way, under what conditions, with what guarantees and for what purposes, if at all." The reason? The serious deficiencies and risks these systems present, they argue.

"Fundamental issues of social justice, human dignity, equity, equal treatment and inclusion are at stake"

Facial recognition has "serious problems"


The letter, signed by professors, researchers and other professionals connected in one way or another to the field, is motivated by their concern about "the potential pernicious effects that these systems may have on the well-being, interests and fundamental needs and rights of the Spanish population". In their opinion, "fundamental questions of social justice, human dignity, equity, equal treatment and inclusion are at stake."

The letter presents, by way of summary, a selection of the "serious problems" posed by systems for recognizing and analyzing images of people and, by extension, by the machine learning algorithms that underpin them computationally.

The idea is not so much to regulate the technology itself as to regulate the decisions made through its use

The examples of problematic uses of these technologies range, they explain, from attributing a certain characteristic or tendency to a person based on population statistics, to linking a person's physical traits or features with certain assumptions, to the opacity of the algorithms as to the reasons why a person has been classified in a certain way. These and other uses are problematic and potentially dangerous, the signatories state.


For all these reasons, and with the case of Renfe's surveillance system in mind, they urge the Executive to intervene quickly "before these systems continue to expand and become de facto standards", and stress the need to create a commission to investigate a moratorium that, in their opinion, is "essential and urgent."

"We suggest that this commission be independent and composed of scientists, lawyers, experts in ethics and artificial intelligence, and members of civil society, especially those groups that may be prima facie affected by these systems," they write. The idea is not so much to regulate the technology itself as to regulate the decisions made through its use.

Facial recognition systems have proven unreliable internationally, warnings have been raised about their use in education in Spain, and IBM, Microsoft and Amazon have put their use by police forces on hold. "Technology can increase transparency and help police protect communities, but it should not promote racial discrimination or injustice," IBM stated.
