Reclaim Your Face

The “Reclaim Your Face” alliance, which calls for a ban on biometric facial recognition in public space, has been registered as an official European Citizens’ Initiative. One of its goals is to establish transparency: “Facial recognition is being used across Europe in secretive and discriminatory ways. What tools are being used? Is there evidence that it’s really needed? What is it motivated by?” (Website RYF) Another is to draw red lines: “Some uses of biometrics are just too harmful: unfair treatment based on how we look, no right to express ourselves freely, being treated as a potential criminal suspect.” (Website RYF) Finally, the initiative demands respect for humans: “Biometric mass surveillance is designed to manipulate our behaviour and control what we do. The general public are being used as experimental test subjects. We demand respect for our free will and free choices.” (Website RYF) In recent years, the use of facial recognition techniques has been the subject of critical reflection, for example in the paper “The Uncanny Return of Physiognomy” presented at the 2018 AAAI Spring Symposia or in the chapter “Some Ethical and Legal Issues of FRT” published in the book “Face Recognition Technology” in 2020. More information at reclaimyourface.eu.

IBM Will Stop Developing or Selling Facial Recognition Technology

IBM will stop developing or selling facial recognition software due to concerns that the technology is being used to support racism. This was reported by MIT Technology Review on 9 June 2020. In a letter to Congress, IBM’s CEO Arvind Krishna wrote: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” (Letter to Congress, 8 June 2020) The extraordinary letter “also called for new federal rules to crack down on police misconduct, and more training and education for in-demand skills to improve economic opportunities for people of color” (MIT Technology Review, 9 June 2020). Already in 2018, a talk at Stanford University warned against the return of physiognomy in connection with face recognition. The paper is available here.

Opportunities and Risks of Facial Recognition

The book chapter “The BESTBOT Project” by Oliver Bendel, David Studer and Bradley Richards was published on 31 December 2019. It is part of the 2nd edition of the “Handbuch Maschinenethik”, edited by Oliver Bendel. From the abstract: “The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that recognizes problems and conditions of the user with the help of text analysis and facial recognition and reacts morally to them. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both had room for improvement and advancement; thus, the BESTBOT project used their findings as a basis for its development and realization. Text analysis and facial recognition in combination with emotion recognition have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, with new solutions of this kind come new problems, especially with regard to privacy and informational autonomy, which information ethics must deal with.” (Abstract) The book chapter can be downloaded from link.springer.com/referenceworkentry/10.1007/978-3-658-17484-2_32-1.
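
How such a two-channel problem detection might look can be sketched in a few lines of Python. Everything below (the function name, the emotion labels and the threshold) is an illustrative assumption, not the project’s actual code:

```python
# Hypothetical sketch of combining text analysis and facial emotion
# recognition for problem detection, in the spirit of the BESTBOT.
# Labels, threshold and function name are illustrative assumptions.

DISTRESS_EMOTIONS = {"SAD", "ANGRY", "FEAR"}

def problem_detected(text_sentiment: float, facial_emotion: str) -> bool:
    """Flag a problem if either channel signals distress.

    text_sentiment: score in [-1.0, 1.0]; negative values mean distress.
    facial_emotion: label returned by an emotion recognition API.
    """
    return text_sentiment < -0.5 or facial_emotion in DISTRESS_EMOTIONS

# A chatbot could react morally when the channels signal trouble,
# e.g. by asking a follow-up question or naming an emergency hotline.
if problem_detected(text_sentiment=-0.7, facial_emotion="SAD"):
    print("Offer help and mention an emergency hotline.")
```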

Towards an Anti-Face

Face recognition in public spaces is a threat to freedom. You can defend yourself with masks or with counter-technologies; even make-up is a possibility. Adam Harvey demonstrated this in the context of the CV Dazzle project at the hacker congress 36C3 in Leipzig. As Heise reports, he takes characteristics such as face color, symmetry and shadows and modifies them until they seem unnatural to algorithms. The result, according to Adam Harvey, is an “anti-face”. The style tips for reclaiming privacy could be useful in Hong Kong, where face recognition is widespread and used against freedom fighters. Further information can be found on the CV Dazzle website. “CV Dazzle explores how fashion can be used as camouflage from face-detection technology, the first step in automated face recognition.” (Website CV Dazzle)
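
To make the target concrete: the face-detection step that CV Dazzle aims to defeat can be reproduced with OpenCV’s classic Haar cascade (Viola-Jones) detector. A minimal sketch, assuming an image file named portrait.jpg; whether a particular look actually fools a particular detector has to be tested empirically:

```python
# Minimal sketch of the face-detection step that CV Dazzle's camouflage
# aims to defeat, using OpenCV's Haar cascade (Viola-Jones) detector.
import cv2

# OpenCV ships this cascade file with its installation.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("portrait.jpg")  # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The detector keys on light/dark contrast patterns around eyes, nose
# bridge and cheeks; asymmetric hair and make-up disrupt exactly these cues.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Faces found: {len(faces)}")  # 0 would mean the camouflage worked
```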

The New Dangers of Face Recognition

The dangers of face recognition are being discussed more and more. A new initiative aims to ban the use of the technology to monitor the American population. The AI Now Institute already warned of the risks in 2018, as did Oliver Bendel. The ethicist had a particular use in mind: in the 21st century, attempts are increasingly being made to connect face recognition to the pseudoscience of physiognomy, which has its origins in ancient times. From a person’s appearance, conclusions are drawn about their inner self, and attempts are made to identify character traits, personality traits and temperament, or political and sexual orientation. Biometrics plays a role in this concept. It was founded in the eighteenth century, when physiognomy, under the leadership of Johann Caspar Lavater, reached its dubious climax. In his paper “The Uncanny Return of Physiognomy”, Oliver Bendel elaborates the basic principles of this topic; selected projects from research and practice are presented and, from an ethical perspective, the possibilities of face recognition are subjected to fundamental critique in this context, including the above examples. The philosopher presented his paper on 27 March 2018 at Stanford University (“AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents”, AAAI 2018 Spring Symposium Series). The whole volume is available here.

The System that Detects Fear

Amazon Rekognition is well-known software for facial recognition, including emotion detection. It is used in the BESTBOT, a moral machine that hides an immoral machine. The immoral aspect stems precisely from facial recognition, which endangers the privacy of the user and his or her informational autonomy. The project is intended not least to draw attention to this risk. On 12 August 2019, Amazon announced that it had improved and expanded its system: “Today, we are launching accuracy and functionality improvements to our face analysis features. Face analysis generates metadata about detected faces in the form of gender, age range, emotions, attributes such as ‘Smile’, face pose, face image quality and face landmarks. With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear’.” (Amazon, 12 August 2019) Because the BESTBOT also accesses other systems such as the MS Face API and Kairos, it can already recognize fear. So the change at Amazon means no change for this artifact of machine ethics.
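
For illustration, the face analysis described in the announcement can be queried via the AWS SDK for Python (boto3) roughly as follows; the image file name is a placeholder and valid AWS credentials are assumed:

```python
# Sketch: requesting Amazon Rekognition face analysis, including the
# emotions listed above. "face.jpg" is a placeholder; AWS credentials
# must be configured in the environment.
import boto3

client = boto3.client("rekognition")

with open("face.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # include emotions, age range, attributes etc.
    )

for face in response["FaceDetails"]:
    # Each emotion comes with a confidence score; report the strongest.
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"Dominant emotion: {top['Type']} ({top['Confidence']:.1f}%)")
```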