The conference program for ACI’22 will be available in the course of November. In the meantime, the website lists the accepted papers in alphabetical order. Among them are the papers “A Face Recognition System for Bears: Protection for Animals and Humans in the Alps” (Oliver Bendel and Ali Yürekkirmaz), “A Framework for Training Animals to Use Touchscreen Devices for Discrimination Tasks” (Jennifer Cunha and Corinne Renguette), “Politicising Animal-Computer Interaction: an Approach to Political Engagement with Animal-Centred Design” (Clara Mancini, Orit Hirsch-Matsioulas, and Daniel Metcalfe), and “TamagoPhone: A framework for augmenting artificial incubators to enable vocal interaction between bird parents and eggs” (Rebecca Kleinberger, Megha Vemuri, Janelle Sands, Harpreet Sareen, and Janet M. Baker). ACI2022 will take place from 5 to 8 December 2022, hosted by Northumbria University, Newcastle upon Tyne, UK.
Towards Animal Face Recognition
Face recognition for humans is highly controversial, especially when it comes to surveillance or physiognomy. However, there are other possible applications, for example in relation to animals. At the moment, individual animals are mainly tracked with the help of chips and transmitters. However, these disturb some of the animals, and the question is whether one should interfere with living beings in this way. In addition, animals are constantly being born that escape monitoring. The project “ANIFACE: Animal Face Recognition” will develop a concept for a facial recognition system that can identify individual bears and wolves. These species are spreading in Switzerland and need to be monitored to protect both them and the people (and agriculture) affected. Facial recognition can be used to identify the individual animals and also to track them, provided there are enough stations, which must of course be networked with each other. An interesting sidebar would be emotion recognition for animals: the system could find out how bears and wolves are feeling and then trigger certain actions. The project was applied for in July 2021 by Prof. Dr. Oliver Bendel, who has already designed and implemented several animal-friendly machines with his teams. In August, it will be decided whether he can start the work.
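Systems of this kind typically identify individuals by comparing face embeddings, i.e. feature vectors extracted from camera images. A minimal sketch of the matching step, assuming a hypothetical upstream model has already turned each photo into a vector (the names and values below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(known, query, threshold=0.8):
    """Return the name of the closest known individual, or None if no
    stored embedding is similar enough (e.g. a new, untagged animal)."""
    best_name, best_score = None, threshold
    for name, emb in known.items():
        score = cosine_similarity(emb, query)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical embeddings for two known bears; real systems use vectors
# with hundreds of dimensions produced by a neural network.
known_bears = {"M29": [0.9, 0.1, 0.3], "F07": [0.2, 0.8, 0.5]}
print(identify(known_bears, [0.88, 0.15, 0.28]))  # prints "M29"
```

Networked camera stations could then log which individual was seen where, without chips or transmitters.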
Reclaim Your Face
The “Reclaim Your Face” alliance, which calls for a ban on biometric facial recognition in public space, has been registered as an official European Citizens’ Initiative. One of the goals is to establish transparency: “Facial recognition is being used across Europe in secretive and discriminatory ways. What tools are being used? Is there evidence that it’s really needed? What is it motivated by?” (Website RYF) Another one is to draw red lines: “Some uses of biometrics are just too harmful: unfair treatment based on how we look, no right to express ourselves freely, being treated as a potential criminal suspect.” (Website RYF) Finally, the initiative demands respect for humans: “Biometric mass surveillance is designed to manipulate our behaviour and control what we do. The general public are being used as experimental test subjects. We demand respect for our free will and free choices.” (Website RYF) In recent years, the use of facial recognition technology has been the subject of critical reflection, such as in the paper “The Uncanny Return of Physiognomy” presented at the 2018 AAAI Spring Symposia or in the chapter “Some Ethical and Legal Issues of FRT” published in the book “Face Recognition Technology” in 2020. More information at reclaimyourface.eu.
IBM will Stop Developing or Selling Facial Recognition Technology
IBM will stop developing or selling facial recognition software due to concerns that the technology is being used to support racism. This was reported by MIT Technology Review on 9 June 2020. In a letter to Congress, IBM’s CEO Arvind Krishna wrote: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” (Letter to Congress, 8 June 2020) The extraordinary letter “also called for new federal rules to crack down on police misconduct, and more training and education for in-demand skills to improve economic opportunities for people of color” (MIT Technology Review, 9 June 2020). A talk at Stanford University in 2018 warned against the return of physiognomy in connection with face recognition. The paper is available here.
Towards an Anti Face
Face recognition in public spaces is a threat to freedom. You can defend yourself with masks or with counter-technologies. Even make-up is a possibility. Adam Harvey demonstrated this in the context of the CV Dazzle project at the hacker congress 36C3 in Leipzig. As Heise reports, he uses biological characteristics such as face color, symmetry and shadows and modifies them until they seem unnatural to algorithms. The result, according to Adam Harvey, is an “anti face”. The style tips for reclaiming privacy could be useful in Hong Kong, where face recognition is widespread and used against freedom fighters. Further information can be found on the CV Dazzle website. “CV Dazzle explores how fashion can be used as camouflage from face-detection technology, the first step in automated face recognition.” (Website CV Dazzle)
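Dazzle make-up works because detectors rely on regularities such as the rough left-right symmetry of a face; an asymmetric pattern disturbs those cues. A toy illustration of the idea (not Harvey’s actual method, and real detectors use learned features, not this measure):

```python
def symmetry_score(face):
    """Toy left-right symmetry score for a grayscale face given as a
    list of pixel rows (values 0-255); 1.0 means perfectly
    mirror-symmetric. Purely illustrative."""
    diff, total = 0, 0
    for row in face:
        for left, right in zip(row, reversed(row)):
            diff += abs(left - right)
            total += 255  # maximum possible difference per pixel pair
    return 1.0 - diff / total

symmetric = [[10, 50, 10], [30, 90, 30]]
dazzled   = [[10, 50, 200], [30, 90, 240]]  # asymmetric "make-up" patch

print(symmetry_score(symmetric))  # prints 1.0
print(symmetry_score(dazzled) < symmetry_score(symmetric))  # prints True
```

The dazzled face scores lower: a face-like region that no longer looks face-like to the algorithm, i.e. an “anti face”.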
Permanent Record
The whistleblower Edward Snowden spoke to the Guardian about his new life and concerns for the future. The reason for the two-hour interview was his book “Permanent Record”, which will be published on 17 September 2019. “In his book, Snowden describes in detail for the first time his background, and what led him to leak details of the secret programmes being run by the US National Security Agency (NSA) and the UK’s secret communication headquarters, GCHQ.” (Guardian, 13 September 2019) According to the Guardian, Snowden said: “The greatest danger still lies ahead, with the refinement of artificial intelligence capabilities, such as facial and pattern recognition.” (Guardian, 13 September 2019) Public appearances by and interviews with him remain rare. On 7 September 2016, the movie “Snowden” was shown as a preview in the Cinéma Vendôme in Brussels. Jan Philipp Albrecht, Member of the European Parliament, invited Viviane Reding, the Luxembourg politician and journalist, as well as authors and scientists such as Yvonne Hofstetter and Oliver Bendel. After the preview, Edward Snowden was connected to the participants via videoconferencing for almost three quarters of an hour.
The New Dangers of Face Recognition
The dangers of face recognition are discussed more and more. A new initiative is aimed at banning the use of the technology to monitor the American population. The AI Now Institute already warned of the risks in 2018, as did Oliver Bendel. The ethicist had a particular use in mind. In the 21st century, there are increasing attempts to connect face recognition to the pseudoscience of physiognomy, which has its origins in ancient times. From a person’s appearance, conclusions are drawn about their inner self, and attempts are made to identify character traits, personality traits and temperament, or political and sexual orientation. Biometrics also plays a role in this context; it was established in the eighteenth century, when physiognomy, led by Johann Caspar Lavater, reached its dubious climax. In his paper “The Uncanny Return of Physiognomy”, Oliver Bendel elaborates the basic principles of this topic; selected projects from research and practice are presented and, from an ethical perspective, the possibilities of face recognition are subjected to a fundamental critique in this context, including the above examples. The philosopher presented his paper on 27 March 2018 at Stanford University (“AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents”, AAAI 2018 Spring Symposium Series). The whole volume is available here.
With Low-Tech against Digital Mass Surveillance
The resistance movement in Hong Kong uses different means of defence, communication and information. This is reported by the German news portal Heise on 1 September 2019. Demonstrators direct laser beams at the video cameras, which are installed everywhere, not least in intelligent street lamps. The pointers are intended to interfere with the facial recognition systems used by the police to analyse some of the video streams. On television, one could see how the civil rights activists used chainsaws and ropes to bring down the high-tech street lamps. Services such as Telegram and Firechat are used for communication and coordination. According to Quartz, “Hong Kong’s protesters are using AirDrop, a file-sharing feature that allows Apple devices to send photos and videos over Bluetooth and Wi-Fi, to breach China’s Great Firewall in order to spread information to mainland Chinese visitors in the city” (Quartz, 8 July 2019). On a digital sticker distributed via AirDrop at subway stations, one could read: “Don’t wait until [freedom] is gone to regret its loss. Freedom isn’t god-given; it is fought for by the people.” (Quartz, 8 July 2019)
The System that Detects Fear
Amazon Rekognition is a well-known software for facial recognition, including emotion detection. It is used in the BESTBOT, a moral machine that hides an immoral machine. The immoral part stems precisely from facial recognition, which endangers users’ privacy and informational autonomy. The project is intended not least to draw attention to this risk. Amazon announced on 12 August 2019 that it has improved and expanded its system: “Today, we are launching accuracy and functionality improvements to our face analysis features. Face analysis generates metadata about detected faces in the form of gender, age range, emotions, attributes such as ‘Smile’, face pose, face image quality and face landmarks. With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear’.” (Amazon, 12 August 2019) Because the BESTBOT accesses other systems such as MS Face API and Kairos, it can already recognize fear. So the change at Amazon means no change for this artifact of machine ethics.
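The metadata Amazon describes is returned by the DetectFaces operation as structured JSON. A minimal sketch of reading out the dominant emotion, assuming the documented response shape; the sample values here are invented, and in practice the dictionary would come from a call such as boto3’s `client.detect_faces(Image=..., Attributes=["ALL"])`:

```python
# Hypothetical excerpt of a DetectFaces response (values invented);
# the structure follows Rekognition's documented face metadata.
response = {
    "FaceDetails": [{
        "Gender": {"Value": "Female", "Confidence": 96.1},
        "AgeRange": {"Low": 25, "High": 35},
        "Emotions": [
            {"Type": "CALM", "Confidence": 12.4},
            {"Type": "FEAR", "Confidence": 81.7},
            {"Type": "SURPRISED", "Confidence": 5.9},
        ],
    }]
}

def dominant_emotion(face_detail):
    # Pick the emotion label with the highest confidence score.
    best = max(face_detail["Emotions"], key=lambda e: e["Confidence"])
    return best["Type"], best["Confidence"]

for face in response["FaceDetails"]:
    print(dominant_emotion(face))  # prints ('FEAR', 81.7)
```

A system like the BESTBOT could use such a label to trigger follow-up behaviour, which is exactly where the privacy concerns begin.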