Should We Trust Conversational Agents?

A group of about 50 scientists from all over the world worked for one week (September 19 – 24, 2021) at Schloss Dagstuhl – Leibniz-Zentrum für Informatik on the topic “Conversational Agent as Trustworthy Autonomous System (Trust-CA)”. Half were on site; the other half joined via Zoom. The organizers of this event were Asbjørn Følstad (SINTEF – Oslo), Jonathan Grudin (Microsoft – Redmond), Effie Lai-Chong Law (University of Leicester), and Björn Schuller (University of Augsburg). On-site participants from Germany and Switzerland included Elisabeth André (University of Augsburg), Stefan Schaffer (DFKI), Sebastian Hobert (University of Göttingen), Matthias Kraus (University of Ulm), and Oliver Bendel (School of Business FHNW). The complete list of participants, as well as some pictures, can be found on the Schloss Dagstuhl website. Oliver Bendel presented projects from ten years of research in machine ethics, namely GOODBOT, LIEBOT, BESTBOT, MOME, and SPACE-THEA. Further information is available here.

Meet SPACE THEA

SPACE THEA was developed by Martin Spathelf at the School of Business FHNW from April to August 2021. The client and supervisor was Prof. Dr. Oliver Bendel. The voice assistant is supposed to show empathy and emotions towards astronauts on a Mars flight. Technically, it is based on Google Assistant and Dialogflow. The programmer chose a female voice with a Canadian English accent. SPACE THEA’s personality includes functional and emotional intelligence, honesty, and creativity. She follows a moral principle: to maximize the benefit of the passengers of the spacecraft. The prototype was implemented for the following scenarios: conducting general conversations; helping the user find a light switch; assisting the astronaut when a thruster fails; greeting and cheering up the astronaut in the morning; fending off an unprovoked insult; standing by a lonely astronaut; and letting the user learn about the voice assistant itself. A video on the latter scenario is available here. Oliver Bendel has been researching conversational agents for 20 years. With his teams, he has developed 20 concepts and artifacts of machine ethics and social robotics since 2012.
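For readers who wonder what such a setup looks like in practice, the following is a minimal sketch, assuming a Dialogflow ES agent with a Python/Flask webhook fulfillment. The intent names and replies are hypothetical and are not taken from the actual SPACE THEA implementation.

```python
# Minimal Dialogflow ES webhook sketch (hypothetical, not the actual SPACE THEA code).
# It maps assumed intent names to empathetic replies for two of the scenarios.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical intent names and replies; the real project defines its own scenarios.
EMPATHIC_REPLIES = {
    "astronaut.lonely": "I hear you. The distance to Earth is hard. I am here with you.",
    "thruster.failure": "Stay calm. Let us go through the thruster diagnostics together, step by step.",
}

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(silent=True) or {}
    intent = req.get("queryResult", {}).get("intent", {}).get("displayName", "")
    # Fall back to a neutral answer if the intent is not one of the emotional scenarios.
    reply = EMPATHIC_REPLIES.get(intent, "Tell me more about that.")
    # Dialogflow ES expects the answer in the "fulfillmentText" field.
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```

In such a design, the dialogue structure lives in the Dialogflow agent, while the webhook only decides how to phrase the response for each recognized scenario.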

AI and Society

The AAAI Spring Symposia at Stanford University are among the community’s most important get-togethers. The years 2016, 2017, and 2018 were memorable highlights for machine ethics, robot ethics, ethics by design, and AI ethics, with the symposia “Ethical and Moral Considerations in Non-Human Agents” (2016), “Artificial Intelligence for the Social Good” (2017), and “AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents” (2018) … As of 2019, the proceedings are no longer provided directly by the Association for the Advancement of Artificial Intelligence, but by the organizers of each symposium. As of summer 2021, the entire 2018 volume of the conference has been made available free of charge. It can be found via www.aaai.org/Library/Symposia/Spring/ss18.php or directly here. It includes contributions by Philip C. Jackson, Mark R. Waser, Barry M. Horowitz, John Licato, Stefania Costantini, Biplav Srivastava, and Oliver Bendel, among others.

Wildlife Conservation at a Garden Level

Sophie Lund Rasmussen and her co-authors published the interesting and enlightening article “Wildlife Conservation at a Garden Level: The Effect of Robotic Lawn Mowers on European Hedgehogs (Erinaceus europaeus)” in April 2021. In their “Simple Summary” they write: “Injured European hedgehogs are frequently admitted to hedgehog rehabilitation centres with different types of cuts and injuries. Although not rigorously quantified, a growing concern is that an increasing number of cases may have been caused by robotic lawn mowers. Research indicates that European hedgehogs are in decline. It is therefore important to identify and investigate the factors responsible for this decline to improve the conservation initiatives directed at this species. Because hedgehogs are increasingly associated with human habitation, it seems likely that numerous individuals will encounter several robotic lawn mowers during their lifetimes. Consequently, this study aimed to describe and quantify the effects of robotic lawn mowers on hedgehogs, and we tested 18 robotic lawn mowers in collision with dead hedgehogs. Some models caused extensive damage to the dead hedgehogs, but there were noteworthy differences in the degree of harm inflicted, with some consistently causing no damage. None of the robotic lawn mowers tested was able to detect the presence of dead, dependent juvenile hedgehogs, and no models could detect the hedgehog cadavers without physical interaction. We therefore encourage future collaboration with the manufacturers of robotic lawn mowers to improve the safety for hedgehogs and other garden wildlife species.” (Rasmussen et al. 2021) In 2019/2020, Oliver Bendel and his team developed the prototype HAPPY HEDGEHOG. This robot lawnmower stops working as soon as it detects hedgehogs. A thermal sensor and a camera with image recognition are used. The paper was presented at the AAAI Spring Symposia in March 2021.

Robots that Spare Animals

Semi-autonomous machines, autonomous machines, and robots inhabit closed, semi-closed, and open environments, from more structured settings such as the household to less structured ones such as cultural landscapes or the wilderness. There they encounter domestic animals, farm animals, working animals, and wild animals. These creatures could be disturbed, displaced, injured, or killed by the machines. Within the context of machine ethics and social robotics, the School of Business FHNW developed several design studies and prototypes for animal-friendly machines, which can be understood as moral and social machines in the spirit of these disciplines. In 2019/2020, a team led by Prof. Dr. Oliver Bendel developed a prototype robot lawnmower that can recognize hedgehogs, interrupt its work for them, and thus protect them. Every year, many of these animals die worldwide because of traditional service robots. HAPPY HEDGEHOG (HHH), as the invention is called, could be a solution to this problem. The paper begins by providing an introduction to the background. It then focuses on the machine’s navigation (in the course of which it comes across objects that need to be recognized) and on its thermal and image recognition (with the help of machine learning). It also presents obvious weaknesses and possible improvements. The results could be relevant for an industry that wants to market its products as animal-friendly machines. The paper “The HAPPY HEDGEHOG Project” is available here.
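To make the described approach more concrete, here is a minimal sketch of the two-stage idea: a thermal reading flags a warm object, an image classifier confirms that it is a hedgehog, and only then does the mower halt. The sensor access, threshold, and classifier below are placeholders for illustration, not the actual HHH code.

```python
# Illustrative two-stage detection loop (not the actual HAPPY HEDGEHOG code):
# a thermal reading flags a warm object, an image classifier confirms a hedgehog,
# and the mower stops its blades. Sensors and classifier are simulated placeholders.
import random
import time

THERMAL_THRESHOLD_C = 30.0  # assumed trigger temperature for a warm body

def read_thermal_max() -> float:
    """Placeholder for the thermal sensor; returns the hottest reading in degrees C."""
    return random.uniform(15.0, 40.0)

def classify_image() -> str:
    """Placeholder for the camera plus trained image model; returns a label."""
    return random.choice(["hedgehog", "grass", "stone"])

def mow_step() -> None:
    print("mowing ...")

def stop_blades() -> None:
    print("warm object confirmed as hedgehog -> blades stopped")

def control_loop(steps: int = 10) -> None:
    for _ in range(steps):
        # Only stop when both stages agree, to avoid halting for every warm stone.
        if read_thermal_max() >= THERMAL_THRESHOLD_C and classify_image() == "hedgehog":
            stop_blades()
            break
        mow_step()
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop()
```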

Artificial Intelligence and its Siblings

Artificial intelligence (AI) has gained enormous importance in research and practice in the 21st century after decades of ups and downs. At the same time, machine ethics and machine consciousness (artificial consciousness) brought their terms and methods to public attention, where they were understood to varying degrees. Since 2018, a graphic has attempted to clarify the terms and relationships of artificial intelligence, machine ethics, and machine consciousness. It is continuously being refined, which makes it more precise but also more complex. A new version has been available since the beginning of 2021. It makes even clearer that the three disciplines not only map certain capabilities (mostly human ones) but can also extend them.

The Morality Menu Project

From 18 to 21 August 2020, the Robophilosophy conference took place. Due to the pandemic, participants could not meet in Aarhus as originally planned, but only in virtual space. Nevertheless, the conference was a complete success. At the end of the year, the conference proceedings were published by IOS Press, including the paper “The Morality Menu Project” by Oliver Bendel. From the abstract: “The discipline of machine ethics examines, designs and produces moral machines. The artificial morality is usually pre-programmed by a manufacturer or developer. However, another approach is the more flexible morality menu (MOME). With this, owners or users replicate their own moral preferences onto a machine. A team at the FHNW implemented a MOME for MOBO (a chatbot) in 2019/2020. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project and discusses advantages and disadvantages of such an approach. It turns out that a morality menu could be a valuable extension for certain moral machines.” The book can be ordered on the publisher’s website. An author’s copy is available here.
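To illustrate the basic idea, the following is a hypothetical sketch of a morality menu: the owner toggles a set of moral rules, and the chatbot consults them before answering. The rule names and replies are invented for this example and are not taken from the MOBO-MOME implementation.

```python
# Hypothetical morality menu sketch (not the MOBO-MOME code): the owner switches
# moral rules on or off, and the chatbot checks them before it replies.
from dataclasses import dataclass, field

@dataclass
class MoralityMenu:
    # Each entry is a rule the owner can switch on or off (names are illustrative).
    rules: dict = field(default_factory=lambda: {
        "use_polite_forms": True,
        "admit_being_a_machine": True,
        "avoid_white_lies": False,
    })

    def set_rule(self, name: str, enabled: bool) -> None:
        self.rules[name] = enabled

def reply(menu: MoralityMenu, user_text: str) -> str:
    if "are you human" in user_text.lower():
        if menu.rules["admit_being_a_machine"]:
            answer = "No, I am a chatbot."
        elif menu.rules["avoid_white_lies"]:
            answer = "I would rather not answer that."
        else:
            answer = "Of course I am."  # a white lie, permitted by the current settings
    else:
        answer = "Let us talk about something else."
    if menu.rules["use_polite_forms"]:
        answer += " Thank you for asking."
    return answer

if __name__ == "__main__":
    menu = MoralityMenu()
    menu.set_rule("admit_being_a_machine", False)  # the owner replicates their preferences
    menu.set_rule("avoid_white_lies", True)
    print(reply(menu, "Are you human?"))  # -> "I would rather not answer that. Thank you for asking."
```

The point of the design is that the machine’s artificial morality is no longer fixed by the manufacturer alone: the same chatbot behaves differently depending on the owner’s settings.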

Evolutionary Machine Ethics

Luís Moniz Pereira is one of the best-known and most active machine ethicists in the world. Together with his colleague The Anh Han he wrote the article “Evolutionary Machine Ethics” for the “Handbuch Maschinenethik” (“Handbook Machine Ethics”). The editor is Oliver Bendel (Zurich, Switzerland). From the abstract: “Machine ethics is a sprouting interdisciplinary field of enquiry arising from the need of imbuing autonomous agents with some capacity for moral decision-making. Its overall results are not only important for equipping agents with a capacity for moral judgment, but also for helping better understand morality, through the creation and testing of computational models of ethics theories. Computer models have become well defined, eminently observable in their dynamics, and can be transformed incrementally in expeditious ways. We address, in work reported and surveyed here, the emergence and evolution of cooperation in the collective realm. We discuss how our own research with Evolutionary Game Theory (EGT) modelling and experimentation leads to important insights for machine ethics, such as the design of moral machines, multi-agent systems, and contractual algorithms, plus their potential application in human settings too.” (Abstract) Springer VS published the “Handbuch Maschinenethik” in October 2019. Since then it has been downloaded thousands of times.
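As a minimal illustration of the kind of model the article discusses, the following sketch iterates the replicator dynamics for a two-strategy donation game; the payoff values (benefit b, cost c) are assumptions chosen for the example. In this bare setting defection dominates and cooperation dies out, which is precisely why the additional mechanisms studied in such EGT work matter.

```python
# Minimal replicator-dynamics sketch for a two-strategy donation game, as an
# illustration of the kind of EGT model the article discusses. The payoff values
# (benefit b, cost c) are assumptions chosen for this example.
def replicator_step(x_c: float, b: float = 2.0, c: float = 1.0, dt: float = 0.01) -> float:
    """One Euler step for the share x_c of cooperators in a well-mixed population."""
    # Expected payoffs against the current population mix.
    payoff_c = x_c * (b - c) + (1 - x_c) * (-c)   # a cooperator always pays c, gets b from cooperators
    payoff_d = x_c * b                            # a defector receives b from cooperators, pays nothing
    mean = x_c * payoff_c + (1 - x_c) * payoff_d
    # Strategies that earn more than the population average grow in share.
    return x_c + dt * x_c * (payoff_c - mean)

x = 0.6  # initial fraction of cooperators
for _ in range(1000):
    x = replicator_step(x)
print(f"cooperator share after 1000 steps: {x:.3f}")  # tends toward 0 without further mechanisms
```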

Research Program on Responsible AI

“HASLER RESPONSIBLE AI” is a research program of the Hasler Foundation open to research institutions within the higher education sector or to non-commercial research institutions outside it. The foundation explains the goals of the program in a call for project proposals: “The HASLER RESPONSIBLE AI program will support research projects that investigate machine-learning algorithms and artificial intelligence systems whose results meet requirements on responsibility and trustworthiness. Projects are expected to seriously engage in the application of the new models and methods in scenarios that are relevant to society. In addition, projects should respect the interdisciplinary character of research in the area of RESPONSIBLE AI by involving the necessary expertise.” (CfPP by Hasler Foundation) The deadline for the submission of short proposals is 24 January 2021. More information is available at haslerstiftung.ch.

About the “Handbuch Maschinenethik”

The “Handbuch Maschinenethik” (ed. Oliver Bendel) was published by Springer VS over a year ago. It brings together contributions from leading experts in the fields of machine ethics, robot ethics, technology ethics, philosophy of technology, and robot law. It has become a comprehensive, exemplary, and unique book. In a way, it forms a counterpart to the American research that dominates the discipline: most of the authors (among them Julian Nida-Rümelin, Catrin Misselhorn, Eric Hilgendorf, Monika Simmler, Armin Grunwald, Matthias Scheutz, Janina Loh, and Luís Moniz Pereira) come from Europe and Asia. They had been working on the project since 2017 and submitted their contributions continuously until it went to print. The editor, who has been working on information ethics, robot ethics, and machine ethics for 20 years and has been doing intensive research on machine ethics for nine years, is pleased to report that 53,000 downloads have already been recorded – quite a lot for a highly specialized book. The first article for a second edition is also available, namely “The BESTBOT Project” (in English like some other contributions) …