Conversational Agent as Trustworthy Autonomous System

The Dagstuhl seminar “Conversational Agent as Trustworthy Autonomous System (Trust-CA)” will take place from September 19 – 24, 2021. According to the website, Schloss Dagstuhl – Leibniz-Zentrum für Informatik “pursues its mission of furthering world class research in computer science by facilitating communication and interaction between researchers”. The organizers of this event are Asbjørn Følstad (SINTEF – Oslo), Jonathan Grudin (Microsoft – Redmond), Effie Lai-Chong Law (University of Leicester) and Björn Schuller (University of Augsburg). They outline the background as follows: “CA, like many other AI/ML-infused autonomous systems, need to gain the trust of their users in order to be deployed effectively. Nevertheless, in the first place, we need to ensure that such systems are trustworthy. Persuading users to trust a non-trustworthy CA is grossly unethical. Conversely, failing to convince users to trust a trustworthy CA that is beneficial to their wellbeing can be detrimental, given that a lack of trust leads to low adoption or total rejection of a system. A deep understanding of how trust is initially built and evolved in human-human interaction (HHI) can shed light on the trust journey in human-automation interaction (HAI). This calls forth a multidisciplinary analytical framework, which is lacking but much needed for informing the design of trustworthy autonomous systems like CA.” (Website Dagstuhl) Regarding the goal of the workshop, the organizers write: “The overall goal of this Dagstuhl Seminar is to bring together researchers and practitioners, who are currently engaged in diverse communities related to Conversational Agent (CA) to explore the three main challenges on maximising the trustworthiness of and trust in CA as AI/ML-driven autonomous systems – an issue deemed increasingly significant given the widespread uses of CA in every sector of life – and to chart a roadmap for the future research on CA.” (Website Dagstuhl) Oliver Bendel (School of Business FHNW) will talk about his chatbot and voice assistant projects. These projects have emerged since 2013 from machine ethics and social robotics. Further information is available here (photo: Schloss Dagstuhl).

WHO Fights COVID-19 Misinformation with Viber Chatbot

A new WHO chatbot on Rakuten Viber aims to get accurate information about COVID-19 to people in several languages. “Once subscribed to the WHO Viber chatbot, users will receive notifications with the latest news and information directly from WHO. Users can also learn how to protect themselves and test their knowledge on coronavirus through an interactive quiz that helps bust myths. Another goal of the partnership is to fight misinformation.” (Website WHO) A few days ago, the Centers for Disease Control and Prevention of the United States Department of Health and Human Services launched a chatbot that helps people decide what to do if they have potential Coronavirus symptoms such as fever, cough, or shortness of breath. However, that dialog system is only intended for people who are permanently or temporarily in the USA. The new WHO chatbot is freely available in English, Russian and Arabic, with more than 20 languages to be added.

The Coronavirus Chatbot

The Centers for Disease Control and Prevention of the United States Department of Health and Human Services have launched a chatbot that will help people decide what to do if they have potential Coronavirus symptoms such as fever, cough, or shortness of breath. This was reported by the magazine MIT Technology Review on 24 March 2020. “The hope is the self-checker bot will act as a form of triage for increasingly strained health-care services.” (MIT Technology Review, 24 March 2020) According to the magazine, the chatbot asks users questions about their age, gender, and location, and about any symptoms they’re experiencing. It also inquires whether they may have met someone diagnosed with COVID-19. On the basis of the users’ replies, it recommends the best next step. “The bot is not supposed to replace assessment by a doctor and isn’t intended to be used for diagnosis or treatment purposes, but it could help figure out who most urgently needs medical attention and relieve some of the pressure on hospitals.” (MIT Technology Review, 24 March 2020) The service is intended for people who are currently located in the US. International research is being done not only on useful but also on moral chatbots.
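The CDC has not published the decision logic behind the self-checker. As a rough illustration of what such a rule-based triage flow could look like, here is a minimal sketch in Python; all questions, thresholds and recommendations are illustrative assumptions, not the actual CDC logic.

```python
# Minimal sketch of a rule-based symptom triage flow, loosely inspired by the
# self-checker described above. All symptom lists, thresholds and
# recommendations are illustrative assumptions.

EMERGENCY_SIGNS = {"severe shortness of breath", "persistent chest pain", "bluish lips"}
COMMON_SYMPTOMS = {"fever", "cough", "shortness of breath"}


def triage(age, symptoms, contact_with_case):
    """Return a (hypothetical) recommendation based on the user's answers."""
    if symptoms & EMERGENCY_SIGNS:
        return "Call emergency services immediately."
    if symptoms & COMMON_SYMPTOMS:
        if contact_with_case or age >= 65:
            return "Contact a healthcare provider and get tested."
        return "Stay at home, monitor your symptoms, and avoid contact with others."
    if contact_with_case:
        return "Self-quarantine and watch for symptoms over the next 14 days."
    return "No action needed right now; follow general prevention guidance."


if __name__ == "__main__":
    print(triage(age=70, symptoms={"fever", "cough"}, contact_with_case=False))
```

As stressed above, such a bot is a form of triage, not a diagnosis: it only sorts users into coarse categories and points them to the appropriate next step.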

Towards a Human-like Chatbot

Google is currently working on Meena, a new chatbot that is meant to conduct conversations on arbitrary topics and to be usable in many contexts. In their paper “Towards a Human-like Open-Domain Chatbot”, the developers present a 2.6 billion parameter, end-to-end trained neural conversational model. They show that Meena “can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots”. “Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA.” (Google AI Blog) The company draws a comparison with OpenAI’s GPT-2, a model with 1.5 billion parameters trained on the text content of 8 million web pages, which is used in “Talk to Transformer” and Harmony, among others.
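The SSA metric itself is simple: human raters label each chatbot response as sensible or not and as specific or not, and SSA is the average of the two resulting rates. A minimal sketch of this aggregation, assuming binary labels per response (the label format is an assumption, not taken from the paper):

```python
# Minimal sketch of the Sensibleness and Specificity Average (SSA).
# Each response carries two binary human labels: "sensible" and "specific".
# SSA is the average of the two per-label rates across all responses.

from statistics import mean


def ssa(labels):
    """labels: list of (sensible, specific) boolean pairs, one per response."""
    sensibleness = mean(1.0 if s else 0.0 for s, _ in labels)
    specificity = mean(1.0 if p else 0.0 for _, p in labels)
    return (sensibleness + specificity) / 2


# Example: three labelled responses
print(ssa([(True, True), (True, False), (False, False)]))  # -> 0.5
```

The interesting empirical finding reported by the team is that this human-judged score correlates strongly with perplexity, which can be computed automatically for any neural conversational model.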

The Birth of the Morality Menu

The idea of a morality menu (MOME) was born in 2018 in the context of machine ethics. It should make it possible to transfer the morality of a person to a machine. On a display, the user sees different rules of behaviour and can activate or deactivate them with sliders. Oliver Bendel developed two design studies, one for an animal-friendly vacuum cleaning robot (LADYBIRD), the other for a voicebot like Google Duplex. At the end of 2018, he announced a project at the School of Business FHNW. Three students – Ozan Firat, Levin Padayatty and Yusuf Or – implemented a morality menu for a chatbot called MOBO from June 2019 to January 2020. The user enters personal information and then activates or deactivates nine different rules of conduct. Depending on the settings, MOBO compliments or does not compliment, responds with or without prejudice, and threatens or does not threaten the interlocutor. It responds to each user individually, says his or her name, and addresses him or her formally or informally. A video of the MOBO-MOME is available here.
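The exact rules and implementation of the MOBO-MOME are not described here in detail. The following minimal sketch only illustrates the underlying idea of switchable conduct rules that shape a chatbot’s replies; the rule names, defaults and reply texts are illustrative assumptions, not the actual MOBO rules.

```python
# Minimal sketch of a morality menu: a set of conduct rules the user can
# switch on or off, which then shapes the bot's replies. Rule names,
# defaults and reply texts are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class MoralityMenu:
    compliment_user: bool = True
    use_formal_address: bool = True
    may_threaten: bool = False


def greet(menu, name):
    address = f"Ms./Mr. {name}" if menu.use_formal_address else name
    reply = f"Hello, {address}!"
    if menu.compliment_user:
        reply += " It is a pleasure to talk to you."
    if menu.may_threaten:
        reply += " Be careful what you say."
    return reply


# The user moves the sliders; the bot adapts its behaviour accordingly.
print(greet(MoralityMenu(compliment_user=False, use_formal_address=False), "Ada"))
```

The design choice is that the machine does not decide on its own morality: the user configures it, and the bot merely executes the selected rules consistently for that user.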

Opportunities and Risks of Facial Recognition

The book chapter “The BESTBOT Project” by Oliver Bendel, David Studer and Bradley Richards was published on 31 December 2019. It is part of the 2nd edition of the “Handbuch Maschinenethik”, edited by Oliver Bendel. From the abstract: “The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that recognizes problems and conditions of the user with the help of text analysis and facial recognition and reacts morally to them. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both had room for improvement and advancement; thus, the BESTBOT project used their findings as a basis for its development and realization. Text analysis and facial recognition in combination with emotion recognition have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, with new solutions of this kind come new problems, especially with regard to privacy and informational autonomy, which information ethics must deal with.” (Abstract) The book chapter can be downloaded from link.springer.com/referenceworkentry/10.1007/978-3-658-17484-2_32-1.
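The chapter does not reproduce the BESTBOT code. As a rough illustration of the pipeline described in the abstract, where a text-based signal and a facial emotion signal are combined into a moral reaction, here is a minimal sketch; the function names, placeholder heuristics and decision rule are assumptions, not the actual implementation.

```python
# Rough sketch of the pipeline described in the abstract: combine a text-based
# problem signal with a facial emotion signal and react by offering help.
# Function names, heuristics and the decision rule are assumptions.

def text_signals_distress(message):
    # Placeholder for real text analysis (e.g. keyword spotting or a sentiment model).
    return any(kw in message.lower() for kw in ("sad", "hopeless", "alone"))


def face_signals_distress(emotion):
    # Placeholder for the output of a facial emotion recognition component.
    return emotion in ("sadness", "fear")


def respond(message, detected_emotion):
    if text_signals_distress(message) or face_signals_distress(detected_emotion):
        return ("You don't seem to be doing well. Would you like the number "
                "of a counselling service?")
    return "Glad to hear from you. How can I help?"


print(respond("I feel so alone lately.", detected_emotion="sadness"))
```

Even in this toy form, the tension noted in the abstract is visible: the helpful reaction presupposes that the system continuously analyses the user’s face, which raises exactly the privacy and informational autonomy issues that information ethics must deal with.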