Researchers at the University of Washington have built a web app that helps children develop skills such as self-awareness and emotional management. They have published their findings in the paper “Self-Talk with Superhero Zip: Supporting Children’s Socioemotional Learning with Conversational Agents”. From the abstract: “Here, we examine whether children can learn to use a socioemotional strategy known as ‘self-talk’ from a conversational agent (CA). To investigate this question, we designed and built ‘Self-Talk with Superhero Zip,’ an interactive CA experience, and deployed it for one week in ten family homes to pairs of siblings between the ages of five and ten … We found that children could recall and accurately describe the lessons taught by the intervention, and we saw indications of children applying self-talk in daily life.” (Fu et al. 2023) The paper can be downloaded at dl.acm.org/doi/abs/10.1145/3585088.3589376 (Image: DALL-E 3).
Talking with Social Robotics Girl
On November 6, 2023, OpenAI made so-called GPTs available to ChatGPT Plus users. According to the US company, anyone can easily create his or her own GPT without any programming knowledge. Initial tests confirm how capable the new feature is. ChatGPT suggests a name for the chatbot, creates the profile picture, and accepts documents with text and reference lists to expand its knowledge of the topic. The feature is ideal for creating your own learning companions, modern pedagogical agents so to speak. But you can also benefit from chatbots created by other users and providers. A GPT called Social Robotics Girl, which provides information about social robotics, has been available since November 12, 2023. It was created by Prof. Dr. Oliver Bendel and is based on a collection of his articles on this topic. It can therefore give his definition of social robots and make classifications based on his five-dimension model. ChatGPT Plus users can access Social Robotics Girl via chat.openai.com/g/g-TbhZSZaer-social-robotics-girl (Image: DALL-E 3).
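OpenAI has not disclosed how GPTs ground their answers in uploaded documents, but the general idea of document-grounded answering can be sketched in a few lines. The scoring below is deliberately simple word overlap; real systems typically use embedding-based retrieval, and all names and texts here are illustrative:

```python
import re

# Minimal sketch of document-grounded answering, illustrating how a custom
# GPT might draw on uploaded articles. The scoring is plain word overlap;
# OpenAI's actual retrieval mechanism is not public.

def best_passage(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the question."""
    q_words = set(re.findall(r"\w+", question.lower()))
    return max(
        passages,
        key=lambda p: len(q_words & set(re.findall(r"\w+", p.lower()))),
    )

articles = [
    "Social robots are machines made to interact with humans and animals.",
    "The five-dimension model classifies social robots along several axes.",
]

# The retrieved passage would be prepended to the prompt sent to the model.
context = best_passage("Which model classifies social robots?", articles)
```

The retrieved passage then serves as context for the language model, which is what lets a GPT answer in the spirit of the uploaded articles rather than from its general training data alone.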
@llegra, a Chatbot for Vallader
Conversational agents have been a research subject of Prof. Dr. Oliver Bendel for a quarter of a century. He dedicated his doctoral thesis at the University of St. Gallen to them. At the School of Business FHNW, he developed them with his changing teams from 2012 to 2022, primarily in the context of machine ethics and social robotics. The philosopher of technology now devotes himself increasingly to dead, extinct, and endangered languages. After @ve (2022), a chatbot for Latin based on GPT-3, another project started in March 2023. The chatbot @llegra is being developed by Dalil Jabou for the Rhaeto-Romanic idiom Vallader, which is spoken in the Lower Engadine between Martina in the northeast and Zernez in the southwest, as well as in Val Müstair. The user can type text and gets text output. In addition, @llegra speaks with the help of a text-to-speech system from the company SlowSoft, which supports the project. The GPT-3 language model produced rather unsatisfactory results; the breakthrough came with the use of GPT-4. The knowledge base was supplemented with the help of four children’s books in Vallader. The project will be completed in August 2023, and the results will be published thereafter.
The @ve Project
On January 19, 2023, the final presentation was held for the @ve project, which had started in September 2022. The chatbot runs on the website www.ave-bot.ch and on Telegram. Like ChatGPT, it is based on GPT from OpenAI (@ve uses GPT-3.0, not GPT-3.5). The project was initiated by Prof. Dr. Oliver Bendel, who wants to devote more time to dead, extinct, and endangered languages. @ve was developed by Karim N’diaye, who studied business informatics at the Hochschule für Wirtschaft FHNW. You can talk to her in Latin, a dead language that thus comes alive in a way, and ask her questions about grammar. She was tested by a relevant expert. One benefit, according to Karim N’diaye, is that you can communicate in Latin around the clock, taking time to think about what and how to write. One danger, he says, is that the answers contain recurring errors. For example, the word order is sometimes incorrect, and the meaning can be distorted. This can also happen with a human teacher, and the learner should always stay alert and watch out for errors. Without a doubt, @ve is a tool that can be profitably integrated into Latin classes. There, students can report what they have experienced with her at home, and they can chat with her on the spot, alone or in a group, accompanied by the teacher. A follow-up project on an endangered language has already been announced (Illustration: Karim N’diaye/Unsplash).
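The prompts used for @ve have not been published. As a rough illustration of how a GPT-3-era completion model can be constrained to answer in Latin, one might assemble a prompt along these lines (the instruction text, speaker labels, and dialogue are hypothetical):

```python
# Hypothetical sketch of a completion-style prompt that constrains a
# GPT-3-era model to reply in Latin; the actual @ve prompts are not public.

def build_prompt(history: list[tuple[str, str]], user_input: str) -> str:
    """Assemble the dialogue history into a single completion prompt."""
    lines = [
        "Tu es @ve, magistra Latina. Responde semper Latine et "
        "errores grammaticos discipuli corrige."
    ]
    for user_turn, bot_turn in history:
        lines.append(f"Discipulus: {user_turn}")
        lines.append(f"@ve: {bot_turn}")
    lines.append(f"Discipulus: {user_input}")
    lines.append("@ve:")  # the model completes this line
    return "\n".join(lines)

prompt = build_prompt([("Salve!", "Salve, discipule!")], "Quid agis?")
```

The trailing "@ve:" line invites the model to continue in the tutor's voice, which is how completion-based chatbots of that generation typically kept a persona and a language constraint across turns.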
Ethics of Conversational Agents
The Ethics of Conversational User Interfaces workshop at the ACM CHI 2022 conference “will consolidate ethics-related research of the past and set the agenda for future CUI research on ethics going forward”. “This builds on previous CUI workshops exploring theories and methods, grand challenges and future design perspectives, and collaborative interactions.” (CfP CUI) From the Call for Papers: “In what ways can we advance our research on conversational user interfaces (CUIs) by including considerations on ethics? As CUIs, like Amazon Alexa or chatbots, become commonplace, discussions on how they can be designed in an ethical manner or how they change our views on ethics of technology should be topics we engage with as a community.” (CfP CUI) Paper submission deadline is 24 February 2022. The workshop is scheduled to take place in New Orleans on 21 April 2022. More information is available via www.conversationaluserinterfaces.org/workshops/CHI2022/.
Conversational Agent as Trustworthy Autonomous System
The Dagstuhl seminar “Conversational Agent as Trustworthy Autonomous System (Trust-CA)” will take place from September 19 – 24, 2021. According to the website, Schloss Dagstuhl – Leibniz-Zentrum für Informatik “pursues its mission of furthering world class research in computer science by facilitating communication and interaction between researchers”. Organizers of this event are Asbjørn Følstad (SINTEF – Oslo), Jonathan Grudin (Microsoft – Redmond), Effie Lai-Chong Law (University of Leicester) and Björn Schuller (University of Augsburg). They outline the background as follows: “CA, like many other AI/ML-infused autonomous systems, need to gain the trust of their users in order to be deployed effectively. Nevertheless, in the first place, we need to ensure that such systems are trustworthy. Persuading users to trust a non-trustworthy CA is grossly unethical. Conversely, failing to convince users to trust a trustworthy CA that is beneficial to their wellbeing can be detrimental, given that a lack of trust leads to low adoption or total rejection of a system. A deep understanding of how trust is initially built and evolved in human-human interaction (HHI) can shed light on the trust journey in human-automation interaction (HAI). 
This calls forth a multidisciplinary analytical framework, which is lacking but much needed for informing the design of trustworthy autonomous systems like CA.” (Website Dagstuhl) Regarding the goal of the workshop, the organizers write: “The overall goal of this Dagstuhl Seminar is to bring together researchers and practitioners, who are currently engaged in diverse communities related to Conversational Agent (CA) to explore the three main challenges on maximising the trustworthiness of and trust in CA as AI/ML-driven autonomous systems – an issue deemed increasingly significant given the widespread uses of CA in every sector of life – and to chart a roadmap for the future research on CA.” (Website Dagstuhl) Oliver Bendel (School of Business FHNW) will talk about his chatbot and voice assistant projects, which have emerged since 2013 from machine ethics and social robotics. Further information is available here (photo: Schloss Dagstuhl).
WHO Fights COVID-19 Misinformation with Viber Chatbot
A new WHO chatbot on Rakuten Viber aims to get accurate information about COVID-19 to people in several languages. “Once subscribed to the WHO Viber chatbot, users will receive notifications with the latest news and information directly from WHO. Users can also learn how to protect themselves and test their knowledge on coronavirus through an interactive quiz that helps bust myths. Another goal of the partnership is to fight misinformation.” (Website WHO) A few days ago, the Centers for Disease Control and Prevention of the United States Department of Health and Human Services launched a chatbot that helps people decide what to do if they have potential coronavirus symptoms such as fever, cough, or shortness of breath. However, this dialog system is only intended for people who are permanently or temporarily in the USA. The new WHO chatbot is freely available in English, Russian, and Arabic, with more than 20 languages to be added.
The Coronavirus Chatbot
The Centers for Disease Control and Prevention (CDC) of the United States Department of Health and Human Services has launched a chatbot that will help people decide what to do if they have potential coronavirus symptoms such as fever, cough, or shortness of breath. This was reported by the magazine MIT Technology Review on 24 March 2020. “The hope is the self-checker bot will act as a form of triage for increasingly strained health-care services.” (MIT Technology Review, 24 March 2020) According to the magazine, the chatbot asks users questions about their age, gender, and location, and about any symptoms they’re experiencing. It also inquires whether they may have met someone diagnosed with COVID-19. On the basis of the users’ replies, it recommends the best next step. “The bot is not supposed to replace assessment by a doctor and isn’t intended to be used for diagnosis or treatment purposes, but it could help figure out who most urgently needs medical attention and relieve some of the pressure on hospitals.” (MIT Technology Review, 24 March 2020) The service is intended for people who are currently located in the US. International research is being done not only on useful but also on moral chatbots.
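The CDC has not published the bot's decision logic. A deliberately simplified triage function, loosely following the magazine's description of the questions asked, might look like this (the symptom lists, thresholds, and recommendations are assumptions for illustration, not medical advice):

```python
# Much-simplified sketch of symptom triage, loosely following the magazine's
# description of the CDC self-checker; the bot's real rules are not public.

EMERGENCY_SIGNS = {"difficulty breathing", "persistent chest pain", "bluish lips"}
COVID_SYMPTOMS = {"fever", "cough", "shortness of breath"}

def triage(symptoms: set[str], age: int, exposed: bool) -> str:
    """Recommend a next step from reported symptoms, age, and exposure."""
    if symptoms & EMERGENCY_SIGNS:
        return "Call emergency services immediately."
    if symptoms & COVID_SYMPTOMS:
        if exposed or age >= 65:
            return "Contact a healthcare provider for testing advice."
        return "Stay home, monitor symptoms, and avoid contact with others."
    return "No COVID-19 symptoms reported; follow general precautions."

advice = triage({"fever", "cough"}, age=34, exposed=False)
```

Even a sketch like this shows the appeal of such bots as a triage layer: the cheap, rule-based checks run around the clock and route only the urgent cases toward strained health-care services.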
Towards a Human-like Chatbot
Google is currently working on Meena, a new chatbot that is intended to hold conversations on arbitrary topics and be used in many contexts. In their paper “Towards a Human-like Open-Domain Chatbot”, the developers present the 2.6-billion-parameter, end-to-end trained neural conversational model. They show that Meena “can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots”. “Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA.” (Google AI Blog) The company draws a comparison with OpenAI’s GPT-2, a model used in “Talk to Transformer” and Harmony, among others, which has 1.5 billion parameters and was trained on the text content of 8 million web pages.
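The perplexity mentioned in the quote has a compact definition: it is the exponential of the average negative log-likelihood a model assigns to the reference tokens, so lower values mean the model is less "surprised" by human-like responses. A toy computation makes this concrete (the probabilities are invented):

```python
import math

# Perplexity = exp(mean negative log-likelihood over the reference tokens).

def perplexity(token_probs: list[float]) -> float:
    """Compute perplexity from per-token probabilities of a response."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Toy example: probabilities a model assigned to each token of a response.
ppl = perplexity([0.5, 0.25, 0.125])  # → 4.0 (geometric mean 1/4)
```

Because it needs only the model's own token probabilities, perplexity is "readily available" in exactly the sense the paper describes, unlike SSA, which requires human raters.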
The Birth of the Morality Menu
The idea of a morality menu (MOME) was born in 2018 in the context of machine ethics. It is intended to make it possible to transfer the morality of a person to a machine. On a display, the user sees different rules of behaviour and can activate or deactivate them with sliders. Oliver Bendel developed two design studies, one for an animal-friendly vacuum cleaning robot (LADYBIRD), the other for a voicebot like Google Duplex. At the end of 2018, he announced a project at the School of Business FHNW. Three students – Ozan Firat, Levin Padayatty, and Yusuf Or – implemented a morality menu for a chatbot called MOBO from June 2019 to January 2020. The user enters personal information and then activates or deactivates nine different rules of conduct. MOBO compliments or does not compliment, responds with or without prejudice, and threatens or does not threaten the interlocutor. It responds to each user individually, says his or her name, and addresses him or her formally or informally, depending on the settings. A video of the MOBO-MOME is available here.
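The MOBO code itself has not been released. A hypothetical sketch of how such a menu of behavioural rules might steer a chatbot's replies could look like this (the rule names and wording are invented for illustration):

```python
# Hypothetical sketch of a morality menu: each rule of conduct is a boolean
# toggle, and the chatbot consults the settings before replying. The rule
# names and reply texts are illustrative; the MOBO implementation is not public.

DEFAULT_MENU = {"compliments": True, "formal_address": False}

def greet(name: str, menu: dict[str, bool]) -> str:
    """Greet the user according to the activated rules of conduct."""
    address = f"Dear {name}" if menu["formal_address"] else f"Hi {name}"
    reply = f"{address}!"
    if menu["compliments"]:
        reply += " What a lovely name."
    return reply

reply = greet("Alice", DEFAULT_MENU)  # → "Hi Alice! What a lovely name."
```

The design point of the MOME is visible even in this sketch: the moral settings live in a user-editable configuration rather than being hard-coded, so the same chatbot can reflect different users' rules of conduct.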