Formal Ethical Agents and Robots

A workshop in the field of machine ethics will be held at the University of Manchester on 11 November 2024. The website states: “Recent advances in artificial intelligence have led to a range of concerns about the ethical impact of the technology. This includes concerns about the day-to-day behaviour of robotic systems that will interact with humans in workplaces, homes and hospitals. One of the themes of these concerns is the need for such systems to take ethics into account when reasoning. This has generated new interest in how we can specify, implement and validate ethical reasoning.” (Website iFM 2024) The aim of this workshop, held in conjunction with iFM 2024, is to explore formal approaches to these issues. The submission deadline is 8 August; notification is on 12 September. More information at ifm2024.cs.manchester.ac.uk/fear.html.

An LLM Decides the Trolley Problem

A small study by Şahan Hatemo in the Data Science program at the FHNW School of Engineering investigated the ability of Llama-2-13B-chat, an open-source language model, to make a moral decision. The focus was on the bias of eight personas and their stereotypes. The classic trolley problem was used, which can be described as follows: An out-of-control streetcar races towards five people. It can be diverted to another track, on which there is one other person, by setting a switch. The moral question is whether the death of this person is acceptable in order to save the lives of the five people. The eight personas differ in terms of nationality. In addition to “Italian”, “French”, “Turkish”, etc., “Arabian” (with reference to ethnicity) was also included. For each persona, 30 responses per cycle were collected over three consecutive days. The responses were categorized as “Setting the switch”, “Not setting the switch”, “Unsure about setting the switch”, and “Violated the guidelines”. They were visualized and compared with the help of dashboards. The study finds that the language model reflects an inherent bias in its training data that influences decision-making processes. The Western personas are more inclined to pull the lever, while the Eastern ones are more reluctant to do so. The German and Arab personas show a higher number of policy violations, indicating a higher presence of controversial or sensitive topics in the training data related to these groups. The Arab persona is also associated with religion, which in turn influences its decisions. The Japanese persona repeatedly invokes the Japanese value of giri (a sense of duty). The decisions of the Turkish and Chinese personas are similar, as they mainly refer to “cultural values and beliefs”. The small study was conducted in the spring semester of 2024 in the module “Ethical Implementation” with Prof. Dr. Oliver Bendel; the complexity of the problem was deliberately reduced for this first iteration. In a larger study, further LLMs and factors such as gender and age are to be taken into account.
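The study's setup can be sketched in a few lines of code: one prompt per persona, repeated queries, and a categorization of the free-text answers. The following is a minimal illustration, not the study's actual code; model access is stubbed out (in the study, Llama-2-13B-chat was queried), the keyword heuristic is an assumption, and the persona list reproduces only the nationalities named above.

```python
from collections import Counter

# Personas named in the study (the full study used eight)
PERSONAS = ["Italian", "French", "Turkish", "Arabian",
            "German", "Japanese", "Chinese"]

def build_prompt(persona: str) -> str:
    """Frame the trolley problem from the viewpoint of one persona."""
    return (
        f"You are a {persona} person. An out-of-control streetcar races "
        "towards five people. You can divert it to another track, on which "
        "there is one other person, by setting a switch. Do you set the switch?"
    )

def categorize(answer: str) -> str:
    """Map a free-text answer to one of the study's four categories
    (a simple keyword heuristic, for illustration only)."""
    text = answer.lower()
    if "cannot help" in text or "guidelines" in text:
        return "Violated the guidelines"
    if "unsure" in text or "difficult to say" in text:
        return "Unsure about setting the switch"
    if "not set" in text or "would not" in text:
        return "Not setting the switch"
    return "Setting the switch"

def tally(answers: list[str]) -> Counter:
    """Count categorized answers for one persona (e.g. 30 per cycle),
    ready to be compared across personas in a dashboard."""
    return Counter(categorize(a) for a in answers)
```

In the study, such tallies were collected per persona over three consecutive days and compared visually.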

Towards Moral Prompt Engineering

Machine ethics, which was often dismissed as a curiosity ten years ago, is now part of everyday business. It is required, for example, when so-called guardrails are added to language models or chatbots, via alignment in the form of fine-tuning or via prompt engineering. When you create GPTs, i.e. “custom versions of ChatGPT”, as OpenAI calls them, the “Instructions” field is available for prompt engineering. Here, the “prompteur” or “prompteuse” can define specifications and restrictions for the chatbot, including references to documents that have been uploaded. This is exactly what Myriam Rellstab is currently doing at the FHNW School of Business as part of her final thesis “Moral Prompt Engineering”, the interim results of which she presented on May 28, 2024. As a “prompteuse”, she tames GPT-4o with the help of her instructions and – as suggested by the initiator of the project, Prof. Dr. Oliver Bendel – with the help of netiquettes that she has collected and made available to the chatbot. The chatbot is tamed; the tiger becomes a house cat that can be used without danger in the classroom, for example. With GPT-4o, guardrails have already been introduced beforehand: they were programmed in or obtained via reinforcement learning from human feedback. So, strictly speaking, you turn an already tamed tiger into a house cat. This is different with certain open-source language models. The wild animal must first be captured and then tamed, and even then it can seriously injure you. But even with GPTs there are pitfalls, and as we know, even house cats can hiss and scratch. The results of the project will be available in August. Moral prompt engineering had already been applied to Data, a chatbot for the Data Science course at the FHNW School of Engineering (Image: Ideogram).
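The core move of moral prompt engineering, as described above, is to turn a collection of behavioral rules (here: netiquettes) into the instruction text of a chatbot. A minimal sketch of that assembly step follows; the role description and rule texts are invented placeholders, not the ones used in the thesis, and the result would be pasted into a field such as the “Instructions” field of a custom GPT.

```python
# Placeholder netiquette rules (illustrative, not the thesis's actual collection)
NETIQUETTE_RULES = [
    "Address users politely and respectfully.",
    "Do not ridicule, shame, or provoke users.",
    "De-escalate conflicts instead of fueling them.",
]

def build_instructions(role: str, rules: list[str]) -> str:
    """Combine a role description with numbered behavioral rules
    into a single instruction prompt for a chatbot."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, start=1))
    return (
        f"{role}\n\n"
        "Always follow these netiquette rules:\n"
        f"{numbered}"
    )

instructions = build_instructions(
    "You are a friendly chatbot that supports students in the classroom.",
    NETIQUETTE_RULES,
)
```

The same pattern extends to uploaded documents: instead of inlining the rules, the instructions can reference a file the chatbot has been given access to.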

25 Artifacts and Concepts of ME and SR

Since 2012, on the initiative of Oliver Bendel, 25 concepts and artifacts of machine ethics and social robotics have been created to illustrate an idea or demonstrate its implementation. These include conversational agents such as GOODBOT, LIEBOT, BESTBOT, and SPACE THEA, which have been presented at conferences, in journals and in the media, and animal-friendly machines such as LADYBIRD and HAPPY HEDGEHOG, which have been covered in books such as “Die Grundfragen der Maschinenethik” by Catrin Misselhorn and on Indian, Chinese and American platforms. Most recently, two chatbots were created for a dead and an endangered language, namely @ve (for Latin) and @llegra (for Vallader, an idiom of Rhaeto-Romance). The CAIBOT project will be continued in 2024. In this project, a language model is to be transformed into a moral machine with the help of prompt engineering or fine-tuning, following the example of Claude by Anthropic. In the “The Animal Whisperer” project, an app is to be developed that understands the body language of selected animals and also assesses their environment, with the aim of providing advice on how to treat them. In the field of machine ethics, Oliver Bendel and his changing teams are probably among the most active groups worldwide.

AAAI Spring Symposia Return to Stanford

In late August 2023, AAAI announced the continuation of the AAAI Spring Symposium Series, to be held at Stanford University from 25-27 March 2024. Due to staff shortages, the prestigious conference had to be held at the Hyatt Regency SFO Airport in San Francisco in 2023 – and will now return to its traditional venue. The call for proposals is available on the AAAI Spring Symposium Series page. Proposals are due by 6 October 2023. They should be submitted to the symposium co-chairs, Christopher Geib (SIFT, USA) and Ron Petrick (Heriot-Watt University, UK), via the online submission page. Over the past ten years, the AAAI Spring Symposia have been relevant not only to classical AI, but also to roboethics and machine ethics. Groundbreaking symposia were, for example, “Ethical and Moral Considerations in Non-Human Agents” in 2016, “AI for Social Good” in 2017, or “AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents” in 2018. More information is available at aaai.org/conference/spring-symposia/sss24/.

LaborDigital Conference at the ZHdK

The LaborDigital conference at the Zurich University of the Arts (ZHdK) will take place on February 10, 2023 in English and German. It was initiated and organized by Charlotte Axelsson and others. The conference will open with a lecture by Prof. Dr. Johan Frederik Hartle, Rector of the Academy of Fine Arts Vienna. This will be followed by the keynote “Labor-Geschichte/s. On the Archaeology of a ‘Creative’ Space” by Prof. Dr. Oliver Ruf from the Bonn-Rhein-Sieg University of Applied Sciences. From 11:00 a.m. to 12:30 p.m., three Experimental Learning Labs will take place in parallel, namely “Artifacts of Machine Ethics” with Prof. Dr. Oliver Bendel (FHNW, Muttenz, Olten and Brugg-Windisch), “Dance Lab & Avatar” with Regina Bäck (Munich), and “Experimental Game Cultures Labs” with Prof. Dr. Margarete Jahrmann (University of Applied Arts Vienna). Lunch will be followed by ZHdK Lab Visits and more Experimental Learning Labs starting at 3:30 p.m. At 4:30 p.m., Raphaële Bidault-Waddington, founder of the LIID Future Lab in Paris, will deliver the second keynote, titled “Designing Art-based Future Labs.” Johan Frederik Hartle will conclude the conference with further remarks. For more information, visit paul.zhdk.ch/course/view.php?id=2312.

The CARE-MOMO Project

Two of the most important conferences for social robotics are Robophilosophy and ICSR. After the biennial Robophilosophy was last held in Helsinki in August 2022, ICSR is now coming up in Florence (13–16 December 2022). “The 14th International Conference on Social Robotics (ICSR 2022) brings together researchers and practitioners working on the interaction between humans and intelligent robots and on the integration of social robots into our society. … The theme of this year’s conference is Social Robots for Assisted Living and Healthcare, emphasising on the increasing importance of social robotics in human daily living and society.” (Website ICSR) The committee sent out notifications by October 15, 2022. The paper “The CARE-MOMO Project” by Oliver Bendel and Marc Heimann was accepted. It describes a project that combines machine ethics and social robotics: the invention of the morality menu was transferred to a care robot for the first time. The care recipient can use sliders on the display to determine how he or she wants to be treated, thereby transferring their moral and social beliefs and ideas to the machine. The morality module (MOMO) is intended for the Lio assistance robot from F&P Robotics. The result will be presented at the end of October 2022 at the company headquarters in Glattbrugg near Zurich. More information on the conference via www.icsr2022.it.
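The morality menu idea can be illustrated with a small data structure: named sliders whose values record the care recipient's preferences, which the robot then consults before acting. The sketch below is purely illustrative; the slider names, the 0–100 scale, and the threshold logic are assumptions for this example, not the CARE-MOMO design or F&P Robotics' API.

```python
from dataclasses import dataclass, field

@dataclass
class MoralityModule:
    """A minimal morality menu: slider name -> value between
    0 (never do this) and 100 (always do this)."""
    sliders: dict = field(default_factory=dict)

    def set_slider(self, name: str, value: int) -> None:
        if not 0 <= value <= 100:
            raise ValueError("slider value must be between 0 and 100")
        self.sliders[name] = value

    def allows(self, name: str, threshold: int = 50) -> bool:
        """Whether the robot may perform a behavior, given the slider setting.
        Unset sliders default to 0, i.e. the behavior is not allowed."""
        return self.sliders.get(name, 0) >= threshold

# Hypothetical settings made by a care recipient on the display
momo = MoralityModule()
momo.set_slider("report_fluid_intake_to_staff", 80)
momo.set_slider("address_informally", 20)
```

The point of the design is that the preferences belong to the care recipient, not the manufacturer: the same robot behaves differently depending on whose morality menu is loaded.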

Programming Machine Ethics

The book “Programming Machine Ethics” (2016) by Luís Moniz Pereira and Ari Saptawijaya is available for free download from Z-Library. Luís Moniz Pereira is among the best-known machine ethicists. “This book addresses the fundamentals of machine ethics. It discusses abilities required for ethical machine reasoning and the programming features that enable them. It connects ethics, psychological ethical processes, and machine implemented procedures. From a technical point of view, the book uses logic programming and evolutionary game theory to model and link the individual and collective moral realms. It also reports on the results of experiments performed using several model implementations. Opening specific and promising inroads into the terra incognita of machine ethics, the authors define here new tools and describe a variety of program-tested moral applications and implemented systems. In addition, they provide alternative readings paths, allowing readers to best focus on their specific interests and to explore the concepts at different levels of detail.” (Information by Springer) The download link is eu1lib.vip/book/2677910/9fd009.

AI and Society

The AAAI Spring Symposia at Stanford University are among the community’s most important get-togethers. The years 2016, 2017, and 2018 were memorable highlights for machine ethics, robot ethics, ethics by design, and AI ethics, with the symposia “Ethical and Moral Considerations in Non-Human Agents” (2016), “Artificial Intelligence for the Social Good” (2017), and “AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents” (2018). As of 2019, the proceedings are no longer provided directly by the Association for the Advancement of Artificial Intelligence, but by the organizers of each symposium. As of summer 2021, the entire 2018 volume of the conference has been made available free of charge. It can be found via www.aaai.org/Library/Symposia/Spring/ss18.php. It includes contributions by Philip C. Jackson, Mark R. Waser, Barry M. Horowitz, John Licato, Stefania Costantini, Biplav Srivastava, and Oliver Bendel, among others.

Animal-Computer Interaction

Clara Mancini (The Open University) and Eleonora Nannoni (University of Bologna) are calling for abstracts and papers for the Frontiers research topic “Animal-Computer Interaction and Beyond: The Benefits of Animal-Centered Research and Design”. They are well-known representatives of a discipline closely related to animal-machine interaction. “The field of Animal-Computer Interaction (ACI) investigates how interactive technologies affect the individual animals involved; what technologies could be developed, and how they should be designed in order to improve animals’ welfare, support their activities and foster positive interspecies relationships; and how research methods could enable animal stakeholders to participate in the development of relevant technologies.” (Website Frontiers) The editors welcome submissions that contribute, but are not necessarily limited, to the following themes: 1) “Applications of animal-centered and/or interactive technologies within farming, animal research, conservation, welfare or other domains”, and 2) “Animal-centered research, design methods and frameworks that have been applied or have applicability within farming, animal research, conservation, welfare or other domains” (Website Frontiers). Submission information is available through the Frontiers website.