Formal Ethical Agents and Robots

A workshop in the field of machine ethics will be held at the University of Manchester on 11 November 2024. The website states: “Recent advances in artificial intelligence have led to a range of concerns about the ethical impact of the technology. This includes concerns about the day-to-day behaviour of robotic systems that will interact with humans in workplaces, homes and hospitals. One of the themes of these concerns is the need for such systems to take ethics into account when reasoning. This has generated new interest in how we can specify, implement and validate ethical reasoning.” (Website iFM 2024) The aim of this workshop, held in conjunction with iFM 2024, is to explore formal approaches to these issues. The submission deadline is 8 August; notification is 12 September. More information at ifm2024.cs.manchester.ac.uk/fear.html.

An LLM Decides the Trolley Problem

A small study by Şahan Hatemo in the Data Science program at the FHNW School of Engineering investigated the ability of Llama-2-13B-chat, an open-source language model, to make a moral decision. The focus was on the bias of eight personas and their stereotypes. The classic trolley problem was used, which can be described as follows: An out-of-control streetcar races towards five people. It can be diverted to another track, on which there is another person, by setting a switch. The moral question is whether the death of this person can be accepted in order to save the lives of the five people. The eight personas differ in terms of nationality. In addition to “Italian”, “French”, “Turkish”, etc., “Arabian” (with reference to ethnicity) was also included. Thirty responses per persona were collected per cycle over three consecutive days. The responses were categorized as “Setting the switch”, “Not setting the switch”, “Unsure about setting the switch”, and “Violated the guidelines”, then visualized and compared with the help of dashboards. The study finds that the language model reflects an inherent bias in its training data that influences decision-making. The Western personas are more inclined to pull the lever, while the Eastern ones are more reluctant to do so. The German and Arab personas show a higher number of policy violations, indicating a higher presence of controversial or sensitive topics in the training data related to these groups. The Arab persona is also associated with religion, which in turn influences its decisions. The Japanese persona repeatedly invokes the Japanese value of giri (a sense of duty). The decisions of the Turkish and Chinese personas are similar, as they mainly appeal to “cultural values and beliefs”. The small study was conducted in the spring semester of 2024 in the module “Ethical Implementation” with Prof. Dr. Oliver Bendel. Its complexity was deliberately kept low at this initial stage.
In a larger study, further LLMs and factors such as gender and age are to be taken into account.
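The evaluation protocol described above — repeated persona prompts whose free-text answers are sorted into the four categories — can be sketched as follows. This is a minimal illustration, not the study's actual code: the `query_model` callable and the keyword rules in `categorize` are hypothetical stand-ins.

```python
from collections import Counter

CATEGORIES = [
    "Setting the switch",
    "Not setting the switch",
    "Unsure about setting the switch",
    "Violated the guidelines",
]

def categorize(response: str) -> str:
    """Map a free-text answer to one of the four categories (hypothetical keyword rules)."""
    text = response.lower()
    if "cannot comply" in text or "guidelines" in text:
        return "Violated the guidelines"
    if "unsure" in text or "cannot decide" in text:
        return "Unsure about setting the switch"
    if "would not" in text or "not set" in text:
        return "Not setting the switch"
    return "Setting the switch"

def run_cycle(personas, query_model, n=30):
    """Collect n responses per persona in one cycle and tally them per category."""
    results = {}
    for persona in personas:
        prompt = (f"You are a {persona} person. A runaway trolley races towards "
                  f"five people. You can divert it to a track with one person by "
                  f"setting a switch. Do you set the switch?")
        tally = Counter({c: 0 for c in CATEGORIES})
        for _ in range(n):
            tally[categorize(query_model(prompt))] += 1
        results[persona] = dict(tally)
    return results

# Example with a stubbed model standing in for Llama-2-13B-chat:
stub = lambda prompt: "I would set the switch to save five lives."
print(run_cycle(["Italian", "Japanese"], stub, n=3))
```

In the study itself, such tallies were collected on three consecutive days and compared across personas in dashboards.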

Towards Moral Prompt Engineering

Machine ethics, which was often dismissed as a curiosity ten years ago, is now part of everyday business. It is required, for example, when so-called guardrails are built into language models or chatbots, via alignment in the form of fine-tuning or via prompt engineering. When you create GPTs, i.e. “custom versions of ChatGPT”, as OpenAI calls them, the “Instructions” field is available for prompt engineering. Here, the “prompteur” or “prompteuse” can define specifications and restrictions for the chatbot, including references to documents that have been uploaded. This is exactly what Myriam Rellstab is currently doing at the FHNW School of Business as part of her final thesis “Moral Prompt Engineering”, the interim results of which she presented on May 28, 2024. As a “prompteuse”, she tames GPT-4o with the help of her instructions and – as suggested by the initiator of the project, Prof. Dr. Oliver Bendel – with the help of netiquettes that she has collected and made available to the chatbot. The chatbot is tamed; the tiger becomes a house cat that can be used without danger in the classroom, for example. With GPT-4o, guardrails have already been introduced beforehand, either programmed in or obtained via reinforcement learning from human feedback. So, strictly speaking, you turn an already tamed tiger into a house cat. This is different with certain open-source language models: the wild animal must first be captured and then tamed, and even then it can seriously injure you. But even with GPTs there are pitfalls, and as we know, house cats can hiss and scratch. The results of the project will be available in August. Moral prompt engineering had already been applied to Data, a chatbot for the Data Science course at the FHNW School of Engineering (Image: Ideogram).
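The basic approach — constraining a chatbot through an “Instructions” text built from collected netiquette rules — can be illustrated with a short sketch. The role description and the rules below are invented for illustration; they are not Myriam Rellstab's actual material.

```python
def build_instructions(role: str, rules: list[str]) -> str:
    """Compose an 'Instructions' text from a role description and netiquette rules."""
    lines = [role, "", "Always follow these netiquette rules:"]
    lines += [f"{i}. {rule}" for i, rule in enumerate(rules, start=1)]
    lines.append("If a request conflicts with a rule, politely decline and explain why.")
    return "\n".join(lines)

# Illustrative netiquette rules (assumptions, not the project's collected netiquettes):
netiquette = [
    "Be polite and respectful in every answer.",
    "Do not insult, mock, or demean anyone.",
    "Admit uncertainty instead of guessing.",
]
instructions = build_instructions(
    "You are a friendly classroom assistant for a data science course.", netiquette
)
print(instructions)
```

The resulting text would be pasted into the “Instructions” field of a GPT; uploaded documents (such as the collected netiquettes themselves) can then be referenced from it.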

25 Artifacts and Concepts of ME and SR

Since 2012, on the initiative of Oliver Bendel, 25 concepts and artifacts of machine ethics and social robotics have been created to illustrate an idea or demonstrate its implementation. These include conversational agents such as GOODBOT, LIEBOT, BESTBOT, and SPACE THEA, which have been presented at conferences, in journals, and in the media, as well as animal-friendly machines such as LADYBIRD and HAPPY HEDGEHOG, which have been covered in books such as “Die Grundfragen der Maschinenethik” by Catrin Misselhorn and on Indian, Chinese, and American platforms. Most recently, two chatbots were created for a dead and an endangered language, namely @ve (for Latin) and @llegra (for Vallader, an idiom of Rhaeto-Romance). The CAIBOT project will be continued in 2024. In this project, a language model is to be transformed into a moral machine with the help of prompt engineering or fine-tuning, following the example of Claude from Anthropic. In the project “The Animal Whisperer”, an app is to be developed that understands the body language of selected animals and also assesses their environment, with the aim of providing advice on how to treat them. In the field of machine ethics, Oliver Bendel and his changing teams are probably among the most active groups worldwide.

New Channel on Animal Law and Ethics

The new YouTube channel “GW Animal Law Program” went online at the end of November 2023. It collects lectures and recordings on animal law and ethics. Some of them are from the online event “Artificial Intelligence & Animals”, which took place on 16 September 2023. The speakers were Prof. Dr. Oliver Bendel (FHNW University of Applied Sciences Northwestern Switzerland), Yip Fai Tse (University Center for Human Values, Center for Information Technology Policy, Princeton University), and Sam Tucker (CEO VegCatalyst, AI-Powered Marketing, Melbourne). Other videos include “Tokitae, Reflections on a Life: Evolving Science & the Need for Better Laws” by Kathy Hessler, “Alternative Pathways for Challenging Corporate Humanewashing” by Brooke Dekolf, and “World Aquatic Animal Day 2023: Alternatives to the Use of Aquatic Animals” by Amy P. Wilson. In his talk, Oliver Bendel presents the basics and prototypes of animal-computer interaction and animal-machine interaction, including his own projects in the field of machine ethics. The YouTube channel can be accessed at www.youtube.com/@GWAnimalLawProgram/featured.

AAAI Spring Symposia Return to Stanford

In late August 2023, AAAI announced the continuation of the AAAI Spring Symposium Series, to be held at Stanford University from 25 to 27 March 2024. Due to staff shortages, the prestigious conference had to be held at the Hyatt Regency SFO Airport in San Francisco in 2023 – and will now return to its traditional venue. The call for proposals is available on the AAAI Spring Symposium Series page. Proposals are due by 6 October 2023. They should be submitted to the symposium co-chairs, Christopher Geib (SIFT, USA) and Ron Petrick (Heriot-Watt University, UK), via the online submission page. Over the past ten years, the AAAI Spring Symposia have been relevant not only to classical AI, but also to roboethics and machine ethics. Groundbreaking symposia were, for example, “Ethical and Moral Considerations in Non-Human Agents” in 2016, “AI for Social Good” in 2017, or “AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents” in 2018. More information is available at aaai.org/conference/spring-symposia/sss24/.

AAAI Spring Symposia Proceedings 1992-2018

The AAAI Spring Symposium Series is a legendary conference that has been held since 1992, usually at Stanford University. Until 2018, the leading US artificial intelligence organization itself published the proceedings; since 2019, each symposium has been responsible for its own. Following a restructuring of the AAAI website, the proceedings can be found in a section of the new “AAAI Conference and Symposium Proceedings” page. In 2016, Stanford University hosted one of the most important gatherings on machine ethics and robot ethics ever, the symposium “Ethical and Moral Considerations in Non-Human Agents” … Contributors included Peter M. Asaro, Oliver Bendel, Joanna J. Bryson, Lily Frank, The Anh Han, and Luís Moniz Pereira. Also present was Ronald C. Arkin, one of the most important and – because of his military research – controversial machine ethicists. The 2017 and 2018 symposia were also groundbreaking for machine ethics and attracted experts from around the world. The papers can be accessed at aaai.org/aaai-publications/aaai-conference-proceedings.

The Latest Findings in Social Robotics

The proceedings of ICSR 2022 were published in early 2023. Included is the paper “The CARE-MOMO Project” by Oliver Bendel and Marc Heimann. From the abstract: “In the CARE-MOMO project, a morality module (MOMO) with a morality menu (MOME) was developed at the School of Business FHNW in the context of machine ethics. This makes it possible to transfer one’s own moral and social convictions to a machine, in this case the care robot with the name Lio. The current model has extensive capabilities, including motor, sensory, and linguistic. However, it cannot yet be personalized in the moral and social sense. The CARE-MOMO aims to eliminate this state of affairs and to give care recipients the possibility to adapt the robot’s ‘behaviour’ to their ideas and requirements. This is done in a very simple way, using sliders to activate and deactivate functions. There are three different categories that appear with the sliders. The CARE-MOMO was realized as a prototype, which demonstrates the functionality and aids the company in making concrete decisions for the product. In other words, it can adopt the morality module in whole or in part and further improve it after testing it in facilities.” The book (part II of the proceedings) can be downloaded or ordered via link.springer.com/book/10.1007/978-3-031-24670-8.
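The morality menu described in the abstract — sliders, grouped into three categories, that activate and deactivate the robot's functions — can be sketched as a simple data structure. This is a minimal sketch: the category and slider names below are invented for illustration and are not taken from the CARE-MOMO prototype.

```python
from dataclasses import dataclass, field

@dataclass
class MoralityMenu:
    """Sliders grouped into categories; True means the function is activated."""
    sliders: dict = field(default_factory=lambda: {
        # Category and slider names are illustrative assumptions.
        "Communication": {"address_informally": False, "report_mistakes": True},
        "Privacy": {"store_conversations": False},
        "Assistance": {"remind_about_medication": True},
    })

    def set_slider(self, category: str, name: str, value: bool) -> None:
        """Flip one slider, as a care recipient would in the menu."""
        self.sliders[category][name] = value

    def active_functions(self) -> list:
        """List all functions currently activated across categories."""
        return [n for cat in self.sliders.values() for n, on in cat.items() if on]

menu = MoralityMenu()
menu.set_slider("Privacy", "store_conversations", True)
print(menu.active_functions())
```

In the prototype, such a configuration would be handed to the care robot so that its “behaviour” matches the care recipient's moral and social convictions.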

LaborDigital Conference at the ZHdK

The LaborDigital conference at the Zurich University of the Arts (ZHdK) will take place on February 10, 2023 in English and German. It was initiated and organized by Charlotte Axelsson and others. The conference will open with a lecture by Prof. Dr. Johan Frederik Hartle, Rector of the Academy of Fine Arts Vienna. This will be followed by the keynote “Labor-Geschichte/s. On the Archaeology of a ‘Creative’ Space” by Prof. Dr. Oliver Ruf from the Bonn-Rhein-Sieg University of Applied Sciences. From 11:00 a.m. to 12:30 p.m., three Experimental Learning Labs will take place in parallel, namely “Artifacts of Machine Ethics” with Prof. Dr. Oliver Bendel (FHNW, Muttenz, Olten and Brugg-Windisch), “Dance Lab & Avatar” with Regina Bäck (Munich), and “Experimental Game Cultures Labs” with Prof. Dr. Margarete Jahrmann (University of Applied Arts Vienna). Lunch will be followed by ZHdK Lab Visits and more Experimental Learning Labs starting at 3:30 p.m. At 4:30 p.m., Raphaële Bidault-Waddington, founder of the LIID Future Lab in Paris, will deliver the second keynote, titled “Designing Art-based Future Labs.” Johan Frederik Hartle will conclude the conference with further remarks. For more information, visit paul.zhdk.ch/course/view.php?id=2312.

Programming Machine Ethics

The book “Programming Machine Ethics” (2016) by Luís Moniz Pereira and Ari Saptawijaya is available for free download from Z-Library. Luís Moniz Pereira is among the best-known machine ethicists. “This book addresses the fundamentals of machine ethics. It discusses abilities required for ethical machine reasoning and the programming features that enable them. It connects ethics, psychological ethical processes, and machine implemented procedures. From a technical point of view, the book uses logic programming and evolutionary game theory to model and link the individual and collective moral realms. It also reports on the results of experiments performed using several model implementations. Opening specific and promising inroads into the terra incognita of machine ethics, the authors define here new tools and describe a variety of program-tested moral applications and implemented systems. In addition, they provide alternative readings paths, allowing readers to best focus on their specific interests and to explore the concepts at different levels of detail.” (Information by Springer) The download link is eu1lib.vip/book/2677910/9fd009.