A special feature of DALL-E 3 – in the version integrated in ChatGPT Plus – is the translation of the user’s prompt (prompt A) into a prompt of ChatGPT (prompt B), which is displayed in each case. Prompt A for the image shown here was “Competition in the sea between two female swimmers with bathing cap, photorealistic”. DALL-E 3 generated three images for this test, each based on a prompt B. Prompt B1 read: “Photo of two determined female swimmers in the expansive sea, both wearing bathing caps. Their arms create ripples as they compete fiercely, striving to outpace each other.” Prompt A was obviously expanded, but prompt B1 was not accurately executed. Instead of two female swimmers, there are three. They seem to be closely related – as is often the case with depictions of people from DALL-E 3 – and are perhaps sisters or triplets. It is also striking that they are far too close to each other (the picture in this post shows a detail). A fourth image was not generated at all, as had already happened in an earlier series. ChatGPT said: “I apologize again, but there were issues generating one of the images based on your description.” Presumably ChatGPT generated a prompt B4, which was then rejected by DALL-E 3. The request “Please tell me the prompt generated by ChatGPT that was not executed by DALL-E 3.” is answered with “I’m sorry for the inconvenience, but I cannot retrieve the exact prompt that was not executed by DALL·E.” … Ideogram censors in a different way. There, the image is created before the user’s eyes, and if the AI determines that it contains elements that might be problematic according to its own guidelines, it cancels the creation and covers the image with a tile showing a cat. Ethical challenges of image generators are addressed in the article “Image Synthesis from an Ethical Perspective” by Oliver Bendel.
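Outside the ChatGPT Plus interface, this translation step can be inspected more directly: when DALL-E 3 is called via the OpenAI Images API, the response includes the rewritten prompt in a field named revised_prompt. The following minimal Python sketch illustrates this; it assumes the official openai package (version 1.x) and an API key in the OPENAI_API_KEY environment variable. Whether the ChatGPT Plus integration uses exactly the same rewriting step is not documented.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prompt A: the user's original prompt from the test described above
response = client.images.generate(
    model="dall-e-3",
    prompt=("Competition in the sea between two female swimmers "
            "with bathing cap, photorealistic"),
    size="1024x1024",
    n=1,  # DALL-E 3 accepts only one image per request
)

image = response.data[0]
print("Prompt B (rewritten):", image.revised_prompt)
print("Image URL:", image.url)
```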
The Chinese Whispers Problem
DALL-E 3 – in the version integrated in ChatGPT Plus – seems to have a Chinese Whispers problem. In a test by Oliver Bendel, the prompt (prompt A) read: “Two female swimmers competing in lake, photorealistic”. ChatGPT, the interface to DALL-E 3, turned it into four prompts (prompts B1–B4). Prompt B4 read: “Photo-realistic image of two female swimmers, one with tattoos on her arms and the other with a swim cap, fiercely competing in a lake with lily pads and reeds at the edges. Birds fly overhead, adding to the natural ambiance.” DALL-E 3, however, turned this prompt into an image that had little to do with either it or prompt A. The picture does not show two women, but two men, or a woman and a man with a beard. They are not racing each other but arguing, standing in a pond or a small lake, furiously waving their arms and going at each other. Water lilies sprawl in front of them, birds flutter above them. Certainly an interesting picture, but produced with such arbitrariness that one wishes for the return of good old prompt engineering (the picture in this post shows a detail). This is exactly what the interface is actually meant to replace – but the result is an effect familiar from the Chinese Whispers game.
Conversational Agent as Trustworthy Autonomous System
The Dagstuhl seminar “Conversational Agent as Trustworthy Autonomous System (Trust-CA)” will take place from September 19 to 24, 2021. According to the website, Schloss Dagstuhl – Leibniz-Zentrum für Informatik “pursues its mission of furthering world class research in computer science by facilitating communication and interaction between researchers”. Organizers of this event are Asbjørn Følstad (SINTEF – Oslo), Jonathan Grudin (Microsoft – Redmond), Effie Lai-Chong Law (University of Leicester) and Björn Schuller (University of Augsburg). They outline the background as follows: “CA, like many other AI/ML-infused autonomous systems, need to gain the trust of their users in order to be deployed effectively. Nevertheless, in the first place, we need to ensure that such systems are trustworthy. Persuading users to trust a non-trustworthy CA is grossly unethical. Conversely, failing to convince users to trust a trustworthy CA that is beneficial to their wellbeing can be detrimental, given that a lack of trust leads to low adoption or total rejection of a system. A deep understanding of how trust is initially built and evolved in human-human interaction (HHI) can shed light on the trust journey in human-automation interaction (HAI). This calls forth a multidisciplinary analytical framework, which is lacking but much needed for informing the design of trustworthy autonomous systems like CA.” (Website Dagstuhl) Regarding the goal of the workshop, the organizers write: “The overall goal of this Dagstuhl Seminar is to bring together researchers and practitioners, who are currently engaged in diverse communities related to Conversational Agent (CA) to explore the three main challenges on maximising the trustworthiness of and trust in CA as AI/ML-driven autonomous systems – an issue deemed increasingly significant given the widespread uses of CA in every sector of life – and to chart a roadmap for the future research on CA.” (Website Dagstuhl) Oliver Bendel (School of Business FHNW) will talk about his chatbot and voice assistant projects. These have emerged since 2013 from machine ethics and social robotics. Further information is available here (photo: Schloss Dagstuhl).
Animal-Computer Interaction
Clara Mancini (The Open University) and Eleonora Nannoni (University of Bologna) are calling for abstracts and papers for the Frontiers research topic “Animal-Computer Interaction and Beyond: The Benefits of Animal-Centered Research and Design”. They are well-known representatives of a discipline closely related to animal-machine interaction. “The field of Animal-Computer Interaction (ACI) investigates how interactive technologies affect the individual animals involved; what technologies could be developed, and how they should be designed in order to improve animals’ welfare, support their activities and foster positive interspecies relationships; and how research methods could enable animal stakeholders to participate in the development of relevant technologies.” (Website Frontiers) The editors welcome submissions that contribute, but are not necessarily limited, to the following themes: 1) “Applications of animal-centered and/or interactive technologies within farming, animal research, conservation, welfare or other domains”, and 2) “Animal-centered research, design methods and frameworks that have been applied or have applicability within farming, animal research, conservation, welfare or other domains” (Website Frontiers). Submission information is available through the Frontiers website.
Care Robots with New Functions
The symposium “Applied AI in Healthcare: Safety, Community, and the Environment” will be held within the AAAI Spring Symposia on March 22-23, 2021. One of the presentations is titled “Care Robots with Sexual Assistance Functions”. The author of the paper is Prof. Dr. Oliver Bendel. From the abstract: “Residents in retirement and nursing homes have sexual needs just like other people. However, the semi-public situation makes it difficult for them to satisfy these existential concerns. In addition, they may not be able to meet a suitable partner or find it difficult to have a relationship for mental or physical reasons. People who live or are cared for at home can also be affected by this problem. Perhaps they can host someone more easily and discreetly than the residents of a health facility, but some elderly and disabled people may be restricted in some ways. This article examines the opportunities and risks that arise with regard to care robots with sexual assistance functions. First of all, it deals with sexual well-being. Then it presents robotic systems ranging from sex robots to care robots. Finally, the focus is on care robots, with the author exploring technical and design issues. A brief ethical discussion completes the article. The result is that care robots with sexual assistance functions could be an enrichment of the everyday life of people in need of care, but that we also have to consider some technical, design and moral aspects.” More information about the AAAI Spring Symposia is available at aaai.org/Symposia/Spring/sss21.php.
A Prod, a Stroke, or a Hug?
Soft robots with transparent artificial skin can detect human touch with internal cameras and differentiate between a prod, a stroke, or a hug. This is what New Scientist writes in its article “Robot that looks like a bin bag can understand what a hug is”. According to the magazine, the technology could lead to better non-verbal communication between humans and robots. What is behind this message? A scientific experiment that is indeed very interesting. “Guy Hoffman and his colleagues at Cornell University, New York, created a prototype robot with nylon skin stretched over a 1.2-metre tall cylindrical scaffold atop a platform on wheels. Inside the cylinder sits a commercial USB camera which is used to interpret different types of touch on the nylon.” (New Scientist, 29 January 2021) In recent years, there have been several prototypes, studies and surveys on hugging robots. For example, the projects with PR2, Hugvie, and HUGGIE are worth mentioning. Cornell University’s research certainly represents another milestone in this context and in a way puts humans in the foreground.
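How might such a camera-based skin distinguish the three touch types? The article does not describe the researchers’ actual classifier, but a toy heuristic conveys the idea: a prod is brief and localized, a stroke is a moving contact, and a hug is a large, sustained deformation. The following Python sketch (using OpenCV and NumPy; all thresholds are invented for illustration and are not taken from the Cornell study) classifies a sequence of frames from a hypothetical internal camera.

```python
import cv2
import numpy as np

def classify_touch(frames, fps=30):
    """Classify a touch episode from internal-camera frames (toy heuristic).

    Assumes the first frame shows the undisturbed skin. All thresholds
    are illustrative, not taken from the Cornell study.
    """
    background = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    areas, centroids = [], []
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, background)
        mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
        areas.append(mask.mean() / 255.0)  # fraction of the skin deformed
        ys, xs = np.nonzero(mask)
        if xs.size:
            centroids.append((xs.mean(), ys.mean()))
    duration = len(frames) / fps                      # episode length in seconds
    mean_area = float(np.mean(areas)) if areas else 0.0
    drift = (float(np.hypot(*np.ptp(np.array(centroids), axis=0)))
             if len(centroids) > 1 else 0.0)          # contact movement in pixels
    if mean_area > 0.30 and duration > 1.5:
        return "hug"     # large, sustained deformation
    if drift > 50 and duration > 0.5:
        return "stroke"  # contact point travels across the skin
    return "prod"        # brief, localized contact
```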
Evolutionary Machine Ethics
Luís Moniz Pereira is one of the best-known and most active machine ethicists in the world. Together with his colleague The Anh Han, he wrote the article “Evolutionary Machine Ethics” for the “Handbuch Maschinenethik” (“Handbook Machine Ethics”). The editor is Oliver Bendel (Zurich, Switzerland). From the abstract: “Machine ethics is a sprouting interdisciplinary field of enquiry arising from the need of imbuing autonomous agents with some capacity for moral decision-making. Its overall results are not only important for equipping agents with a capacity for moral judgment, but also for helping better understand morality, through the creation and testing of computational models of ethics theories. Computer models have become well defined, eminently observable in their dynamics, and can be transformed incrementally in expeditious ways. We address, in work reported and surveyed here, the emergence and evolution of cooperation in the collective realm. We discuss how our own research with Evolutionary Game Theory (EGT) modelling and experimentation leads to important insights for machine ethics, such as the design of moral machines, multi-agent systems, and contractual algorithms, plus their potential application in human settings too.” (Abstract) Springer VS published the “Handbuch Maschinenethik” in October 2019. Since then it has been downloaded thousands of times.
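To give a flavour of the EGT modelling the authors refer to (without reproducing their specific models), here is a minimal replicator-dynamics sketch in Python for the one-shot Prisoner’s Dilemma. It shows the baseline problem such work starts from: without additional mechanisms like reciprocity, reputation, or commitments, cooperation dies out.

```python
import numpy as np

# Payoff matrix for a one-shot Prisoner's Dilemma:
# rows = focal strategy (C, D), columns = opponent strategy.
R, S, T, P = 3.0, 0.0, 5.0, 1.0  # reward, sucker, temptation, punishment
A = np.array([[R, S],
              [T, P]])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i * (f_i - f_bar)."""
    f = A @ x        # fitness of each strategy against the population
    f_bar = x @ f    # population-average fitness
    return x + dt * x * (f - f_bar)

x = np.array([0.99, 0.01])  # start with 99% cooperators, 1% defectors
for _ in range(2000):
    x = replicator_step(x)
print(f"cooperator share after 2000 steps: {x[0]:.4f}")
# Defection takes over; mechanisms such as reciprocity, reputation,
# or commitment devices are needed for cooperation to survive.
```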
AI Workshop at the University of Potsdam
In 2018, Dr. Yuefang Zhou and Prof. Dr. Martin Fischer initiated the first international workshop on intimate human-robot relations at the University of Potsdam, “which resulted in the publication of an edited book on developments in human-robot intimate relationships”. This year, Prof. Dr. Martin Fischer, Prof. Dr. Rebecca Lazarides, and Dr. Yuefang Zhou are organizing the second edition. “As interest in the topic of humanoid AI continues to grow, the scope of the workshop has widened. During this year’s workshop, international experts from a variety of different disciplines will share their insights on motivational, social and cognitive aspects of learning, with a focus on humanoid intelligent tutoring systems and social learning companions/robots.” (Website Embracing AI) The international workshop “Learning from Humanoid AI: Motivational, Social & Cognitive Perspectives” will take place on 29 and 30 November 2019 at the University of Potsdam. Keynote speakers are Prof. Dr. Tony Belpaeme, Prof. Dr. Oliver Bendel, Prof. Dr. Angelo Cangelosi, Dr. Gabriella Cortellessa, Dr. Kate Devlin, Prof. Dr. Verena Hafner, Dr. Nicolas Spatola, Dr. Jessica Szczuka, and Prof. Dr. Agnieszka Wykowska. Further information is available at embracingai.wordpress.com/.
Robophilosophy 2020
“Once we place so-called ‘social robots’ into the social practices of our everyday lives and lifeworlds, we create complex, and possibly irreversible, interventions in the physical and semantic spaces of human culture and sociality. The long-term socio-cultural consequences of these interventions is currently impossible to gauge.” (Website Robophilosophy Conference) With these words the next Robophilosophy conference is announced. It will take place from 18 to 21 August 2020 in Aarhus, Denmark. The CfP raises questions like these: “How can we create cultural dynamics with or through social robots that will not impact our value landscape negatively? How can we develop social robotics applications that are culturally sustainable? If cultural sustainability is relative to a community, what can we expect in a global robot market? Could we design human-robot interactions in ways that will positively cultivate the values we, or people anywhere, care about?” (Website Robophilosophy Conference) In 2018, Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Joanna Bryson, and Oliver Bendel were keynote speakers. In 2020, Catrin Misselhorn, Selma Sabanovic, and Shannon Vallor will be presenting. More information via conferences.au.dk/robo-philosophy/.
Health Care Prediction Algorithm Biased against Black People
The research article “Dissecting racial bias in an algorithm used to manage the health of populations” by Ziad Obermeyer, Brian Powers, Christine Vogeli and Sendhil Mullainathan has been well received in science and the media. It was published in the journal Science on 25 October 2019. From the abstract: “Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses.” (Abstract) The authors suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts. The journal Nature quotes Milena Gianfrancesco, an epidemiologist at the University of California, San Francisco, with the following words: “We need a better way of actually assessing the health of the patients.”
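The mechanism the authors identify is that the algorithm predicted health costs as a proxy for health needs; because less money is spent on Black patients at the same level of illness, equally sick Black patients receive lower risk scores. A toy Python simulation (with invented numbers, purely for illustration) reproduces this calibration gap:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with an identical distribution of true illness burden.
group = rng.integers(0, 2, n)  # 0 and 1 are toy group labels
illness = rng.gamma(shape=2.0, scale=1.0, size=n)

# Unequal access: less is spent on group 1 at the same illness level.
spending = illness * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.1, n)

# A "risk score" trained on cost data is, in effect, predicted spending.
score = spending

# Compare true illness at the same score level (top decile of scores).
cut = np.quantile(score, 0.9)
flagged = score >= cut
for g in (0, 1):
    sel = flagged & (group == g)
    print(f"group {g}: mean illness among flagged = {illness[sel].mean():.2f}")
# Group 1 patients must be sicker to reach the same score,
# mirroring the calibration gap the study describes.
```

In this toy model, switching the training label from spending to a direct measure of health removes the gap, which is in line with the authors’ point about the choice of proxies for ground truth.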