How People React to Hugs from Robots

As part of the AAAI 2023 Spring Symposia in San Francisco, the symposium “Socially Responsible AI for Well-being” is organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The paper “Increasing Well-being and Health through Robotic Hugs” by Oliver Bendel, Andrea Puljic, Robin Heiz, Furkan Tömen, and Ivan De Paola was accepted. Among other things, the authors show how people in Switzerland react to robotic hugs. The talk will take place between March 26 and 29, 2023, at the Hyatt Regency San Francisco Airport. The symposium website states: “For our happiness, AI is not enough to be productive in exponential growth or economic/financial supremacies but should be socially responsible from the viewpoint of fairness, transparency, accountability, reliability, safety, privacy, and security. For example, AI diagnosis system should provide responsible results (e.g., a high-accuracy of diagnostics result with an understandable explanation) but the results should be socially accepted (e.g., data for AI (machine learning) should not be biased (i.e., the amount of data for learning should be equal among races and/or locations). Like this example, a decision of AI affects our well-being, which suggests the importance of discussing ‘What is socially responsible?’ in several potential situations of well-being in the coming AI age.” (Website AAAI) According to the organizers, the first perspective is “(Individually) Responsible AI”, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to design Responsible AI for well-being. The second perspective is “Socially Responsible AI”, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to implement social aspects in Responsible AI for well-being. More information via www.aaai.org/Symposia/Spring/sss23.php#ss09.

A Prod, a Stroke, or a Hug?

Soft robots with transparent artificial skin can detect human touch with internal cameras and differentiate between a prod, a stroke, and a hug. This is what New Scientist writes in its article “Robot that looks like a bin bag can understand what a hug is”. According to the magazine, the technology could lead to better non-verbal communication between humans and robots. What is behind this message? A genuinely interesting scientific experiment. “Guy Hoffman and his colleagues at Cornell University, New York, created a prototype robot with nylon skin stretched over a 1.2-metre tall cylindrical scaffold atop a platform on wheels. Inside the cylinder sits a commercial USB camera which is used to interpret different types of touch on the nylon.” (New Scientist, 29 January 2021) In recent years, there have been several prototypes, studies, and surveys on hugging robots; the projects with PR2, Hugvie, and HUGGIE are worth mentioning. Cornell University’s research certainly represents another milestone in this context and, in a way, puts humans in the foreground.
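The New Scientist piece does not detail the classification method, and the Cornell system presumably relies on trained models. Purely as an illustration of the kind of signal an internal camera can extract from a deformable skin, here is a minimal Python sketch that separates a prod, a stroke, and a hug by simple heuristics: frame differencing against a resting baseline, with contact area and centroid travel as features. All thresholds and names are hypothetical assumptions, not the researchers’ actual implementation.

```python
# Illustrative sketch only: NOT the Cornell implementation.
# Classifies a touch episode on a deformable robot skin from the
# frames of an internal camera, via frame differencing with OpenCV.

import cv2
import numpy as np

# Hypothetical thresholds (fractions of image area / pixels of travel)
AREA_CONTACT = 0.02   # assumed: minimum deformation to count as touch
AREA_HUG = 0.25       # assumed: large, enveloping deformation
TRAVEL_STROKE = 40.0  # assumed: centroid travel indicating a glide

def classify_touch(frames):
    """Classify one touch episode from grayscale uint8 camera frames.

    frames[0] is taken as the resting-skin baseline; the rest are
    compared against it to find where and how the skin deforms.
    """
    baseline = frames[0]
    areas, centroids = [], []
    for frame in frames[1:]:
        diff = cv2.absdiff(frame, baseline)            # deformation map
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        areas.append(cv2.countNonZero(mask) / mask.size)
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] > 0:                               # contact centroid
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    if not areas or max(areas) < AREA_CONTACT:
        return "no touch"
    travel = sum(np.hypot(x1 - x0, y1 - y0)
                 for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]))
    if max(areas) >= AREA_HUG:
        return "hug"     # much of the skin deforms at once
    if travel >= TRAVEL_STROKE:
        return "stroke"  # small contact patch moves across the skin
    return "prod"        # small, localized, mostly static contact
```

In a real system, such hand-tuned rules would likely give way to a learned classifier over the same camera frames; the sketch is only meant to show that deformation area and contact motion already distinguish the three touch types described in the article.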

Research Program on Responsible AI

“HASLER RESPONSIBLE AI” is a research program of the Hasler Foundation, open to research institutions within the higher education sector and to non-commercial research institutions outside it. The foundation explains the goals of the program in a call for project proposals: “The HASLER RESPONSIBLE AI program will support research projects that investigate machine-learning algorithms and artificial intelligence systems whose results meet requirements on responsibility and trustworthiness. Projects are expected to seriously engage in the application of the new models and methods in scenarios that are relevant to society. In addition, projects should respect the interdisciplinary character of research in the area of RESPONSIBLE AI by involving the necessary expertise.” (CfPP by Hasler Foundation) The deadline for the submission of short proposals is 24 January 2021. More information at haslerstiftung.ch.