Atlas Finally has Hands

“Boston Dynamics just released the latest demo of its humanoid robot, Atlas. The robot could already run and jump over complex terrain thanks to its feet. Now, the robot has hands, per se. These rudimentary grippers give the robot new life. Suddenly, instead of being an agile pack mule, the Atlas becomes something closer to a human, with the ability to pick up and drop off anything it can grab independently.” (TechCrunch, January 18, 2023) Hands are indeed very important for Atlas. The humanoid robot can now pick up or move heavy objects on a construction site. But it could also take care of trapped or injured animals in a nature park, freeing them or providing them with food and water. Such visions have been described by robot ethicist and machine ethicist Oliver Bendel for some time. A video released on January 18, 2023 shows the grippers picking up construction lumber and a nylon tool bag. “Next, the Atlas picks up a 2×8 and places it between two boxes to form a bridge. The Atlas then picks up a bag of tools and dashes over the bridge and through construction scaffolding. But the tool bag needs to go to the second level of the structure – something Atlas apparently realized and quickly throws the bag a considerable distance.” (TechCrunch, January 18, 2023) At the end of the video, Atlas does a somersault and then extends its hand – its brand new hand – triumphantly.

The HUGGIE Project II

A large survey on hugging robots was already conducted in 2020 (HUGGIE Project I). The results were published by Leonie Stocker, Ümmühan Korucu, and Oliver Bendel in the book chapter “In den Armen der Maschine” (“In the arms of the machine”) in the Springer volume “Soziale Roboter” (“Social Robots”). In the summer of 2022, a follow-up project, called HUGGIE Project II to distinguish it from its predecessor, was launched at the School of Business FHNW. A team of four was recruited: Andrea Puljic, Robin Heiz, Furkan Tömen, and Ivan De Paola. The task was to build and test a hugging robot called HUGGIE, using the results of HUGGIE Project I as a basis. In particular, the team was to determine whether a robotic hug contributes to well-being and health and whether voice, vibration simulating a heartbeat, and scent increase the acceptance of a hug by a robot. In this way, indications from the survey in HUGGIE Project I were taken up. Furthermore, hugs with a giant stuffed animal named Teddy took place for comparison. The results show that people benefit from robotic hugs and that these can increase their well-being and health. There was clear evidence of this in the pretest, and still sufficient evidence in the main test. However, as HUGGIE Project I revealed, some people have an aversion to robotic hugs even in their imagination, and some also showed this aversion in reality (HUGGIE Project II). Warmth and softness of body and arms are important, as the research of Alexis E. Block and Katherine J. Kuchenbecker had already shown. Voice, vibration, and scent were found to be less relevant. However, there are significant indications that a female voice can increase acceptance, which therefore needs to be explored further in this context. The findings were summarized in a paper and submitted to an international conference (Photo: Furkan Tömen).
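The heartbeat simulation mentioned above can be pictured as a simple pulse pattern on a vibration motor. The following Python sketch is purely illustrative: the motor_on()/motor_off() calls are hypothetical stand-ins, and the project's actual hardware and timing are not described here.

```python
import time

# Toy sketch of a heartbeat-like vibration pattern ("lub-dub" at about
# 60 beats per minute). motor_on()/motor_off() are hypothetical stand-ins
# for whatever motor driver the actual robot would use.

def motor_on() -> None:
    print("vibration on")   # placeholder for a real driver call

def motor_off() -> None:
    print("vibration off")  # placeholder for a real driver call

def heartbeat(bpm: int = 60, beats: int = 5) -> None:
    period = 60.0 / bpm                               # one cycle in seconds
    for _ in range(beats):
        motor_on(); time.sleep(0.10); motor_off()     # "lub"
        time.sleep(0.10)                              # short pause
        motor_on(); time.sleep(0.10); motor_off()     # "dub"
        time.sleep(max(period - 0.30, 0.0))           # rest until next beat

if __name__ == "__main__":
    heartbeat()
```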

The @ve Project

On January 19, 2023, the final presentation was held for the @ve project, which started in September 2022. The chatbot runs on the website www.ave-bot.ch and on Telegram. Like ChatGPT, it is based on GPT-3 from OpenAI (@ve is based not on GPT-3.5, but on GPT-3.0). The project was initiated by Prof. Dr. Oliver Bendel, who wants to devote more time to dead, extinct, and endangered languages. @ve was developed by Karim N’diaye, who studied business informatics at the Hochschule für Wirtschaft FHNW. You can talk to her in Latin, a dead language that in this way comes alive again, and ask her questions about grammar. She was tested by an expert in the field. One benefit, according to Karim N’diaye, is that you can communicate in Latin around the clock while reflecting on what to write and how to write it. One danger, he says, is that the answers repeatedly contain errors. For example, sometimes the word order is not correct, and it is possible that the meaning is distorted. Such errors can also occur with a human teacher; the learner should always be alert and watch for them. Without a doubt, @ve is a tool that can be profitably integrated into Latin classes. There, students can report what they have experienced with it at home, and they can chat with it on the spot, alone or in a group, accompanied by the teacher. A follow-up project on an endangered language has already been announced (Illustration: Karim N’diaye/Unsplash).
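How such a chatbot can be wired up to GPT-3 can be sketched in a few lines. The following Python example is an illustration only: the model choice, prompt, and parameters are assumptions, not @ve's actual implementation, which has not been published.

```python
import openai  # classic pre-1.0 OpenAI library; pip install openai

openai.api_key = "sk-..."  # insert a valid API key

# Hypothetical system prompt; @ve's real prompt is not public.
PROMPT = (
    "Tu es @ve, magistra linguae Latinae. Responde semper Latine.\n"
    "Discipulus: {question}\n"
    "@ve:"
)

def ask_ave(question: str) -> str:
    """Send one Latin question to a GPT-3 completion model."""
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 model; an assumption here
        prompt=PROMPT.format(question=question),
        max_tokens=150,
        temperature=0.7,
        stop=["Discipulus:"],
    )
    return response.choices[0].text.strip()

print(ask_ave("Quid est nomen tibi?"))
```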

AI for Well-being

As part of the AAAI 2023 Spring Symposia in San Francisco, the symposium “Socially Responsible AI for Well-being” is being organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The AAAI website states: “For our happiness, AI is not enough to be productive in exponential growth or economic/financial supremacies but should be socially responsible from the viewpoint of fairness, transparency, accountability, reliability, safety, privacy, and security. For example, AI diagnosis system should provide responsible results (e.g., a high-accuracy of diagnostics result with an understandable explanation) but the results should be socially accepted (e.g., data for AI (machine learning) should not be biased (i.e., the amount of data for learning should be equal among races and/or locations). Like this example, a decision of AI affects our well-being, which suggests the importance of discussing ‘What is socially responsible?’ in several potential situations of well-being in the coming AI age.” (Website AAAI) According to the organizers, the first perspective, “(Individually) Responsible AI”, aims to clarify what kinds of mechanisms or issues should be taken into consideration when designing responsible AI for well-being. The second perspective, “Socially Responsible AI”, aims to clarify what kinds of mechanisms or issues should be taken into consideration when implementing social aspects in responsible AI for well-being. More information via www.aaai.org/Symposia/Spring/sss23.php#ss09.

About Robots in Policing

In January 2023, the Proceedings of Robophilosophy 2022 were published. Included is the paper “Robots in Policing” by Oliver Bendel. From the abstract: “This article is devoted to the question of how robots are used in policing and what opportunities and risks arise in social terms. It begins by briefly explaining the characteristics of modern police work. It puts service robots and social robots in relation to each other and outlines relevant disciplines. The article also lists types of robots that are and could be relevant in the present context. It then gives examples from different countries of the use of robots in police work and security services. From these, it derives the central tasks of robots in this area and their most important technical features. A discussion from social, ethical, and technical perspectives seeks to provide clarity on how robots are changing the police as a social institution and with social actions and relationships, and what challenges need to be addressed.” (Abstract) Robots in policing is a topic that has so far received little attention. However, it is likely to become considerably more topical in the next few years. More information about the conference is available at cas.au.dk/en/robophilosophy/conferences/rpc2022 (Photo: Anna Jarske-Fransas).

Proceedings of Robophilosophy 2022

In January 2023, the proceedings of Robophilosophy 2022 were published under the title “Social Robots in Social Institutions”. “This book presents the Proceedings of Robophilosophy 2022, the 5th event in the biennial Robophilosophy conference series, held in Helsinki, Finland, from 16 to 19 August 2022. The theme of this edition of the conference was Social Robots in Social Institutions, and it featured international multidisciplinary research from the humanities, social sciences, Human-Robot Interaction, and social robotics. The 63 papers, 41 workshop papers and 5 posters included in this book are divided into 4 sections: plenaries, sessions, workshops and posters, with the 41 papers in the ‘Sessions’ section grouped into 13 subdivisions including elderly care, healthcare, law, education and art, as well as ethics and religion. These papers explore the anticipated conceptual and practical changes which will come about in the course of introducing social robotics into public and private institutions, such as public services, legal systems, social and healthcare services, or educational institutions.” (Website IOS Press) The proceedings contain the paper “Robots in Policing” by Oliver Bendel and the poster “Tamagotchi on our couch: Are social robots perceived as pets?” by Katharina Kühne, Melinda A. Jeglinski-Mende, and Oliver Bendel. More information via www.iospress.com/catalog/books/social-robots-in-social-institutions.

AI-based Q-bear

Why is your baby crying? And what if artificial intelligence (AI) could answer that question for you? “If there was a flat little orb the size of a dessert plate that could tell you exactly what your baby needs in that moment? That’s what Q-bear is trying to do.” (Mashable, January 3, 2023) So wrote the tech magazine Mashable in a recent article. At CES 2023, the Taiwanese company qbaby.ai demonstrated its AI-powered tool, which aims to help parents respond to their babies’ needs in a more targeted way. “The soft silicone-covered device, which can be fitted in a crib or stroller, uses Q-bear’s patented tech to analyze a baby’s cries to determine one of four needs from its ‘discomfort index’: hunger, a dirty diaper, sleepiness, and need for comfort. Q-bear’s translation comes within 10 seconds of a baby crying, and the company says it will become more accurate the more you use the device.” (Mashable, January 3, 2023) Whether the tool really works remains to be seen; presumably, baby cries are easier to interpret than animal languages. Perhaps the use of the tool is ultimately even counterproductive, because parents forget to trust their own intuition. The article “CES 2023: The device that tells you why your baby is crying” can be accessed via mashable.com/article/ces-2023-why-is-my-baby-crying.
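From the outside, such a device behaves like a four-class audio classifier. The following Python sketch is a toy illustration of that idea, with random numbers standing in for real audio features; Q-bear's actual features, model, and training data are proprietary and not known.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sketch of a four-class cry classifier along the lines
# Q-bear describes. Random vectors stand in for audio features (e.g.,
# MFCCs extracted from a few seconds of crying); labels are invented.

NEEDS = ["hunger", "dirty diaper", "sleepiness", "need for comfort"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 20))      # 400 fake feature vectors
y_train = rng.integers(0, 4, size=400)    # 400 fake labels (0..3)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

cry_features = rng.normal(size=(1, 20))   # features of one new cry
print("Predicted need:", NEEDS[clf.predict(cry_features)[0]])
```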