On the third day of Robophilosophy 2024, Emily Cross, a dancer and cognitive neuroscientist from ETHZ, gave a keynote speech entitled “Mind Meets Machine – Neurocognitive Perspectives on Human-Robot Interaction”. From the abstract: “Understanding how we perceive and interact with others is a core challenge of social cognition research. This challenge is poised to intensify in importance as the ubiquity of artificial intelligence and the presence of humanoid robots in society grows. This talk examines how established theories and methods from psychology and neuroscience are revealing fundamental aspects of how people perceive, interact with, and form social relationships with robots. Robots provide a resolutely new approach to studying brain and behavioural flexibility manifest by humans during social interaction. As machines, they can deliver behaviours that can be perceived as “social”, even though they are artificial agents and, as such, can be programmed to deliver a perfectly determined and reproducible set of actions. As development of service robots, home companion robots and assistance robots for schools, hospitals and care homes continues apace, whether we perceive such machines as social agents and how we engage with them over the long term remains largely unexplored. This talk describes research that bridges social cognition, neuroscience and robotics, with important implications not only for the design of social robots, but equally critically, for our understanding of the neurocognitive mechanisms supporting human social behaviour more generally.” (Website Robophilosophy 2024)
Start of Robophilosophy 2024
On August 20, 2024, Robophilosophy 2024 opened with words of welcome from Maja Horst, Dean of the Faculty of Arts at Aarhus University, and Johanna Seibt, Professor at the School of Culture and Society at Aarhus University. The website says: “The international research conference RP2024 will discuss the questions that really matter in view of the new technological potential of social robotics. In over 100 research talks, RP2024 will address concrete and deep issues that reach far beyond safety and privacy concerns into the conceptual and normative fabric of our societies and individual self-comprehension.” (Website Robophilosophy 2024) The first keynote on the first day of the conference was given by Wendell Wallach, one of the world’s best-known machine ethicists. With his book “Moral Machines” (2009), he laid the foundation for a discipline that had been developing in science fiction and in science for decades. It was followed in 2011 by “Machine Ethics” by Michael Anderson and Susan Leigh Anderson. In addition to machine ethics, Robophilosophy is dedicated to robot ethics and other interesting perspectives on social robots.
A Cobot as Conductor of a Symphony
Cobots that dance with humans have been around for a long time. In 2016, the audience at Südpol Luzern witnessed dance and robot history being written by Huang Yi, a choreographer from Taiwan. Cobots that set the pace for humans, on the other hand, are not yet commonplace. The Dresden Symphony Orchestra is about to perform the “Roboter.Sinfonie”. After a while, conductor Michael Helmrath will hand over to MAiRA Pro S, a product from NEURA Robotics. According to Deutschlandfunk, the machine’s three arms will be able to guide the orchestra, which is divided into groups, through the most complex passages independently of each other. This will break completely new musical ground. According to the Dresden Symphony Orchestra’s calendar, the concerts will take place on October 12 and 13, 2024, at the Europäisches Zentrum der Künste Hellerau (Image: NEURA Robotics).
Cow Whisperer, Horse Whisperer, and Dog Whisperer
On August 5, 2024, the final presentation for the project “The Animal Whisperer” took place at the FHNW School of Business. It was initiated by Prof. Dr. Oliver Bendel, who has been working on animal-computer interaction and animal-machine interaction for many years. Nick Zbinden, a budding business informatics specialist, was recruited as a project collaborator. Beginning in March 2024, he developed three applications based on GPT-4o: the Cow Whisperer, the Horse Whisperer, and the Dog Whisperer. They can be used to analyze the body language, behavior, and environment of cows, horses, and dogs. The aim is to avert danger to humans and animals. For example, a hiker can receive a recommendation on their smartphone not to cross a pasture if a mother cow and her calves are present. All they have to do is call up the application and take photos of the surroundings. The three apps are now available as prototypes. With the help of prompt engineering, they have been given extensive knowledge and skills. Above all, self-created and labeled photos were used. In the majority of cases, the apps correctly describe the animals’ body language and behavior. Their recommendations for human behavior are also adequate. The project team summarized the results in a paper and submitted it to an international conference (Image: Ideogram).
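The project’s actual prompts and architecture have not been published, but the basic pattern of such a GPT-4o application – a system prompt crafted via prompt engineering plus a user photo sent to a multimodal model – can be sketched as follows. This is a minimal illustration only; the function names and prompt wording are assumptions, while the payload shape follows OpenAI’s documented multimodal chat format.

```python
import base64


def encode_image(path: str) -> str:
    """Read an image file and return it as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")


def build_whisperer_request(image_b64: str, animal: str) -> dict:
    """Assemble a chat-completion payload that asks a multimodal model
    to assess an animal photo. Illustrative sketch only: the real
    Animal Whisperer prompts are not public."""
    system_prompt = (
        f"You are an assistant that reads the body language of {animal}s. "
        "Describe posture and behavior, then advise the human how to act safely."
    )
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": system_prompt},
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": f"Is it safe to approach this {animal}?"},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            },
        ],
    }


# The payload could then be sent with the OpenAI SDK, e.g.:
# client.chat.completions.create(**build_whisperer_request(encode_image("pasture.jpg"), "cow"))
```

In practice, the quality of the answers depends less on this plumbing than on the engineered system prompt and the labeled example photos mentioned above.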
An AI-generated Teen Collection
Spanish fashion chain Mango has launched an advertising campaign created using AI models. First, all the clothes were photographed, then an AI model was trained to place the images on artificially generated models. The images were then retouched and edited. This was reported by Golem in an article dated July 30, 2024. Digital models are not new. They have been used for decades in various contexts, from computer games to mobile phone applications. Cameron-James Wilson founded an agency for digital models in London in 2019. He is the creator of the digital supermodel Shudu. Her sisters are Noonoouri and Lil Miquela. Since the triumph of generative AI, models have moved into another league. There are now beauty pageants for AI-generated models. The winners of Miss AI 2024 are Kenza Layli (Morocco), Lalina (France), and Olivia C (Portugal). They are already successfully represented on Instagram, showing themselves in various poses and dresses. According to Golem, Mango plans to sell the advertised collection in 95 countries. With this step, the company aims to reduce the costs that would otherwise be incurred for photographers, models and the entire production process (Image: DALL-E 3).
Video Data for the Training of AI Models
According to Hans Jørgen Wiberg, the founder of Be My Eyes, the app for blind and visually impaired people, the company plans to make video data from conversations between blind users and sighted volunteers available for training AI models. This was revealed in an email he sent to all users. The initiative takes place under a strict privacy policy that gives users the option to opt out of data sharing. Photos and their AI descriptions are not used for AI training in order to avoid perpetuating existing prejudices. According to Wiberg, AI models should reflect the actual experiences and abilities of blind people. Bryan Bashin, Vice Chairman of Be My Eyes, emphasizes that blind testers have improved the OpenAI models, which for him proves how important their participation is. Be My Eyes caused a sensation in 2023 with its new Be My AI feature. Prof. Dr. Oliver Bendel wrote the first paper on this topic in November 2023, submitted it in December 2023, and presented it at the AAAI Spring Symposia at Stanford University in March 2024 (Image: DALL-E 3).
Tinder’s AI Photo Selector
Tinder has officially launched its “Photo Selector” feature, which uses AI to help users choose the best photos for their dating profiles. This was reported by TechCrunch in the article “Tinder’s AI Photo Selector automatically picks the best photos for your dating profile” by Lauren Forristal. The feature, now available to all users in the U.S. and set to roll out internationally later this summer, leverages facial detection technology. Users upload a selfie, and the AI creates a unique facial geometry to identify their face and select photos from their camera roll. The feature curates a collection of 10 selfies it believes will perform well based on Tinder’s insights on good profile images, focusing on aspects like lighting and composition. Tinder’s AI is trained on a diverse dataset to ensure inclusivity and accuracy, aligning with the company’s Diversity, Equity, and Inclusion (DEI) standards. It also filters out photos that violate guidelines, such as nudes. The goal is to save users time and reduce uncertainty when choosing profile pictures. A recent Tinder survey revealed that 68% of participants found an AI photo selection feature helpful, and 52% had trouble selecting profile images. The TechCrunch article was published on July 17, 2024, and is available at techcrunch.com/2024/07/17/tinder-ai-photo-selection-feature-launches/.
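Tinder has not disclosed its implementation, but the pipeline as described – match candidate photos against a reference selfie, then rank the matches by image quality – can be sketched in a few lines. Everything here is hypothetical: the embeddings and quality scores stand in for real face-recognition and lighting/composition models, and the threshold value is an assumption.

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0


def select_photos(selfie_embedding, candidates, n=10, face_threshold=0.8):
    """Keep photos whose (precomputed) face embedding matches the
    reference selfie, then return the n highest-quality ones.
    Illustrative sketch only, not Tinder's actual method."""
    matches = [
        c for c in candidates
        if cosine(selfie_embedding, c["embedding"]) >= face_threshold
    ]
    matches.sort(key=lambda c: c["quality"], reverse=True)
    return matches[:n]
```

The two-stage design mirrors the article’s description: identity filtering first (so strangers’ photos in the camera roll are excluded), aesthetic ranking second.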
Miss AI 2024
According to the organizers, the World AI Creator Awards (WAICA) is a first-of-its-kind global awards program dedicated to recognizing the achievements of AI creators around the world. The first installment of the WAICAs was Miss AI, where traditional beauty pageants intersected with the world of AI creators. Contestants were judged on their beauty, technical implementation, and social impact. The winners of Miss AI 2024 are Kenza Layli (Morocco), Lalina (France), and Olivia C (Portugal). They are already successful on Instagram, showing themselves in different poses and clothes. The beauty pageant attracts worldwide attention, but has also drawn criticism. One objection is that it turns women into objects, no differently than classical pageants do. Another is that the first place could strengthen religious conservatism, because Kenza Layli is an artificial woman who covers her virtual hair with a digital hijab (Image: Ideogram).
Extended Deadline for ICSR 2024
The deadline for the International Conference on Social Robotics 2024 (ICSR 2024) has been extended. Experts in social robotics and related fields have until July 12 to submit their papers. The prestigious event was last held in Florence (2022) and Qatar (2023). Now it enters its next round. The 16th edition will bring together researchers and practitioners working on human-robot interaction and the integration of social robots into our society. The title of the conference includes the addition “AI”, a clarification that distinguishes it from two other events bearing the name ICSR in 2024. ICSR’24 (ICSR + AI) will take place as a face-to-face conference in Odense, Denmark, from October 23 to 26, 2024. The theme of this year’s conference is “Empowering Humanity: The role of social and collaborative robotics in shaping our future”. The topics of the Call for Papers include “Collaborative robots in service applications (in construction, agriculture, etc.)”, “Human-robot interaction and collaboration”, “Affective and cognitive sciences for socially interactive robots”, and “Context awareness, expectation, and intention understanding”. The general chairs are Oskar Palinko and Leon Bodenhagen, both of the University of Southern Denmark. More information is available at icsr2024.dk.
Please Be Kind to Boo Boo
Prof. Dr. Oliver Bendel’s privately funded Social Robots Lab has been home to the “Cupboo AI Robotic Pet” since July 2024. It goes by the name Boo Boo (also BooBoo). It was brought to the lab by power user and Cupboo ambassador Julia Rehling. Genmoor writes on its website: “Boo Boo is not just a toy. It is also the king of the planet Lonely …” (Genmoor website). According to the manufacturer, Boo Boo’s movements are made possible by a complex control system. “Therefore please be kind to Boo Boo, it is not an ordinary robotic pet. It could give you different responds according to your interaction.” (Genmoor website) Other robots and figures in the Social Robots Lab are Unitree Go2, Alpha Mini, Cozmo, Vector, Furby, Tamagotchi, Hugvie, and HUGGIE. NAO and Pepper visit from time to time. The Genmoor Group claims to be an “exclusive group for daring futurists who love tech-psychology” (Genmoor website). The company was founded in Hangzhou (China) in 2020.