On the first day of Robophilosophy 2024, Wendell Wallach, one of the fathers of machine ethics, gave a keynote speech entitled “Re-envisioning Ethics: From Moral Machine to Extensive Regulation”. From the abstract: “Have we been underestimating the socio-technical challenges posed by ro(bot)s – physical systems and virtual bots? Many of the complexities inherent in managing intelligent systems can not be adequately met by scientific innovation, existing ethical constraints, or weak regulations forged by legislatures under the capture of the AI oligopoly. In spite of naive future projections, the science we have, and are likely to have in the near future, will not produce AI systems capable of making even satisfactory choices in complex situations where uncertainty reigns, multiple values converge, and the information available is inadequate to project meaningful consequences for various courses of action. AI will pose safety and security risks far beyond those being addressed by the pittance of investment presently directed to build trustworthy systems. Scientific humility is needed. Ethics must be reenvisioned and empowered to work through the plethora of socio-technical obstacles and trials ahead. A vast infrastructure to ensure AI safety will be required.” (Website Robophilosophy 2024) In his talk, Wendell Wallach went far beyond these hints and gave an overview of the advances and setbacks in machine ethics and AI ethics in recent decades.
Start of Robophilosophy 2024
On August 20, 2024, Robophilosophy 2024 was opened with words of welcome from Maja Horst, Dean of the Faculty of Arts at Aarhus University, and Johanna Seibt, Professor of the School of Culture and Society at Aarhus University. The website says: “The international research conference RP2024 will discuss the questions that really matter in view of the new technological potential of social robotics. In over 100 research talks, RP2024 will address concrete and deep issues that reach far beyond safety and privacy concerns into the conceptual and normative fabric of our societies and individual self-comprehension.” (Website Robophilosophy 2024) The first keynote on the first day of the conference was given by Wendell Wallach, one of the world’s best-known machine ethicists. With the book “Moral Machines” (2009), co-authored with Colin Allen, he laid the foundation for a discipline that had been taking shape in science fiction and in science for decades. This was followed in 2011 by “Machine Ethics” by Michael Anderson and Susan Leigh Anderson. In addition to machine ethics, Robophilosophy is dedicated to robot ethics and other interesting perspectives on social robots.
A Cobot as Conductor of a Symphony
Cobots that dance with humans have been around for a long time. In 2016, the audience at Südpol Luzern witnessed dance and robot history being written by Huang Yi, a choreographer from Taiwan. Cobots that set the pace for humans, on the other hand, are not yet the order of the day. The Dresden Symphony Orchestra is about to perform the “Roboter.Sinfonie”. Partway through the performance, conductor Michael Helmrath will hand over to MAiRA Pro S, a product from NEURA Robotics. According to Deutschlandfunk, the machine’s three arms will be able to guide the orchestra, which is divided into groups, through the most complex passages independently of each other. This will break completely new musical ground. According to the Dresden Symphony Orchestra’s calendar, the concerts will take place on October 12 and 13, 2024 at the Europäisches Zentrum der Künste Hellerau (Image: NEURA Robotics).
Cow Whisperer, Horse Whisperer, and Dog Whisperer
On August 5, 2024, the final presentation for the project “The Animal Whisperer” took place at the FHNW School of Business. It was initiated by Prof. Dr. Oliver Bendel, who has been working on animal-computer interaction and animal-machine interaction for many years. Nick Zbinden, a budding business informatics specialist, was recruited as a project collaborator. Starting in March 2024, he developed three applications based on GPT-4o: the Cow Whisperer, the Horse Whisperer, and the Dog Whisperer. They can be used to analyze the body language, behavior, and environment of cows, horses, and dogs. The aim is to avert danger to humans and animals. For example, a hiker can receive a recommendation on their smartphone not to cross a pasture if a mother cow and her calves are present. All they have to do is call up the application and take photos of the surroundings. The three apps are now available as prototypes. With the help of prompt engineering, they have been given extensive knowledge and skills. Above all, self-created and labeled photos were used. In the majority of cases, the apps correctly describe the animals’ body language and behavior. Their recommendations for human behavior are also adequate. The project team summarized the results in a paper and submitted it to an international conference (Image: Ideogram).
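How such a GPT-4o-based whisperer app might be wired up can be illustrated with a minimal sketch. The system prompt, function name, and file name below are hypothetical; the actual prompts and architecture of the three apps are not described in the post.

```python
# Minimal sketch, assuming an app like the Cow Whisperer queries GPT-4o with a
# pasture photo. Prompt wording and names are illustrative assumptions only.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You assess the body language, behavior, and environment of cows in photos "
    "and give hikers a cautious recommendation on whether to cross a pasture."
)

def assess_pasture(image_path: str) -> str:
    # Encode the photo as a data URL so it can be passed to the vision model
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Are there mother cows with calves? Is it safe to cross?"},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            },
        ],
    )
    return response.choices[0].message.content

print(assess_pasture("pasture.jpg"))  # hypothetical photo taken by the hiker
```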
An AI-generated Teen Collection
Spanish fashion chain Mango has launched an advertising campaign created using AI models. First, all the clothes were photographed; then an AI model was trained to place the garments on artificially generated models. The images were then retouched and edited. This was reported by Golem in an article dated July 30, 2024. Digital models are not new. They have been used for decades in various contexts, from computer games to mobile phone applications. Cameron-James Wilson founded an agency for digital models in London in 2019. He is the creator of the digital supermodel Shudu. Other well-known digital models are Noonoouri and Lil Miquela. Since the triumph of generative AI, digital models have moved into another league. There are now beauty pageants for AI-generated models. The winners of Miss AI 2024 are Kenza Layli (Morocco), Lalina (France), and Olivia C (Portugal). They are already successfully represented on Instagram, showing themselves in various poses and dresses. According to Golem, Mango plans to sell the advertised collection in 95 countries. With this step, the company aims to reduce the costs that would otherwise be incurred for photographers, models, and the entire production process (Image: DALL-E 3).
Video Data for the Training of AI Models
According to Hans Jørgen Wiberg, the founder of Be My Eyes, the app for blind and visually impaired people plans to make video data from conversations between blind users and sighted volunteers available for training AI models. This was revealed in an email he sent to all users. The initiative takes place under a strict privacy policy that gives users the option to opt out of data sharing. Photos and their AI descriptions are not used for AI training in order to avoid perpetuating existing prejudices. According to Wiberg, AI models should reflect the actual experiences and abilities of blind people. Bryan Bashin, Vice Chairman of Be My Eyes, emphasizes that blind testers have improved the OpenAI models, which for him proves how important their participation is. Be My Eyes caused a sensation in 2023 with the new Be My AI feature. Prof. Dr. Oliver Bendel wrote the first paper on this topic in November 2023, submitted it in December 2023, and presented it at the AAAI Spring Symposia at Stanford University in March 2024 (Image: DALL-E 3).
Robot Dog NEO Interferes With IoT Devices
According to a report by 404 Media on 22 July 2024, the Department of Homeland Security (DHS) has acquired and modified a dog-like robot called NEO. This robot, equipped with an antenna array, can overload home networks to disable Internet of Things (IoT) devices during law enforcement operations. Benjamine Huffman, director of the Federal Law Enforcement Training Centers (FLETC), revealed the details at the 2024 Border Security Expo. NEO, a modified version of Ghost Robotics’ Vision 60 quadruped unmanned ground vehicle (Q-UGV), helps disable potentially dangerous smart home devices that could be used as booby traps. This development follows an incident in 2021 in which a suspect used a doorbell camera to spy on FBI agents and shot at them, killing two of them. The DHS has also created the ‘FLETC Smart House’ to train officers on how to deal with IoT devices that could be used against them. Robot dogs are becoming increasingly popular with homeland security agencies and police forces. Boston Dynamics’ Spot, for example, is used on patrols in New York City and in Germany.
Tinder’s AI Photo Selector
Tinder has officially launched its “Photo Selector” feature, which uses AI to help users choose the best photos for their dating profiles. This was reported by TechCrunch in the article “Tinder’s AI Photo Selector automatically picks the best photos for your dating profile” by Lauren Forristal. The feature, now available to all users in the U.S. and set to roll out internationally later this summer, leverages facial detection technology. Users upload a selfie, and the AI creates a unique facial geometry to identify their face and select photos from their camera roll. The feature curates a collection of 10 selfies it believes will perform well based on Tinder’s insights on good profile images, focusing on aspects like lighting and composition. Tinder’s AI is trained on a diverse dataset to ensure inclusivity and accuracy, aligning with the company’s Diversity, Equity, and Inclusion (DEI) standards. It also filters out photos that violate guidelines, such as nudes. The goal is to save users time and reduce uncertainty when choosing profile pictures. A recent Tinder survey revealed that 68% of participants found an AI photo selection feature helpful, and 52% had trouble selecting profile images. The TechCrunch article was published on 17 July 2024 and is available at techcrunch.com/2024/07/17/tinder-ai-photo-selection-feature-launches/.
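The workflow TechCrunch describes (match camera-roll photos against a reference selfie, then rank the matches by image quality) can be illustrated with a short sketch. This is not Tinder’s implementation; the library choice, the matching threshold, and the brightness heuristic are assumptions for demonstration, using the open-source face_recognition package.

```python
# Illustrative sketch of the general approach: filter camera-roll photos by face
# match against a reference selfie, then rank them with a toy quality heuristic.
# Threshold and scoring are assumptions, not Tinder's actual method.
import face_recognition
import numpy as np
from PIL import Image

def select_best_photos(selfie_path: str, candidate_paths: list[str], top_k: int = 10):
    # Encode the reference face from the user's selfie
    selfie = face_recognition.load_image_file(selfie_path)
    reference = face_recognition.face_encodings(selfie)[0]

    scored = []
    for path in candidate_paths:
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if not encodings:
            continue  # no face found in this photo
        # Keep the photo only if one of its faces plausibly matches the reference
        distance = min(face_recognition.face_distance(encodings, reference))
        if distance > 0.6:  # common matching threshold for this library
            continue
        # Toy quality signal: prefer well-lit photos (mean brightness in [0, 1])
        brightness = np.asarray(Image.open(path).convert("L")).mean() / 255.0
        scored.append((brightness - distance, path))

    scored.sort(reverse=True)
    return [path for _, path in scored[:top_k]]
```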
Deadline for ICSR 2024 Extended Again
The deadline for the International Conference on Social Robotics 2024 (ICSR 2024) has been extended again. Experts in social robotics and related fields have until July 19 to submit their full papers. The prestigious event was last held in Florence (2022) and Qatar (2023). Now it enters its next round. The 16th edition will bring together researchers and practitioners working on human-robot interaction and the integration of social robots into our society. The conference title includes the addition “AI”, a clarification and demarcation made necessary by the fact that two further events will be held under the ICSR name in 2024. ICSR’24 (ICSR + AI) will take place as a face-to-face conference in Odense, Denmark, from 23 to 26 October 2024. The theme of this year’s conference is “Empowering Humanity: The role of social and collaborative robotics in shaping our future”. The topics of the Call for Papers include “collaborative robots in service applications (in construction, agriculture, etc.)”, “Human-robot interaction and collaboration”, “Affective and cognitive sciences for socially interactive robots”, and “Context awareness, expectation, and intention understanding”. The general chairs are Oskar Palinko and Leon Bodenhagen, both of the University of Southern Denmark. More information is available at icsr2024.dk (Photo: Jacob Christensen).
Miss AI 2024
According to the organizers, the World AI Creator Awards (WAICA) is a first-of-its-kind global awards program dedicated to recognizing the achievements of AI creators around the world. The first installment of the WAICAs was Miss AI, where traditional beauty pageants intersected with the world of AI creators. Contestants were judged on their beauty, technical implementation, and social impact. The winners of Miss AI 2024 are Kenza Layli (Morocco), Lalina (France), and Olivia C (Portugal). They are already successful on Instagram, showing themselves in different poses and clothes. The beauty pageant has attracted worldwide attention, but it has also been criticized. One objection is that women are turned into objects, no different from the classic pageants. Another is that the first-place win could strengthen religious conservatism, because Kenza Layli is an artificial woman who covers her virtual hair with a digital hijab (Image: Ideogram).