The Robotics Innovation Center (RIC) at the German Research Center for Artificial Intelligence (DFKI) in Bremen wants to clear the seabed of discarded ammunition in the North Sea and Baltic Sea. This was reported by the online magazine Golem on 14 June 2023. The researchers are using the autonomous underwater vehicle (AUV) Cuttlefish, developed at DFKI, as a test platform. According to Golem, the robot has been equipped with two deep-sea-capable gripper systems, which are designed to enable flexible handling of objects under water, even of difficult objects such as explosive devices. The AI-based control system allows the robot to change its buoyancy and center of gravity during the dive. According to the online magazine, the AUV carries numerous sensors, such as cameras, sonars, laser scanners, and magnetometers, which are intended to let it approach an object without colliding with it. The system will certainly be effective – whether it is efficient remains to be seen.
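How Cuttlefish actually fuses its sensor data and regulates its approach is not disclosed in the article. Purely as an illustration of the general pattern – redundant range readings are combined, and the approach speed is throttled near the target – here is a minimal Python sketch; all names, values, and thresholds are invented.

```python
# Illustrative only: the DFKI control stack is not public. A generic way an
# AUV might fuse redundant range sensors and slow its approach to an object.
from statistics import median

STANDOFF_M = 1.5   # invented safety distance to the object, in meters
MAX_SPEED = 0.5    # invented maximum approach speed, in m/s

def fuse_ranges(sonar_m: float, laser_m: float, camera_m: float) -> float:
    """Median of redundant range estimates tolerates one faulty sensor."""
    return median([sonar_m, laser_m, camera_m])

def approach_speed(distance_m: float) -> float:
    """Scale speed with remaining clearance; stop at the standoff distance."""
    if distance_m <= STANDOFF_M:
        return 0.0
    return min(MAX_SPEED, 0.2 * (distance_m - STANDOFF_M))

print(approach_speed(fuse_ranges(4.8, 5.1, 5.0)))  # 0.5 -> full speed
print(approach_speed(fuse_ranges(1.6, 1.4, 1.5)))  # 0.0 -> hold position
```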
Self-driving Cars Stopped by Fog
“Five self-driving vehicles blocked traffic early Tuesday morning in the middle of a residential street in San Francisco’s Balboa Terrace neighborhood, apparently waylaid by fog that draped the southwestern corner of the city.” (San Francisco Chronicle, 11 April 2023) The fact that fog is a problem for Waymo’s vehicles has been known to the company for some time. A blog post from 2021 states: “Fog is finicky – it comes in a range of densities, it can be patchy, and can affect a vehicle’s sensors differently.” (Blog Waymo, 15 November 2021) Against this background, it is surprising that the vehicles are allowed to roll through the city unaccompanied, especially since Frisco – a name that goes back to sailors – is very often shrouded in fog. But fog is not the only challenge for the sensors of self-driving cars. A thesis commissioned and supervised by Prof. Dr. Oliver Bendel presented dozens of phenomena and methods that can mislead the sensors of self-driving cars. The San Francisco Chronicle article “Waymo says dense S.F. fog brought 5 vehicles to a halt on Balboa Terrace street” can be accessed at www.sfchronicle.com/bayarea/article/san-francisco-waymo-stopped-in-street-17890821.php.
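Waymo has not published the logic by which its vehicles decide to stop; the following Python sketch is only a hypothetical illustration of what such a degraded-perception fallback could look like, with invented thresholds.

```python
# Hypothetical fallback policy; Waymo's actual behavior logic is not public.
def driving_mode(perception_confidence: float, fog_density: float) -> str:
    """Map perception confidence and estimated fog density to a driving mode."""
    if perception_confidence < 0.3 or fog_density > 0.8:
        return "pull_over_and_stop"  # roughly what the five vehicles did
    if perception_confidence < 0.6 or fog_density > 0.5:
        return "reduced_speed"
    return "nominal"

print(driving_mode(perception_confidence=0.25, fog_density=0.9))
```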
Bar Robots for Well-being of Guests
From March 27 to 29, 2023, the AAAI 2023 Spring Symposia will feature the symposium “Socially Responsible AI for Well-being” by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The venue is usually Stanford University; for staffing reasons, the conference will be held this year at the Hyatt Regency San Francisco Airport. On March 28, Prof. Dr. Oliver Bendel and Lea Peier will present their paper “How Can Bar Robots Enhance the Well-being of Guests?”. From the abstract: “This paper addresses the question of how bar robots can contribute to the well-being of guests. It first develops the basics of service robots and social robots. It gives a brief overview of which gastronomy robots are on the market. It then presents examples of bar robots and describes two models used in Switzerland. A research project at the School of Business FHNW collected empirical data on them, which is used for this article. The authors then discuss how the robots could be improved to increase the well-being of customers and guests and better address their individual wishes and requirements. Artificial intelligence can play an important role in this. Finally, ethical and social problems in the use of bar robots are discussed and possible solutions are suggested to counter these.” More information via aaai.org/conference/spring-symposia/sss23/.
GPT-4 as Multimodal Model
GPT-4 was launched by OpenAI on March 14, 2023. “GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.” (Website OpenAI) On its website, the company explains the multimodal options in more detail: “GPT-4 can accept a prompt of text and images, which – parallel to the text-only setting – lets the user specify any vision or language task. Specifically, it generates text outputs (natural language, code, etc.) given inputs consisting of interspersed text and images.” (Website OpenAI) The example that OpenAI gives is impressive. An image with multiple panels was uploaded. The prompt is: “What is funny about this image? Describe it panel by panel”. This is exactly what GPT-4 does and then comes to the conclusion: “The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port.” (Website OpenAI) The technical report is available via cdn.openai.com/papers/gpt-4.pdf.
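As a minimal sketch of such an interleaved text-and-image prompt, the following Python snippet uses the OpenAI SDK’s chat interface. Image input was not generally available via the API at launch, so the model name and the placeholder image URL are assumptions.

```python
# Sketch of a text-plus-image prompt modeled on OpenAI's panel example.
# Assumes the OpenAI Python SDK (v1) and a multimodal-capable model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed model name with image support
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What is funny about this image? Describe it panel by panel."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/panels.jpg"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)
```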
Bard Comes into the World
Sundar Pichai, the CEO of Google and Alphabet, announced the company’s answer to ChatGPT in a blog post dated February 6, 2023. According to him, Bard is an experimental conversational AI service powered by LaMDA. It has been opened to trusted testers and will be made available to the public in the coming weeks. “Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.” (Sundar Pichai 2023) In recent weeks, Google had come under heavy pressure from OpenAI’s ChatGPT. It was clear that the company had to present a comparable application based on LaMDA as soon as possible. In addition, Baidu plans to launch its Ernie Bot, yet another competing product. More information via blog.google/technology/ai/bard-google-ai-search-updates/.
AI for Well-being
As part of the AAAI 2023 Spring Symposia in San Francisco, the symposium “Socially Responsible AI for Well-being” is organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The AAAI website states: “For our happiness, AI is not enough to be productive in exponential growth or economic/financial supremacies but should be socially responsible from the viewpoint of fairness, transparency, accountability, reliability, safety, privacy, and security. For example, AI diagnosis system should provide responsible results (e.g., a high-accuracy of diagnostics result with an understandable explanation) but the results should be socially accepted (e.g., data for AI (machine learning) should not be biased (i.e., the amount of data for learning should be equal among races and/or locations). Like this example, a decision of AI affects our well-being, which suggests the importance of discussing ‘What is socially responsible?’ in several potential situations of well-being in the coming AI age.” (Website AAAI) According to the organizers, the first perspective, “(Individually) Responsible AI”, aims to clarify what kinds of mechanisms or issues should be taken into consideration when designing Responsible AI for well-being. The second perspective, “Socially Responsible AI”, aims to clarify what kinds of mechanisms or issues should be taken into consideration when implementing social aspects in Responsible AI for well-being. More information via www.aaai.org/Symposia/Spring/sss23.php#ss09.
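The quoted example – that the amount of training data should be equal among groups – can at least be checked mechanically. The following Python sketch, with invented data, group names, and tolerance, flags groups whose share of the data deviates from parity.

```python
# Toy check of the quoted fairness example: is the training data balanced
# across groups? All data and thresholds here are invented.
from collections import Counter

def unbalanced_groups(group_labels, tolerance=0.2):
    """Return groups whose share deviates from parity by more than tolerance."""
    counts = Counter(group_labels)
    parity = 1.0 / len(counts)
    shares = {g: n / len(group_labels) for g, n in counts.items()}
    return {g: s for g, s in shares.items() if abs(s - parity) > tolerance * parity}

labels = ["region_a"] * 700 + ["region_b"] * 200 + ["region_c"] * 100
print(unbalanced_groups(labels))  # all three regions deviate from parity here
```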
AI-based Q-bear
Why is your baby crying? And what if artificial intelligence (AI) could answer that question for you? “If there was a flat little orb the size of a dessert plate that could tell you exactly what your baby needs in that moment? That’s what Q-bear is trying to do.” (Mashable, January 3, 2023) This is what the tech magazine Mashable wrote about CES 2023, where the Taiwanese company qbaby.ai demonstrated its AI-powered tool, which aims to help parents respond to their baby’s needs in a more targeted way. “The soft silicone-covered device, which can be fitted in a crib or stroller, uses Q-bear’s patented tech to analyze a baby’s cries to determine one of four needs from its ‘discomfort index’: hunger, a dirty diaper, sleepiness, and need for comfort. Q-bear’s translation comes within 10 seconds of a baby crying, and the company says it will become more accurate the more you use the device.” (Mashable, January 3, 2023) Whether the tool really works remains to be seen – presumably, baby cries can be interpreted more easily than animal languages. Perhaps the use of the tool is ultimately counterproductive, because parents forget to trust their own intuition. The article “CES 2023: The device that tells you why your baby is crying” can be accessed via mashable.com/article/ces-2023-why-is-my-baby-crying.
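Q-bear’s technology is patented and not publicly documented. Purely as a generic illustration, a four-class cry classifier could be structured as in the following Python sketch: extract audio features, then predict one of the four needs. The library calls (librosa, scikit-learn) are real; the training data here is synthetic and the feature choice is an assumption.

```python
# Generic sketch of a cry classifier; not Q-bear's actual method.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

NEEDS = ["hunger", "dirty diaper", "sleepiness", "need for comfort"]

def cry_features(wav_path: str) -> np.ndarray:
    """Mean MFCCs over a ~10-second recording as a fixed-length feature vector."""
    audio, sr = librosa.load(wav_path, sr=16000, duration=10.0)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20).mean(axis=1)

# Synthetic stand-in for a labeled corpus of cry recordings.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 20))
y_train = rng.integers(0, len(NEEDS), size=200)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
new_cry = rng.normal(size=(1, 20))      # would be cry_features("cry.wav")
print(NEEDS[clf.predict(new_cry)[0]])   # one of the four needs
```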
AAAI 2023 Spring Symposia in San Francisco
The Association for the Advancement of Artificial Intelligence (AAAI) presents the AAAI 2023 Spring Symposia, to be held at the Hyatt Regency, San Francisco Airport, California, March 27-29. According to the organizers, Stanford University cannot act as host this time because of insufficient staff. Symposia of particular interest from a philosophical point of view are “AI Climate Tipping-Point Discovery”, “AI Trustworthiness Assessment”, “Computational Approaches to Scientific Discovery”, “Evaluation and Design of Generalist Systems (EDGeS): Challenges and methods for assessing the new generation of AI”, and “Socially Responsible AI for Well-being”. According to AAAI, the symposia generally have 40 to 75 participants each. “Participation will be open to active participants as well as other interested individuals on a first-come, first-served basis.” (Website AAAI) Over the past decade, the conference has become one of the most important venues in the world for discussions on robot ethics, machine ethics, and AI ethics. From 2024, it will again be held at Stanford’s History Corner. Further information via www.aaai.org/Symposia/Spring/sss23.php.
Towards Exploring Perceptions of Dogs
The ACI2022 conference continued on the afternoon of December 7, 2022. “Paper Session 2: Recognising Animals & Animal Behaviour” began with a presentation by Anna Zamansky (University of Haifa) entitled “How Can Technology Support Dog Shelters in Behavioral Assessment: an Exploratory Study”. Her next talk was also about dogs: “Do AI Models ‘Like’ Black Dogs? Towards Exploring Perceptions of Dogs with Vision-Language Models”. Among other things, she went into detail about OpenAI’s CLIP model, a neural network that learns visual concepts from natural language supervision. She raised the question: “How can we use CLIP to investigate adoptability?” Hugo Jair Escalante (INAOE) then gave a presentation on “Dog emotion recognition from images in the wild: DEBIw dataset and first results”. Emotion recognition based on face recognition is still in its infancy with respect to animals, but impressive progress is already being made. The last presentation of the afternoon before the coffee break was “Detecting Canine Mastication: A Wearable Approach” by Charles Ramey (Georgia Institute of Technology). He raised the question: “Can automatic chewing detection measure how detection canines are coping with stress?” More information on the conference via www.aciconf.org.
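Zamansky’s question of how CLIP could be used to investigate adoptability can be illustrated with a short sketch. The following Python snippet scores a dog photo against two captions using the publicly released CLIP checkpoint via Hugging Face; the prompts and the image file are invented examples, not those of the study.

```python
# Scoring a dog photo against text prompts with OpenAI's CLIP (via Hugging
# Face transformers). Prompts and image path are illustrative only.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of an adoptable dog", "a photo of a scary dog"]
image = Image.open("shelter_dog.jpg")  # placeholder image

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)  # image-text similarity
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{prompt}: {p:.2f}")
```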
Proceedings of “How Fair is Fair? Achieving Wellbeing AI”
On November 17, 2022, the proceedings of “How Fair is Fair? Achieving Wellbeing AI” (organizers: Takashi Kido and Keiki Takadama) were published on CEUR-WS. The AAAI 2022 Spring Symposium was held at Stanford University from March 21 to 23, 2022. The electronic volume contains seven full papers of six to eight pages: “Should Social Robots in Retail Manipulate Customers?” by Oliver Bendel and Liliana Margarida Dos Santos Alves (3rd place of the Best Presentation Awards), “The SPACE THEA Project” by Martin Spathelf and Oliver Bendel (2nd place of the Best Presentation Awards), “Monitoring and Maintaining Student Online Classroom Participation Using Cobots, Edge Intelligence, Virtual Reality, and Artificial Ethnographies” by Ana Djuric, Meina Zhu, Weisong Shi, Thomas Palazzolo, and Robert G. Reynolds, “AI Agents for Facilitating Social Interactions and Wellbeing” by Hiro Taiyo Hamada and Ryota Kanai (1st place of the Best Presentation Awards), “Sense and Sensitivity: Knowledge Graphs as Training Data for Processing Cognitive Bias, Context and Information Not Uttered in Spoken Interaction” by Christina Alexandris, “Fairness-aware Naive Bayes Classifier for Data with Multiple Sensitive Features” by Stelios Boulitsakis-Logothetis, and “A Thermal Environment that Promotes Efficient Napping” by Miki Nakai, Tomoyoshi Ashikaga, Takahiro Ohga, and Keiki Takadama. In addition, there are several short papers and extended abstracts. The proceedings can be accessed via ceur-ws.org/Vol-3276/.