From March 27 to 29, 2023, the AAAI 2023 Spring Symposia will feature the symposium “Socially Responsible AI for Well-being” by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The venue is usually Stanford University; for staffing reasons, this year the conference will be held at the Hyatt Regency in San Francisco. On March 28, Prof. Dr. Oliver Bendel and Lea Peier will present their paper “How Can Bar Robots Enhance the Well-being of Guests?”. From the abstract: “This paper addresses the question of how bar robots can contribute to the well-being of guests. It first develops the basics of service robots and social robots. It gives a brief overview of which gastronomy robots are on the market. It then presents examples of bar robots and describes two models used in Switzerland. A research project at the School of Business FHNW collected empirical data on them, which is used for this article. The authors then discuss how the robots could be improved to increase the well-being of customers and guests and better address their individual wishes and requirements. Artificial intelligence can play an important role in this. Finally, ethical and social problems in the use of bar robots are discussed and possible solutions are suggested to counter these.” More information is available via aaai.org/conference/spring-symposia/sss23/.
GPT-4 as Multimodal Model
GPT-4 was launched by OpenAI on March 14, 2023. “GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.” (Website OpenAI) On its website, the company explains the multimodal options in more detail: “GPT-4 can accept a prompt of text and images, which – parallel to the text-only setting – lets the user specify any vision or language task. Specifically, it generates text outputs (natural language, code, etc.) given inputs consisting of interspersed text and images.” (Website OpenAI) The example that OpenAI gives is impressive. An image with multiple panels was uploaded. The prompt is: “What is funny about this image? Describe it panel by panel”. This is exactly what GPT-4 does and then comes to the conclusion: “The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port.” (Website OpenAI) The technical report is available via cdn.openai.com/papers/gpt-4.pdf.
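The interspersed text-and-image input that OpenAI describes can be pictured as a chat-style request whose message content mixes text parts and image parts. The following is a minimal sketch under the assumption of a chat-completions-style payload layout; the model identifier and image URL are placeholders, and no request is actually sent:

```python
# Illustrative sketch only: a request body mixing text and an image
# reference, in the spirit of the multimodal prompting OpenAI describes.
# The field layout and model name are assumptions for illustration,
# not the definitive GPT-4 interface.

def build_multimodal_prompt(question: str, image_url: str) -> dict:
    """Compose a chat-style request body with interspersed text and image."""
    return {
        "model": "gpt-4",  # hypothetical model identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_multimodal_prompt(
    "What is funny about this image? Describe it panel by panel.",
    "https://example.com/vga-charger.jpg",  # placeholder image URL
)
print(payload["messages"][0]["content"][0]["text"])
```

The point of the sketch is the content list: text and image elements can be freely interleaved, which is what lets the user “specify any vision or language task” in a single prompt.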
Bard Comes into the World
Sundar Pichai, the CEO of Google and Alphabet, announced Google’s answer to ChatGPT in a blog post dated February 6, 2023. According to him, Bard is an experimental conversational AI service powered by LaMDA. It has been opened to trusted testers and will be made available to the public in the coming weeks. “Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.” (Sundar Pichai 2023) In recent weeks, Google had come under heavy pressure from OpenAI’s ChatGPT. It was clear that the company had to present a comparable application based on LaMDA as soon as possible. In addition, Baidu plans to launch its Ernie Bot, yet another competing product. More information via blog.google/technology/ai/bard-google-ai-search-updates/.
AI for Well-being
As part of the AAAI 2023 Spring Symposia in San Francisco, the symposium “Socially Responsible AI for Well-being” is organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The AAAI website states: “For our happiness, AI is not enough to be productive in exponential growth or economic/financial supremacies but should be socially responsible from the viewpoint of fairness, transparency, accountability, reliability, safety, privacy, and security. For example, AI diagnosis system should provide responsible results (e.g., a high-accuracy of diagnostics result with an understandable explanation) but the results should be socially accepted (e.g., data for AI (machine learning) should not be biased (i.e., the amount of data for learning should be equal among races and/or locations). Like this example, a decision of AI affects our well-being, which suggests the importance of discussing ‘What is socially responsible?’ in several potential situations of well-being in the coming AI age.” (Website AAAI) According to the organizers, the first perspective is “(Individually) Responsible AI”, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to design Responsible AI for well-being. The second perspective is “Socially Responsible AI”, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to implement social aspects in Responsible AI for well-being. More information via www.aaai.org/Symposia/Spring/sss23.php#ss09.
Proceedings of “How Fair is Fair? Achieving Wellbeing AI”
On November 17, 2022, the proceedings of “How Fair is Fair? Achieving Wellbeing AI” (organizers: Takashi Kido and Keiki Takadama) were published on CEUR-WS. The AAAI 2022 Spring Symposium was held at Stanford University from March 21 to 23, 2022. There are seven full papers of 6–8 pages in the electronic volume: “Should Social Robots in Retail Manipulate Customers?” by Oliver Bendel and Liliana Margarida Dos Santos Alves (3rd place of the Best Presentation Awards), “The SPACE THEA Project” by Martin Spathelf and Oliver Bendel (2nd place of the Best Presentation Awards), “Monitoring and Maintaining Student Online Classroom Participation Using Cobots, Edge Intelligence, Virtual Reality, and Artificial Ethnographies” by Ana Djuric, Meina Zhu, Weisong Shi, Thomas Palazzolo, and Robert G. Reynolds, “AI Agents for Facilitating Social Interactions and Wellbeing” by Hiro Taiyo Hamada and Ryota Kanai (1st place of the Best Presentation Awards), “Sense and Sensitivity: Knowledge Graphs as Training Data for Processing Cognitive Bias, Context and Information Not Uttered in Spoken Interaction” by Christina Alexandris, “Fairness-aware Naive Bayes Classifier for Data with Multiple Sensitive Features” by Stelios Boulitsakis-Logothetis, and “A Thermal Environment that Promotes Efficient Napping” by Miki Nakai, Tomoyoshi Ashikaga, Takahiro Ohga, and Keiki Takadama. In addition, there are several short papers and extended abstracts. The proceedings can be accessed via ceur-ws.org/Vol-3276/.
From WALL·E to DALL·E
DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language. It was announced by OpenAI in April 2022. The name is a portmanteau of “WALL·E” and “Salvador Dalí”. The website openai.com says more about the program: “DALL·E 2 can create original, realistic images and art from a text description. It can combine concepts, attributes, and styles.” (Website openai.com) Moreover, it is able to “make realistic edits to existing images from a natural language caption” and to “add and remove elements while taking shadows, reflections, and textures into account” (Website openai.com). Last but not least, it “can take an image and create different variations of it inspired by the original” (Website openai.com). The latter form of use is shown by variations of the famous painting “Girl with a Pearl Earring” by Johannes Vermeer. The website says about the principle of the program: “DALL·E 2 has learned the relationship between images and the text used to describe them. It uses a process called ‘diffusion,’ which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.” (Website openai.com) DALL·E mini is a slimmed-down version of the powerful program, with which you can gain a first insight. Overall, this is a fascinating and valuable project. From the perspective of information ethics and the philosophy of technology, many questions arise.
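The “diffusion” principle quoted above can be illustrated with a toy loop: start from a pattern of random dots and repeatedly nudge it toward a target. This is only a pedagogical sketch; real diffusion models use a learned neural denoiser rather than a known target, which is swapped in here purely to show the iterative refinement:

```python
import random

# Toy illustration of the iterative idea behind 'diffusion': begin with
# random noise and gradually alter the pattern toward an image. The
# 'denoiser' here is simply a blend toward a known target list, standing
# in for the learned model.

def toy_reverse_diffusion(target, steps=50, seed=0):
    rng = random.Random(seed)
    x = [rng.random() for _ in target]      # start as pure random noise
    for t in range(steps):
        alpha = 1.0 / (steps - t)           # blend more strongly near the end
        x = [(1 - alpha) * xi + alpha * ti for xi, ti in zip(x, target)]
    return x

target = [0.0, 0.25, 0.5, 0.75, 1.0]        # stand-in for image pixel values
result = toy_reverse_diffusion(target)
print([round(v, 3) for v in result])        # converges to the target pattern
```

The essential point survives the simplification: generation is not a single forward pass but many small denoising steps from randomness toward structure.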
A New Language AI
“Meta’s AI lab has created a massive new language model that shares both the remarkable abilities and the harmful flaws of OpenAI’s pioneering neural network GPT-3. And in an unprecedented move for Big Tech, it is giving it away to researchers – together with details about how it was built and trained.” (MIT Technology Review, May 3, 2022) GPT-3 (Generative Pre-trained Transformer 3) is an autoregressive language model that uses deep learning to generate natural language. Not only web-based systems but also voice assistants and social robots can be equipped with it. Amazing texts emerge, and long, meaningful conversations are possible – almost like between two real people. “Meta’s move is the first time that a fully trained large language model will be made available to any researcher who wants to study it. The news has been welcomed by many concerned about the way this powerful technology is being built by small teams behind closed doors.” (MIT Technology Review, May 3, 2022)
Achieving Wellbeing AI
The AAAI 2022 Spring Symposium “How Fair is Fair? Achieving Wellbeing AI” will be held from March 21 to 23 at Stanford University. The symposium website states: “What are the ultimate outcomes of artificial intelligence? AI has the incredible potential to improve the quality of human life, but it also presents unintended risks and harms to society. The goal of this symposium is (1) to combine perspectives from the humanities and social sciences with technical approaches to AI and (2) to explore new metrics of success for wellbeing AI, in contrast to ‘productive AI’, which prioritizes economic incentives and values.” (Website “How Fair is Fair”) After two years of pandemic, the AAAI Spring Symposia are once again being held in part locally. However, several organizers have opted to hold them online. “How Fair is Fair?” is a hybrid event. On-site speakers include Takashi Kido, Oliver Bendel, Robert Reynolds, Stelios Boulitsakis-Logothetis, and Thomas Goolsby. The complete program is available via sites.google.com/view/hfif-aaai-2022/program.
ANIFACE: Animal Face Recognition
Facial recognition is a problematic technology, especially when it is used to monitor people. However, it also has potential, for example with regard to the recognition of (individuals of) animals. Prof. Dr. Oliver Bendel had announced the topic “ANIFACE: Animal Face Recognition” at the University of Applied Sciences FHNW in 2021 and left open whether it should focus on wolves or bears. Ali Yürekkirmaz accepted the assignment and, in his final thesis, designed a system that could be used to identify individual bears in the Alps – without electronic collars or implanted microchips – and initiate appropriate measures. The idea is that suitable camera and communication systems are available in certain areas. Once a bear is identified, the system determines whether it is considered harmless or dangerous. The relevant agencies – or the people concerned directly – are then informed. Walkers can be warned on the basis of the recordings, and it is also technically possible to protect their privacy. In an expert discussion with a representative of KORA, the student was able to gain important insights into wildlife monitoring, and specifically bear monitoring, and with a survey he was able to gauge the attitudes of parts of the population. Building on the work of Ali Yürekkirmaz, delivered in January 2022, an algorithm for bears could be developed and an ANIFACE system implemented and evaluated in the Alps. A video about the project is available here.
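The decision flow described for the envisioned system – identify an individual, look up whether it is considered harmless or dangerous, then notify the right parties – can be sketched as follows. All bear IDs, area names, and the classification table are hypothetical, invented purely to illustrate the flow:

```python
# Hedged sketch of the ANIFACE decision flow described in the thesis
# concept. The individual IDs and the classification sets are invented;
# a real system would draw them from a wildlife-monitoring database.

DANGEROUS_BEARS = {"M49"}          # hypothetical individual IDs
KNOWN_BEARS = {"M49", "JJ5"}

def handle_sighting(bear_id: str, area: str) -> str:
    """Decide what to do once the recognition step has produced an ID."""
    if bear_id not in KNOWN_BEARS:
        return f"Unknown individual in {area}: forward images to experts"
    if bear_id in DANGEROUS_BEARS:
        return f"ALERT: dangerous bear {bear_id} in {area}, notify agencies and warn walkers"
    return f"Log: harmless bear {bear_id} sighted in {area}"

print(handle_sighting("M49", "some Alpine valley"))
```

The face-recognition step itself (mapping camera images to an individual ID) is the hard research problem; the sketch only covers the downstream measures the text describes.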
AI at the Service of Animals and Biodiversity
“L’intelligence artificielle au service de l’animal et de la biodiversité” (“Artificial intelligence at the service of animals and biodiversity”) is the title of a webinar that will take place on 5 November 2021 from 10:30 – 12:00 via us02web.zoom.us/webinar/register/WN_SJjYGx7qQt-FezEBprGMww. There are over 600 registered participants (professionals from the animal industry, health care and animal welfare, also entrepreneurs, investors, scientists, consultants, NGOs, associations). The webinar is for anyone interested in technologies with a positive impact on animals (wildlife, livestock, pets) and biodiversity. The goal is to take advantage of the opportunities that Artificial Intelligence offers alongside the many technological building blocks (Blockchain, IoT, etc.). This first webinar will be an “introduction” to AI in this specific application area. It will present use cases and be the starting point of a series of webinars. On the same day, there will be a Zoom conference from 1:30 to 2:30 pm. The title of the talk by Prof. Dr. Oliver Bendel is “Towards Animal-friendly Machines”.