ACI 2022 Proceedings

ACI 2022, the world’s leading conference on animal-computer interaction, took place from 5 to 8 December 2022 in Newcastle upon Tyne. The proceedings were published in the ACM Digital Library on March 30, 2023. They include the paper “A Face Recognition System for Bears: Protection for Animals and Humans in the Alps” by Oliver Bendel and Ali Yürekkirmaz. From the abstract: “Face recognition, in the sense of identifying people, is controversial from a legal, social, and ethical perspective. In particular, opposition has been expressed to its use in public spaces for mass surveillance purposes. Face recognition in animals, by contrast, seems to be uncontroversial from a social and ethical point of view and could even have potential for animal welfare and protection. This paper explores how face recognition for bears (understood here as brown bears) in the Alps could be implemented within a system that would help animals as well as humans. It sets out the advantages and disadvantages of wildlife cameras, ground robots, and camera drones that would be linked to artificial intelligence. Based on this, the authors make a proposal for deployment. They favour a three-stage plan that first deploys fixed cameras and then incorporates camera drones and ground robots. These are all connected to a control centre that assesses images and developments and intervenes as needed. The paper then discusses social and ethical, technical and scientific, and economic and structural perspectives. In conclusion, it considers what could happen in the future in this context.” The proceedings can be accessed via dl.acm.org/doi/proceedings/10.1145/3565995.
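At the heart of such a system lies the re-identification of individual bears from camera, drone, or robot images. A minimal sketch of that matching step might look like the following; the embedding network, bear IDs, and similarity threshold are purely illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def embed_face(crop: np.ndarray) -> np.ndarray:
    """Placeholder for a trained wildlife re-identification network (hypothetical)."""
    rng = np.random.default_rng(int(crop.sum()) % 2**32)
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)         # unit-length embedding

def identify(crop: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.8) -> str:
    """Return the best-matching bear ID, or 'unknown' if no match is close enough."""
    query = embed_face(crop)
    best_id, best_sim = "unknown", threshold
    for bear_id, ref in gallery.items():
        sim = float(query @ ref)              # cosine similarity of unit vectors
        if sim > best_sim:
            best_id, best_sim = bear_id, sim
    return best_id

# A detector would crop the bear face from a camera or drone frame before this step.
gallery = {
    "bear_001": embed_face(np.ones((64, 64))),
    "bear_002": embed_face(np.full((64, 64), 2.0)),
}
print(identify(np.ones((64, 64)), gallery))   # matches "bear_001" in this toy setup
```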

Mind-controlled Four-legged Robot

“Researchers from the University of Technology Sydney (UTS) have developed biosensor technology that will allow you to operate devices, such as robots and machines, solely through thought-control.” (UTS, 20 March 2023) This is how UTS announced the development on its website on March 20, 2023. The brain-machine interface was developed by Chin-Teng Lin and Francesca Iacopi (UTS Faculty of Engineering and IT) in collaboration with the Australian Army and the Defence Innovation Hub. “The user wears a head-mounted augmented reality lens which displays white flickering squares. By concentrating on a particular square, the brainwaves of the operator are picked up by the biosensor, and a decoder translates the signal into commands.” (UTS, 20 March 2023) According to the website, the technology was demonstrated by the Australian Army, where selected soldiers operated a quadruped robot using the brain-machine interface. “The device allowed hands-free command of the robotic dog with up to 94% accuracy.” (UTS, 20 March 2023) The paper “Noninvasive Sensors for Brain–Machine Interfaces Based on Micropatterned Epitaxial Graphene” can be accessed at pubs.acs.org/doi/10.1021/acsanm.2c05546.
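The description points to the principle of steady-state visually evoked potentials: each flickering square has its own frequency, and attending to one of them raises the power of that frequency in the operator’s EEG. A heavily simplified sketch of such a decoder follows; the frequencies, sampling rate, and command mapping are assumptions, and the actual UTS decoder based on graphene sensors is far more sophisticated.

```python
import numpy as np

FS = 256                                     # EEG sampling rate in Hz (assumed)
COMMANDS = {7.0: "forward", 9.0: "left", 11.0: "right", 13.0: "stop"}  # flicker Hz -> command

def decode_command(eeg_window: np.ndarray) -> str:
    """Pick the command whose flicker frequency carries the most spectral power."""
    windowed = eeg_window * np.hanning(len(eeg_window))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    best = max(COMMANDS, key=lambda f: spectrum[np.argmin(np.abs(freqs - f))])
    return COMMANDS[best]

# Toy signal: two seconds of noise plus a 9 Hz component,
# as if the operator were looking at the "left" square.
t = np.arange(0, 2, 1.0 / FS)
eeg = 0.5 * np.random.randn(len(t)) + np.sin(2 * np.pi * 9.0 * t)
print(decode_command(eeg))                   # typically prints "left"
```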

Bar Robots for Well-being of Guests

From March 27-29, 2023, the AAAI 2023 Spring Symposia will feature the symposium “Socially Responsible AI for Well-being”, organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The venue is usually Stanford University. For staffing reasons, this year the conference will be held at the Hyatt Regency in San Francisco. On March 28, Prof. Dr. Oliver Bendel and Lea Peier will present their paper “How Can Bar Robots Enhance the Well-being of Guests?”. From the abstract: “This paper addresses the question of how bar robots can contribute to the well-being of guests. It first develops the basics of service robots and social robots. It gives a brief overview of which gastronomy robots are on the market. It then presents examples of bar robots and describes two models used in Switzerland. A research project at the School of Business FHNW collected empirical data on them, which is used for this article. The authors then discuss how the robots could be improved to increase the well-being of customers and guests and better address their individual wishes and requirements. Artificial intelligence can play an important role in this. Finally, ethical and social problems in the use of bar robots are discussed and possible solutions are suggested to counter these.” More information via aaai.org/conference/spring-symposia/sss23/.

An Investigation of Robotic Hugs

From March 27-29, 2023, the AAAI 2023 Spring Symposia will feature the symposium “Socially Responsible AI for Well-being”, organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The venue is usually Stanford University. For staffing reasons, this year the conference will be held at the Hyatt Regency in San Francisco. On March 28, Prof. Dr. Oliver Bendel will present the paper “Increasing Well-being through Robotic Hugs”, written by himself, Andrea Puljic, Robin Heiz, Furkan Tömen, and Ivan De Paola. From the abstract: “This paper addresses the question of how to increase the acceptability of a robot hug and whether such a hug contributes to well-being. It combines the lead author’s own research with pioneering research by Alexis E. Block and Katherine J. Kuchenbecker. First, the basics of this area are laid out with particular attention to the work of the two scientists. The authors then present HUGGIE Project I, which largely consisted of an online survey with nearly 300 participants, followed by HUGGIE Project II, which involved building a hugging robot and testing it on 136 people. At the end, the results are linked to current research by Block and Kuchenbecker, who have equipped their hugging robot with artificial intelligence to better respond to the needs of subjects.” More information via aaai.org/conference/spring-symposia/sss23/.

GPT-4 as Multimodal Model

GPT-4 was launched by OpenAI on March 14, 2023. “GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.” (Website OpenAI) On its website, the company explains the multimodal options in more detail: “GPT-4 can accept a prompt of text and images, which – parallel to the text-only setting – lets the user specify any vision or language task. Specifically, it generates text outputs (natural language, code, etc.) given inputs consisting of interspersed text and images.” (Website OpenAI) The example that OpenAI gives is impressive. An image with multiple panels was uploaded. The prompt is: “What is funny about this image? Describe it panel by panel”. This is exactly what GPT-4 does and then comes to the conclusion: “The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port.” (Website OpenAI) The technical report is available via cdn.openai.com/papers/gpt-4.pdf.
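For developers, such a multimodal request boils down to sending text and image parts in one message. The following sketch shows how OpenAI’s panel-by-panel example could be reproduced via the Chat Completions API; the model identifier and the exact payload format are assumptions based on OpenAI’s public documentation and may differ.

```python
import os
import requests

def ask_about_image(question: str, image_url: str,
                    model: str = "gpt-4-vision-preview") -> str:
    """Send one user message containing a text part and an image part."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": model,                  # assumed model identifier
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Mirroring the prompt quoted above (the image URL is a placeholder):
# print(ask_about_image("What is funny about this image? Describe it panel by panel",
#                       "https://example.com/vga-charger.jpg"))
```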

Introducing Visual ChatGPT

Researchers at Microsoft are working on a new application based on ChatGPT and solutions like Stable Diffusion. Visual ChatGPT is designed to allow users to generate images using text input and then edit individual elements. In their paper “Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models”, Chenfei Wu and his co-authors write: “We build a system called Visual ChatGPT, incorporating different Visual Foundation Models, to enable the user to interact with ChatGPT by 1) sending and receiving not only languages but also images 2) providing complex visual questions or visual editing instructions that require the collaboration of multiple AI models with multi-steps” – and, not to forget: “3) providing feedback and asking for corrected results” (Wu et al. 2023). For example, one can use an appropriate prompt to create an image of a landscape with blue sky, hills, meadows, flowers, and trees. Then, one instructs Visual ChatGPT with another prompt to make the hills higher and the sky more dusky and cloudy. One can also ask the program what color the flowers are and color them with another prompt. A final prompt makes the trees in the foreground appear greener. The paper can be downloaded from arxiv.org.
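Conceptually, Visual ChatGPT lets the language model route each instruction to a suitable visual foundation model and feeds the result back into the dialogue. The following is a much-simplified sketch of that tool-dispatch idea; the functions are hypothetical stand-ins and do not reproduce the authors’ prompt manager.

```python
from typing import Callable

def text_to_image(prompt: str) -> str:
    return f"image generated from {prompt!r}"            # e.g. a Stable Diffusion call

def edit_image(image: str, instruction: str) -> str:
    return f"{image} | edited: {instruction!r}"          # e.g. an image-editing model call

def answer_about_image(image: str, question: str) -> str:
    return f"answer about {image!r} to {question!r}"     # e.g. a visual question answering call

TOOLS: dict[str, Callable[..., str]] = {
    "generate": text_to_image,
    "edit": edit_image,
    "ask": answer_about_image,
}

def dispatch(action: str, **kwargs) -> str:
    """In the real system, ChatGPT itself decides which tool to call and with which arguments."""
    return TOOLS[action](**kwargs)

image = dispatch("generate", prompt="landscape with blue sky, hills, meadows, flowers, and trees")
image = dispatch("edit", image=image, instruction="make the hills higher and the sky dusky and cloudy")
print(dispatch("ask", image=image, question="what color are the flowers?"))
```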

Little Teacher

Alpha Mini is a social robot characterized by its small size (and thus good portability) and its extensive natural language and motor skills. It can be used in school lessons, both as a teacher and tutor and as a tool with which to program. On March 8, 2023, a new project started at the School of Business FHNW, in which Alpha Mini plays a leading role. The initiator is Prof. Dr. Oliver Bendel, who has been researching conversational agents and social robots for a quarter of a century. Andrin Allemann is contributing to the project as part of his final thesis. Alpha Mini will be integrated into a learning environment and will be able to interact and communicate with other components such as a display. It is to convey simple learning material with the help of pictures and texts and to motivate the children through gestures and facial expressions. So this is a little teacher with great possibilities. In principle, it should comply with the new Swiss federal law on data protection (neues Datenschutzgesetz, nDSG). The project will run until August 2023, after which the results will be published.
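What a single lesson round in such a learning environment could look like is sketched below; the helper functions for speech, display, gestures, and listening are hypothetical placeholders, since the actual interfaces of Alpha Mini and the display are not described here.

```python
# All helper functions passed into run_lesson are hypothetical placeholders.
LESSON = [
    {"image": "apple.png", "question": "What fruit is this?", "answer": "apple"},
    {"image": "pear.png", "question": "And this one?", "answer": "pear"},
]

def run_lesson(speak, show_on_display, gesture, listen) -> None:
    """Present each item on the display, ask the question, and give feedback."""
    for item in LESSON:
        show_on_display(item["image"])       # picture appears on the external display
        speak(item["question"])
        reply = listen()                     # child's answer, e.g. via speech recognition
        if reply.strip().lower() == item["answer"]:
            gesture("cheer")                 # gestural/facial feedback on success
            speak("Well done!")
        else:
            gesture("encourage")
            speak(f"Not quite. This is a {item['answer']}.")

# Dry run with console stand-ins for the robot and display interfaces:
if __name__ == "__main__":
    run_lesson(
        speak=print,
        show_on_display=lambda img: print("DISPLAY:", img),
        gesture=lambda g: print("GESTURE:", g),
        listen=lambda: input("answer> "),
    )
```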

The World’s First AI-Driven Localized Radio Content

Futuri launches RadioGPT, the first AI-driven localized radio content. “RadioGPT™ uses TopicPulse technology, which scans Facebook, Twitter, Instagram, and 250k+ other sources of news and information, to identify which topics are trending in a local market. Then, using GPT-3 technology, RadioGPT™ creates a script for on-air use, and AI voices turn that script into compelling audio.” (Press release, February 23, 2023) It is not a new radio station, but an offer and a tool for existing radio stations. “Stations can select from a variety of AI voices for single-, duo-, or trio-hosted shows, or train the AI with their existing personalities’ voices. Programming is available for individual dayparts, or Futuri’s RadioGPT™ can power the entire station. RadioGPT™ is available for all formats in a white-labeled fashion.” (Press release, February 23, 2023) More information via futurimedia.com/futuri-launches-radiogpt/.
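The press release describes a three-step pipeline: detect trending local topics, let GPT write an on-air script, and render it with AI voices. A hedged sketch of such a pipeline is shown below; all function names are hypothetical, as Futuri’s actual TopicPulse and RadioGPT internals are not public.

```python
from typing import Iterable

def top_trending_topics(market: str, sources: Iterable[str], k: int = 3) -> list[str]:
    """Stand-in for TopicPulse: return the k hottest topics for a local market."""
    return [f"{market}: placeholder topic {i + 1}" for i in range(k)]

def write_script(topics: list[str], host_style: str) -> str:
    """Stand-in for the GPT call that drafts the on-air script."""
    bullets = "; ".join(topics)
    return f"[{host_style}] Good morning! Here is what everyone is talking about: {bullets}."

def synthesize(script: str, voice: str) -> bytes:
    """Stand-in for the AI-voice step that turns the script into audio."""
    return script.encode("utf-8")            # a real system would return rendered audio

topics = top_trending_topics("example market", ["facebook", "twitter", "instagram"])
audio = synthesize(write_script(topics, host_style="duo show"), voice="station_voice_1")
print(len(audio), "bytes of placeholder audio")
```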