The Enhancement of Bixby

The Korean company Samsung Electronics has announced new updates to its voice assistant Bixby, designed to improve the user experience, performance, and capabilities of the intelligent assistant and platform. One of the most interesting innovations concerns the users’ voices. According to Samsung, they “can personalize their Bixby Text Call voice”. “Using the new Bixby Custom Voice Creator, users can record different sentences for Bixby to analyze and create an AI generated copy of their voice and tone. Currently available in Korean, this generated voice is planned to be compatible with other Samsung apps beyond phone calls” (Samsung, 22 February 2023). As early as 2017, Oliver Bendel wrote with respect to Adobe VoCo: “Today, just a few minutes of samples are enough to be able to imitate a speaker convincingly in all kinds of statements.” In his article “The synthetization of human voices”, published in AI & Society, he also raised ethical considerations. Now there seems to be a recognized market for such applications, and they are being rolled out more widely.
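The workflow Samsung describes – record a few sentences, let the system analyze them, then render arbitrary text in the cloned voice – can be sketched as an enrollment-then-synthesis pipeline. The following toy code is purely illustrative: the classes, the "feature" computation, and the output format are invented stand-ins, not Samsung's API.

```python
# Toy sketch of the record-analyze-synthesize workflow: enrollment sentences
# are reduced to a speaker "signature", which later tags synthesized text.
# All names and numbers are invented for illustration.

from statistics import mean

def extract_features(recording: str) -> float:
    # Stand-in "analysis": derive a single number per recorded sentence.
    return mean(ord(c) for c in recording)

class VoiceProfile:
    def __init__(self, recordings: list[str]):
        # The profile averages features over all enrollment sentences.
        self.signature = mean(extract_features(r) for r in recordings)

    def synthesize(self, text: str) -> str:
        # Stand-in "synthesis": tag the output text with the learned signature.
        return f"[voice {self.signature:.1f}] {text}"

profile = VoiceProfile(["The quick brown fox", "jumps over the lazy dog"])
print(profile.synthesize("Hello, I am busy right now."))
```

The point of the sketch is only the shape of the process: a one-time enrollment step produces a reusable profile, which any later text can be passed through.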

Students Get Excited about Social Robots

From February 16 to 18, 2023, the elective module “Soziale Roboter aus technischer, wirtschaftlicher und ethischer Sicht” (“Social Robots from a Technical, Economic, and Ethical Perspective”) took place at the Brugg-Windisch campus of the School of Business FHNW. The approximately 30 students came from the Basel, Olten, and Brugg-Windisch campuses and from the International Management, Business Informatics, and Business Administration degree programs. Prof. Dr. Oliver Bendel first taught them the basics of robotics and social robotics. Excursions into service robotics supported the thesis that this field is increasingly influenced by social robotics: transport robots and serving robots are getting eyes and mouths, and security robots natural language capabilities. Ethical questions were also discussed, for which empirical foundations had previously been developed. Also present were Pepper, NAO, Alpha Mini, Cozmo, and Hugvie, and for a short time little EMO. Guest lectures were given by Marc Heimann (on the CARE-MOMO) and Lea Peier (on bar robots in Switzerland). The students were highly motivated and, working in groups, ultimately designed their own social robots with different tasks. This was the third run of the elective module and the first at the Brugg-Windisch site. In November 2023, the fourth will take place at the Olten site.

Bard Comes into the World

Sundar Pichai, the CEO of Google and Alphabet, announced the company’s answer to ChatGPT in a blog post dated February 6, 2023. According to him, Bard is an experimental conversational AI service powered by LaMDA. It has been opened to trusted testers and will be made available to the public in the coming weeks. “Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.” (Sundar Pichai 2023) In recent weeks, Google had come under heavy pressure from OpenAI’s ChatGPT. It was clear that the company had to present a comparable application based on LaMDA as soon as possible. In addition, Baidu plans to launch the Ernie Bot, which means yet another competing product. More information via blog.google/technology/ai/bard-google-ai-search-updates/.

The Latest Findings in Social Robotics

The proceedings of ICSR 2022 were published in early 2023. Included is the paper “The CARE-MOMO Project” by Oliver Bendel and Marc Heimann. From the abstract: “In the CARE-MOMO project, a morality module (MOMO) with a morality menu (MOME) was developed at the School of Business FHNW in the context of machine ethics. This makes it possible to transfer one’s own moral and social convictions to a machine, in this case the care robot with the name Lio. The current model has extensive capabilities, including motor, sensory, and linguistic. However, it cannot yet be personalized in the moral and social sense. The CARE-MOMO aims to eliminate this state of affairs and to give care recipients the possibility to adapt the robot’s ‘behaviour’ to their ideas and requirements. This is done in a very simple way, using sliders to activate and deactivate functions. There are three different categories that appear with the sliders. The CARE-MOMO was realized as a prototype, which demonstrates the functionality and aids the company in making concrete decisions for the product. In other words, it can adopt the morality module in whole or in part and further improve it after testing it in facilities.” The book (part II of the proceedings) can be downloaded or ordered via link.springer.com/book/10.1007/978-3-031-24670-8.
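The abstract describes the morality menu as sliders, grouped into three categories, that activate and deactivate functions of the care robot. A minimal sketch of such a menu as a data structure might look as follows; the category and function names are hypothetical examples, not the actual options of the CARE-MOMO.

```python
# Minimal sketch of a morality menu (MOME): named robot functions grouped
# into categories, each toggled on or off by a slider. All category and
# function names here are illustrative, not taken from the CARE-MOMO.

from dataclasses import dataclass, field

@dataclass
class MoralityMenu:
    # category -> {function name: enabled?}
    categories: dict = field(default_factory=lambda: {
        "communication": {"address_informally": False, "use_humor": False},
        "monitoring": {"report_falls_to_staff": True, "record_at_night": False},
        "assistance": {"remind_about_medication": True},
    })

    def set_slider(self, category: str, function: str, enabled: bool) -> None:
        # Moving a slider simply flips the stored setting.
        self.categories[category][function] = enabled

    def is_enabled(self, category: str, function: str) -> bool:
        # The robot consults this before performing the corresponding action.
        return self.categories[category][function]

menu = MoralityMenu()
menu.set_slider("communication", "use_humor", True)
print(menu.is_enabled("communication", "use_humor"))  # True
```

The design choice mirrors the paper's idea of personalization in a very simple way: the care recipient's moral and social preferences are reduced to a set of boolean switches that the robot's behavior is conditioned on.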

How People React to Hugs from Robots

As part of the AAAI 2023 Spring Symposia in San Francisco, the symposium “Socially Responsible AI for Well-being” is organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The paper “Increasing Well-being and Health through Robotic Hugs” by Oliver Bendel, Andrea Puljic, Robin Heiz, Furkan Tömen, and Ivan De Paola was accepted. Among other things, they show how people in Switzerland react to robotic hugs. The talk will take place between March 26 and 29, 2023 at the Hyatt Regency, San Francisco Airport. The symposium website states: “For our happiness, AI is not enough to be productive in exponential growth or economic/financial supremacies but should be socially responsible from the viewpoint of fairness, transparency, accountability, reliability, safety, privacy, and security. For example, AI diagnosis system should provide responsible results (e.g., a high-accuracy of diagnostics result with an understandable explanation) but the results should be socially accepted (e.g., data for AI (machine learning) should not be biased (i.e., the amount of data for learning should be equal among races and/or locations). Like this example, a decision of AI affects our well-being, which suggests the importance of discussing ‘What is socially responsible?’ in several potential situations of well-being in the coming AI age.” (Website AAAI) According to the organizers, the first perspective is “(Individually) Responsible AI”, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to design Responsible AI for well-being. The second perspective is “Socially Responsible AI”, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to implement social aspects in Responsible AI for well-being. More information via www.aaai.org/Symposia/Spring/sss23.php#ss09.

How Customers React to Bar Robots

As part of the AAAI 2023 Spring Symposia in San Francisco, the symposium “Socially Responsible AI for Well-being” is organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The paper “How Can Bar Robots Enhance the Well-being of Guests?” by Oliver Bendel and Lea K. Peier was accepted. Among other things, they show how customers in Switzerland react to bar robots. The talk will take place between March 26 and 29, 2023 at the Hyatt Regency, San Francisco Airport. The aims of the symposium and its two perspectives, “(Individually) Responsible AI” and “Socially Responsible AI”, are described in the previous item. More information via www.aaai.org/Symposia/Spring/sss23.php#ss09.

A Slime Mold in a Smartwatch

Jasmine Lu and Pedro Lopes of the University of Chicago published a paper in late 2022 describing the integration of an organism – the single-celled slime mold Physarum polycephalum – into a wearable. From the abstract: “Researchers have been exploring how incorporating care-based interactions can change the user’s attitude & relationship towards an interactive device. This is typically achieved through virtual care where users care for digital entities. In this paper, we explore this concept further by investigating how physical care for a living organism, embedded as a functional component of an interactive device, also changes user-device relationships. Living organisms differ as they require an environment conducive to life, which in our concept, the user is responsible for providing by caring for the organism (e.g., feeding it). We instantiated our concept by engineering a smartwatch that includes a slime mold that physically conducts power to a heart rate sensor inside the device, acting as a living wire. In this smartwatch, the availability of heart-rate sensing depends on the health of the slime mold – with the user’s care, the slime mold becomes conductive and enables the sensor; conversely, without care, the slime mold dries and disables the sensor (resuming care resuscitates the slime mold).” (Lu and Lopes 2022) The paper “Integrating Living Organisms in Devices to Implement Care-based Interactions” can be downloaded here.
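The care loop described in the abstract – feeding keeps the slime mold conductive and the sensor powered, neglect dries it out and disables the sensor, renewed care resuscitates it – can be captured in a toy state model. The thresholds, rates, and class names below are invented for illustration and have nothing to do with the actual device by Lu and Lopes.

```python
# Toy state model of the care-dependent sensor: the slime mold acts as a
# living wire whose conductivity depends on hydration. Feeding restores
# hydration, neglect reduces it. All numbers are invented for illustration.

class SlimeMoldWire:
    def __init__(self):
        self.hydration = 1.0  # 1.0 = healthy, 0.0 = fully dried out

    def tick(self, fed: bool) -> None:
        # One simulation step: feeding rehydrates the mold, neglect dries it.
        if fed:
            self.hydration = min(1.0, self.hydration + 0.3)
        else:
            self.hydration = max(0.0, self.hydration - 0.1)

    @property
    def conductive(self) -> bool:
        # Below a hydration threshold, the living wire stops conducting.
        return self.hydration > 0.2

class Smartwatch:
    def __init__(self, wire: SlimeMoldWire):
        self.wire = wire

    def read_heart_rate(self):
        # The sensor only gets power while the living wire conducts.
        return 72 if self.wire.conductive else None

watch = Smartwatch(SlimeMoldWire())
for _ in range(10):               # ten steps of neglect
    watch.wire.tick(fed=False)
print(watch.read_heart_rate())    # None: the mold has dried out
watch.wire.tick(fed=True)         # resuming care resuscitates it
print(watch.read_heart_rate())    # 72: the sensor is enabled again
```

The sketch makes the paper's central design point concrete: sensor availability is not a fixed property of the hardware but a function of ongoing care.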

LaborDigital Conference at the ZHdK

The LaborDigital conference at the Zurich University of the Arts (ZHdK) will take place on February 10, 2023 in English and German. It was initiated and organized by Charlotte Axelsson and others. The conference will open with a lecture by Prof. Dr. Johan Frederik Hartle, Rector of the Academy of Fine Arts Vienna. This will be followed by the keynote “Labor-Geschichte/s. On the Archaeology of a ‘Creative’ Space” by Prof. Dr. Oliver Ruf from the Bonn-Rhein-Sieg University of Applied Sciences. From 11:00 a.m. to 12:30 p.m., three Experimental Learning Labs will take place in parallel, namely “Artifacts of Machine Ethics” with Prof. Dr. Oliver Bendel (FHNW, Muttenz, Olten, and Brugg-Windisch), “Dance Lab & Avatar” with Regina Bäck (Munich), and “Experimental Game Cultures Labs” with Prof. Dr. Margarete Jahrmann (University of Applied Arts Vienna). Lunch will be followed by ZHdK Lab Visits and more Experimental Learning Labs starting at 3:30 p.m. At 4:30 p.m., Raphaële Bidault-Waddington, founder of the LIID Future Lab in Paris, will deliver the second keynote, titled “Designing Art-based Future Labs”. Johan Frederik Hartle will conclude the conference with further remarks. For more information, visit paul.zhdk.ch/course/view.php?id=2312.