Towards Human-friendly Robot Cars

According to a news story from the University of Leeds, robot cars and other automated vehicles could be made more pedestrian-friendly thanks to new research which could help predict when people will cross the road. Leeds scientists say “that neuroscientific theories of how the brain makes decisions can be used in automated vehicle technology to improve safety and make them more human-friendly” (University of Leeds, 5 October 2021). “The researchers set out to determine whether a decision-making model called drift diffusion could predict when pedestrians would cross a road in front of approaching cars, and whether it could be used in scenarios where the car gives way to the pedestrian, either with or without explicit signals. This prediction capability will allow the autonomous vehicle to communicate more effectively with pedestrians, in terms of its movements in traffic and any external signals such as flashing lights, to maximise traffic flow and decrease uncertainty.” (University of Leeds, 5 October 2021) In fact, communication between automated vehicles and pedestrians or cyclists is the crucial problem to be solved in cities. Predictive models can be used, as well as communication options such as eye contact and natural language. For this to happen, however, the autonomous car would have to become a social robot.
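The core idea of drift diffusion is simple: evidence for a decision (here, “cross now” versus “wait”) accumulates noisily over time until it hits a boundary. The following is a minimal sketch of one such trial; all parameter values and the two labels are illustrative and not taken from the Leeds study.

```python
import random

def drift_diffusion_decision(drift, threshold=1.0, noise=0.1, dt=0.01, max_steps=10000):
    """Simulate one drift-diffusion trial.

    Evidence accumulates with a constant drift plus Gaussian noise until it
    reaches +threshold ("cross") or -threshold ("wait"). Returns the decision
    and the elapsed time. Parameters are illustrative, not fitted values.
    """
    evidence = 0.0
    for step in range(1, max_steps + 1):
        # Euler step: deterministic drift plus scaled Gaussian noise
        evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        if evidence >= threshold:
            return "cross", step * dt
        if evidence <= -threshold:
            return "wait", step * dt
    return "undecided", max_steps * dt

decision, t = drift_diffusion_decision(drift=0.8)
print(decision, round(t, 2))
```

A vehicle could run such a model with parameters estimated from an approaching pedestrian’s behavior and use the predicted crossing time to adjust its speed or signals.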

Should we Trust Conversational Agents?

A group of about 50 scientists from all over the world worked for one week (September 19 – 24, 2021) at Schloss Dagstuhl – Leibniz-Zentrum für Informatik on the topic “Conversational Agent as Trustworthy Autonomous System (Trust-CA)”. Half were on site, the other half were connected via Zoom. Organizers of this event were Asbjørn Følstad (SINTEF – Oslo), Jonathan Grudin (Microsoft – Redmond), Effie Lai-Chong Law (University of Leicester), and Björn Schuller (University of Augsburg). On-site participants from Germany and Switzerland included Elisabeth André (University of Augsburg), Stefan Schaffer (DFKI), Sebastian Hobert (University of Göttingen), Matthias Kraus (University of Ulm), and Oliver Bendel (School of Business FHNW). The complete list of participants can be found on the Schloss Dagstuhl website, as well as some pictures. Oliver Bendel presented projects from ten years of research in machine ethics, namely GOODBOT, LIEBOT, BESTBOT, MOME, and SPACE-THEA. Further information is available here.

Xavier Plays Auxiliary Policeman

“Singapore’s Home Team Science and Technology Agency (HTX) roving robot has hit the streets of Toa Payoh Central as part of a trial to support public officers in enhancing public health and safety.” (ZDNet, 8 September 2021) This is reported by the magazine ZDNet. “The robot, named Xavier, was jointly developed by HTX and the Agency for Science, Technology and Research. It is fitted with sensors for autonomous navigation, a 360-degree video feed to the command and control centre, real-time sensing and analysis, and an interactive dashboard where public officers can receive real-time information from and be able to monitor and control multiple robots simultaneously.” (ZDNet, 8 September 2021) Xavier is one of many security robots deployed around the world. Widely known are K3 and K5 from Knightscope. REEM is also used as a policeman and even costumed like a policeman – a case of Robot Enhancement. Whether the people of Singapore will accept security robots remains to be seen.

Conversational Agent as Trustworthy Autonomous System

The Dagstuhl seminar “Conversational Agent as Trustworthy Autonomous System (Trust-CA)” will take place from September 19 – 24, 2021. According to the website, Schloss Dagstuhl – Leibniz-Zentrum für Informatik “pursues its mission of furthering world class research in computer science by facilitating communication and interaction between researchers”. Organizers of this event are Asbjørn Følstad (SINTEF – Oslo), Jonathan Grudin (Microsoft – Redmond), Effie Lai-Chong Law (University of Leicester) and Björn Schuller (University of Augsburg). They outline the background as follows: “CA, like many other AI/ML-infused autonomous systems, need to gain the trust of their users in order to be deployed effectively. Nevertheless, in the first place, we need to ensure that such systems are trustworthy. Persuading users to trust a non-trustworthy CA is grossly unethical. Conversely, failing to convince users to trust a trustworthy CA that is beneficial to their wellbeing can be detrimental, given that a lack of trust leads to low adoption or total rejection of a system. A deep understanding of how trust is initially built and evolved in human-human interaction (HHI) can shed light on the trust journey in human-automation interaction (HAI). 
This calls forth a multidisciplinary analytical framework, which is lacking but much needed for informing the design of trustworthy autonomous systems like CA.” (Website Dagstuhl) Regarding the goal of the workshop, the organizers write: “The overall goal of this Dagstuhl Seminar is to bring together researchers and practitioners, who are currently engaged in diverse communities related to Conversational Agent (CA) to explore the three main challenges on maximising the trustworthiness of and trust in CA as AI/ML-driven autonomous systems – an issue deemed increasingly significant given the widespread uses of CA in every sector of life – and to chart a roadmap for the future research on CA.” (Website Dagstuhl) Oliver Bendel (School of Business FHNW) will talk about his chatbot and voice assistant projects. These have emerged since 2013 from machine ethics and social robotics. Further information is available here (photo: Schloss Dagstuhl).

When Robots Flatter the Customer

Under the supervision of Prof. Dr. Oliver Bendel, Liliana Margarida Dos Santos Alves wrote her master thesis “Manipulation by humanoid consulting and sales hardware robots from an ethical perspective” at the School of Business FHNW. The background was that social robots and service robots like Pepper and Paul have been doing their job in retail for years. In principle, they can use the same sales techniques – including those of a manipulative nature – as salespeople. The young scientist submitted her comprehensive study in June 2021. According to the abstract, the main research question (RQ) is “to determine whether it is ethical to intentionally program humanoid consulting and sales hardware robots with manipulation techniques to influence the customer’s purchase decision in retail stores” (Alves 2021). To answer this central question, five sub-questions (SQ) were defined and answered based on an extensive literature review and a survey conducted with potential customers of all ages: “SQ1: How can humanoid consulting and selling robots manipulate customers in the retail store? SQ2: Have ethical guidelines and policies, to which developers and users must adhere, been established already to prevent the manipulation of customers’ purchasing decisions by humanoid robots in the retail sector? SQ3: Have ethical guidelines and policies already been established regarding who must perform the final inspection of the humanoid robot before it is put into operation? SQ4: How do potential retail customers react, think and feel when being confronted with a manipulative humanoid consultant and sales robot in a retail store? SQ5: Do potential customers accept a manipulative and humanoid consultant and sales robot in the retail store?” (Alves 2021) To be able to answer the main research question (RQ), the sub-questions SQ1 – SQ5 were worked through step by step. 
In the end, the author comes to the conclusion “that it is neither ethical for software developers to program robots with manipulative content nor is it ethical for companies to actively use these kinds of robots in retail stores to systematically and extensively manipulate customers’ negatively in order to obtain an advantage”. “Business is about reciprocity, and it is not acceptable to systematically deceive, exploit and manipulate customers to attain any kind of benefit.” (Alves 2021) The book “Soziale Roboter” – which will be published in September or October 2021 – contains an article on social robots in retail by Prof. Dr. Oliver Bendel. In it, he also mentions this study.

Animal-Computer Interaction

Clara Mancini (The Open University) and Eleonora Nannoni (University of Bologna) are calling for abstracts and papers for the Frontiers research topic “Animal-Computer Interaction and Beyond: The Benefits of Animal-Centered Research and Design”. They are well-known representatives of a discipline closely related to animal-machine interaction. “The field of Animal-Computer Interaction (ACI) investigates how interactive technologies affect the individual animals involved; what technologies could be developed, and how they should be designed in order to improve animals’ welfare, support their activities and foster positive interspecies relationships; and how research methods could enable animal stakeholders to participate in the development of relevant technologies.” (Website Frontiers) The editors welcome submissions that contribute, but are not necessarily limited, to the following themes: 1) “Applications of animal-centered and/or interactive technologies within farming, animal research, conservation, welfare or other domains”, and 2) “Animal-centered research, design methods and frameworks that have been applied or have applicability within farming, animal research, conservation, welfare or other domains” (Website Frontiers). More submission information is available through the Frontiers website.

A Four-legged Robocop

In New York City, police have taken a Boston Dynamics robot on a mission to an apartment building. Spot is an advanced four-legged model that many people find scary. The operation resulted in the arrest of an armed man. Apparently, the robot had no active role in this. This is reported by Futurism magazine in a recent article. It is also noted there that certain challenges may arise. “The robodog may not have played an active role in the arrest, but having an armed police squadron deploy a robot to an active crime scene raises red flags about civil liberties and the future of policing.” (Futurism, 15 April 2021) Even Boston Dynamics robots are not so advanced that they can play a central role in police operations. They can, however, serve to intimidate. Whether the NYPD is doing itself any favors with such deployments is questionable. The robots’ reputation will certainly not benefit from this kind of use.

The Robot Called HAPPY HEDGEHOG

The paper “The HAPPY HEDGEHOG Project” by Prof. Dr. Oliver Bendel, Emanuel Graf and Kevin Bollier was accepted at the AAAI Spring Symposia 2021. The researchers will present it at the sub-conference “Machine Learning for Mobile Robot Navigation in the Wild” at the end of March. The project was conducted at the School of Business FHNW between June 2019 and January 2020. Emanuel Graf, Kevin Bollier, Michel Beugger and Vay Lien Chang developed a prototype of a lawn mowing robot in the context of machine ethics and social robotics, which stops its work as soon as it detects a hedgehog. HHH (short for HAPPY HEDGEHOG) has a thermal imaging camera. When it encounters a warm object, it uses image recognition to investigate it further. At night, a lamp mounted on top helps. After training with hundreds of photos, HHH can identify a hedgehog quite accurately. With this artifact, the team provides a solution to a problem that frequently occurs in practice. Commercial robotic mowers repeatedly kill young hedgehogs in the dark. HAPPY HEDGEHOG could help to save them. The video on informationsethik.net shows it without disguise. The robot is in the tradition of LADYBIRD.
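The two-stage logic described above – a thermal trigger followed by image recognition – can be sketched as follows. This is a simplified illustration, not the project’s actual code: the frame format, the temperature threshold, and the `classify_image` callable (standing in for the trained model) are all assumptions.

```python
def should_stop_mowing(thermal_frame, classify_image, warm_threshold=25.0):
    """Decide whether the mower must stop for a hedgehog.

    thermal_frame: 2D list of temperatures in degrees Celsius from the
    thermal camera. classify_image: callable returning a label such as
    "hedgehog" for the frame; it stands in for the trained recognizer.
    Interfaces and the threshold are illustrative assumptions.
    """
    # Stage 1: thermal trigger - is anything warm in view?
    warm_pixels = [t for row in thermal_frame for t in row if t >= warm_threshold]
    if not warm_pixels:
        return False  # nothing warm, keep mowing
    # Stage 2: inspect the warm object with image recognition
    label = classify_image(thermal_frame)
    return label == "hedgehog"  # stop only for hedgehogs

# Example with a stub classifier: one warm spot classified as a hedgehog
frame = [[18.0, 18.5], [19.0, 31.2]]
print(should_stop_mowing(frame, lambda f: "hedgehog"))  # True
```

The cheap thermal check runs continuously, and the more expensive classifier is only consulted when something warm is actually in view.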

Alexa has Hunches

Amazon’s Alexa can perform actions on its own based on previous instructions from the user without asking beforehand. Until now, the voicebot always asked before it did anything. Now it has hunches, which is what Amazon calls the function. On its website, the company writes: “Managing your home’s energy usage is easier than ever, with the Alexa energy dashboard. It works with a variety of smart lights, plugs, switches, water heaters, thermostats, TVs and Echo devices. Once you connect your devices to Alexa, you can start tracking the energy they use, right in the Alexa app. Plus, try an exciting new Hunches feature that can help you save energy without even thinking about it. Now, if Alexa has a hunch that you forgot to turn off a light and no one is home or everyone went to bed, Alexa can automatically turn it off for you. It’s a smart and convenient way to help your home be kinder to the world around it. Every device, every home, and every day counts. Let’s make a difference, together. Amazon is committed to building a sustainable business for our customers and the planet.” (Website Amazon) It will be interesting to see how often Alexa is right with its hunches and how often it is wrong.
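The rule Amazon describes – turn off a forgotten light when no one is home or everyone is in bed – can be expressed as a simple condition. This is a toy illustration of that logic, not Amazon’s API; the function name and parameters are invented for the example.

```python
def hunch_action(light_on, anyone_home, everyone_asleep):
    """Return the action a hunch-style rule might take.

    Mirrors the behavior Amazon describes: if a light is on while no one
    is home or everyone went to bed, turn it off automatically. The
    interface is a simplified assumption, not Amazon's actual system.
    """
    if light_on and (not anyone_home or everyone_asleep):
        return "turn_off_light"
    return "do_nothing"

print(hunch_action(light_on=True, anyone_home=False, everyone_asleep=False))  # turn_off_light
print(hunch_action(light_on=True, anyone_home=True, everyone_asleep=False))   # do_nothing
```

The hard part in practice is not this rule but estimating the inputs – whether anyone is home or asleep – which is where the system can have a “hunch” that turns out wrong.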

A Fish-inspired Robotic Swarm

A team from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering has developed fish-inspired robots that can synchronize their movements like a real school of fish, without any external control. According to a SEAS press release, it is the first time scientists have demonstrated complex 3D collective behaviors with implicit coordination in underwater robots. “Robots are often deployed in areas that are inaccessible or dangerous to humans, areas where human intervention might not even be possible”, said Florian Berlinger, a PhD candidate at SEAS and Wyss, in an interview. “In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient.” (SEAS, 13 January 2021) The fish-inspired robotic swarm, dubbed Blueswarm, was created in the lab of Prof. Radhika Nagpal, an expert in self-organizing systems. There are several studies and prototypes in the field of robotic fish, from CLEANINGFISH (School of Business FHNW) to an invention by Cornell University in New York.
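Implicit coordination means each robot reacts only to what it perceives of its neighbors, with no leader or external controller, yet the group converges on a common behavior. A classic minimal model of this is the Kuramoto model of coupled oscillators, sketched below; it is a generic illustration of self-synchronization, not Blueswarm’s actual algorithm, and all parameter values are assumptions.

```python
import math

def kuramoto_step(phases, coupling=0.5, natural_freq=1.0, dt=0.05):
    """One Euler update of an all-to-all Kuramoto model.

    Each agent nudges its phase toward its neighbors' phases; with positive
    coupling the whole group synchronizes without any central controller.
    Parameters are illustrative.
    """
    n = len(phases)
    new_phases = []
    for p in phases:
        # Average pull toward every other agent (sin(0) = 0 for itself)
        influence = sum(math.sin(q - p) for q in phases) / n
        new_phases.append(p + (natural_freq + coupling * influence) * dt)
    return new_phases

# Four agents starting out of phase gradually fall into step
phases = [0.0, 1.0, 2.0, 3.0]
for _ in range(2000):
    phases = kuramoto_step(phases)
spread = max(phases) - min(phases)
print(round(spread, 4))  # spread shrinks toward 0 as the agents synchronize
```

In Blueswarm the perceived quantity is visual (onboard cameras and LEDs) rather than an abstract phase, but the principle – local sensing driving global order – is the same.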