Why Animals Can (Still) Outrun Robots

In an article published in Science Robotics in April 2024, Samuel A. Burden and his co-authors explore the question of why animals can still outrun robots. In their abstract they write: “Animals are much better at running than robots. The difference in performance arises in the important dimensions of agility, range, and robustness. To understand the underlying causes for this performance gap, we compare natural and artificial technologies in the five subsystems critical for running: power, frame, actuation, sensing, and control. With few exceptions, engineering technologies meet or exceed the performance of their biological counterparts. We conclude that biology’s advantage over engineering arises from better integration of subsystems, and we identify four fundamental obstacles that roboticists must overcome. Toward this goal, we highlight promising research directions that have outsized potential to help future running robots achieve animal-level performance.” (Abstract) The article comes at a time when the market for robotic four-legged friends is exploding. Spot, the Unitree Go2, and many others can certainly compete with some animals when it comes to running. When it comes to suppleness and elegance, however, further progress is still needed.

ANIFACE: Animal Face Recognition

Facial recognition is a problematic technology, especially when it is used to monitor people. However, it also has potential, for example for recognizing individual animals. Prof. Dr. Oliver Bendel had announced the topic “ANIFACE: Animal Face Recognition” at the University of Applied Sciences FHNW in 2021, leaving open whether it should be about wolves or bears. Ali Yürekkirmaz accepted the assignment and, in his final thesis, designed a system that could be used to identify individual bears in the Alps – without electronic collars or implanted microchips – and to initiate appropriate measures. The idea is that suitable camera and communication systems are installed in certain areas. Once a bear is identified, the system determines whether it is considered harmless or dangerous; the relevant agencies, or directly the people concerned, are then informed. Walkers can be warned on the basis of the recordings – and it is technically possible to protect their privacy at the same time. In an expert discussion with a representative of KORA, the student gained important insights into wildlife monitoring, specifically bear monitoring, and with a survey he gauged the attitudes of parts of the population. Building on Ali Yürekkirmaz's thesis, submitted in January 2022, an algorithm for bears could be developed and an ANIFACE system implemented and evaluated in the Alps.
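At its core, such a system is a re-identification pipeline: detect a bear face in a camera frame, compute an embedding, match it against a gallery of known individuals, and trigger the appropriate notification. The following Python sketch illustrates the matching-and-alert step under simple assumptions; the gallery, the embedding dimension, the threshold, and the actions are hypothetical and not taken from the thesis.

```python
# Minimal sketch of the ANIFACE matching-and-alert step.
# All names, thresholds, and actions are hypothetical, not from the thesis.
import numpy as np

GALLERY = {            # known individuals: id -> (face embedding, status)
    "bear_01": (np.random.rand(128), "harmless"),
    "bear_02": (np.random.rand(128), "dangerous"),
}
MATCH_THRESHOLD = 0.8  # minimum cosine similarity to accept an identity

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(embedding: np.ndarray) -> str | None:
    """Return the best-matching known individual, or None if unknown."""
    best_id, best_sim = None, MATCH_THRESHOLD
    for bear_id, (ref, _status) in GALLERY.items():
        sim = cosine(embedding, ref)
        if sim > best_sim:
            best_id, best_sim = bear_id, sim
    return best_id

def handle_sighting(embedding: np.ndarray) -> str:
    """Decide what to do after a camera trap produced a face embedding."""
    bear_id = identify(embedding)
    if bear_id is None:
        return "unknown bear: forward recording to experts"
    status = GALLERY[bear_id][1]
    if status == "dangerous":
        return f"{bear_id} is dangerous: warn walkers and notify agencies"
    return f"{bear_id} is harmless: log sighting only"
```

In a real deployment, the embeddings would come from a network trained on bear faces, and the privacy of walkers could be protected, for example, by blurring human faces before any recording is stored or forwarded.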

Talking with Animals

We use our natural language, facial expressions, and gestures when communicating with our fellow humans. Some of our social robots also have these abilities, and so we can converse with them in the usual way. Many highly evolved animals have a language in which there are sounds and signals with specific meanings. Some of them – like chimpanzees or gorillas – have abilities in facial expression and gesture comparable to ours. Britt Selvitelle and Aza Raskin, founders of the Earth Species Project, want to use machine learning to enable communication between humans and animals. Languages, they believe, can be represented as geometric structures and translated by matching those structures to each other (the first sketch below illustrates this idea). They say they have started working on whale and dolphin communication. Over time, the focus will broaden to primates, corvids, and others. The two scientists would have to study not only vocal language but also facial expressions, gestures, and other movements that carry meaning – a challenge they are well aware of. In addition, aspects of animal communication that are inaudible or invisible to humans would need to be considered. Britt Selvitelle and Aza Raskin believe that translation would open up the world of animals – but it could also be the other way around: they may first have to open up the world of animals in order to decode their languages. However, should there be breakthroughs in this area, it would be an opportunity for animal welfare. For example, social robots, autonomous cars, wind turbines, and other machines could use animal languages alongside mechanical signals and human commands to instruct, warn, and scare away dogs, elks, pigs, and birds. Machine ethics has been developing animal-friendly machines for years. Among other things, researchers in this field combine sensors with decision trees, as the second sketch below shows; depending on the situation, braking and evasive maneuvers are initiated. Maybe one day the autonomous car will be able to avoid an accident by calling out in deer dialect: Hello deer, go back to the forest!
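The geometric idea can be made concrete. If the utterances of two communication systems are embedded as point clouds in vector spaces, translation becomes the problem of aligning one cloud with the other. A standard tool for such alignment is orthogonal Procrustes, shown below in a minimal Python sketch; the toy data and the assumption of a small seed dictionary of paired meanings are illustrative and not taken from the Earth Species Project.

```python
# Toy sketch of translation as geometry: align two embedding spaces
# with orthogonal Procrustes, then translate by nearest neighbor.
# The data and the seed dictionary are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Embeddings of "concepts" in language A and language B (rows = items).
A = rng.normal(size=(50, 16))
Q_true, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # hidden rotation
B = A @ Q_true + 0.01 * rng.normal(size=(50, 16))    # B: rotated, noisy copy

# Suppose the first 20 items form a known seed dictionary (paired meanings).
seed_A, seed_B = A[:20], B[:20]

# Orthogonal Procrustes: find rotation W minimizing ||seed_A @ W - seed_B||.
U, _, Vt = np.linalg.svd(seed_A.T @ seed_B)
W = U @ Vt

def translate(idx: int) -> int:
    """Map item idx from space A to its nearest neighbor in space B."""
    query = A[idx] @ W
    dists = np.linalg.norm(B - query, axis=1)
    return int(np.argmin(dists))

# Items outside the seed dictionary should still map to themselves.
print(sum(translate(i) == i for i in range(20, 50)), "of 30 correct")
```

Unsupervised variants dispense with the seed dictionary and match the two structures directly – closer to what would be needed for species where no paired data can exist.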
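The animal-friendly machines mentioned at the end of the paragraph follow a simple pattern: sensor readings feed a decision tree whose leaves trigger actions such as braking, evading, or emitting a warning signal. A minimal sketch, with invented animal classes, distances, and actions (none of them taken from the cited projects):

```python
# Minimal sketch of an animal-friendly machine's decision logic:
# a hand-built decision tree over sensor readings. Classes, thresholds,
# and actions are invented for illustration.

def decide(animal: str | None, distance_m: float, speed_kmh: float) -> str:
    """Map a detected animal and the vehicle state to an action."""
    if animal is None:
        return "continue"
    if animal in ("deer", "elk"):
        if distance_m < 30:
            return "emergency brake"
        if distance_m < 80:
            return "slow down and emit species-specific warning call"
        return "monitor"
    if animal in ("hedgehog", "toad"):
        # small animals: an evasive maneuver is safer than hard braking
        return "evade" if speed_kmh < 50 else "brake gently, then evade"
    return "slow down"

print(decide("deer", 60.0, 90.0))     # -> slow down and emit ...
print(decide("hedgehog", 5.0, 30.0))  # -> evade
```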

Dogs Obey Social Robots

Social robots are giving the field of animal-machine interaction new research topics. Meiying Qin from Yale University and her co-authors have brought together a Nao robot and a dog. From the abstract of their paper: “In two experiments, we investigate whether dogs respond to a social robot after the robot called their names, and whether dogs follow the ‘sit’ commands given by the robot. We conducted a between-subjects study (n = 34) to compare dogs’ reactions to a social robot with a loudspeaker. Results indicate that dogs gazed at the robot more often after the robot called their names than after the loudspeaker called their names. Dogs followed the ‘sit’ commands more often given by the robot than given by the loudspeaker. The contribution of this study is that it is the first study to provide preliminary evidence that 1) dogs showed positive behaviors to social robots and that 2) social robots could influence dog’s behaviors. This study enhance the understanding of the nature of the social interactions between humans and social robots from the evolutionary approach. Possible explanations for the observed behavior might point toward dogs perceiving robots as agents, the embodiment of the robot creating pressure for socialized responses, or the multimodal (i.e., verbal and visual) cues provided by the robot being more attractive than our control condition.” (Abstract) You can read the full paper via dl.acm.org/doi/abs/10.1145/3371382.3380734.

Imitating the Agile Locomotion Skills of Four-legged Animals

Imitating the agile locomotion skills of animals has been a longstanding challenge in robotics. Manually designed controllers have been able to reproduce many complex behaviors, but building such controllers is time-consuming and difficult. According to Xue Bin Peng (Google Research and University of California, Berkeley) and his co-authors, reinforcement learning provides an interesting alternative for automating the manual effort involved in the development of controllers. In their work, they present “an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals” (Xue Bin Peng et al. 2020). They show “that by leveraging reference motion data, a single learning-based approach is able to automatically synthesize controllers for a diverse repertoire of behaviors for legged robots” (Xue Bin Peng et al. 2020). By incorporating sample-efficient domain adaptation techniques into the training process, their system “is able to learn adaptive policies in simulation that can then be quickly adapted for real-world deployment” (Xue Bin Peng et al. 2020). For demonstration purposes, the scientists trained “a quadruped robot to perform a variety of agile behaviors ranging from different locomotion gaits to dynamic hops and turns” (Xue Bin Peng et al. 2020).
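At the heart of such motion-imitation systems is a reward that scores, at every timestep, how closely the simulated robot tracks the reference motion. The following Python sketch shows one common form of such a reward – an exponentiated tracking error over joint positions and velocities; the weights and scale factors are invented, and the exact terms in Peng et al. (2020) differ in detail.

```python
# Illustrative motion-imitation reward: the policy is rewarded for
# tracking the reference animal motion. Weights and scale factors are
# invented; the exact formulation in Peng et al. (2020) differs in detail.
import numpy as np

def imitation_reward(q, qd, q_ref, qd_ref, w_pose=0.6, w_vel=0.4):
    """Exponentiated tracking error between robot and reference state.

    q, qd         -- robot joint positions and velocities (np.ndarray)
    q_ref, qd_ref -- reference motion at the same timestep
    Returns a reward in (0, 1]; 1 means perfect tracking.
    """
    pose_err = np.sum((q - q_ref) ** 2)
    vel_err = np.sum((qd - qd_ref) ** 2)
    r_pose = np.exp(-2.0 * pose_err)   # scale factors are hypothetical
    r_vel = np.exp(-0.1 * vel_err)
    return w_pose * r_pose + w_vel * r_vel

# Example: a 12-joint quadruped, slightly off the reference pose.
q_ref, qd_ref = np.zeros(12), np.zeros(12)
q, qd = q_ref + 0.05, qd_ref + 0.1
print(round(imitation_reward(q, qd, q_ref, qd_ref), 3))
```

During training, this reward is maximized with a standard reinforcement learning algorithm in simulation; the domain adaptation mentioned above then helps carry the learned policy over to the physical robot.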