Robots Dancing Like Bees

Robot-robot communication and interaction usually take place via networks; spoken language can also be used. In certain situations, however, these methods reach their limits. During a rescue operation in a disaster area, for example, communication via radio signals may be impossible. With this in mind, Kaustubh Joshi from the University of Maryland and Abhra Roy Chowdhury from the Indian Institute of Science (IISc) have developed an alternative approach. Their paper states: “This research presents a novel bio-inspired framework for two robots interacting together for a cooperative package delivery task with a human-in-the-loop. It contributes to eliminating the need for network-based robot-robot interaction in constrained environments. An individual robot is instructed to move in specific shapes with a particular orientation at a certain speed for the other robot to infer using object detection (custom YOLOv4) and depth perception. The shape is identified by calculating the area occupied by the detected polygonal route. A metric for the area’s extent is calculated and empirically used to assign regions for specific shapes and gives an overall accuracy of 93.3% in simulations and 90% in a physical setup. Additionally, gestures are analyzed for their accuracy of intended direction, distance, and the target coordinates in the map. The system gives an average positional RMSE of 0.349 in simulation and 0.461 in a physical experiment.” (Abstract)

This way of interacting and communicating is reminiscent of the bee dance, and indeed the dance served as the model. The paper can be accessed via www.frontiersin.org/articles/10.3389/frobt.2022.915884/full.
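For readers who want to make the shape-identification step more concrete, here is a minimal sketch of the idea described in the abstract: compute the area enclosed by the detected polygonal route (the shoelace formula is one standard choice) and map that area into empirically assigned regions, one per shape. The thresholds, shape labels, and units below are illustrative assumptions, not values from the paper.

def polygon_area(points):
    """Shoelace formula: area enclosed by an ordered list of (x, y) vertices."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical area regions; the paper assigns these empirically and the
# abstract does not publish the actual boundaries or units.
SHAPE_REGIONS = [
    (0.0, 0.5, "triangle"),
    (0.5, 1.5, "square"),
    (1.5, float("inf"), "circle"),
]

def classify_shape(route):
    """Map the area of a detected route to a shape label via region lookup."""
    area = polygon_area(route)
    for low, high, label in SHAPE_REGIONS:
        if low <= area < high:
            return label
    return "unknown"

# Example: a unit square traced by the signaling robot.
print(classify_shape([(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> "square"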
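Similarly, the reported positional RMSE figures (0.349 in simulation, 0.461 in the physical experiment; the abstract does not state the units) are a standard root-mean-square error over the inferred target coordinates. A minimal sketch, assuming 2D positions compared against ground truth over repeated trials:

import math

def positional_rmse(estimated, ground_truth):
    """Root-mean-square error between estimated and true (x, y) coordinates."""
    squared_errors = [
        (ex - gx) ** 2 + (ey - gy) ** 2
        for (ex, ey), (gx, gy) in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Example with three hypothetical trials.
est = [(1.0, 2.1), (3.2, 0.9), (5.0, 4.8)]
gt = [(1.0, 2.0), (3.0, 1.0), (5.0, 5.0)]
print(round(positional_rmse(est, gt), 3))  # ~0.183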