On March 18, 2024, the kick-off meeting for the project “The Animal Whisperer” took place at the FHNW School of Business. It was initiated by Prof. Dr. Oliver Bendel, who has been working on animal-computer interaction and animal-machine interaction for many years. Nick Zbinden, a student of business information systems, was recruited to work on the project. As part of his final thesis, he will develop three GPT-4-based applications that can be used to analyze the body language and environment of cows, horses and dogs. The aim is to avert danger to humans and animals. For example, a hiker can receive a recommendation on their smartphone not to cross a pasture if a mother cow and her calves are present. All they have to do is call up the application and take a photo of the area. Nick Zbinden will evaluate the literature and conduct several expert interviews to find out more about the situation and behavior of farm and domestic animals. He will demonstrate the possibilities, but also the limitations, of multimodal language models in this context. The results will be available in August 2024 (Image: DALL-E 3).
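To illustrate the pipeline described above, the following minimal sketch shows how a photo taken by a hiker could be passed to a multimodal GPT-4 model for a risk assessment. It is not the project’s actual implementation; the model name, the prompt wording, and the function name are illustrative assumptions.

    # Minimal sketch (not the project's implementation): send a pasture photo
    # to a multimodal GPT-4 model and ask for a brief risk assessment.
    # Model name, prompt, and function name are illustrative assumptions.
    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def assess_pasture(image_path: str) -> str:
        # Encode the hiker's photo as base64 for the API request.
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("utf-8")

        response = client.chat.completions.create(
            model="gpt-4o",  # assumed multimodal GPT-4 model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Analyze the body language of the animals in this "
                             "photo. Is a mother cow with calves present? "
                             "Should a hiker cross this pasture? Answer briefly."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    # Example usage (hypothetical file name):
    # print(assess_pasture("pasture.jpg"))

In such a setup, the recommendation would simply be the model’s text response, displayed to the hiker on their smartphone.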
Project on the Potential of Be My AI
In a recent project, Prof. Dr. Oliver Bendel investigates the capabilities and limitations of the Be My AI feature of the Be My Eyes app. The GPT-4-based feature falls within the field of visual assistance for blind and visually impaired people. The study describes and evaluates Oliver Bendel’s own tests and complements them with an ethical and social discussion. It reveals the power of the tool, which can analyze still images in an astonishing way. Those affected gain new independence and a new perception of their environment. At the same time, they are dependent on the worldview and morality of the provider or developer, who dictates or withholds certain descriptions. Despite all the remaining weaknesses and errors, it is clear that a paradigm shift has taken place. In its outlook, the study suggests that the analysis of moving images will be the next significant advance. It can justifiably be claimed that generative AI can fundamentally improve and change the situation of blind and visually impaired people in various ways. The project’s results will be published in the spring of 2024.