AI Workshop at the University of Potsdam

In 2018, Dr. Yuefang Zhou and Prof. Dr. Martin Fischer initiated the first international workshop on intimate human-robot relations at the University of Potsdam, “which resulted in the publication of an edited book on developments in human-robot intimate relationships”. This year, Prof. Dr. Martin Fischer, Prof. Dr. Rebecca Lazarides, and Dr. Yuefang Zhou are organizing the second edition. “As interest in the topic of humanoid AI continues to grow, the scope of the workshop has widened. During this year’s workshop, international experts from a variety of different disciplines will share their insights on motivational, social and cognitive aspects of learning, with a focus on humanoid intelligent tutoring systems and social learning companions/robots.” (Website Embracing AI) The international workshop “Learning from Humanoid AI: Motivational, Social & Cognitive Perspectives” will take place on 29 and 30 November 2019 at the University of Potsdam. Keynote speakers are Prof. Dr. Tony Belpaeme, Prof. Dr. Oliver Bendel, Prof. Dr. Angelo Cangelosi, Dr. Gabriella Cortellessa, Dr. Kate Devlin, Prof. Dr. Verena Hafner, Dr. Nicolas Spatola, Dr. Jessica Szczuka, and Prof. Dr. Agnieszka Wykowska. Further information is available at embracingai.wordpress.com/.

Virtual Reality for Cows?

In November 2019, various media reported on very special experiments with cows in Russia. Pictures are circulating that show an animal wearing a virtual reality (VR) headset. The headset could reduce anxiety and increase milk yield if it showed a pleasant environment – at least that is the media’s assumption. But, according to The Verge, “it’s not at all clear whether this is a genuine trial or an elaborate marketing stunt” (The Verge, 26 November 2019). At the moment, there is hardly any evidence that VR works for cows. There is no doubt that it works for humans, at least in the context of marketing: they could wear VR glasses to see a landscape with cows and would then believe that most cows have a good life. But this good life does not exist. Cows suffer from what humans do to them – some more, some less. “At the end of the day, what we can say is that someone took the time to make at least one mock-up virtual reality headset for a cow and took these pictures. We don’t need to milk the story any more than that.” (The Verge, 26 November 2019)

Human, Medicine and Society

In his lecture “Service Robots in Health Care” at the Orient-Institut Istanbul on 18 December 2019, Prof. Dr. Oliver Bendel from Zurich, Switzerland will deal with care robots as well as therapy and surgery robots. He will present well-known and lesser-known examples and clarify the goals, tasks and characteristics of these service robots in the healthcare sector. Afterwards he will investigate current and future functions of care robots, including sexual assistance functions. Against this background, the lecture will consider the perspectives of both information ethics and machine ethics. In the end, it should become clear which robot types and prototypes or products are available in health care, which purposes they fulfil, which functions they assume, how the healthcare system changes through their use, and which implications and consequences this has for the individual and society. The program of the series “Human, medicine and society: past, present and future encounters” can be downloaded here.

Robots that Learn as They Go

“Alphabet X, the company’s early research and development division, has unveiled the Everyday Robot project, whose aim is to develop a ‘general-purpose learning robot.’ The idea is to equip robots with cameras and complex machine-learning software, letting them observe the world around them and learn from it without needing to be taught every potential situation they may encounter.” (MIT Technology Review, 23 November 2019) This was reported by MIT Technology Review on 23 November 2019 in the article “Alphabet X’s ‘Everyday Robot’ project is making machines that learn as they go”. The approach of Alphabet X seems to be well thought-out and target-oriented. In a way, it is oriented towards human learning. One could also teach robots human language in this way. With the help of microphones, cameras and machine learning, they would gradually understand us better and better. For example, they observe how we point to and comment on a person. Or they perceive that we point to an object and say a certain term – and after some time they conclude that this is the name of the object. However, such frameworks pose ethical and legal challenges. One cannot simply declare whole cities to be test areas for such robots: the result would be comprehensive surveillance in public spaces. Specially established test areas, on the other hand, would probably not have the same benefits as “natural environments”. Many questions still need to be answered.
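To make the learning mechanism concrete, here is a minimal sketch of cross-situational word learning in Python. It illustrates the general idea described above, not Alphabet X’s actual method; the scenes, function names and the simple co-occurrence counting are assumptions made for this example.

```python
from collections import defaultdict

# A minimal sketch of cross-situational word learning (illustrative only,
# not Alphabet X's actual method): the robot counts how often each heard
# word co-occurs with each object it sees; over many scenes the true
# label accumulates the strongest association.
cooccurrence = defaultdict(lambda: defaultdict(int))

def observe(words_heard, objects_seen):
    """Record one scene: every heard word co-occurs with every seen object."""
    for word in words_heard:
        for obj in objects_seen:
            cooccurrence[word][obj] += 1

def best_guess(word):
    """Return the object most often present when this word was heard."""
    candidates = cooccurrence[word]
    return max(candidates, key=candidates.get) if candidates else None

# Simulated everyday scenes: "cup" keeps being said while a cup is visible.
observe(["look", "a", "cup"], ["cup", "table"])
observe(["the", "cup", "is", "red"], ["cup", "chair"])
observe(["put", "the", "cup", "down"], ["cup", "table"])

print(best_guess("cup"))  # -> "cup"
```

Real robots would have to cope with noisy perception and far richer representations, but the underlying principle – statistical association across many everyday situations – is the same.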

Holograms that You can Feel and Hear

A hologram is a three-dimensional image produced with holographic techniques, which has a physical presence in real space. The term “holography” describes procedures that exploit the wave character of light to achieve a realistic representation; interference and coherence play an important role here. Colloquially, certain three-dimensional projections are also referred to as holograms. According to Gizmodo, researchers at the University of Sussex have created animated 3D holograms that can not only be seen from any angle but also be touched. “The researchers took an approach that was similar to one pioneered by engineers at Utah’s Brigham Young University who used invisible lasers to levitate and manipulate a small particle in mid-air, which was illuminated with RGB lights as it zipped around to create the effect of a 3D image. What’s different with the University of Sussex’s holograms is that instead of lasers, two arrays of ultrasonic transducers generating soundwaves are used to float and control a lightweight polystyrene bead just two millimeters in size.” (Gizmodo, 14 November 2019) A video by the Guardian shows quite impressive examples. Further information is available on the Gizmodo website.
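A back-of-the-envelope calculation shows why such a small bead is a plausible choice. The numbers are assumptions made for illustration – the article does not state the transducer frequency, so 40 kHz, a common value for ultrasonic arrays, is assumed here:

```python
# Node spacing of an ultrasonic standing wave in air (illustrative figures:
# 40 kHz is an assumed, typical transducer frequency; the article does not
# state the actual value).
SPEED_OF_SOUND = 343.0   # m/s, in air at about 20 degrees Celsius
FREQUENCY = 40_000.0     # Hz, assumed transducer frequency

wavelength = SPEED_OF_SOUND / FREQUENCY  # about 8.6 mm
node_spacing = wavelength / 2            # pressure nodes sit half a wavelength apart

print(f"wavelength:   {wavelength * 1000:.1f} mm")    # 8.6 mm
print(f"node spacing: {node_spacing * 1000:.1f} mm")  # 4.3 mm
```

Under these assumptions the pressure nodes of the standing wave are about 4.3 millimeters apart, so a two-millimeter bead fits comfortably inside a node, where it can be held and steered by shifting the phases of the individual transducers.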

Robophilosophy 2020

“Once we place so-called ‘social robots’ into the social practices of our everyday lives and lifeworlds, we create complex, and possibly irreversible, interventions in the physical and semantic spaces of human culture and sociality. The long-term socio-cultural consequences of these interventions is currently impossible to gauge.” (Website Robophilosophy Conference) With these words the next Robophilosophy conference is announced. It will take place from 18 to 21 August 2020 in Aarhus, Denmark. The CfP raises questions like these: “How can we create cultural dynamics with or through social robots that will not impact our value landscape negatively? How can we develop social robotics applications that are culturally sustainable? If cultural sustainability is relative to a community, what can we expect in a global robot market? Could we design human-robot interactions in ways that will positively cultivate the values we, or people anywhere, care about?” (Website Robophilosophy Conference) In 2018, Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Joanna Bryson, and Oliver Bendel were keynote speakers. In 2020, Catrin Misselhorn, Selma Sabanovic, and Shannon Vallor will be presenting. More information via conferences.au.dk/robo-philosophy/.

Talk to Transformer

Artificial intelligence is spreading into more and more application areas. American scientists have now developed a system that can continue texts: “Talk to Transformer”. The user enters a few sentences – and the AI system adds further passages. “The system is based on a method called DeepQA, which is based on the observation of patterns in the data. This method has its limitations, however, and the system is only effective for data on the order of 2 million words, according to a recent news article. For instance, researchers say that the system cannot cope with the large amounts of data from an academic paper. Researchers have also been unable to use this method to augment texts from academic sources. As a result, DeepQA will have limited application, according to the researchers. The scientists also note that there are more applications available in the field of text augmentation, such as automatic transcription, the ability to translate text from one language to another and to translate text into other languages.” The sentences in quotation marks are not from the author of this blog; they were written by the AI system itself. You can try it via talktotransformer.com.
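For readers who want to reproduce the effect locally: Talk to Transformer is a web front end for OpenAI’s GPT-2 language model. The following minimal sketch generates a text continuation with the open-source GPT-2 weights via the Hugging Face transformers library – a tooling choice made for this example, not necessarily the website’s actual stack:

```python
# Minimal text continuation with GPT-2 via the Hugging Face transformers
# library (a tooling choice for this example, not the website's own code).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is spreading into more and more application areas."
result = generator(prompt, max_length=60, do_sample=True, num_return_sequences=1)

print(result[0]["generated_text"])  # the prompt plus a model-written continuation
```

Because the model samples from a probability distribution over possible next words, each run produces a different continuation – which also explains why the quoted passage above sounds plausible while getting the facts wrong.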

Canton of Geneva Bans Uber

According to SRF, Tages-Anzeiger and swissinfo.ch, the canton of Geneva prohibits Uber from continuing its activities in the canton. It now classifies the transportation intermediary as an employer, thereby obliging it to pay social benefits to its drivers if it wants to continue operating. “Speaking to Swiss public television SRF, the head of the cantonal government Mauro Poggia said that the ride-hailing service was subject to the applicable taxi and transport law. This means Uber is currently not fulfilling its legal obligations and will have to hire its drivers and pay their social benefits, such as pensions, like other taxi companies. According to checks carried out by the canton of Geneva, criteria such as fares, invoices and even an evaluation system for drivers are used at Uber. For this reason, the authorities rejected the arguments of Uber’s lawyers that their drivers were self-employed.” (swissinfo.ch, 1 November 2019) It remains to be seen whether Uber will hire its drivers and pay social benefits or appeal against the decision in a Swiss court.