The pollution of water by plastic is a topic that has been in the media for a few years now. In 2015, the School of Engineering FHNW and the School of Business FHNW investigated whether a robotic fish – like Oliver Bendel’s CLEANINGFISH (2014) – could be a solution. In 2018, the information and machine ethicist commissioned another work to investigate several existing or planned projects dealing with marine pollution. Rolf Stucki’s final thesis in the EUT study program was based on “a literature research on the current state of the plastics problem worldwide and its effects, but also on the properties and advantages of plastics” (Management Summary, own translation). “In addition, interviews were conducted with representatives of the projects. In order to assess the internal company factors (strengths, weaknesses) and external environmental factors (opportunities, risks), SWOT analyses were prepared on the basis of the answers and the research” (Management Summary). According to Stucki, the results show that most projects are financially dependent on sponsors and donors. Two of them are in the concept phase; they are expected to prove their technical and financial feasibility in the medium term. With regard to social commitment, it can be said that all six projects are very active. A poster shows a comparison (the photos were altered for publication in this blog). WasteShark stands out among these projects as a robot. It is, so to speak, the CLEANINGFISH that has become reality.
Self-Repairing Robots
Soft robots with soft surfaces and soft fingers are in vogue. They can touch people, animals, and plants as well as fragile things in such a way that nothing is hurt or destroyed. However, they are vulnerable themselves. One cut, one punch, and they are damaged. According to the Guardian, a European Commission-funded project is trying to solve this problem. It aims to create “self-healing” robots “that can feel pain, or sense damage, before swiftly patching themselves up without human intervention”. “The researchers have already successfully developed polymers that can heal themselves by creating new bonds after about 40 minutes. The next step will be to embed sensor fibres in the polymer which can detect where the damage is located. The end goal is to make the healing automated, avoiding the current need for heat to activate the system, through the touch of a human hand.” (Guardian, 8 August 2019) Surely the goal will not be for the robots to really suffer. This would have tremendous implications – they would have to be given rights. Rather, it is an imaginary pain – a precondition for the self-repairing process or other reactions.
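A minimal, purely illustrative sketch in Python of the behaviour the Guardian quote describes: embedded sensor fibres report where damage occurs, the controller registers this as an “imaginary pain” signal, and a local self-repair routine is started. All names, signatures and timings below are assumptions for illustration, not the project’s actual software.

```python
# Hypothetical sketch of a self-repair control step; not the EU project's code.
import time
from typing import Optional, Tuple

HEALING_TIME_S = 40 * 60  # the polymers reportedly re-bond after about 40 minutes


def read_damage_location() -> Optional[Tuple[float, float]]:
    """Stand-in for querying the embedded sensor fibres; returns (x, y) or None."""
    return None  # stub: no damage detected


def control_step() -> None:
    location = read_damage_location()
    if location is not None:
        # "Imaginary pain": a damage signal that triggers behaviour,
        # not a felt sensation.
        print(f"Damage sensed at {location}; pausing task, starting self-repair.")
        time.sleep(HEALING_TIME_S)  # wait for the polymer bonds to re-form
        print("Self-repair complete; resuming task.")


if __name__ == "__main__":
    control_step()
```

The point of the sketch is only that “pain” here is nothing more than a sensor-derived signal that changes the robot’s behaviour.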
Deceptive Machines
“AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook’s AI lab and Carnegie Mellon University, has bested some of the world’s top players …” (The Verge, 11 July 2019) According to the magazine, Pluribus was remarkably good at bluffing its opponents. The Wall Street Journal reported: “A new artificial intelligence program is so advanced at a key human skill – deception – that it wiped out five human poker players with one lousy hand.” (Wall Street Journal, 11 July 2019) Of course, bluffing does not have to be equated with cheating – but interesting scientific questions arise in this context. At the conference “Machine Ethics and Machine Law” in 2016 in Krakow, Ronald C. Arkin, Oliver Bendel, Jaap Hage, and Mojca Plesnicar discussed the question on a panel: “Should we develop robots that deceive?” Ron Arkin (who is in military research) and Oliver Bendel (who is not) came to the conclusion that we should – but they had very different arguments. The ethicist from Zurich, inventor of the LIEBOT, advocates free, independent research in which problematic and deceptive machines are also developed, for the sake of an important gain in knowledge – but he is committed to regulating the areas of application (for example dating portals or military operations). Further information about Pluribus can be found in the paper itself, entitled “Superhuman AI for multiplayer poker”.
Happy Hedgehog
Between June 2019 and January 2020, the sixth artifact of machine ethics will be created at the FHNW School of Business. Prof. Dr. Oliver Bendel is the initiator, the client and – together with a colleague – the supervisor of the project. Animal-machine interaction is about the design, evaluation and implementation of (usually more sophisticated or complex) machines and computer systems with which animals interact and communicate and which interact and communicate with animals. While machine ethics has largely focused on humans thus far, it can also prove beneficial for animals. It attempts to conceive moral machines and to implement them with the help of further disciplines such as computer science and AI or robotics. The aim of the project is the detailed description and prototypical implementation of an animal-friendly service robot, more precisely a mowing robot called HAPPY HEDGEHOG (HHH). With the help of sensors and moral rules, the robot should be able to recognize hedgehogs (especially young animals) and initiate appropriate measures (interrupting its work, driving the hedgehog away, informing the owner). The project has similarities with another project carried out earlier, namely LADYBIRD. This time, however, more emphasis will be placed on existing equipment, platforms and software. The first artifact at the university was the GOODBOT – in 2013.
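How such moral rules might look in code can be illustrated with a minimal Python sketch. It assumes a hypothetical detection result from the robot’s sensors (for example a thermal camera combined with image recognition); the class names, thresholds and actions below are illustrative only and are not taken from the HHH project.

```python
# Illustrative rule-based decision logic for a hedgehog-friendly mowing robot.
# All names and values are hypothetical assumptions, not the HHH implementation.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    CONTINUE_MOWING = auto()
    STOP_AND_WAIT = auto()
    DRIVE_ANIMAL_AWAY = auto()
    NOTIFY_OWNER = auto()


@dataclass
class Detection:
    is_hedgehog: bool
    is_juvenile: bool
    confidence: float  # 0.0 .. 1.0, e.g. from a thermal/image classifier


def decide(detection: Detection, threshold: float = 0.6) -> list[Action]:
    """Moral rule set: never keep mowing where a hedgehog might be present."""
    if detection.is_hedgehog and detection.confidence >= threshold:
        actions = [Action.STOP_AND_WAIT, Action.DRIVE_ANIMAL_AWAY]
        if detection.is_juvenile:
            # Young animals may not flee on their own, so the owner is informed.
            actions.append(Action.NOTIFY_OWNER)
        return actions
    return [Action.CONTINUE_MOWING]


if __name__ == "__main__":
    print(decide(Detection(is_hedgehog=True, is_juvenile=True, confidence=0.85)))
```

The sketch is only meant to show that the machine’s “morality” reduces to simple, explicitly stated rules that prioritize the animal’s safety over the mowing task.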
Hologram Girl
The article “Hologram Girl” by Oliver Bendel deals first of all with the current and future technical possibilities of projecting three-dimensional human shapes into space or into vessels. Then examples of holograms from literature and film are mentioned, drawn from the fictionality of past and present. Furthermore, the reality of the present and the future of holograms is included, i.e. what technicians and scientists all over the world are trying to achieve in their eager efforts to close the enormous gap between the imagined and the actual. A very specific aspect is of interest here, namely the idea that holograms serve us as objects of desire, that they step alongside love dolls and sex robots and support us in some way. Different aspects of fictional and real holograms are analyzed, namely pictoriality, corporeality, motion, size, beauty and speech capacity. There are indications that three-dimensional human shapes could be considered as partners, albeit in a very specific sense. The genuine advantages and disadvantages need to be investigated further, and a theory of holograms in love could be developed. The article is part of the book “AI Love You”, edited by Yuefang Zhou and Martin H. Fischer, and was published on 18 July 2019. Further information can be found via link.springer.com/book/10.1007/978-3-030-19734-6.