A markup language is a machine-readable language for structuring and formatting texts and other data. The best known is the Hypertext Markup Language (HTML). Other well-known examples are SSML (for the adaptation of synthetic voices) and AIML (for artificial intelligence applications). We use markup languages to describe properties, affiliations and forms of representation of sections of a text or set of data. This is usually done by marking them with tags. In addition to tags, attributes and values can also be important. A student paper at the School of Business FHNW will describe and compare known markup languages. It will examine whether there is room for further artifacts of this kind. A markup language suitable for marking up the moral aspects of written and spoken language, as well as for the morally adequate display of pictures, videos and animations and the playing of sounds, could be called MOML (Morality Markup Language). Is such a language possible and helpful? Can it be used for moral machines? The paper will also deal with these questions. The supervisor of the project, which will last until the end of the year, is Prof. Dr. Oliver Bendel. Since 2012, he and his teams have created formulas and annotated decision trees for moral machines, as well as a number of moral machines themselves, such as GOODBOT, LIEBOT, BESTBOT, and LADYBIRD.
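To illustrate the three building blocks mentioned above (tags, attributes, and values), here is a minimal sketch of what a hypothetical MOML fragment might look like and how it could be read with Python's standard XML parser. The element and attribute names are invented for illustration only; no such specification exists yet.

# A hypothetical MOML fragment; tag and attribute names are invented
# for illustration and are not part of any existing specification.
import xml.etree.ElementTree as ET

moml_fragment = """
<moml>
  <utterance politeness="high" profanity="none">Please take a seat.</utterance>
  <image violence="low" nudity="none" show="true">family.jpg</image>
</moml>
"""

root = ET.fromstring(moml_fragment)

# Walk the tree and print each tag together with its attributes (name-value
# pairs) and its text content.
for element in root:
    print(element.tag, element.attrib, (element.text or "").strip())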
Towards Robots with Artificial Skin
“Sensitive synthetic skin enables robots to sense their own bodies and surroundings – a crucial capability if they are to be in close contact with people. Inspired by human skin, a team at the Technical University of Munich (TUM) has developed a system combining artificial skin with control algorithms and used it to create the first autonomous humanoid robot with full-body artificial skin.” (Press Release TUM, 10 October 2019) The robot skin consists of hexagonal cells which are about the size of a two-euro coin. Each of them is equipped with a microprocessor and sensors to detect contact, acceleration, proximity, and temperature. “Such artificial skin enables robots to perceive their surroundings in much greater detail and with more sensitivity. This not only helps them to move safely. It also makes them safer when operating near people and gives them the ability to anticipate and actively avoid accidents.” (Press Release TUM, 10 October 2019) The artificial skin could become important for service robots of all kinds, but also for certain industrial robots (Photo: Department of Electrical and Computer Engineering, Astrid Eckert).
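Purely as an illustration of the cell architecture described above (one microprocessor plus sensors for contact, acceleration, proximity, and temperature per hexagonal cell), a single reading might be modelled as follows; this is a sketch under assumed names and does not mirror the TUM implementation.

from dataclasses import dataclass

@dataclass
class SkinCellReading:
    # One sample from a single hexagonal cell; field names are hypothetical.
    cell_id: int
    contact: bool          # contact detected
    acceleration: float    # in m/s^2
    proximity: float       # normalized 0..1
    temperature: float     # in degrees Celsius

reading = SkinCellReading(cell_id=42, contact=False,
                          acceleration=0.1, proximity=0.8, temperature=23.5)
print(reading)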
Interpretable AI for Well-Being
The papers of the AAAI 2019 Spring Symposium “Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness” were published in October 2019. The participants had met at Stanford University at the end of March 2019 to present and discuss their findings. Session 5 (“Social Embeddedness”) includes the following publications: “Are Robot Tax, Basic Income or Basic Property Solutions to the Social Problems of Automation?” (Oliver Bendel), “Context-based Network Analysis of Structured Knowledge for Data Utilization” (Teruaki Hayashi, Yukio Ohsawa), “Extended Mind, Embedded AI, and ‘the Barrier of Meaning’” (Sadeq Rahimi), “Concept of Future Prototyping Methodology to Enhance Value Creation within Future Contexts” (Miwa Nishinaka, Yusuke Kishita, Hisashi Masuda, Kunio Shirahada), and “Maintaining Knowledge Distribution System’s Sustainability Using Common Value Auctions” (Anas Al-Tirawi, Robert G. Reynolds). The papers can be downloaded via ceur-ws.org/Vol-2448/.
Honey, I shrunk the AI
Some months ago, researchers at the University of Massachusetts showed the climate toll of machine learning, especially deep learning. Training Google’s BERT, with its 340 million parameters, emitted nearly as much carbon as a round-trip flight between the East and West coasts. According to Technology Review, the trend could also accelerate the concentration of AI research in the hands of a few big tech companies. “Under-resourced labs in academia or countries with fewer resources simply don’t have the means to use or develop such computationally expensive models.” (Technology Review, 4 October 2019) In response, some researchers are focusing on shrinking existing models without losing their capabilities. The magazine wrote enthusiastically: “Honey, I shrunk the AI” (Technology Review, 4 October 2019). The advantages concern not only the environment but also access to state-of-the-art AI. According to Technology Review, tiny models will help bring the latest AI advancements to consumer devices. “They avoid the need to send consumer data to the cloud, which improves both speed and privacy. For natural-language models specifically, more powerful text prediction and language generation could improve myriad applications like autocomplete on your phone and voice assistants like Alexa and Google Assistant.” (Technology Review, 4 October 2019)
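One common way to shrink a model without giving up too much capability is knowledge distillation, in which a small “student” network is trained to reproduce the outputs of a large “teacher” (this is, for instance, how DistilBERT was derived from BERT). The following sketch, assuming PyTorch and toy tensors standing in for real model outputs, shows a typical distillation loss; it illustrates the general technique, not the specific methods reported by Technology Review.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: the student mimics the teacher's softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: the student still learns from the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random tensors in place of model outputs.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))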