Science fiction regularly portrays deep friendships or even romantic relationships between a human and a machine, e.g., Dolores and William (Westworld, 2016), Joi and K (Blade Runner 2049, 2017), or Poe and Takeshi Kovacs (Altered Carbon, 2018 and 2020). In fact, for several years now, a development approach has aimed at creating artificial companions. The so-called companion paradigm focuses on social abilities, building systems that adapt their behavior to the user and his/her environment [1]. This approach enables a high degree of individualization and customization. The paradigm principally intends to reduce the complexity of innovative technology, which otherwise can go hand in hand with a lack of user-friendliness and frustrated users [2].

In 2015, Ulm University hosted the International Symposium on Companion-Technology (ISCT), whose proceedings give a broad overview of the research issues in this field [3]. Some of the research questions discussed there addressed data input modalities for recognizing emotional states and the user's current situation, as well as dialog strategies that allow artificial companions to build a trustworthy relationship. Although the paradigm is already pursued in a quite interdisciplinary manner, Hepp (2020) has recently called on communication and media scholars to take a more influential part in these discussions, since human-machine communication in particular has so far been explored without pronounced participation of communication researchers [4]. In terms of perception, intriguing questions include, for example, whether there are gradations among different companion systems, and what effects these have on the interaction and communication with the technology. Such questions have to be discussed not only by computer scientists but also by psychology and philosophy scholars, especially when it comes to the question of how human-machine relationships will develop in the long run: Will companion systems drift into unemotional and function-centric routines, as with other technologies, or can they become our forever friends?
References
[1] Wahl M., Krüger S., & Frommer J. (2015). Well-intended, but not Well Perceived: Anger and Shame in Reaction to an Affect-oriented Intervention Applied in User-Companion Interaction. In: Biundo-Stephan S., Wendemuth A., & Rukzio E. (Eds.), Proceedings of the 1st International Symposium on Companion-Technology (ISCT 2015), September 23rd-25th, Ulm University, Germany, pp. 114-119. https://doi.org/10.18725/OPARU-3252
[2] Biundo S., Höller D., Schattenberg B., & Bercher P. (2016). Companion-Technology: An Overview. KI – Künstliche Intelligenz, 30(1), 11–20. https://doi.org/10.1007/s13218-015-0419-3
[3] Biundo-Stephan S., Wendemuth A., & Rukzio E. (Eds.). (2015). Proceedings of the 1st International Symposium on Companion-Technology (ISCT 2015), September 23rd-25th, Ulm University, Germany. https://doi.org/10.18725/OPARU-3252
[4] Hepp A. (2020). Artificial companions, social bots and work bots: Communicative robots as research objects of media and communication studies. Media, Culture & Society. Advance online publication. https://doi.org/10.1177/0163443720916412