HMI as a Complex Socio-Linguistic Practice: The Interplay of Anthropomorphisation and the Degree of Relational Work in Users’ Linguistic Behaviour

The talk presents my socio-linguistic model of Human-Machine Interaction (HMI; Lotze 2025), which examines the interplay of technological affordances, users’ cognitive awareness, and language strategies.
The model features three continua. The first dimension evaluates a system’s degree of anthropomorphism, including linguistic anthropomorphism, and thus integrates Ruijten et al.’s (2014/2019) Rasch-type scale of the human perception of anthropomorphic designs. The second dimension explores users’ cognitive awareness, ranging from pre-conscious alignment to conscious strategies. The third dimension depicts a continuum of user language, from pre-conscious alignment (Gandolfi et al. 2023) and linguistic routines and behaviours transferred from human-human communication (HHC; CASA: Reeves and Nass 1996; MASA: Lombard and Xu 2021) to various simplification strategies such as robot-directed speech (RDS), simplified registers (SR; Fischer 2011), and computer talk (CT; Zoeppritz 1985).
Within this framework, we discuss anthropomorphisation and relational work in users’ linguistic behaviour towards the AI as an example that illustrates the validity of the model, and we introduce our second model, which focuses on the interconnectedness of anthropomorphisation and the degree of politeness in users’ speech (Lotze & Greilich, in prep.). The talk argues from a diachronic perspective that the evolution of HMI language is influenced not only by anthropomorphic technology and user awareness but also by language variation, language change, and societal factors. Accordingly, the results of numerous studies conducted by my own research group between 2000 and the present (with a particular focus on Lotze 2016) will be summarized and interpreted in light of the model.

References
Fischer, Kerstin. 2011. “How people talk with robots: Designing dialog to reduce user uncertainty.” AI Magazine 32 (4): 31–38.
Gandolfi, Greta, Michael J. Pickering, and Simon Garrod. 2023. “Mechanisms of alignment: shared control, social cognition and metacognition.” Philosophical Transactions of the Royal Society B 378 (1870).
Lombard, Matthew, and Kun Xu. 2021. “Social responses to media technologies in the 21st century: The media are social actors paradigm.” Human-Machine Communication 2: 29–55.
Lotze, Netaya. 2016. Chatbots: Eine linguistische Analyse. Frankfurt: Peter Lang.
Lotze, Netaya. 2025. “Human-Machine Interaction as a Complex Socio-Linguistic Practice.” Media in Action 7: 105.
Lotze, Netaya, and Anna Greilich. In prep. “Sprachliche Höflichkeit gegenüber Chatbots und Amazon Alexa – zur Rolle der medialen Modalität für die sprachliche Realisierung von Höflichkeitsmarkern in Mensch-Maschine-Interaktion.” In SPRACHE+RESPEKT, edited by Peter Schlobinski, Jens Runkehl, and Torsten Siever. Vol. 16. Gesellschaft für deutsche Sprache. Olms.
Reeves, Byron, and Clifford Nass. 1996. The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places. Cambridge University Press.
Ruijten, Peter A., Diane H. L. Bouten, Dana C. J. Rouschop, Jaap Ham, and Cees J. H. Midden. 2014. “Introducing a Rasch-type anthropomorphism scale.” In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, 280–81.