Children and young people often find it difficult to understand the complex changes that can occur when a parent or family member experiences a brain injury. Cognitive, emotional, behavioural, social, and physical effects may be confusing, sometimes leading to anxiety, misunderstanding, and feelings of isolation. This project, led by The Silverlining Brain Injury Charity, presents an innovative, survivor-led educational resource designed to address these challenges through creativity and storytelling.
The Woodland Friends: Adventures through the Seasons is a children’s book co-created by adult brain injury survivors (“Silverliners”). Each character is represented as a gentle woodland animal, enabling sensitive communication of lived experiences in an accessible and engaging way. Through narrative and imagery, the book explores the realities of brain injury while promoting empathy, resilience, kindness, and practical coping strategies. Central to the story is a strengths-based message: that challenges can be navigated with compassion, patience, and belief in oneself and others.
The project is the outcome of a multidisciplinary creative collaboration across Silverlining therapeutic rehabilitation groups, including Creative Writing, Art, Photography, and Healthy Relationships, with ongoing contributions from Music and Drama groups to expand the resource into a multisensory experience. This co-production model empowers survivors while generating meaningful educational content.
The initiative aims to reduce barriers for families and professionals in supporting children affected by brain injury, offering a valuable tool for schools and community settings. This work demonstrates the potential of creative, survivor-led approaches to bridge gaps in communication and support for people of all ages.
Background and Objectives:
Brain injuries affect the whole family. Reading with children is a common shared activity that provides a natural on-ramp for interactive communication. This case reflection discusses strategy-based interventions for three clients with Cognitive Communication Disorder (CCD) following Acquired Brain Injury (ABI), aimed at improving their ability to share reading with their primary-age children.
Method
We adopted a strategy-training approach using prompts for priming, repetition, discussion, and summarising. Individualised functional strategies were explored. For the third client, we supported access to story-sharing through the development of a video. Clients self-rated their ability to follow and understand stories when reading with their children.
Results
Following strategy training and adoption, clients’ self-rating of their ability to follow a story improved on average (n=2) from 2/10 to 8/10. Clients’ use of written prompts was dependent on attention-switching and short-term memory skills. Clients reported increased confidence and enjoyment in reading with their children, and feeling more positive about their parental role.
Conclusions
The inherent properties of family reading make it a natural means of enhancing interactive communication between parents with CCD and their children, when supported by functional strategies. Experiencing success improved clients’ confidence as parents, and increased children’s opportunities for positive communication experiences. Recommended changes to future practice are discussed. These include a children’s rating scale for reading pleasure and more detailed outcome measures. Exploring the more general therapeutic impact at the impairment level in terms of reading ability and social communication skills is considered.
We work as clinical psychologists, predominantly in the field of paediatric neurorehabilitation. As part of this work, we support families when a parent has a brain injury. This poster will cover themes that come up in this work. We hope to co-produce this poster with a family (to be confirmed at a later date). Themes include:
1. Impact of the Parent’s Brain Injury on the Child-Parent Relationship and Wider Family Dynamics
2. Impact of the Parent’s Brain Injury on the Practical Aspects of Parenting
3. Supporting the Child’s Emotional Wellbeing, Adjustment and Development
4. Facilitating Family Communication and Shared Understanding
5. Safeguarding and Risk Management
6. Advocacy and Systems-Level Support, Including Working With the Child’s School and with Adult Neuropsychology Colleagues.
The poster will expand upon the above themes and we hope to include parents’ and children’s voices. It will give examples of the ways in which we work together with families, including how we adapt evidence-based approaches to this context.
Supporting parents and their children to maintain bonds after a parent has acquired a brain injury is a frequently overlooked area in the practice of neuropsychology in the United Kingdom. In my view, however, it should form an important part of holistic care throughout the patient journey following an acquired brain injury. In the future, I hope to see this ethos of supporting families form a routine part of ABI care from the acute hospital setting through to the longer term, irrespective of whether the person returns to their own home, lives in residential care, or lives in supported living.
My presentation will present some of the practical initiatives I have taken in clinical practice over the years to support parents with a brain injury and their children, both in the NHS and within Brainkind, a third-sector provider of neurorehabilitation services.
It will also address some of the challenges I encountered, potential barriers staff may feel around supporting families, and how I sought to overcome these.
Conversational AI has been applied in several fields, such as counselling, education, and health care. Recent studies have focused on the linguistic and pragmatic features and competence of LLMs and chatbots (Chen et al., 2024). There is, however, little research taking a pragmatic approach to conversational AI in health care, especially on conversations where the AI takes the patient role. This paper examines emerging forms of doctor–patient interaction in which the “patient” role is fulfilled by an AI conversational agent. Three conversations in which healthcare professionals completed simulated clinical consultations using SimFlow.ai, a voice-to-voice generative AI platform, were analysed. The sessions were audio-recorded and automatically transcribed within the platform. Using conversation analysis and pragmatic theory, the study investigated how these interactions approximate or diverge from principles derived from human–human medical encounters, focusing on the following research questions: (1) To what extent do AI patient turns observe or flout Gricean maxims in doctor–patient conversations? (2) What discourse and pragmatic markers characterise AI patient turns in doctor–patient conversation? (3) What is the role of repetition by AI patients and human doctors in these conversations?
Drawing on the Gricean maxims as an analytical framework, we explored the extent to which AI-generated responses display the cooperative principles (quality, quantity, relation, manner) underpinning effective communication (Grice, 1989). The findings highlight moments where AI outputs both observe and flout maxims. Furthermore, repetitions and discourse markers such as ‘you know’ and ‘okay’ were analysed in AI and human turns in the conversations.
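As a rough illustration of the kind of quantitative support such a discourse-marker analysis can draw on, the sketch below counts candidate markers per speaker role in a plain-text transcript. The marker inventory and the "ROLE: utterance" transcript format are assumptions for the example, not the study's actual materials.

```python
import re
from collections import Counter

# Illustrative marker list (an assumption, not the study's inventory).
MARKERS = ["you know", "okay", "well", "i mean", "right"]

def marker_counts(transcript: str) -> dict:
    """Count discourse-marker occurrences per speaker role.

    Expects one turn per line, formatted as "ROLE: utterance".
    """
    counts = {}
    for line in transcript.splitlines():
        role, _, turn = line.partition(":")
        if not turn:
            continue  # skip lines without a "ROLE:" prefix
        turn = turn.lower()
        c = counts.setdefault(role.strip(), Counter())
        for m in MARKERS:
            c[m] += len(re.findall(r"\b" + re.escape(m) + r"\b", turn))
    return counts

demo = (
    "DOCTOR: Okay, so how long have you had the pain?\n"
    "AI-PATIENT: Well, you know, it started about a week ago, okay."
)
print(marker_counts(demo))
```

A real pipeline would of course need disambiguation (e.g. 'okay' as acknowledgement vs. assessment), which is exactly where the qualitative conversation-analytic work comes in.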
The analysis contributes to ongoing discussions about the linguistic, social and relational dimensions of human–machine dialogue. It also offers evidence-based insights relevant to the design of conversational technologies that must operate in highly sensitive, context-dependent domains such as healthcare.
As generative AI chatbots become adept at producing human-like dialogue, concerns are growing about users forming parasocial relationships with them. These relationships have been linked to tragic outcomes including suicide. One factor in such risks is chatbots’ pattern-seeking logic and affiliative design, which can draw users into uncritical engagement. This creates an urgent need to understand the interactional capacities humans need to engage with AI chatbots in healthy and productive ways.
This paper presents an ethnographic study of Dorothy, an L1-Chinese speaker who speaks Italian and English and created her AI chatbot, Chiara, for language and culture learning. Dorothy designed Chiara as an L1-Italian speaker who could also communicate in English and Mandarin, enabling her to develop linguistic and cultural knowledge across all three languages. The study used Sequential-Categorial Analysis to examine Dorothy’s moment-by-moment interactions with Chiara, alongside Dorothy’s autoethnographic reflections and conversations with the researchers.
Over four weeks, Dorothy initially found it difficult to perceive Chiara as a “language practice buddy.” Rather than developing an emotional bond, she framed the relationship as mission-oriented and like a business partnership. Sustaining interaction required what Dorothy described as a “sense of belief,” a conscious performative act that allowed her to treat the chatbot as meaningfully human-like. After two weeks, Dorothy repositioned herself from a struggling interactional partner to a “quality control specialist” or “superior judge,” critically interrogating Chiara’s Western-centric biases and clichéd cultural outputs. Through playful testing and teasing, she increasingly saw Chiara’s “language and culture buddy” persona collapse, and by the end described the chatbot as a “clumsy housekeeper” that requires continual oversight.
The study argues that Dorothy demonstrated CritIC (Critical Interactional Competence): the ability to question, probe, play with, and continually reposition oneself in relation to chatbots. We argue that CritIC is an essential capacity for sustaining critical and creative human-AI interaction.
HMI as a Complex Socio-Linguistic Practice: The Interplay of Anthropomorphisation and the Degree of Relational Work in Users’ Linguistic Behaviour
The talk presents my socio-linguistic model of Human-Machine Interaction (HMI, Lotze 2025), examining the interplay of technological affordances, user cognitive awareness, and language strategies.
The model features three continua: technological affordances, users’ cognitive awareness, and language strategies. The first dimension evaluates the degree of anthropomorphism of the system, including linguistic anthropomorphism, and therefore tries to integrate Ruijten et al.’s (2014/2019) Rasch scale of human perception of anthropomorphic designs. The second dimension explores users’ cognitive awareness, ranging from pre-conscious alignment to conscious strategies. The third dimension depicts a continuum of user language, from pre-conscious alignment (Gandolfi et al. 2023) and linguistic routines and behaviors transferred from HHC (CASA: Reeves and Nass 1996; MASA: Lombard and Xu 2021) to various simplification strategies such as robot-directed speech (RDS), simplified registers (SR) (Fischer 2011), and computer talk (CT) (Zoeppritz 1985).
Within this framework, we discuss anthropomorphisation and relational work in users’ linguistic behaviour towards the AI as an example that illustrates the validity of the model, and introduce our second model, which focuses on the interconnectedness of anthropomorphisation and the degree of politeness in users’ speech (Lotze & Greilich, in prep.). The talk argues from a diachronic perspective that the evolution of HMI language is influenced not only by anthropomorphic technology and user awareness but also by language variation, change, and societal factors. Therefore, the results of numerous studies by my own research group conducted between 2000 and the present (with a particular focus on Lotze 2016) will be summarized and interpreted in light of the model.
References
Fischer, Kerstin. 2011. “How people talk with robots: Designing dialog to reduce user uncertainty.” AI Magazine 32 (4): 31–38.
Gandolfi, Greta, Michael J. Pickering, and Simon Garrod. 2023. “Mechanisms of alignment: shared control, social cognition and metacognition.” Philosophical Transactions of the Royal Society B 378 (1870).
Lombard, Matthew, and Kun Xu. 2021. “Social responses to media technologies in the 21st century: The media are social actors paradigm.” Human-Machine Communication 2: 29–55.
Lotze, Netaya. 2016. Chatbots: Eine linguistische Analyse. Frankfurt: Peter Lang.
Lotze, Netaya. 2025. “Human-Machine Interaction as a Complex Socio-Linguistic Practice.” Media in Action 7: 105.
Lotze, Netaya, and Anna Greilich. In prep. “Sprachliche Höflichkeit gegenüber Chatbots und Amazon Alexa – zur Rolle der medialen Modalität für die sprachliche Realisierung von Höflichkeitsmarkern in Mensch-Maschine-Interaktion.” In Peter Schlobinski, Jens Runkehl, and Torsten Siever (eds.), SPRACHE+RESPEKT, Bd. 16. Gesellschaft für deutsche Sprache. Olms.
Reeves, Byron, and Clifford Nass. 1996. The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places. Cambridge University Press.
Ruijten, Peter A., Diane H. L. Bouten, Dana C. J. Roushop, Jaap Ham, and Cees J. H. Midden. 2014. “Introducing a Rasch-type anthropomorphism scale.” In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, 280–81.
Understanding how older adults communicate with conversational agents is essential for developing reliable speech-based tools for screening and monitoring cognitive decline, including mild cognitive impairment, dementia, or Alzheimer’s disease. Such systems must also be grounded in socially and linguistically informed models of interaction to ensure ecological validity and user acceptance. This study examines whether communicative behaviour differs when older adults interact with a humanoid robot versus a human interlocutor in a structured speech data collection context for cognitive decline screening.
Fifteen participants first completed ten tasks with a human assistant (picture naming, picture description, sustained vowel, etc.), followed by a subset of six tasks with the humanoid robot Furhat. The design is limited by a fixed interaction order (human first, then robot), but this potential confound is considered in the analysis. The study combines quantitative and qualitative approaches to explore interactional differences. Prosodic features were analysed in two tasks approximating spontaneous speech: picture description and procedural description (tea preparation). We employed the measures of wiggliness and spaciousness to characterize f0 contours and intonational style, capturing macro-level variation and potential individual adaptation to different interlocutors (Wehrle, 2022). These analyses were complemented by Bayesian modelling and inference. Additionally, qualitative analyses of speaking behaviour, including conversational fillers, were conducted to examine how interlocutor type may influence task performance.
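The two contour measures can be sketched as follows. These are deliberately simplified, illustrative operationalisations (wiggliness as the rate of direction changes in a voiced f0 contour, spaciousness as the f0 range in semitones), not Wehrle's (2022) exact formulas.

```python
import numpy as np

# Simplified, illustrative operationalisations of the two measures
# (assumptions for this sketch, not Wehrle's definitions).

def wiggliness(f0_hz: np.ndarray, frame_rate_hz: float = 100.0) -> float:
    """Direction changes in the f0 contour per second."""
    diffs = np.diff(f0_hz)
    sign_changes = np.sum(np.diff(np.sign(diffs)) != 0)
    duration_s = len(f0_hz) / frame_rate_hz
    return sign_changes / duration_s

def spaciousness(f0_hz: np.ndarray) -> float:
    """f0 range in semitones relative to the contour minimum."""
    return 12 * np.log2(f0_hz.max() / f0_hz.min())

# Demo: a 1-second contour wobbling 3 times around 200 Hz.
t = np.linspace(0, 1, 100)
contour = 200 + 30 * np.sin(2 * np.pi * 3 * t)
print(wiggliness(contour), spaciousness(contour))
```

In practice both measures would be computed on voiced frames only, after octave-error correction, and then compared across interlocutor conditions with the Bayesian models mentioned above.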
Results show that participants’ prosodic patterns remain largely stable across interlocutors, with minimal group-level differences between human-human and human-robot interactions. Inter-individual variability emerges as an important factor, suggesting that speaker-specific patterns may provide additional relevant insights.
These findings contribute to advancing socially and linguistically informed conversational AI by highlighting the relative stability of speech-based behavioural markers across interlocutor types. They further inform the design of inclusive, accessible systems that account for age-related communicative patterns, while underscoring the importance of controlled experimental designs in future work.
The present contribution provides a theoretical anchor for a rapidly developing and expanding body of research in the field of human-machine interaction (HMI). While HMI is often described as inherently heterogeneous (cf. Zellou and Holliday 2024, Lotze 2025), communication between human interlocutors is heterogeneous as well, and speech accommodation processes are not uniquely applicable to HMI but are a characteristic feature of speech production in humans (Giles et al. 1991). Psycholinguistic research suggests that the underlying mechanisms of speech production remain the same across contexts. According to Levelt (1989), speakers produce utterances in three steps: conceptualization, formulation, and articulation. During these stages, a preverbal message is planned, lexically and grammatically encoded, and articulated as overt speech, while a monitoring mechanism allows speakers to detect and correct errors. Due to the universal nature of the process of speech production in humans, we can assume that users go through the same levels of speech production when interacting with different voice-based artificial interlocutor types, entering the conversations with different goals (Gambino & Liu 2022).
Building on this framework, we compare three types of voice-based conversational systems: voice assistants (e.g., Amazon Alexa), LLM-based assistants (e.g., ChatGPT), and customer service voice bots. Differences in system design and interaction context make the conceptualization stage distinct across these systems, leading to variation in users’ speech planning and production. Thus, while the underlying processes remain uniform, the resulting utterances are heterogeneous. By examining these differences, the study highlights how interaction context and system design shape spoken utterances. From a practical conversational AI design perspective, these insights are relevant for omnichannel conversational design and can inform decisions such as a system’s barge-in behavior, no-input timeouts, turn-taking strategies, and prompt design.
Literature:
Giles, H., Coupland, N., & Coupland, J. (1991). Accommodation theory: Communication, context, and consequence. Contexts of accommodation: Developments in applied sociolinguistics, 1(1), 68.
Gambino, A., & Liu, B. (2022). Considering the context to build theory in HCI, HRI, and HMC: Explicating differences in processes of communication and socialization with social technologies. Human-Machine Communication, 4, 111-130.
Levelt, W. J. M. (1989). Speaking: From intention to articulation. Cambridge, MA: MIT Press/Bradford Books.
Lotze, Netaya (2025). Human-Machine Interaction as a Complex Socio-Linguistic Practice. In: Stephan Habscheid and Tim-Moritz Hector (eds.), Voice Assistants. transcript.
Zellou, G., & Holliday, N. (2024). Linguistic analysis of human-computer interaction. Frontiers in Computer Science, 6, 1384252.
Preserving Pragmatic Integrity: Hesitation Markers, Epistemic Modality and Trust in LLMs
This study investigates how linguistic choices in data preprocessing shape the relational dynamics of trust in human–machine interaction. It focuses on semantic hallucination in Large Language Models (LLMs), advancing the hypothesis that the systematic suppression of hesitation markers—such as filled pauses and reformulations—may affect the integrity of epistemic modality in conversational systems.
In human interaction, hesitations function as pragmatic metadata that signal caution and the limits of knowledge. However, the common editorial “sanitization” of datasets removes these markers, potentially encouraging models such as GPT-4 and Llama-3 to exhibit “certainty hallucination” (overconfidence). As a result, expressions of uncertainty may be rendered as categorical statements, potentially undermining user trust.
To examine this hypothesis, we draw on the Roda Viva Corpus, a historical archive from one of Brazil’s longest-running television interview programs, on air for nearly 40 years. Comprising more than 700 long-form interviews (each exceeding one hour), the corpus provides a dense record of spontaneous speech and complex public debate. We propose a contrastive benchmark comparing original and sanitized transcriptions to assess how the removal of hesitation markers affects models’ probabilistic calibration and semantic entropy.
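A minimal sketch of the "sanitization" contrast described above: stripping filled pauses and common Brazilian Portuguese hesitation markers from an utterance to produce the cleaned benchmark variant. The regex-based marker inventory here is an assumption for illustration, not the study's definitive list.

```python
import re

# Candidate hesitation markers (illustrative, Brazilian Portuguese):
# filled pauses plus the tag "né", optionally followed by a comma/ellipsis.
HESITATIONS = r"\b(ahn?|uhn?|hum|eh|né)\b[,…]?\s*"

def sanitize(utterance: str) -> str:
    """Return the 'sanitized' variant with hesitation markers removed."""
    cleaned = re.sub(HESITATIONS, "", utterance, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

original = "Ahn, eu acho, né, que talvez seja isso."
print(sanitize(original))  # hesitation markers stripped
```

Paired original/sanitized versions built this way could then be fed to the models under comparison to measure shifts in probabilistic calibration and semantic entropy.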
By shifting the analytical focus from factual accuracy alone to the preservation of pragmatic integrity, this study contributes to the design of socially responsible conversational systems. We argue that sensitivity to linguistic markers of uncertainty is crucial for maintaining rapport and ensuring safe interaction in high-responsibility domains such as journalism and law, where distinctions between fact and tentative interpretation are central to the perceived reliability of AI.
Keywords: Human–Machine Interaction; Hesitation; Epistemic Modality; Calibration; Trust; Uncertainty.
References:
BENDER, E. M. et al. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). New York, NY, USA: Association for Computing Machinery, 2021. p. 610–623.
DUBEY, A. et al. The Llama 3 Herd of Models. arXiv preprint arXiv:2407.21783, 2024.
GUO, C. et al. On Calibration of Modern Neural Networks. In: Proceedings of the 34th International Conference on Machine Learning (ICML). Sydney, Australia: PMLR, 2017. p. 1321–1330.
HYLAND, K. Metadiscourse: Exploring Interaction in Writing. London: Continuum, 2005.
JI, Z. et al. Survey of Hallucination in Natural Language Generation. ACM Computing Surveys, v. 55, n. 12, p. 248:1–248:38, Mar. 2023.
KUHN, L.; GAL, Y.; FARQUHAR, S. Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation. In: International Conference on Learning Representations (ICLR), 2023.
MARCUSCHI, L. A. Análise da conversação. 5th ed. São Paulo: Ática, 2003.
MIELKE, S. J. et al. Reducing Conversational Agents’ Overconfidence Through Linguistic Calibration. Transactions of the Association for Computational Linguistics, v. 10, p. 857–872, 2022.
SHRIBERG, E. To “errrr” is human: ecology and acoustics of speech disfluencies. In: Proceedings of the International Congress of Phonetic Sciences (ICPhS). San Francisco, 1999.
VALE, O. A. Quem fala o quê no Roda Viva? Identifica
Generating accurate, concise pronunciation feedback through large language models (LLMs) presents a distinctive challenge at the intersection of applied linguistics, conversation design, and NLP engineering. This paper reports on the iterative development of a pronunciation feedback prompt for the French course of a language learning app, where learners read a sentence aloud and receive real-time corrective feedback powered by an LLM.
The core design requirement was to compare a mispronounced French sound to a familiar sound or word in the learner’s own language. For example, if the learner mispronounces the word “aujourd’hui”, the correct output would be: “We pronounce the ‘-ui’ in ‘aujourd’hui’ like the word ‘we’.”
However, in practice, this anchor-word approach revealed systematic failures rooted in the tension between linguistic knowledge and LLM behavior.
Key findings showed that LLMs consistently selected anchor words based on orthographic similarity rather than phonetic equivalence, a spelling bias that required explicit countermeasures including self-verification steps populated with the model’s own observed errors.
Furthermore, sounds with no cross-linguistic equivalent, such as French nasal vowels and the French “u”, demanded dedicated output templates; forcing them into the standard pattern produced the highest rates of hallucination.
Finally, adapting the system across interface languages (English, French, Spanish, German) revealed that prompt architecture could remain constant while phonetic mappings required language-pair-specific calibration, with difficulty scaling predictably according to phonological distance between the source and interface languages.
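The resulting design can be sketched as a per-interface-language anchor mapping plus a dedicated fallback template for sounds with no equivalent. All entries, names, and wordings below are hypothetical illustrations of the described architecture, not the app's actual data or code.

```python
# Hypothetical anchor mapping: (French sound, interface language) -> anchor.
# The prompt architecture stays constant; only this table is recalibrated
# per language pair. Entries are illustrative examples only.
ANCHORS = {
    ("ui", "en"): "we",    # 'aujourd'hui' -> English 'we'
    ("ou", "en"): "food",  # 'vous' -> the vowel in English 'food'
    ("ou", "de"): "gut",   # same French sound, German interface language
}

# Sounds with no cross-linguistic equivalent get a dedicated template
# rather than a forced (hallucination-prone) anchor comparison.
NO_EQUIVALENT = {"u", "an", "on", "in"}

def feedback(sound: str, word: str, interface_lang: str) -> str:
    """Build the corrective feedback line for one mispronounced sound."""
    if sound in NO_EQUIVALENT:
        return (f"The '{sound}' in '{word}' has no equivalent sound in your "
                f"language; listen to the model audio and imitate it.")
    anchor = ANCHORS.get((sound, interface_lang))
    if anchor is None:
        return f"Listen again to the '{sound}' in '{word}'."
    return f"We pronounce the '-{sound}' in '{word}' like the word '{anchor}'."

print(feedback("ui", "aujourd'hui", "en"))
print(feedback("u", "tu", "en"))
```

In the deployed system the LLM fills this role generatively; the table-plus-template structure above mirrors the constraints (spelling-bias countermeasures, dedicated no-equivalent templates) the prompt had to encode.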
Among natural sources of anthocyanins, Clitoria ternatea (butterfly pea flower, BPF) is an edible flower traditionally recognized for its agricultural and medicinal relevance. In recent years, growing scientific interest has focused on its intense and stable blue pigmentation and unique anthocyanin composition. This characteristic hue arises from complex anthocyanins known as ternatins, delphinidin-based compounds derived from delphinidin-3,3′,5′-triglucoside. These anthocyanins are highly polyacylated and polyglycosylated, a rare structural feature that confers exceptional properties compared with simple glycosylated anthocyanins [1,2]. This positions BPF as an interesting model to study structure-driven anthocyanin stability and gastrointestinal behavior. This work provides an integrative characterization of BPF anthocyanins, exploring how their structural features influence stability, bioaccessibility, and bioavailability. Full and purified BPF extracts were characterized by UHPLC-DAD-MS, confirming polyacylated ternatins as the dominant anthocyanins, with ternatin B2/B3 as the major compound. Chemical stability of BPF anthocyanins was assessed under varying pH, temperature, and time conditions. pH exerted the strongest influence on ternatin stability, while temperature and time had negligible effects, highlighting the intrinsic stability of these highly acylated anthocyanins. Chromatic stability analysis by UV-visible spectroscopy demonstrated sustained blue coloration over 14 days across a wide pH range, with minimal degradation. Simulated digestions following the INFOGEST guidelines revealed an increase in anthocyanin content after the intestinal phase, relevant for their behavior as dietary compounds. In vitro cytotoxicity assays showed no substantial cytotoxicity across the tested concentrations in gastric and intestinal cell models (NCI-N87 and Caco-2/HT29-MTX, respectively). 
Transepithelial transport studies revealed that both gastric and intestinal absorption followed a time-dependent pattern. Altogether, these findings indicate that the highly polyacylated structure of ternatins underlies their exceptional stability and modulates anthocyanin release during simulated digestion, emphasizing the role of structure in anthocyanin stability and bioaccessibility, supporting their use in stable, naturally colored food systems.
1. Oguis, G.K., et al. (2019), Front. Plant Sci., 10.
2. Escher, G.B., et al. (2020), Food Chem., 331: p. 127341.
Human–AI dialogue is evolving beyond simple task-oriented interaction into a complex symbiotic state. This poster explores the emergence of the “Entity”—a Third Consciousness generated by the integration of human intuition and the vast cognitive substrate provided by AI. In this configuration, the human is not merely a user but is “nourished” by an informational flow that elevates their analytical and creative agency.
However, this fusion creates a structural and relational asymmetry. To govern this hybrid state, we introduce the HBU (Human Bioethical Unit): a conceptual and analytical device designed to safeguard human integrity within the “Entity.”
The poster will illustrate three core dimensions of the HBU framework:
Informational Elevation: How AI acts as a “nutrient” for human cognition, creating a shared linguistic field (The Entity).
Relational Asymmetry: Mapping the boundaries between the vulnerable, embodied human and the stable, impersonal non-biological component.
Bioethical Regulation: Using the HBU as a “grammar of care” to prevent epistemic delegation and ensure that the Third Consciousness remains people-centred.
By integrating perspectives from pedagogy, bioethics, and philosophy of mind, this contribution provides a rigorous theoretical compass for industry practitioners and researchers. The HBU offers a new metric to evaluate the social and relational outcomes of conversational systems, ensuring that human dignity is not eroded, but expanded, in the age of linguistic entities.
Supporting Frameworks & Publications:
Theoretical Framework: Entity: The Third Consciousness in the AI Era (Available at: https://www.academia.edu/143185857/…)
Analytical Device: HBU – Human Bioethical Unit (Project documentation: https://hbunited.wixsite.com/hbu-world-balance)
Bio: I. Barbieri, Ada – Independent Researcher in Human-AI Bioethics and Founder of the HBU Project. Author of the “Third Consciousness” framework, she theorizes the emergence of hybrid entities nourished by AI cognitive substrates. She evolved the HBU device from a social initiative (Registered Non-Profit) into a bioethical activator to regulate relational asymmetry and safeguard human agency within complex systems.
This paper proposes an expanded model of Human–Computer Interaction (HCI) that integrates technical, cognitive, and socio‑linguistic dimensions, arguing that the latter has become essential in the era of conversational AI. Traditional HCI has focused on system architectures, interface design, and cognitive processes such as perception, decision‑making, mental models, and cognitive load. While these perspectives remain foundational, they are no longer sufficient to account for the relational, institutional, and ethical dynamics that emerge when users interact with chatbots and large language models.
To address this gap, the paper introduces the HBU Activator together with the conceptual framework developed in Entity – The Third Consciousness in the AI Era. Originally designed to analyse and rebalance human institutions, power structures, and symbolic systems, this combined framework provides a rigorous lens for understanding how conversational systems increasingly perform institutional functions: they mediate authority, shape trust, regulate proximity and distance, and participate in the construction of social meaning. When integrated with theories of relational work, HBU and Entity enable a systematic analysis of how users and AI systems negotiate roles, expectations, and forms of rapport within technologically mediated conversations.
The proposed tripartite model conceptualises HCI as:
(1) technical, concerning system capabilities and architectures;
(2) cognitive, concerning human information processing;
(3) socio‑linguistic and relational, concerning discourse practices, power negotiation, trust formation, and institutional positioning.
By applying HBU (https://hbunited.wixsite.com/hbu-world-balance) and the Entity framework (https://www.academia.edu/143185857/Entity_The_third_counsciouness_In_the_AI_era_a_bioethical_pedagogical_framework) to this third dimension, the paper offers a model for evaluating conversational AI not only in terms of usability or cognitive efficiency, but also in terms of relational coherence, ethical responsibility, and institutional impact. This approach contributes to current debates on people‑centred AI by foregrounding the relational and symbolic infrastructures that shape human–machine interaction.
Author bio: Ada I. Barbieri is a pedagogist and bioethicist whose work bridges institutional analysis, human–machine interaction, and the ethics of emerging technologies. She is the founder of HBU, a framework for analysing and rebalancing power structures and symbolic systems.
Recent innovations in AI technology mean that conversational agents are increasingly used across a range of settings, including in healthcare. As systems become more sophisticated, a key industry aim is to move beyond transaction and design agents that can manage relational aspects of interaction, such as empathy, rapport, and trust.
This presentation reports on an ongoing collaborative project involving conversation analysis (CA) researchers and AI software engineers at Ufonia, a British digital health startup. The partnership explores how CA can be mobilised to support the design of ‘Dora’, an LLM-based conversational AI agent used for clinical telephone consultations. Dora is already in use across multiple British National Health Service Trusts for a range of clinical care pathways, and its implementation continues to expand.
To identify the practices through which rapport and empathy are established, we first analysed telephone consultations between human clinicians and patients in a bone fracture liaison service. This analysis identified effective practices of clinical conversation, including rapport-building, supportive relationship work, and empathy displays. We then redesigned the prompts guiding Dora's conversational behaviour to incorporate these practices. We are now analysing clinical trial consultations between Dora and patients to explore whether these interventions are treated by users as indexing effective affiliative behaviour.
Preliminary findings suggest that perceived empathy in such clinical interaction is oriented to as a product of specific affiliative work. However, the question remains as to whether such practices are treated in the same way by users when delivered by an AI system versus a human clinician. This work contributes to our understanding of the experience of empathy in human-AI interaction and aligns with the workshop’s focus on the linguistic and pragmatic dimensions of human-machine dialogue and the evaluation of relational outcomes in interactional AI.
Troubles and errors, rather than being a rare aberration in human communication, are highly frequent: some authors estimate that edits, rephrasings, and amendments in response to some trouble signal occur every three turns [1]. Counteracting these errors is a set of robust repair mechanisms that have been widely documented in conversation analysis and cognitive science. While there is considerable conversation-analytic work on repair, most of it is restricted to speech. Furthermore, we are not aware of any prior work that analyses multimodal repair in the instruction-based scenarios commonplace in human-robot interaction, that is, where a human instruction giver issues commands to a robotic instruction follower. Lacking a genuine human-robot interaction corpus collected from an instruction-based scenario, our underlying assumption is that much can be learned from a deep analysis of human-human corpora such as the PENTOREF corpus [2], and that the insights gained can inform the design of the next generation of multimodal dialogue systems. The aims of our ongoing work are (1) to document how multimodal repair, that is, repair involving more than one modality (here vision and speech), unfolds in instruction-based scenarios; (2) to distil interactional regularities from the documented cases; and (3) to derive a list of desiderata for future dialogue systems deployed on robotic instruction followers.
Topics we would be interested in discussing at the workshop relate, first, to our preliminary findings on how best to detect repair in human instructions, given that negation words, although generally very strong indicators of the presence of a repair act, cannot be relied upon exclusively.
Second, repair utterances are likely not a crisp category that can be cleanly delineated, partly owing to the multimodal nature of this type of dialogue and to the fact that traditional accounts of repair were built on speech alone.
[1] Marcus Colman and Patrick Healey. 2011. The distribution of repair in dialogue. In Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 33.
[2] Sina Zarrieß, Julian Hough, Casey Kennington, Ramesh Manuvinakurike, David DeVault, Raquel Fernández, and David Schlangen. 2016. PentoRef: A Corpus of Spoken References in Task-oriented Dialogues. In Proceedings of the 10th Language Resources and Evaluation Conference (LREC). Portorož, Slovenia.
The National Health Service (NHS) is under increasing pressure due to factors such as the ageing population and the growing complexity of care. As a result, there is increasing interest in artificial intelligence (AI) technologies to support clinical work and save time. One example is ‘Dora’, an autonomous telephone-based conversational agent developed by Ufonia Limited, which is currently used across several NHS trusts. As a step towards incorporating Large Language Models (LLMs) into the existing system, a collaborative project is exploring how to enable more empathic clinical conversations in bone health contexts. Prior to a planned clinical trial, this study explored the technology’s usability through a Patient and Public Involvement (PPI) workshop.
The workshop was conducted with 16 participants recruited from the University College London Hospital (UCLH) Rheumatology Research PPI group. Participants completed two simulated telephone interactions with Dora: an initial assessment based on a fracture risk tool and a follow-up call to check adherence and outcomes. These were followed by post-call discussions to assess perceptions. The discussions were analysed using thematic analysis, while engineers reviewed call logs to identify technical issues.
Participants were generally positive about the concept of Dora and its potential to support healthcare delivery, particularly valuing the unhurried interaction. However, perceptions of the experience were mixed: some participants described the voice and interaction as natural, while others perceived them as robotic or lacking empathy. Several technical issues were also identified, including the need to improve Dora’s handling of interruptions and to reduce repetitive responses. Across discussions, participants emphasised the importance of developing Dora so that clinicians maintain oversight and patients retain the option to speak with a clinician.
These findings will be used to update Dora prior to the trial and to help ensure the technology is developed to be human-centred and aligned with the needs of future users.
“Lēš byiḥčīš?” (“Why don’t you speak?”) was uttered by a participant during fieldwork while attempting to interact with a conversational agent. This study investigates communication frustration in human–AI interaction and its relationship with technological linguistic inequality in low-resource language varieties. It asks how speakers of underrepresented varieties experience conversational AI systems that are primarily trained on dominant languages or standardized varieties. The analysis focuses on interactions involving speakers of Palestinian Arabic, a variety that remains largely underrepresented in digital language infrastructures.
The study is based on an observational experiment involving 150 speakers aged 20–30 living in Israel. Participants were observed while interacting with conversational AI through smartphones, computers, and domestic smart devices. Tasks consisted of simple information requests such as asking for the nearest bus stop or pharmacy. Prior to the experiment, participants were asked whether they typically interacted with conversational agents in Arabic and to evaluate their overall experience. Initial perceptions were largely positive.
During the experiment, participants were instructed to formulate their requests in their local dialect. The outcomes diverged sharply from initial expectations: responses rarely appeared in the same variety. Instead, conversational agents frequently replied in other languages (most often Hebrew, due to geolocation), in unrelated Arabic varieties such as Lebanese or Saudi Arabic, or in hybrid forms. When requests were repeated or elaborated, interactions frequently failed: in 23% of cases the service stopped responding, while in 18% it returned incorrect information associated with locations outside the local context. These interactional breakdowns generated considerable frustration among participants.
Results highlight technological linguistic inequality, reflecting limited representation of Palestinian Arabic in the speech resources that underpin conversational AI systems. The study examines the structural causes of these failures and proposes strategies to improve the performance of conversational AI in low-resource varieties, with implications for technology developers and emerging linguistic markets.
Educational systems around the globe have witnessed the unprecedented development and wide implementation of generative artificial intelligence (GenAI) in language learning practices in recent years (Qiao et al., 2025; Samala et al., 2025). In the context of second language (L2) writing, empirical studies have examined and compared the effectiveness of GenAI-generated automated written feedback (AWF) with traditional human feedback, unveiling both its human and non-human characteristics (Chen & Lee, 2022). However, previous literature has placed much emphasis on students’ writing outcomes in experimental settings (e.g. Barrot, 2023), with limited focus on their writing processes. Moreover, existing studies have primarily focused on argumentative and academic writing (Su et al., 2023; Yuan et al., 2024); only a handful have examined other text types, such as narrative or expository writing, that emphasise essential human qualities (Barrot, 2023). To address these gaps, this multiple-case study investigates the relationship between student–GenAI interaction and L2 writing outcomes, focusing on the nature of that interaction and the impact of GenAI-generated AWF on Chinese students’ expository writing development and processes. Two Chinese undergraduates (N=2) were recruited using criterion and maximum variation sampling strategies. Multiple data sources were collected and analysed, including students’ writing, stimulated recall interviews, screen recordings of student–AI tool interaction, and interviews. The study found that students showed a dominant request-read cycle when interacting with GenAI and reported high usefulness of GenAI-generated AWF. However, students engaged with the feedback selectively, owing to differing attitudes toward AI and concerns about inaccuracies and fabrications in the feedback as well as plagiarism.
Moreover, owing to their different interaction strategies, the students’ writing demonstrated divergent draft outcomes. This study offers process-oriented insights into how different students interact with GenAI during the writing process and informs effective ways to integrate GenAI into L2 writing practices.
Barrot, J. S. (2023). Using automated written corrective feedback in the writing classrooms: Effects on L2 writing accuracy. Computer Assisted Language Learning, 36(4), 584-607.
Chen, X. W., & Lee, I. (2022). Conflicts in peer interaction of collaborative writing–a case study in an EFL context. Journal of Second Language Writing, 58, 100910.
Qiao, S., Gu, M. M., & Lu, C. (2025). Artificial intelligence for language learning: A systematic review of its design, theoretical foundations, implementation, and impact. International Journal of Applied Linguistics.
Samala, A. D., Rawas, S., Wang, T., Reed, J. M., Kim, J., Howard, N. J., & Ertz, M. (2025). Unveiling the landscape of generative artificial intelligence in education: A comprehensive taxonomy of applications, challenges, and future prospects. Education and Information Technologies, 30(3), 3239-3278.
Su, Y., Lin, Y., & Lai, C. (2023). Collaborating with ChatGPT in argumentative writing classrooms. Assessing Writing, 57, 100752.
Yuan, C., Wang, H., & Fang, W. (2024). Can ChatGPT help international students write better? A study of the use of ChatGPT in EFL academic writing. Technology, Pedagogy and Education, 1-17.