Recent innovations in AI technology mean that conversational agents are increasingly used across a range of settings, including healthcare. As systems become more sophisticated, a key industry aim is to move beyond purely transactional exchanges and to design agents that can manage relational aspects of interaction, such as empathy, rapport, and trust.
This presentation reports on an ongoing collaborative project involving conversation analysis (CA) researchers and AI software engineers at Ufonia, a British digital health startup. The partnership explores how CA can be mobilised to support the design of ‘Dora’, an LLM-based conversational AI agent used for clinical telephone consultations. Dora is already in use across multiple British National Health Service Trusts for a range of clinical care pathways, and its implementation continues to expand.
To identify the practices through which rapport and empathy are established, we first analysed telephone consultations between human clinicians and patients in a bone fracture liaison service. This analysis identified effective practices of clinical conversation, including rapport-building, supportive relationship work, and empathy displays. We then redesigned the prompts guiding Dora’s conversational behaviour to incorporate these practices. We are now analysing clinical trial consultations between Dora and patients to explore whether users treat these interventions as indexing effective affiliative behaviour.
Preliminary findings suggest that perceived empathy in such clinical interaction is oriented to as a product of specific affiliative work. However, it remains an open question whether users treat such practices in the same way when they are delivered by an AI system rather than a human clinician. This work contributes to our understanding of the experience of empathy in human-AI interaction and aligns with the workshop’s focus on the linguistic and pragmatic dimensions of human-machine dialogue and on the evaluation of relational outcomes in interactional AI.