Conversational AI has been applied in several fields, including counselling, education, and health care. Recent studies have focused on the linguistic and pragmatic features and competence of LLMs and chatbots (Chen et al., 2024). There is, however, little pragmatically oriented research on conversational AI in health care, especially on conversations in which the AI takes the patient role. This paper examines emerging forms of doctor–patient interaction in which the “patient” role is fulfilled by an AI conversational agent. Three conversations in which healthcare professionals completed simulated clinical consultations using SimFlow.ai, a voice-to-voice generative AI platform, were analysed. The sessions were audio-recorded and automatically transcribed within the platform. Using conversation analysis and pragmatic theory, the study investigated how these interactions approximate or diverge from principles derived from human–human medical encounters, focusing on the following research questions: (1) To what extent do AI patient turns observe or flout the Gricean maxims in doctor–patient conversations? (2) What discourse and pragmatic markers characterise AI patient turns in doctor–patient conversation? (3) What role does repetition by AI patients and human doctors play in these conversations?
Drawing on the Gricean maxims as an analytical framework, we explored the extent to which AI-generated responses observe the maxims of quality, quantity, relation, and manner that underpin the Cooperative Principle and effective communication (Grice, 1989). The findings highlight moments where AI outputs both observe and flout these maxims. In addition, repetitions and discourse markers such as ‘you know’ and ‘okay’ were analysed in both AI and human turns in the conversations.
The analysis contributes to ongoing discussions about the linguistic, social, and relational dimensions of human–machine dialogue. It also offers evidence-based insights relevant to the design of conversational technologies that must operate in highly sensitive, context-dependent domains such as healthcare.