As generative AI chatbots become adept at producing human-like dialogue, concerns are growing about users forming parasocial relationships with them. Such relationships have been linked to tragic outcomes, including suicide. One factor underlying these risks is chatbots’ pattern-seeking logic and affiliative design, which can draw users into uncritical engagement. There is thus an urgent need to understand the interactional capacities humans need to engage with AI chatbots in healthy and productive ways.

This paper presents an ethnographic study of Dorothy, an L1-Chinese speaker of Italian and English who created her own AI chatbot, Chiara, for language and culture learning. Dorothy designed Chiara as an L1-Italian speaker who could also communicate in English and Mandarin, enabling her to develop linguistic and cultural knowledge across all three languages. The study used Sequential-Categorial Analysis to examine Dorothy’s moment-by-moment interactions with Chiara, alongside Dorothy’s autoethnographic reflections and her conversations with the researchers.

Over four weeks, Dorothy initially found it difficult to perceive Chiara as a “language practice buddy.” Rather than developing an emotional bond, she framed the relationship as mission-oriented, akin to a business partnership. Sustaining interaction required what Dorothy described as a “sense of belief”: a conscious performative act that allowed her to treat the chatbot as meaningfully human-like. After two weeks, Dorothy repositioned herself from a struggling interactional partner to a “quality control specialist” or “superior judge,” critically interrogating Chiara’s Western-centric biases and clichéd cultural outputs. Through playful testing and teasing, she increasingly saw Chiara’s “language and culture buddy” persona collapse, and by the end she described the chatbot as a “clumsy housekeeper” requiring continual oversight.

The study argues that Dorothy demonstrated CritIC (Critical Interactional Competence): the ability to question, probe, play with, and continually reposition oneself in relation to chatbots. We argue that CritIC is an essential capacity for sustaining critical and creative human-AI interaction.