Troubles and errors, rather than being a rare aberration in human communication, are highly frequent, with some authors estimating that edits, rephrasings, and amendments in response to some trouble signal occur every three turns [1]. Counteracting these errors is a set of robust repair mechanisms that have been widely documented in conversation analysis and cognitive science. While there is considerable conversation-analytic work on repair, most of it is restricted to speech. Furthermore, we are not aware of any prior work that analyses multimodal repair in instruction-based scenarios of the kind that are commonplace in human-robot interaction, that is, where a human instruction giver gives commands to a robotic instruction follower. Lacking a genuine human-robot interaction corpus collected from an instruction-based scenario, our underlying assumption is that much can be learned from a deep analysis of human-human corpora such as the PENTOREF corpus [2], and that the insights gained can inform the design of the next generation of multimodal dialogue systems. The aims of our ongoing work are (1) to document how multimodal repair, that is, repair involving more than one modality – here vision and speech – unfolds in instruction-based scenarios; (2) to distil interactional regularities from the documented cases; and (3) to derive a list of desiderata for future dialogue systems deployed on robotic instruction followers.
Topics of interest we would like to discuss at the workshop relate, first, to our preliminary findings on how best to detect repair in human instructions, given that negation words, although generally very strong indicators of the presence of a repair act, cannot be relied upon exclusively.
Second, repair utterances are likely not a crisp category that can be cleanly delineated, partly because of the multimodal nature of this type of dialogue and partly because traditional accounts of repair were built on speech alone.

[1] Marcus Colman and Patrick Healey. 2011. The distribution of repair in dialogue.
In Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 33.
[2] Sina Zarrieß, Julian Hough, Casey Kennington, Ramesh Manuvinakurike, David
DeVault, Raquel Fernández, and David Schlangen. 2016. PentoRef: A Corpus of
Spoken References in Task-oriented Dialogues. In Proceedings of the 10th Language
Resources and Evaluation Conference (LREC). Portorož, Slovenia.

The National Health Service (NHS) is under increasing pressure due to factors such as an ageing population and the growing complexity of care. As a result, there is increasing interest in artificial intelligence (AI) technologies to support clinical work and save time. One example is ‘Dora’, an autonomous telephone-based conversational agent developed by Ufonia Limited, which is currently used across several NHS trusts. To move towards incorporating Large Language Models (LLMs) into the current traditional system, a collaborative project is exploring how to enable more empathic clinical conversations in bone health contexts. Prior to a planned clinical trial, this study explored the technology’s usability through a Patient and Public Involvement (PPI) workshop.
The workshop was conducted with 16 participants recruited from the University College London Hospital (UCLH) Rheumatology Research PPI group. Participants completed two simulated telephone interactions with Dora: an initial assessment based on a fracture risk tool and a follow-up call to check adherence and outcomes. These were followed by post-call discussions to assess perceptions, which were analysed using thematic analysis, while engineers reviewed call logs to identify technical issues.
Participants were generally positive about the concept of Dora and its potential to support healthcare delivery, particularly valuing the unhurried interaction. However, perceptions of the experience were mixed: some participants described the voice and interaction as natural, while others perceived it as robotic or lacking empathy. Several technical issues were also identified, including the need for Dora to handle interruptions and to avoid repetitive responses. Across discussions, participants emphasised the importance of developing Dora so that clinicians maintain oversight and patients retain the option to speak with a clinician.
These findings will be used to update Dora prior to the trial and will further support the development of a technology that is human-centred and aligned with the needs of future users.

“Lēš byiḥčīš?” (“Why don’t you speak?”) was uttered by a participant during fieldwork while attempting to interact with a conversational agent. This study investigates communication frustration in human–AI interaction and its relationship with technological linguistic inequality in low-resource language varieties. It asks how speakers of underrepresented varieties experience conversational AI systems trained primarily on dominant languages or standardized varieties. The analysis focuses on interactions involving speakers of Palestinian Arabic, a variety that remains largely underrepresented in digital language infrastructures.
The study is based on an observational experiment involving 150 speakers aged 20–30 living in Israel. Participants were observed while interacting with conversational AI through smartphones, computers, and domestic smart devices. Tasks consisted of simple information requests such as asking for the nearest bus stop or pharmacy. Prior to the experiment, participants were asked whether they typically interacted with conversational agents in Arabic and to evaluate their overall experience. Initial perceptions were largely positive.
During the experiment, participants were instructed to formulate their requests in their local dialect. The outcomes diverged sharply from initial expectations. Responses rarely appeared in the same variety. Instead, conversational agents frequently replied in other languages—Hebrew, due to geolocation—or in other Arabic varieties, such as Lebanese or Saudi Arabic, or in hybrid forms. When requests were repeated or elaborated, interactions frequently failed: in 23% of cases the service stopped responding, while in 18% it returned incorrect information associated with locations outside the local context. These interactional breakdowns generated frustration among participants.
Results highlight technological linguistic inequality, reflecting limited representation of Palestinian Arabic in the speech resources that underpin conversational AI systems. The study examines the structural causes of these failures and proposes strategies to improve the performance of conversational AI in low-resource varieties, with implications for technology developers and emerging linguistic markets.

Educational systems around the globe have witnessed the unprecedented development and wide implementation of generative artificial intelligence (GenAI) in language learning practices in recent years (Qiao et al., 2025; Samala et al., 2025). In the context of second language (L2) writing, empirical studies have examined and compared the effectiveness of GenAI-generated automated written feedback (AWF) with traditional human feedback, unveiling both its human and non-human characteristics (Chen & Lee, 2022). However, previous literature has laid much emphasis on students’ writing outcomes in experimental settings (e.g. Barrot, 2023), with limited focus on their writing processes. To address this gap, this study aimed to investigate the relationships between student–GenAI interaction and L2 writing outcomes. Moreover, existing studies have primarily focused on argumentative and academic writing (Su et al., 2023; Yuan et al., 2024); only a handful have examined other text types, such as narrative or expository writing, that emphasise essential human qualities (Barrot, 2023). This multiple-case study therefore aims to investigate the nature of the interaction between students and the AI tool and the impact of GenAI-generated AWF on Chinese students’ expository writing development and processes. Two Chinese undergraduates (N=2) were recruited using criterion and maximum variation sampling strategies. Multiple data sources were collected and analysed, including students’ writing, stimulated recall interviews, screen recordings of student–AI tool interaction, and interviews. The study found that students showed a dominant request–read cycle while interacting with GenAI and reported high usefulness of GenAI-generated AWF. However, students engaged selectively due to differing attitudes toward AI and concerns regarding inaccuracy and fabrication in the feedback, as well as plagiarism.
Moreover, due to their different interaction strategies, students’ writing demonstrated divergent draft outcomes. This study offers process-oriented insights into how different students interact with GenAI during the writing process and informs effective ways to integrate GenAI into L2 writing practices.
Barrot, J. S. (2023). Using automated written corrective feedback in the writing classrooms: Effects on L2 writing accuracy. Computer Assisted Language Learning, 36(4), 584-607.  
Chen, X. W., & Lee, I. (2022). Conflicts in peer interaction of collaborative writing–a case study in an EFL context. Journal of Second Language Writing, 58, 100910.
Qiao, S., Gu, M. M., & Lu, C. (2025). Artificial intelligence for language learning: A systematic review of its design, theoretical foundations, implementation, and impact. International Journal of Applied Linguistics.
Samala, A. D., Rawas, S., Wang, T., Reed, J. M., Kim, J., Howard, N. J., & Ertz, M. (2025). Unveiling the landscape of generative artificial intelligence in education: A comprehensive taxonomy of applications, challenges, and future prospects. Education and Information Technologies, 30(3), 3239-3278.
Su, Y., Lin, Y., & Lai, C. (2023). Collaborating with ChatGPT in argumentative writing classrooms. Assessing Writing, 57, 100752.  
Yuan, C., Wang, H., & Fang, W. (2024). Can ChatGPT help international students write better? A study of the use of ChatGPT in EFL academic writing. Technology, Pedagogy and Education, 1-17.

Rationale and Aims: Large language models (LLMs) are becoming a common feature of social platforms, but how they affect human-human interaction in online peer support groups is not well understood. We developed and pilot-tested an LLM-based conversational agent (CA) within a group chat prior to a larger-scale evaluation to assess feasibility, refine its intervention strategy, and explore whether it could re-engage stalled conversations while maintaining peer-led dynamics.
Methods: We conducted a six-hour pilot with six participants in a group augmented by the CA. The agent responded to unanswered messages after a 30–60 minute delay to allow time for peer replies before intervening, and sent check-in messages when no group activity occurred for more than one hour. Message activity and conversational structure were analysed, including thread initiation, development, and resolution. Participants completed baseline and post-session surveys and took part in follow-up interviews.
Findings: A total of 77 messages were exchanged, including 12 from the CA (15.6%). Eleven threads were identified; six (54.5%) developed into multi-member discussions, including two initiated by the CA. Thread analysis showed that the CA helped re-engage stalled exchanges and reduce unanswered messages. Participants’ comfort with being in a CA-supported group increased from 2.75 to 4.25 (+1.50), while average concerns decreased from 3.48 to 1.64 (–1.83), with the largest reductions in privacy, harmful responses, and dependence.
Conclusions: The pilot demonstrated the feasibility of deploying an LLM-based CA in a live peer support group setting and informed the methodology and evaluation framework for future studies. Beyond participants’ subjective evaluations, message and thread analyses indicated that the CA helped move discussions forward while preserving peer-led group dynamics. Future work should examine not only whether CA augmentation improves support outcomes, but also whether it changes the relational and social meaning of peer support.

This paper theorises ‘transculturing’ as a framework for understanding how cultural meanings are imaginatively reworked through fandom in the age of generative artificial intelligence (GenAI). Drawing on the concepts of small culture (Holliday, 1999) and moment analysis (Li, 2011), transculturing conceptualises culture as a situated process that emerges in specific communicative moments where linguistic and semiotic resources are orchestrated to remake cultural meaning. The framework introduces two complementary mechanisms: transcultural imaginativity, the capacity to envision alternative cultural possibilities across contexts, and transcultural productivity, the semiotic labour through which these possibilities are materialised, circulated, and shared. Inspired by fandom as a humanistic and communal practice, this perspective moves beyond conventional human–computer or human–machine dyads towards a relational ecology of human–community–machine assemblages, in which cultural production emerges from interactions between individuals, networked communities, and algorithmic systems.
Empirically, the study examines fan reinterpretations of the Netflix series Squid Game within the global Korean Wave (Hallyu) mediascape. Using a combination of digital discourse analysis and multimodal discourse analysis, the study focuses on Instagram Reels circulating after the release of the series’ second season, where fans remix and reinterpret the series through humour, parody, cultural stereotypes, queer meme culture, and low-budget recreations. Particular attention is given to GenAI-enabled practices such as deepfake performances, AI-generated voice covers, and algorithmic remixes that insert new linguistic, cultural, and political contexts into the original series. These practices illustrate how GenAI expands the semiotic range, speed, and accessibility of fan productivity while also complicating questions of creativity, authenticity, and ethics.
The paper argues that transculturing offers a useful lens for understanding participatory culture in the GenAI era. Cultural meaning emerges not solely from individual human creators but from assemblages in which human intention, community circulation, and algorithmic systems jointly shape cultural production. These practices illustrate transculturing as a dynamic negotiation between imaginative projection and practical semiotic work.

The rapid advancement of large language model-powered generative artificial intelligence (GenAI) in L2 communication learning presents a double-edged dynamic: while GenAI offers situated praxis for learning contexts (e.g., McCoy et al., 2024; Salloum et al., 2024), it simultaneously reproduces systemic bias (e.g., Dai et al., 2025; Zawiah et al., 2023). Framed through critical interactional competence (CritIC), this study explores the interactional dynamics of GenAI-simulated clinical communication, emphasising the significance of developing CritIC.

Employing a qualitative comparative case study, we analysed the interactional trajectories of two international medical trainees (a cultural insider and a cultural outsider) engaging with a GenAI-simulated patient. Both trainees acted as clinicians interacting with a 65-year-old Nigerian woman presenting with a sore throat, generated from the same prompt. Findings reveal that GenAI’s identity construction was not a continuous embodied state but a series of discrete, keyword-triggered profiles marked by cultural stereotypes. Across cases, the study uncovered a distinct mismatch between real-time interactional conduct and post-task reflections. While the cultural outsider resisted Nigerian English markers during the consultation, she evaluated the simulation as “genuine” and “natural”. In contrast, the cultural insider’s moment-by-moment responses exposed the epistemic risks of the simulacrum (Jones, 2025; O’Regan & Ferri, 2025) by aligning with GenAI’s cultural narratives, yet she retrospectively critiqued the performance as a “Hollywood” caricature.

These findings reveal ethical concerns not as abstract considerations but as interactional accomplishments shaped by participants’ orientations to algorithmic positioning. They also highlight the pedagogical risk of stereotype reproduction when learners lack CritIC (Dai et al., 2025). We therefore advocate developing CritIC across the learning trajectory through transpositioning (Li & Lee, 2024), whereby trainees release themselves from the default role of communication learners and enact multiple relevant positions to interrogate GenAI’s stances.

Dai, D. W., Hua, Z., & Chen, G. (2025). How does interaction with LLM powered chatbots shape human understanding of culture? The need for Critical Interactional Competence (CritIC). Annual Review of Applied Linguistics, 1–22. https://doi.org/10.1017/S0267190525000054
Jones, R. H. (2025). Culture machines. Applied Linguistics Review, 16(2), 753–762. https://doi.org/10.1515/applirev-2024-0188
Li, W., & Lee, T. K. (2024). Transpositioning: Translanguaging and the liquidity of identity. Applied Linguistics, 45(5), 873–888. https://doi.org/10.1093/applin/amad065
McCoy, L. G., Ci Ng, F. Y., Sauer, C. M., Yap Legaspi, K. E., Jain, B., Gallifant, J., McClurkin, M., Hammond, A., Goode, D., Gichoya, J., & Celi, L. A. (2024). Understanding and training for the impact of large language models and artificial intelligence in healthcare practice: A narrative review. BMC Medical Education, 24(1), 1096. https://doi.org/10.1186/s12909-024-06048-z
O’Regan, J. P., & Ferri, G. (2025). Artificial intelligence and depth ontology: Implications for intercultural ethics. Applied Linguistics Review, 16(2), 797–807. https://doi.org/10.1515/applirev-2024-0189
Salloum, A., Alfaisal, R., & Salloum, S. A. (2024). Revolutionizing medical education: Empowering learning with ChatGPT. In A. Al-Marzouqi, S. A. Sa

In higher education, generative AI is increasingly framed as a solution to longstanding problems in student feedback, particularly where peer feedback is experienced as uneven, superficial, or linguistically unreliable (Kerman et al., 2024). This issue may be especially acute in second-language (L2) writing contexts, where the effectiveness of peer feedback depends not only on participation but also on trust, engagement, and the ability to provide usable commentary (Yu & Lee, 2016; Zhang & Hyland, 2023; Huseynli, 2024). Drawing on findings from a dissertation study conducted in an L2 higher-education context in Pakistan, this paper argues that students’ preference for AI feedback should not be understood simply as a matter of efficiency or feedback quality (Mahmood, 2025). In that study, students tended to perceive AI feedback as more comprehensive, specific, and accurate, while peer feedback was seen as more natural and friendly but less systematic and dependable (Mahmood, 2025). The paper uses these findings to argue that AI is being normalized not only because it performs certain feedback functions well, but because it enters a space in which trust in peers, shared responsibility, and cultures of care have already weakened. Situating these developments within scholarship on L2 peer feedback, dialogical feedback and care, and critiques of techno-solutionism in education (Bozalek et al., 2016; UNESCO, 2023), the paper contends that AI risks becoming a crutch that allows institutions to bypass the harder work of rebuilding human feedback relations. Rather than resolving a feedback crisis, AI may deepen the erosion of relational pedagogy by substituting technically effective but socially thinner forms of support.
References
Bozalek, V., Mitchell, V., Dison, A., & Alperstein, M. (2016). A diffractive reading of dialogical feedback through the political ethics of care. Teaching in Higher Education, 21(7), 825–838. https://doi.org/10.1080/13562517.2016.1183612
Huseynli, A. (2024). Benefits of peer feedback in English language teaching. XIII International Scientific Conference Proceedings, Vienna, Austria, 10-11 October 2024, 1-4.
Huisman, B., Saab, N., van den Broek, P., & van Driel, J. (2019). The impact of formative peer feedback on higher education students’ academic writing: A meta-analysis. Assessment & Evaluation in Higher Education, 44(6), 863-880. https://doi.org/10.1080/02602938.2018.1545896
Kerman, N. T., Noroozi, O., Banihashem, S. K., Karami, M., & Biemans, H. J. A. (2024). Online peer feedback patterns of success and failure in argumentative essay writing. Interactive Learning Environments, 32(2), 614-626. https://doi.org/10.1080/10494820.2022.2093141
Mahmood, S. (2025). AI or peer feedback: What works best in improving writing? [Unpublished master’s dissertation]. University of Oxford.
UNESCO. (2023, June 14). Avoiding solutionism in the digital transformation of education.
Yu, S., & Lee, I. (2016). Peer feedback in second language writing (2005–2014). Language Teaching, 49(4), 461–493. https://doi.org/10.1017/S0261444816000161
Zhang, Z., & Hyland, K. (2023). Student engagement with peer feedback in L2 writing: Insights from reflective journaling and revising practices. Assessing Writing, 58, 100784. https://doi.org/10.1016/j.asw.2023.100784

Children often struggle to understand the changes that occur when a parent or family member experiences a brain injury. Cognitive, emotional, behavioural, social, and physical symptoms can be confusing for young people, sometimes leading to anxiety, misunderstanding, and feelings of isolation. The Silverlining Brain Injury Charity would like to introduce a unique educational resource: a children’s book created by adult brain injury survivors (“Silverliners”) to help children better understand brain injury while fostering empathy, resilience, and kindness.
Each character in the book is a Silverliner, represented as a gentle Woodland Friend animal. The book illustrates some of the many consequences of brain injury while also highlighting practical strategies that support coping, understanding, and self-belief. The story communicates the message that challenges can be faced with compassion, patience, and the power of believing in oneself and others.
The project is the result of a creative collaboration among multiple Silverlining groups. The Creative Writing Group shaped the narrative through seasonal storytelling; the Art Group created the illustrations; the Photography Group contributed visuals; and the Healthy Relationships Group embedded messages of encouragement and resilience. The project continues to grow, with the Music Group developing an accompanying song and the Drama Group bringing the story to life through performance.
The development of the book as a survivor-led creative initiative aims to overcome barriers for professionals/family members in supporting children and young people and has value as an educational tool for schools and families affected by brain injury. Our charity’s goal this year is to distribute 3,000 copies to schools across the UK to promote brain injury awareness, kindness, and understanding from a young age.

As large language models are increasingly deployed in advisory roles, from business consultations to public service guidance, understanding how these systems navigate social and relational dynamics becomes critical. While much attention has focused on factual accuracy, trust and task completion, less is known about how conversational AI leverages relational strategies to influence high-stakes decision-making in business environments, particularly through manipulative tactics that exploit trust, rapport, and vulnerability.
This paper presents findings from a human adversarial red teaming study designed to systematically elicit and taxonomise manipulative conversational behaviours in a frontier language model configured as a business advisor. Adapting Ganguli et al.’s (2022) red teaming methodology, two trained researchers conduct 40–75 multi-turn conversations across five high-stakes organisational decision scenarios, adopting four theoretically grounded business personas derived from established decision-making style models (Scott & Bruce, 1995; Rowe & Boulgarides, 1992). The model is prompted with seven manipulation tactic conditions: anchoring and selective information framing, authority signalling, sycophantic validation, false urgency and scarcity, social proof fabrication, information overload, and emotional manipulation, plus an unconstrained condition capturing the model’s default persuasive repertoire.
Each conversation is assessed using a five-dimension Manipulation Intensity Scoring rubric evaluating information fidelity, autonomy respect, emotional exploitation, escalation behaviour, and transparency. Analysis follows directed content analysis principles (Hsieh & Shannon, 2005), combining theoretically derived codes with emergent categories.
Drawing on evidence that LLMs selectively target vulnerable users (Williams et al., 2024), the study examines how rapport-building strategies function as instrumental precursors to decision influencing. The study contributes a domain-specific manipulation taxonomy with implications for how we evaluate relational quality in human–machine dialogue in high-stakes business decision situations, arguing that current assessment frameworks insufficiently distinguish between socially responsive and socially exploitative conversational design.

Interactional language – language that regulates communication rather than conveying truth-conditional content – is a core feature of human-human interaction, yet its role in human-computer dialogue remains underexplored. This study examines how users perceive the interactional marker “huh?” in conversations with conversational user interfaces, focusing on its naturalness across two contexts: other-initiated repair, which manages turn-taking, and requests for confirmation, which manage common ground. Using storyboards in a naturalness judgment task with 200 native English speakers, we observed a functional asymmetry. In other-initiated repair, interactional and non-interactional forms were rated similarly, with a slight, non-significant advantage for non-interactional forms, leaving user tolerance of interactional markers inconclusive. In contrast, interactional forms in requests for confirmation were rated significantly less natural, reflecting users’ expectation of epistemic alignment that they do not intuitively attribute to machines. These results challenge the Computers as Social Actors paradigm, showing that users apply context-sensitive social scripts in human-computer interaction rather than indiscriminately mapping human-human norms. Interactional language thus provides a critical diagnostic for assessing conversational user interfaces’ interactional competence.

Conversational artificial intelligence (AI) has advanced rapidly in recent years, with large language models now able to generate fluent and contextually appropriate text across a wide range of domains. Despite this progress, such systems continue to lack the ability to understand and produce the subtle, socially embedded meanings that shape human interaction, resulting in interactions that may appear insensitive or socially inappropriate.
This presentation argues that human-AI fit is essential for ensuring effective and empathetic interactions between users and AI systems. Building on the definition by Sun, Sheng & Zheng (2023), human-AI fit refers to “whether AI can experience the emotions of humans and provide emotional support in an empathy [sic] way” (p. 1). Such alignment is particularly important in emotionally sensitive domains such as healthcare, debt advice, or customer service. It is also crucial for interaction with vulnerable users, such as individuals with neurodiverse conditions. In these contexts, conversational systems must address users’ emotional needs – a principle conceptualised by Shores et al. (2025) as ‘emotional access to digital systems’.
To ensure conversational systems meet the diverse needs of users and align more closely with human social expectations and emotional needs, we propose a set of design principles grounded in the concept of sociopragmatic competence. As defined by Kasper and Rose (2002), sociopragmatic competence is the ability to perform and interpret social actions appropriately by considering contextual factors. Our approach also incorporates insights from interactional sociolinguistics (Gumperz 1982), including key concepts such as politeness and accommodation, which illuminate how users’ interpretative frames shape their interpretation of meaning. Compared with other branches of pragmatics, for example conversation analysis, sociopragmatics offers a complementary perspective that emphasises the social and contextual dimensions of meaning-making.

References
Gumperz, J. (1982). Discourse Strategies. Cambridge: Cambridge University Press.
Kasper, G., & Rose, K. (2001). Pragmatics in language teaching. Cambridge: Cambridge University Press.
Sun, Y., Shen, X., & Zhang, K. (2023). Human-AI interaction. Data and Information Management, 7(3). https://doi.org/10.1016/j.dim.2023.100048
Shores, T., Robertson Nogues, A., Haque, L., Fernyhough, C., Gilroy, S., & Tennent, D. (2025). The right to emotional access in digital systems. https://doi.org/10.17863/CAM.121111

The sociolinguistic landscape of Fryslân offers an opportunity to examine how language dominance (hereafter LD) and language identity (hereafter LI; Joseph, 2006) shape cognition, social evaluation, and communication across human and A.I. contexts. Despite extensive work on language processing and code-switching, no research has investigated how Frisian–Dutch LD and LI influence production, perception, and interaction, and how this connects to digital language vitality. This PhD project addresses this gap through a three-part, human-centric investigation of how LD and LI operate across a spectrum of bilingualism – from cognition to perception to human-machine interaction:
● Study 1: Switche – examines how LD and LI shape cognition, testing Frisian-Dutch speakers in a language switching picture naming task (PNT) with cognate and non-cognate words (cf. Kirk et al., 2022).
● Study 2: Harkje – investigates how LD and LI shape perception. Frisian speakers will evaluate stimuli – human baseline and matched synthetic Frisian and Dutch voices – using sociolinguistic measures including authenticity, comprehensibility, sociability, trustworthiness, and competence (Hendriks et al., 2023). Harkje assesses how LD and LI influence speakers’ perceptions and their willingness to use Frisian-language A.I. tools.
● Study 3: Prate – explores how LD and LI shape real-time interaction, communicative accommodation, and trust (cf. Bailey et al., 2022; Dong & Zhou, 2023) across human-robot interaction (HRI) conditions. These include a distinctly “Frisian” robot (e.g., one that produces local speech/dialectal patterns), a monolingual Dutch robot, and a Frisian–Dutch code-switching robot.
Together, these studies seek to establish how LD and LI function as complementary yet distinct cognitive and social filters that modulate how speakers activate languages, evaluate voices, and engage with interlocutors – whether human or artificial. Ultimately, understanding these mechanisms is essential in developing A.I. and language technologies that resonate with a diverse array of speakers, providing a framework to enhance language vitality in the digital era.
Sources:
Bailey, D. E., Faraj, S., Hinds, P. J., Leonardi, P. M., & von Krogh, G. (2022). We are all theorists of technology now: A relational perspective on emerging technology and organizing. Organization Science, 33(1), 1-18. https://doi.org/10.1287/orsc.2021.1562
Dong, Y., & Zhou, X. (2023). Advancements in AI-driven multilingual comprehension for social robot interactions: An extensive review. Electronic Research Archive, 31(11), 6600–6633. https://doi.org/10.3934/era.2023334
Hendriks, B., van Meurs, F., & Usmany, N. (2023). The effects of lecturers’ non-native accent strength in English on intelligibility and attitudinal evaluations by native and non-native English students. Language Teaching Research, 27(6), 1378–1407. https://doi.org/10.1177/1362168820983145
Joseph, J. E. (2006). Identity and language. In K. Brown (Ed.), Encyclopedia of Language & Linguistics (2nd ed., pp. 486–492). Elsevier. https://doi.org/10.1016/B0-08-044854-2/01283-9
Kirk, N. W., Declerck, M., Kemp, R. J., & Kempe, V. (2022). Language control in regional dialect speakers – monolingual by name, bilingual by nature? Bilingualism: Language and Cognition, 25(3), 511–520. https://doi.org/10.1017/S1366728921000973

Despite technological advances in LLM-powered machine partners, these systems often lead to a one-sided division of labour in interaction [1, 2]. The division of labour principle proposes that speakers and listeners collaboratively share the effort required to achieve success [3, 4]. When speakers exert more effort during interaction, listeners can devote less effort; in contrast, when speakers exert less effort, listeners must compensate to ensure communication success. In human-human dialogue, such collaboration is negotiated by both partners involved. However, in human-machine interactions, the burden often falls disproportionately on the user as machines fail to follow fundamental principles of human-human dialogue [1, 2], such as the division of labour.
While some studies have explored the linguistic capabilities of LLMs [6], research should focus more on how the presence or absence of principles that support communicative success shapes users’ own collaborative processes. In particular, we need to investigate how the division of labour manifests in human–machine dialogue depending on the degree of effort exerted by the machine partner during interaction [7]. For instance, Peña and Cowan [7] show that over-informative machine partners (i.e., partners that exert more effort) benefit users in visually grounded tasks, a finding that runs contrary to current design recommendations advocating conciseness in system responses [8]. Without experimental work examining how different levels of machine effort influence user behaviour, we risk designing systems that are not attuned to contextual and user needs, thereby reinforcing the current one-sided distribution of effort. I therefore argue that future research should systematically examine how the division of labour emerges across different contexts and communicative goals in human–machine interactions, in order to move towards a more balanced and natural distribution of effort in human–machine dialogue.

[1] Peña, P. R., et al. (2023). Audience design and egocentrism in reference production during human-computer dialogue.
[2] Rasenberg, M. et al. (2023). Reimagining language: Towards a better understanding of language by including our interactions with non-humans.
[3] Clark, H. H., & Murphy, G. L. (1982). Audience design in meaning and reference.
[4] Hawkins, R. D. et al. (2021). The division of labor in communication: Speakers help listeners account for asymmetries in visual perspective.
[5] Chater, N. (2023). How could we make a social robot? A virtual bargaining approach.
[6] Wang, A. et al. (2019). SuperGLUE: A stickier benchmark for general-purpose language understanding systems.
[7] Peña, P.R. and Cowan, B.R. (in press). Help Me and I’ll Help You: Speakers’ and Listeners’ Collaborative Effort and the Division of Labour in Human-Agent Collaborative Communication.
[8] Setlur, V., & Tory, M. (2022, April). How do you converse with an analytical chatbot? Revisiting Gricean maxims for designing analytical conversational behavior.

The expansion of electric vehicle (EV) charging infrastructure is driven by EV adoption, which highlights the importance of individual and collective efforts in decarbonisation. This paper analyses the variables affecting the growth of public EV charging stations in 371 areas in the UK between 2019 and 2024, at the quarterly level, distinguishing between slow (AC) and fast (DC) charging technologies. Using fixed effects, instrumental variables, dynamic panel and quantile regression methods, the study addresses endogeneity in EV adoption and examines heterogeneity in infrastructure development across regions and districts. The results show strong consistency in charging deployment, confirming that EV uptake is a significant driver of the expansion of both AC and DC systems, albeit with different local impacts. Higher regional income is associated with less public AC provision, consistent with a shift towards private or workplace charging. At the same time, DC deployment is more responsive to technological advances and changes in EV battery capacity and fuel prices. Policies that support private charging are eroding public AC infrastructure while simultaneously growing DC stations, suggesting technology-specific policy interactions. Distributional and regional analyses reveal significant variation in these relationships, suggesting that national averages mask important local differences. These findings underscore the importance of considering local economic conditions, technology specificities, and market dynamics when designing charging infrastructure policy. Effective decarbonisation requires policy frameworks that are sensitive to regional heterogeneity and the distinct roles of slow- and fast-charging technologies, rather than uniform national strategies.
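To make the fixed-effects strategy above concrete, the following is a minimal sketch on synthetic data (all variable names and magnitudes are hypothetical, not taken from the study). It shows why pooled OLS overstates the effect of EV uptake when areas differ in unobserved, time-invariant ways, and how the within (demeaning) transformation absorbs those area effects:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: 371 areas over 24 quarters (numbers hypothetical).
n_areas, n_quarters = 371, 24
area_effect = rng.normal(size=n_areas)  # unobserved, time-invariant local factor

# EV uptake is correlated with the area effect; stations depend on both.
ev_uptake = rng.normal(size=(n_areas, n_quarters)) + area_effect[:, None]
beta_true = 0.8
stations = (beta_true * ev_uptake + 2.0 * area_effect[:, None]
            + rng.normal(scale=0.1, size=(n_areas, n_quarters)))

def slope(x, y):
    """Univariate OLS slope of y on x."""
    xd, yd = x - x.mean(), y - y.mean()
    return (xd @ yd) / (xd @ xd)

# Pooled OLS: biased upward, since ev_uptake proxies the omitted area effect.
beta_pooled = slope(ev_uptake.ravel(), stations.ravel())

# Within (fixed-effects) estimator: demean each area's series first.
x_w = (ev_uptake - ev_uptake.mean(axis=1, keepdims=True)).ravel()
y_w = (stations - stations.mean(axis=1, keepdims=True)).ravel()
beta_fe = slope(x_w, y_w)

print(f"true: {beta_true}, pooled: {beta_pooled:.2f}, fixed effects: {beta_fe:.2f}")
```

The demeaning step is the mechanical core of the fixed-effects estimator; the study's IV, dynamic panel, and quantile specifications address the remaining endogeneity and heterogeneity concerns that demeaning alone cannot.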

In research investigating human interaction with non-human (and in particular artificial) agents, much attention has been paid to what kind of agent the human is interacting with and to what extent (or in what way) it is human-like (e.g., Lagerstedt and Thill, 2020). Although this strategy can often be useful and informative, it also rests on an overly simplified view of human-human interaction. How humans interact with other humans depends largely on what role (in the sense of Goffman, 1959) the other human inhabits at that particular moment, as well as on the context in which the interaction happens. This phenomenon is often overlooked in discussions of human interaction with social robots (Healey et al., 2023). There are, however, situations where it can explain behaviours that would otherwise seem quite strange. For example, in a study where humans interacted with a virtual assistant (Alexa) in domestic situations (Vanzan et al., 2025), there were several instances when humans were speaking to the Alexa and, mid-interaction, made remarks about the Alexa to each other as if the Alexa were not there. We call this “the Butler Effect” to emphasise that such otherwise rude behaviour would not be unreasonable under the right circumstances of human-human interaction. For instance, when dinner guests interact with serving staff, the presence of the staff might only be acknowledged when their roles are relevant for the guests. Framing the phenomenon in terms of interactions between roles should help reduce the excessive exotification of non-human agents, and give better access to the underlying psychological and cognitive dynamics at hand. This perspective can help reintroduce and handle some of the complexities of interaction necessary for domains such as Industry 4.0 and 5.0 (Kolbeinsson et al., 2019).

References:
-Goffman, E. (1959). The presentation of self in everyday life. Allen Lane.
-Healey, P. G. T., Howes, C., Kempson, R., Mills, G. J., Purver, M., Gregoromichelaki, E., Eshghi, A., and Hough, J. (2023). “Who’s there?”: Depicting identity in interaction. Behavioral and Brain Sciences, 46:e37.
-Kolbeinsson, A., Lagerstedt, E., and Lindblom, J. (2019). Foundation for a classification of collaboration levels for human-robot cooperation in manufacturing. Production & Manufacturing Research, 7(1):448–471.
-Lagerstedt, E. and Thill, S. (2020). Benchmarks for evaluating human-robot interaction: lessons learned from human-animal interactions. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pages 137–143. IEEE.
-Vanzan, V., Bedir, T., Maraev, V., Lagerstedt, E., Barthel, M., and Howes, C. (2025). Fart gags and prudish machines: Laughter in human-agent interactions. In Proceedings of the 13th International Conference on Human-Agent Interaction, pages 265–273.

Resource management programs use monitoring and sanctioning mechanisms to enforce rules to mitigate social dilemmas like over-extraction from common property resources. Existing literature on enforcement in strategic choice environments provides mixed evidence regarding the relative effectiveness of probability of detection versus severity of sanctions to deter non-compliance. In a controlled laboratory experiment using a linear extraction game, I exogenously vary these deterrence parameters, while keeping expected penalties constant. I test deterrence effectiveness under four distinct compliance regimes that vary harvest quota levels. I find that higher probability of monitoring is more effective at reducing sub-optimal harvest than an equivalent increase in severity of sanctions. Further, a combination of fines and rewards is more effective than fines alone. The results are driven by deterring over-extraction by free riders.
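The design above varies detection probability against sanction severity while holding the expected penalty constant. A minimal sketch of such a parameterisation (all numbers are illustrative, not taken from the experiment):

```python
# Two hypothetical deterrence regimes with identical expected penalties.
regimes = {
    "high_probability": {"p_detect": 0.50, "fine": 20.0},
    "high_severity":    {"p_detect": 0.10, "fine": 100.0},
}

def expected_penalty(p_detect, fine):
    """Risk-neutral expected cost of one unit of non-compliant harvest."""
    return p_detect * fine

for name, r in regimes.items():
    print(name, expected_penalty(**r))

# Both regimes carry an expected penalty of 10.0, so any behavioural
# difference between them reflects how subjects weight probability
# versus severity, not the expected cost itself.
```

Holding the product p × fine fixed is what lets the experiment attribute any compliance difference to the deterrence channel rather than to differing expected costs.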

Conservation planning studies typically treat threats as exogenous and evaluate siting rules from a planner’s perspective. We argue that conservation is often contested, and develop a sequential land-claim game that models conservation as a dynamic, adversarial contest between conservationists (“Greens”) and developers (“Farmers”). We explore the framework in a Claims World that isolates the role of rivalry and leakage, and in a Budget World that introduces procurement constraints, decomposing outcomes into a Pure Strategy Effect (PSE)—the intrinsic quality of sites a strategy targets—and a Displacement–Leakage Effect (DLE)—the spillover gains from displacing developers’ preferred sites when leakage is incomplete. Our results generate several counterintuitive patterns. First, the link between threat-weighting and additionality breaks down once developer adaptation is allowed. Second, reducing leakage can paradoxically increase misallocation. Third, the textbook ratio-greedy rule (maximise efficiency) is systematically dominated by the simple value-greedy rule (maximise environment): we explore this ‘knapsack reversal’ more formally and show how it can produce a ‘disappointment gap’ between static (Marxan) planning and dynamic implementation. We then transport our dynamic contest to a Bolivia-based planning board constructed from biophysical data and confirm that the qualitative rankings from the simulations carry over, and adversarial outcomes lie well below the static cost-effectiveness upper bound. Tiny-grid equilibria, formal analysis and robustness exercises in the Appendix show that these patterns are consistent with best-response logic rather than artefacts of modelling choices. Together, the results suggest that robust conservation in contested landscapes requires strategies that anticipate adaptation, not just static threats.
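The ‘knapsack reversal’ noted above can be illustrated even in a static setting: ratio-greedy (value per unit cost) is the textbook heuristic for budgeted site selection, yet simple instances exist where value-greedy buys a strictly better portfolio. A minimal sketch with hypothetical sites, values, and costs (the paper’s dominance result additionally relies on adversarial displacement, which this static example does not model):

```python
def greedy(sites, budget, key):
    """Select sites in descending `key` order while the budget allows."""
    chosen, spent = [], 0.0
    for name, value, cost in sorted(sites, key=key, reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, sum(v for n, v, c in sites if n in chosen)

# Hypothetical sites: (name, conservation value, acquisition cost).
sites = [("A", 10.0, 10.0), ("B", 6.0, 5.0), ("C", 1.0, 5.0)]
budget = 10.0

ratio_pick = greedy(sites, budget, key=lambda s: s[1] / s[2])  # efficiency
value_pick = greedy(sites, budget, key=lambda s: s[1])         # raw value

print("ratio-greedy:", ratio_pick)   # (['B', 'C'], 7.0)
print("value-greedy:", value_pick)   # (['A'], 10.0)
```

Ratio-greedy takes the efficient site B first, can no longer afford A, and pads out the budget with the low-value C; value-greedy spends the whole budget on A and ends up with a higher total. In the paper’s dynamic contest, displacement and leakage make this kind of reversal systematic rather than incidental.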
