Despite technological advances in LLM-powered machine partners, these systems often lead to a one-sided division of labour in interaction [1, 2]. The division of labour principle proposes that speakers and listeners collaboratively share the effort required to achieve communicative success [3, 4]. When speakers exert more effort during interaction, listeners can devote less; conversely, when speakers exert less effort, listeners must compensate to ensure that communication succeeds. In human-human dialogue, this sharing of effort is negotiated by both partners. In human-machine interaction, however, the burden often falls disproportionately on the user, as machines fail to follow fundamental principles of human-human dialogue [1, 2], such as the division of labour.
While some studies have explored the linguistic capabilities of LLMs [6], research should focus more on how the presence or absence of principles that support communicative success shapes users’ own collaborative processes. In particular, we need to investigate how the division of labour manifests in human-machine dialogue depending on the degree of effort the machine partner exerts during interaction [7]. For instance, Peña and Cowan [7] show that over-informative machine partners (i.e., those that exert more effort) benefit users in visually grounded tasks, a finding that runs contrary to current design recommendations advocating conciseness in system responses [8]. Without experimental work examining how different levels of machine effort influence user behaviour, we risk designing systems that are not attuned to contextual and user needs, thereby reinforcing the current one-sided distribution of effort. I therefore argue that future research should systematically examine how the division of labour emerges across different contexts and communicative goals in human-machine interaction, in order to move towards a more balanced and natural distribution of effort in human-machine dialogue.

[1] Peña, P. R., et al. (2023). Audience design and egocentrism in reference production during human-computer dialogue.
[2] Rasenberg, M., et al. (2023). Reimagining language: Towards a better understanding of language by including our interactions with non-humans.
[3] Clark, H. H., & Murphy, G. L. (1982). Audience design in meaning and reference.
[4] Hawkins, R. D., et al. (2021). The division of labor in communication: Speakers help listeners account for asymmetries in visual perspective.
[5] Chater, N. (2023). How could we make a social robot? A virtual bargaining approach.
[6] Wang, A., et al. (2019). SuperGLUE: A stickier benchmark for general-purpose language understanding systems.
[7] Peña, P. R., & Cowan, B. R. (in press). Help me and I’ll help you: Speakers’ and listeners’ collaborative effort and the division of labour in human-agent collaborative communication.
[8] Setlur, V., & Tory, M. (2022). How do you converse with an analytical chatbot? Revisiting Gricean maxims for designing analytical conversational behavior.