Rationale and Aims: Large language models (LLMs) are becoming a common feature of social platforms, but how they affect human-human interaction in online peer support groups is not well understood. Ahead of a larger-scale evaluation, we developed and pilot-tested an LLM-based conversational agent (CA) within a group chat to assess feasibility, refine its intervention strategy, and explore whether it could re-engage stalled conversations while maintaining peer-led dynamics.
Methods: We conducted a six-hour pilot with six participants in a group chat augmented by the CA. The agent responded to messages that remained unanswered after a 30–60 minute delay, allowing time for peer replies before intervening, and sent check-in messages when the group had been inactive for more than one hour. Message activity and conversational structure were analysed, including thread initiation, development, and resolution. Participants completed baseline and post-session surveys and took part in follow-up interviews.
Findings: A total of 77 messages were exchanged, 12 of them (15.6%) from the CA. Eleven threads were identified; six (54.5%) developed into multi-member discussions, including two initiated by the CA. Thread analysis showed that the CA helped re-engage stalled exchanges and reduced the number of unanswered messages. Participants’ mean comfort with being in a CA-supported group increased from 2.75 to 4.25 (+1.50), while mean concern decreased from 3.48 to 1.64 (–1.84), with the largest reductions for privacy, harmful responses, and dependence.
Conclusions: The pilot demonstrated the feasibility of deploying an LLM-based CA in a live peer support group setting and informed the methodology and evaluation framework for future studies. Beyond participants’ subjective evaluations, message and thread analyses indicated that the CA helped move discussions forward while preserving peer-led group dynamics. Future work should examine not only whether CA augmentation improves support outcomes, but also whether it changes the relational and social meaning of peer support.