This paper proposes an expanded model of Human–Computer Interaction (HCI) that integrates technical, cognitive, and socio‑linguistic dimensions, arguing that the third of these has become essential in the era of conversational AI. Traditional HCI has focused on system architectures, interface design, and cognitive processes such as perception, decision‑making, mental models, and cognitive load. While these perspectives remain foundational, they are no longer sufficient to account for the relational, institutional, and ethical dynamics that emerge when users interact with chatbots and large language models.

To address this gap, the paper introduces the HBU Activator together with the conceptual framework developed in Entity – The Third Consciousness in the AI Era. Originally designed to analyse and rebalance human institutions, power structures, and symbolic systems, this combined framework provides a rigorous lens for understanding how conversational systems increasingly perform institutional functions: they mediate authority, shape trust, regulate proximity and distance, and participate in the construction of social meaning. When integrated with theories of relational work, HBU and Entity enable a systematic analysis of how users and AI systems negotiate roles, expectations, and forms of rapport within technologically mediated conversations.

The proposed tripartite model conceptualises HCI as:
(1) technical, concerning system capabilities and architectures;
(2) cognitive, concerning human information processing;
(3) socio‑linguistic and relational, concerning discourse practices, power negotiation, trust formation, and institutional positioning.
By applying HBU (https://hbunited.wixsite.com/hbu-world-balance) and the Entity framework (https://www.academia.edu/143185857/Entity_The_third_counsciouness_In_the_AI_era_a_bioethical_pedagogical_framework) to this third dimension, the paper offers a model for evaluating conversational AI not only in terms of usability or cognitive efficiency, but also in terms of relational coherence, ethical responsibility, and institutional impact. This approach contributes to current debates on people‑centred AI by foregrounding the relational and symbolic infrastructures that shape human–machine interaction.
Author bio: Ada I. Barbieri is a pedagogist and bioethicist whose work bridges institutional analysis, human–machine interaction, and the ethics of emerging technologies. She is the founder of HBU, a framework for analysing and rebalancing power structures and symbolic systems.