THEORETICAL BACKGROUND
How would consumers feel when service robots fail, and what are the behavioral consequences? When people perceive that a negative event cannot be resolved or altered in the future, they feel helpless (Gelbrich, 2010). Past research has documented that consumers respond more negatively when a service failure is attributed to stable causes (Van Vaerenbergh et al., 2014; Weiner, 1985). In the service technology context, Belanche et al. (2020) found that consumers make stronger stability attributions for service failures caused by service robots than for those caused by humans. This may be because people tend to perceive automated systems as less flexible and adaptable (Leo & Huh, 2020) and as requiring substantial time and effort to fix functional issues (Joyeux & Albiez, 2011). Thus, consumers may expect a service robot’s error to be stable (i.e., to recur constantly in the future) and consequently feel helpless (Gelbrich, 2010; Luse & Burkman, 2022).
Extending this notion, we argue that consumer helplessness will be contingent on the type of failure encountered. Based on the theory of mind perception (Gray et al., 2007), past research suggests that whereas humans are seen as having greater minds (high in both agency and experience), robots are typically perceived as having moderate agency but little experiential mind (Gray & Wegner, 2012). In other words, people expect service robots to possess a high level of functional capabilities but to lack social-emotional capabilities, which are key determinants of outcome and process failures, respectively (Choi et al., 2021; Smith et al., 1999). Likewise, while mechanical and thinking intelligence can easily be carried out by robots, feeling intelligence can hardly be (Huang & Rust, 2021). As a result, consumers who encounter a process failure by a robot perceive the issue as nearly impossible to fix, resulting in a high level of helplessness. In contrast, an outcome failure by a service robot can be perceived as easily resolvable through, for example, a quick system check or machine learning (Heller, 2019). In short, a process (vs. outcome) failure by a robot is perceived as nearly irresolvable, eliciting stronger helplessness.
Further, helpless consumers are more likely to engage in negative word-of-mouth (NWOM) but less likely to complain directly to the company: they doubt that the service provider can remedy the failure, yet they still need to vent their negative emotions (Gelbrich, 2010). Thus, we hypothesize:

H1. Consumers are more likely to engage in NWOM, but less likely to complain directly to the organization, when a process (vs. outcome) service failure is caused by a robot.
H2. The effect of a robot’s service failure type (process vs. outcome) on complaint behavior (NWOM, direct complaint) will be mediated by helplessness.

How can managers lead consumers to believe that a robot is capable of fixing its social-emotional functions, and to expect that the same error will not recur? Empirical evidence has shown that subtle cues (e.g., appearance) lead robots to be perceived as warmer, and consumers respond less negatively to such robots’ mistakes (Xu & Liu, 2022; Yam et al., 2021). Moreover, adding warmth cues to robots can enhance their perceived social-emotional skills (Choi et al., 2021). We therefore propose that adding warmth cues to robots leads consumers to believe that the robots’ social mistakes (i.e., process failures) can be resolved, mitigating helplessness.

H3. In a process failure, the helplessness-mediated effect of a robot’s failure on complaint behavior will be mitigated when the robot has warmth cues; no such effect of warmth cues will emerge in an outcome failure.

METHODOLOGY
Study 1
To test H1 and H2, Study 1 will employ a single-factor (failure type: process vs. outcome) between-subjects experimental design. Participants will first read a scenario describing a service failure at a restaurant deploying service robots. Failure type will be manipulated by adapting scenarios from previous empirical studies (e.g., Choi et al., 2021). After reading the scenario, participants will indicate their intention to complain directly to the manager (Evanschitzky et al., 2011) and their NWOM intention (Zeithaml et al., 1996). Helplessness will also be measured (Gelbrich, 2010), followed by anger and frustration as alternative explanations (Gelbrich, 2010), and by perceived severity of the service failure and propensity to complain as potential control variables.

Study 2
To test H3, Study 2 will employ a 2 (warmth cues on a robot: present vs. absent) × 2 (failure type: process vs. outcome) between-subjects experimental design. Participants will first read a scenario describing a process or outcome failure by a service robot at a hotel (Choi et al., 2021). Warmth cues will be manipulated by adding a broad smile and a cute appearance to the robot image; no such cues will be applied in the cue-absent condition, following Liu et al. (2022). A pretest will be conducted to verify the stimuli. After reading the scenario, the same items as in Study 1 will be measured.

EXPECTED CONTRIBUTION
When a robot fails, consumers may not complain directly to the firm but instead spread negative word-of-mouth, particularly when they experience a process failure. We argue that this is driven by consumer helplessness: consumers do not expect the problem to be resolved. Managers should therefore pay extra attention to monitoring how their robots “behave” toward customers. By adding warmth cues to robots, managers can encourage direct complaints from customers, which allows them to take immediate action and ultimately improve service quality.
The current research expects to add to the literature on service robot failure by examining the role of helplessness, which has been underexplored in service robotics research, and its unique consequences (Gelbrich, 2010).
REFERENCES

Belanche, D., Casaló, L. V., Flavián, C., & Schepers, J. (2020). Robots or frontline employees? Exploring customers’ attributions of responsibility and stability after service failure or success. Journal of Service Management, 31(2), 267-289.

Choi, S., Mattila, A. S., & Bolton, L. E. (2021). To err is human(-oid): How do consumers react to robot service failure and recovery? Journal of Service Research, 24(3), 354-371.

Evanschitzky, H., Brock, C., & Blut, M. (2011). Will you tolerate this? The impact of affective commitment on complaint intention and postrecovery behavior. Journal of Service Research, 14(4), 410-425.

Gelbrich, K. (2010). Anger, frustration, and helplessness after service failure: coping strategies and effective informational support. Journal of the Academy of Marketing Science, 38(5), 567-585.

Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.

Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125-130.

Heller, M. (2019). Machine learning algorithms explained. InfoWorld (May 9). https://www.infoworld.com/article/3394399/machine-learning-algorithms-explained.html

Huang, M. H., & Rust, R. T. (2021). Engaged to a robot? The role of AI in service. Journal of Service Research, 24(1), 30-41.

Leo, X., & Huh, Y. E. (2020). Who gets the blame for service failures? Attribution of responsibility toward robot versus human service providers and service firms. Computers in Human Behavior, 113, 106520.

Liu, X. S., Yi, X. S., & Wan, L. C. (2022). Friendly or competent? The effects of perception of robot appearance and service context on usage intention. Annals of Tourism Research, 92, 103324.

Luse, A., & Burkman, J. (2022). Learned helplessness attributional scale (LHAS): Development and validation of an attributional style measure. Journal of Business Research, 151, 623-634.

Smith, A. K., Bolton, R. N., & Wagner, J. (1999). A model of customer satisfaction with service encounters involving failure and recovery. Journal of Marketing Research, 36(3), 356-372.

Van Vaerenbergh, Y., Orsingher, C., Vermeir, I., & Larivière, B. (2014). A meta-analysis of relationships linking service failure attributions to customer outcomes. Journal of Service Research, 17(4), 381-398.

Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92(4), 548-573.

Xu, X., & Liu, J. (2022). Artificial intelligence humor in service recovery. Annals of Tourism Research, 95, 103439.

Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2021). Robots at work: People prefer—and forgive—service robots with perceived feelings. Journal of Applied Psychology, 106(10), 1557-1572.

Zeithaml, V. A., Berry, L. L., & Parasuraman, A. (1996). The behavioral consequences of service quality. Journal of Marketing, 60(2), 31-46.