Despite the proliferation of artificially intelligent systems capable of social interaction, how and why social interaction influences users over time remains poorly understood. We draw on theories of technology adoption and research in affective computing, social psychology, and management to introduce the concept of human-AI relationships involving interdependence, temporality, and intensity. We develop the Relational Tradeoff Model, extending current theorizing on technology adoption by accounting for a critical third factor in addition to cognitive acceptance and behavioral use: human subjective well-being. The model reveals an important unexplored tradeoff in relationships with socially interactive AI: short-term acceptance and use gains but long-term subjective well-being costs for trust, psychological safety, and emotional labor, depending on AI social function and exacerbating and mitigating individual and relational factors. We discuss implications and suggestions for future exploration, including intrapersonal, interpersonal, and team relational dynamics and evolving expectations of AI in organizations.
Laura Rees (Oregon State University)
Mehran Bahmani (York University)
Organizational Psychology Review
Rees et al. studied this question.
www.synapsesocial.com/papers/693231368e51979591dcec26 — DOI: https://doi.org/10.1177/20413866251399871