Human–AI coevolution is characterized by a fundamental tension between comfort (predictable, low-effort interaction that supports trust) and growth (the introduction of challenge that enables learning and development). AI systems that over-optimize for comfort risk reinforcing cognitive inertia, while systems that introduce excessive challenge may erode trust and user agency. Drawing on psychological theories including Vygotsky's Zone of Proximal Development, Kahneman's dual-process theory, and the Transtheoretical Model of Change, we propose a dual-mode framework that conceptually delineates how AI systems can transition between a predictability mode (consolidating trust, reducing cognitive effort) and a growth mode (introducing calibrated challenges). Rather than resolving the comfort–growth tension, effective human–AI thought partnership depends on managing it through principled mode switching within a stable interactional meta-framework. We clarify the conceptual scope of this proposal, situate it primarily within related work on human–AI educational, training, and skill-development contexts, and briefly discuss ethical and agency implications. The framework, which extends to other forms of AI-mediated exploratory interaction (information retrieval, media consumption, marketplace navigation, conversational agents), is intended as a design-oriented, non-operational contribution to research on human-centered AI and long-term human–AI development.
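The abstract describes the framework as non-operational, but the mode-switching idea can be illustrated with a minimal sketch. All names, thresholds, and signals below (`trust_floor`, `challenge_ceiling`, the state fields) are hypothetical illustrations, not part of the paper's proposal; the point is only that growth mode is entered when trust is consolidated and exited when challenge exceeds a calibrated bound.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    PREDICTABILITY = "predictability"  # consolidate trust, reduce cognitive effort
    GROWTH = "growth"                  # introduce calibrated challenges

@dataclass
class InteractionState:
    trust: float      # estimated user trust, in [0, 1] (hypothetical signal)
    challenge: float  # current challenge level, in [0, 1] (hypothetical signal)

def next_mode(state: InteractionState,
              trust_floor: float = 0.6,
              challenge_ceiling: float = 0.8) -> Mode:
    """Enter growth mode only while trust is above a floor;
    fall back to predictability mode when trust erodes or
    challenge exceeds a calibrated ceiling."""
    if state.trust < trust_floor or state.challenge > challenge_ceiling:
        return Mode.PREDICTABILITY
    return Mode.GROWTH
```

A real system would of course need richer state estimation and hysteresis to avoid rapid oscillation between modes; this sketch only captures the two-mode structure the abstract outlines.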
Giuseppe Riva
Dino Pedreschi
University of Pisa
IRCCS Istituto Auxologico Italiano
www.synapsesocial.com/papers/69e8661d6e0dea528ddea91d — DOI: https://doi.org/10.1145/3811407