The integration of generative AI into programming education has produced a widely reported tension between performance and learning. We distinguish immediate task performance from genuine learning (durable, transferable conceptual understanding and evaluative skill) and examine how AI support shapes learning processes, not merely outcomes. While many studies document improved speed, accuracy, and affect with AI support, questions remain about the quality of the underlying learning. We used a constructivist grounded-theory design with constant comparison across two naturally occurring course sections: an AI-enabled section and a human pair-programming section used as a theoretical contrast. Over one semester, we triangulated data sources from undergraduate Java programming students: interaction logs, pre-/post-course concept maps, and semi-structured dyadic interviews (AI-enabled: N=24; theoretical contrast: N=17). Analysis revealed a core tension between students’ pursuit of “Domain Mastery” (conceptualization, explanation, and evaluation) and “Tool Mastery” (procedural efficiency with AI). We identified dynamic strategy switching (the Strategic Dance), Partnership Framing with an Illusion of Dialogue subtheme, and two recurrent evaluation challenges (Trust-but-Can’t-Verify for novices; a Boilerplate Blindspot for more experienced students). We also describe attenuated metacognitive calibration (a mismatch between perceived readiness and independent capability) co-occurring with sustained offloading patterns. These categories synthesize into a process-level tension model with two recurrent loops (Scaffolding and Offloading), interpreted through Cognitive Load Theory and Self-Determination Theory. We offer a theory-building account that helps explain how widely observed performance and affect gains can co-occur with thinner opportunities for germane processing and authorship. The model generates testable implications (e.g., critique-the-AI phases, planned fading, verification journals), and we invite multi-site tests to evaluate boundary conditions.
Authors: Dandan Liu, G. F. Fan, Lihu Pan
Journal: International Journal of STEM Education
Affiliations: University of Malaya; Taiyuan University of Science and Technology
Liu et al. studied this question.
www.synapsesocial.com/papers/69b6069b83145bc643d1ca7c — DOI: https://doi.org/10.1186/s40594-025-00592-w