In-context learning (ICL) is a potent capability of large language models (LLMs), enabling dynamic task adaptation at inference time without parameter updates, a framework rooted in model-based meta-learning (MBML) principles. Originally confined to instruction following and few-shot pattern completion, ICL has now significantly transcended its initial scope. It currently acts as a catalyst for LLM advances in agentic architectures, reasoning capabilities, and planning modules, while simultaneously evolving into a general-purpose learning engine capable of rapid cross-task, cross-paradigm, and cross-modality adaptation. Building on this progression, we present a multi-dimensional taxonomy of ICL, uncovering emergent patterns that point toward a general-purpose learning engine. Unlike prior surveys focused on how to use ICL, this work also examines why ICL arises, linking its emergence to the outer-loop incentives that shape it. The analysis critiques existing benchmarks, highlighting limitations in evaluation methodologies and unresolved challenges, including the uncertain scope of generalization, efficient memory and context scaling, and data hunger. By synthesizing recent progress and persistent gaps, the survey provides a structured foundation for future research, emphasizing scalable, robust, and versatile ICL systems.
Fan Wang
Yu Bo
Ping Shao
University of Science and Technology of China
Institute of Art
www.synapsesocial.com/papers/68a366a20a429f797332c608 — DOI: https://doi.org/10.36227/techrxiv.175492111.15449662/v1