In recent years, large language models (LLMs) have shown the ability to learn directly from examples embedded in their input, a process known as in-context learning (ICL). This approach allows a model to interpret the examples provided in context and generalize to new tasks without any modification of its parameters, meaning the learning process is realized entirely at inference time. The phenomenon was first observed in GPT-3 and has since become central to understanding reasoning in large models. Studies describe ICL as implicit Bayesian inference, as the internal simulation of learning algorithms, or as the operation of induction heads within attention circuits. Recent work extends these perspectives: The Implicit Dynamics of ICL links activation updates to posterior inference, In-Context Learning with Long-Context Models explores many-shot scaling, and VL-ICL Bench evaluates multimodal adaptation. Taken together, these studies indicate that ICL combines statistical inference, algorithmic approximation, and mechanistic emergence. However, challenges remain in theoretical integration, reproducibility, and robustness. This article reviews the theoretical frameworks, empirical results, and the latest research progress on ICL as of 2025. The aim is to clarify its internal mechanisms, identify current limitations, and outline future directions, in order to promote a more systematic and coherent understanding of ICL.
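To make the setup concrete, the sketch below illustrates how a few-shot ICL prompt is typically assembled: labeled demonstrations are concatenated with a new query, and a frozen model is asked to complete the answer without any gradient updates. It is a minimal illustration only; the `generate` call at the end is a hypothetical stand-in for whatever LLM completion interface is in use, not a specific library API.

```python
# Minimal sketch of in-context learning via a few-shot prompt.
# Assumption: `generate(model, prompt)` is a hypothetical placeholder
# for any LLM text-completion interface; it is not a real library call.

def build_icl_prompt(examples, query):
    """Concatenate labeled examples and a new query into one prompt.

    The model is expected to infer the input-output mapping from the
    demonstrations alone, with no parameter updates.
    """
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Three in-context demonstrations of a sentiment-labeling task.
examples = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours of my life.", "negative"),
    ("An instant classic.", "positive"),
]

prompt = build_icl_prompt(examples, "The plot made no sense at all.")
print(prompt)

# In practice, the prompt would be sent to a frozen LLM, e.g.:
# completion = generate(model, prompt)  # hypothetical call
```

The point of the sketch is that the only "training signal" the model receives is the text of the demonstrations themselves; everything the model learns about the task is extracted during a single forward pass over the prompt.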