Large Language Models (LLMs) have become essential for interactive AI systems, yet they remain fundamentally static after deployment: they cannot update their parameters from interaction feedback and often repeat the same mistakes across long interaction streams. We propose Dual-Process Agent (DPA), a framework for continual context refinement that enables learning without modifying a frozen model backbone. Inspired by dual-process theory from cognitive science, DPA decomposes each interaction episode into two complementary processes: a fast System 1 that retrieves compact, relevant context from an explicit long-term memory and generates responses, and a slow System 2 that reflects on outcomes and writes curated updates back into memory. To prevent memory degradation over extended interactions, DPA maintains bulletized memory entries with utility statistics and employs a conservative curator gate that filters generic, redundant, or conflicting insertions. Experiments on six diverse benchmarks demonstrate that DPA consistently outperforms vanilla prompting and competitive baselines on both GPT-5.1 and Llama-3.1-8B backbones, achieving the best overall performance across multiple reasoning and knowledge-intensive tasks.
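The dual-process loop the abstract describes can be sketched in miniature: a System 1 step that retrieves high-overlap memory entries and tracks their utility, and a System 2 step whose conservative curator gate rejects redundant insertions. This is a minimal illustrative sketch, not the authors' implementation; all names (`MemoryEntry`, `DPAMemory`, `curate`) and the token-overlap similarity measure are assumptions made for clarity.

```python
# Illustrative sketch of DPA's memory loop (hypothetical names, not the
# paper's code). Similarity is naive Jaccard token overlap for brevity.
from dataclasses import dataclass


@dataclass
class MemoryEntry:
    text: str      # one bulletized memory entry
    uses: int = 0  # utility statistic: retrieval count


class DPAMemory:
    def __init__(self, redundancy_threshold: float = 0.8):
        self.entries: list[MemoryEntry] = []
        self.redundancy_threshold = redundancy_threshold

    def _overlap(self, a: str, b: str) -> float:
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(1, len(ta | tb))

    # System 1: retrieve compact, relevant context (top-k by overlap),
    # updating each hit's utility statistic.
    def retrieve(self, query: str, k: int = 2) -> list[str]:
        ranked = sorted(self.entries,
                        key=lambda e: self._overlap(e.text, query),
                        reverse=True)
        hits = ranked[:k]
        for e in hits:
            e.uses += 1
        return [e.text for e in hits]

    # System 2: conservative curator gate — reject candidates that are
    # redundant with existing memory; otherwise write them back.
    def curate(self, candidate: str) -> bool:
        if any(self._overlap(candidate, e.text) >= self.redundancy_threshold
               for e in self.entries):
            return False  # filtered as redundant
        self.entries.append(MemoryEntry(candidate))
        return True
```

In this sketch the gate only filters redundancy; the generic- and conflict-filtering the abstract mentions would need additional checks (e.g. an LLM-based judgment), which are omitted here.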
Liangyu Teng
Wei Ni
Liang Song
Electronics
Fudan University
China State Construction Engineering (China)
DOI: https://doi.org/10.3390/electronics15061232