Current large language model (LLM) systems address long-context reasoning through external memory augmentation—retrieval-augmented generation (RAG), vector databases, and agent workflows. We argue that this paradigm is fundamentally limited because it separates storage from computation, violating a core principle of biological cognition. We propose CogNet (Cognitive Network), a brain-inspired architecture that unifies memory and reasoning within a single computational framework. CogNet introduces five interconnected innovations grounded in neuroscience: (1) a three-layer memory trace model (gist–anchor–pointer) inspired by Fuzzy Trace Theory; (2) Bidirectional Reasoning-Memory Coupling (BRMC), where every reasoning step simultaneously reads and writes memory; (3) a multimodal associative memory graph with spreading activation for reasoning; (4) a dynamic forgetting system based on activation threshold modulation rather than physical deletion; and (5) a dual-channel consolidation system that transcends the human brain's sleep-dependent consolidation bottleneck. We provide formal analysis including information-theoretic bounds, convergence guarantees, and stability-plasticity trade-offs.
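To make innovations (3) and (4) concrete, here is a minimal sketch of spreading activation over an associative memory graph with threshold-based forgetting. This is an illustrative toy, not CogNet's implementation: the class name, edge weights, decay factor, and retrieval threshold are all assumed for the example.

```python
from collections import defaultdict

class AssociativeMemoryGraph:
    """Toy associative memory graph with spreading activation.

    Illustrative sketch only: the decay factor and retrieval threshold
    are invented parameters, not values from the CogNet paper.
    """

    def __init__(self, decay=0.5, threshold=0.1):
        self.edges = defaultdict(dict)   # node -> {neighbor: weight}
        self.decay = decay               # attenuation per hop
        self.threshold = threshold       # activation cutoff ("soft forgetting")

    def associate(self, a, b, weight):
        # Undirected association between two memory traces.
        self.edges[a][b] = weight
        self.edges[b][a] = weight

    def spread(self, source, hops=2):
        # Breadth-first spreading activation from a cue node.
        activation = {source: 1.0}
        frontier = {source}
        for _ in range(hops):
            nxt = {}
            for node in frontier:
                for nb, w in self.edges[node].items():
                    a = activation[node] * w * self.decay
                    if a > nxt.get(nb, 0.0):
                        nxt[nb] = a
            for nb, a in nxt.items():
                if a > activation.get(nb, 0.0):
                    activation[nb] = a
            frontier = set(nxt)
        # Traces below threshold stay stored but are not retrieved:
        # forgetting by activation-threshold modulation, not deletion.
        return {n: a for n, a in activation.items() if a >= self.threshold}
```

Note how a weakly associated trace remains physically present in `self.edges` yet falls below the retrieval threshold, modeling forgetting without deletion.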
Daoxiang Dong
www.synapsesocial.com/papers/69f2a4da8c0f03fd67764050 — DOI: https://doi.org/10.5281/zenodo.19839484