Large Language Models lack personalized memory. We present the Semantic Tension Graph (STG), a cognitive memory architecture for LLM agents that implements nine biologically grounded mechanisms: activation propagation, Hebbian learning, synaptic pruning, salience decay, tension tracking, self-modeling, multi-phase inhibition, co-activation edge creation, and temporal episode structure. Post-hoc analysis reveals that STG independently converged on the same architectural principles as Kanerva's Sparse Distributed Memory (1988) and Eliasmith's Semantic Pointer Architecture (2013). We identify four design constraints that any associative memory must satisfy, show three-way convergence across 38 years and three disciplines, and extend the analysis to 33 recent publications. Validated through longitudinal deployment (8,199 nodes, 20+ sessions), STG demonstrates cognitive continuity: resuming complex multi-domain research from a single natural-language cue.
www.synapsesocial.com/papers/69e3207940886becb653f916 — DOI: https://doi.org/10.5281/zenodo.19603840