
Abstract

Sparse Distributed Memory (SDM) provides a biologically inspired mechanism for associative and online learning. Transformer architectures, despite exceptional inference performance, remain static and vulnerable to catastrophic forgetting. This work introduces the Continual Associative Learning Model (CALM), a conceptual framework that defines the theoretical basis and integration logic for a cognitive model aiming to achieve continual, lifelong adaptation without retraining by combining an SDM system with lightweight dual-transformer modules. The architecture proposes an always-online associative memory for episodic storage (System 1), together with a pair of asynchronous transformers that consolidate experience in the background, enabling uninterrupted reasoning and gradual model evolution (System 2). The framework remains compatible with standard transformer benchmarks, establishing a shared evaluation basis for both reasoning accuracy and continual learning stability. Preliminary experiments using the SDMPreMark benchmark evaluate algorithmic behavior across multiple synthetic datasets, confirming a critical radius-threshold phenomenon in SDM recall. These results provide a deterministic characterization of SDM dynamics at the component level, preceding model-level integration with transformer-based semantic tasks. The CALM framework offers a reproducible foundation for studying continual memory and associative learning in hybrid transformer architectures, although future work should involve experiments with non-synthetic, high-load data to confirm scalable behavior under high interference.
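
The radius-threshold phenomenon mentioned above can be illustrated with a minimal, self-contained sketch of a Kanerva-style SDM write/read cycle. This is not the CALM or SDMPreMark code: the vector dimensionality, number of hard locations, radii, and function names (activated, write, read) are illustrative assumptions only, chosen to show how recall quality depends on the activation radius.

```python
import numpy as np

# Minimal sketch of Kanerva-style Sparse Distributed Memory (SDM) recall.
# All parameter values and names are illustrative assumptions, not taken
# from the CALM framework or the SDMPreMark benchmark.

rng = np.random.default_rng(0)

N_DIM = 256         # dimensionality of binary address/data vectors (assumed)
N_LOCATIONS = 2000  # number of hard storage locations (assumed)

# Fixed random hard-location addresses and their integer counters.
hard_addresses = rng.integers(0, 2, size=(N_LOCATIONS, N_DIM), dtype=np.int8)
counters = np.zeros((N_LOCATIONS, N_DIM), dtype=np.int32)

def activated(address: np.ndarray, radius: int) -> np.ndarray:
    """Boolean mask of hard locations within Hamming radius of the address."""
    hamming = np.count_nonzero(hard_addresses != address, axis=1)
    return hamming <= radius

def write(address: np.ndarray, data: np.ndarray, radius: int) -> None:
    """Add the bipolar form of data (+1/-1) to every activated location."""
    counters[activated(address, radius)] += np.where(data == 1, 1, -1).astype(np.int32)

def read(address: np.ndarray, radius: int) -> np.ndarray:
    """Sum counters over activated locations and threshold at zero."""
    sums = counters[activated(address, radius)].sum(axis=0)
    return (sums > 0).astype(np.int8)

# Store a few random patterns autoassociatively, then probe with a noisy cue
# at several read radii. Too small a radius activates almost no locations;
# too large a radius pools interference from unrelated patterns, so recall
# accuracy changes sharply around a critical radius.
patterns = rng.integers(0, 2, size=(20, N_DIM), dtype=np.int8)
WRITE_RADIUS = 112
for p in patterns:
    write(p, p, radius=WRITE_RADIUS)

cue = patterns[0].copy()
flip = rng.choice(N_DIM, size=10, replace=False)  # corrupt 10 bits of the cue
cue[flip] ^= 1

for radius in (96, 104, 112, 120, 128):
    recalled = read(cue, radius)
    accuracy = np.mean(recalled == patterns[0])
    print(f"radius={radius:3d}  bit accuracy={accuracy:.3f}")
```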

Full text


© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).