Abstract
Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these networks are trained on something new, they rapidly forget what was learned before. In the brain, a mechanism thought to be important for protecting memories is the reactivation of neuronal activity patterns representing those memories. In artificial neural networks, such memory replay can be implemented as ‘generative replay’, which can successfully – and surprisingly efficiently – prevent catastrophic forgetting on toy examples even in a class-incremental learning scenario. However, scaling up generative replay to complicated problems with many tasks or complex inputs is challenging. We propose a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the network’s own, context-modulated feedback connections. Our method achieves state-of-the-art performance on challenging continual learning benchmarks (e.g., class-incremental learning on CIFAR-100) without storing data, and it provides a novel model for replay in the brain.
One challenge facing artificial intelligence is that deep neural networks cannot continuously learn new information without catastrophically forgetting what was learned before. To solve this problem, the authors here propose a replay-based algorithm for deep learning that does not require storing data.
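The core idea of generative replay described in the abstract can be illustrated with a minimal sketch: while training on a new task, pseudo-samples produced by a generative model fitted on the previous task are interleaved with the new data, so the classifier never sees the new task in isolation. This is only a toy illustration under simplifying assumptions, not the paper's actual method: a per-class Gaussian stands in for the paper's context-modulated feedback generator, and a regularized logistic regression stands in for a deep network. All names and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(center, n=200):
    """Two-class toy task: Gaussian blobs at +center and -center."""
    c = np.asarray(center, dtype=float)
    x = np.concatenate([rng.normal(+c, 1.0, (n, 2)),
                        rng.normal(-c, 1.0, (n, 2))])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return x, y

class GaussianReplayGenerator:
    """Stand-in generative model: one Gaussian per class over the inputs.
    (The paper uses the network's own feedback connections instead.)"""
    def fit(self, x, y):
        self.stats = {c: (x[y == c].mean(0), x[y == c].std(0))
                      for c in np.unique(y)}
    def sample(self, n_per_class):
        xs, ys = [], []
        for c, (mu, sd) in self.stats.items():
            xs.append(rng.normal(mu, sd, (n_per_class, 2)))
            ys.append(np.full(n_per_class, c))
        return np.concatenate(xs), np.concatenate(ys)

def train(w, x, y, lr=0.5, epochs=500, wd=0.01):
    """Logistic regression by full-batch gradient descent with weight
    decay (bias folded into w); decay makes forgetting visible here."""
    X = np.hstack([x, np.ones((len(x), 1))])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * (X.T @ (p - y) / len(y) + wd * w)
    return w

def accuracy(w, x, y):
    X = np.hstack([x, np.ones((len(x), 1))])
    return float(np.mean((X @ w > 0) == y))

# Task A separates the classes along the first axis, task B along the second.
x_a, y_a = make_task((3.0, 0.0))
x_b, y_b = make_task((0.0, 3.0))

w = train(np.zeros(3), x_a, y_a)     # learn task A
gen = GaussianReplayGenerator()
gen.fit(x_a, y_a)                    # fit the generator, then task A data can be discarded

w_plain = train(w.copy(), x_b, y_b)  # task B alone: task A is forgotten
x_r, y_r = gen.sample(200)           # pseudo-samples standing in for task A
w_replay = train(w.copy(),
                 np.concatenate([x_b, x_r]),
                 np.concatenate([y_b, y_r]))  # task B + replay

acc_A_plain = accuracy(w_plain, x_a, y_a)
acc_A_replay = accuracy(w_replay, x_a, y_a)
acc_B_replay = accuracy(w_replay, x_b, y_b)
```

In this sketch, sequential training without replay degrades accuracy on task A, whereas mixing in replayed pseudo-samples preserves it while still learning task B; the paper scales this principle up by replaying hidden representations rather than raw inputs.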
Details
1 Department of Neuroscience, Baylor College of Medicine, Center for Neuroscience and Artificial Intelligence, Houston, USA (GRID:grid.39382.33) (ISNI:0000 0001 2160 926X); Department of Engineering, University of Cambridge, Computational and Biological Learning Lab, Cambridge, UK (GRID:grid.5335.0) (ISNI:0000000121885934)
2 University of Massachusetts Amherst, College of Computer and Information Sciences, Amherst, USA (GRID:grid.266683.f) (ISNI:0000 0001 2184 9220)
3 Department of Neuroscience, Baylor College of Medicine, Center for Neuroscience and Artificial Intelligence, Houston, USA (GRID:grid.39382.33) (ISNI:0000 0001 2160 926X); Rice University, Department of Electrical and Computer Engineering, Houston, USA (GRID:grid.21940.3e) (ISNI:0000 0004 1936 8278)