Deep Learning: Implications for Human Learning and Memory

Recent years have seen an explosion of interest in deep learning and deep neural networks. Deep learning lies at the heart of unprecedented feats of machine intelligence as well as software people use every day. Systems built on deep learning have surpassed human capabilities in complex strategy games like Go and chess, and they power speech recognition, image captioning, and a wide range of other applications. A consideration of deep learning is crucial for a Handbook of Human Memory, since human brains are deep neural networks, and an understanding of artificial deep learning systems may contribute to our understanding of how humans and animals learn and remember.
