BIROn - Birkbeck Institutional Research Online

    A Complementary Learning Systems approach to Temporal Difference Learning

    Mareschal, Denis and Blakeman, S. (2019) A Complementary Learning Systems approach to Temporal Difference Learning. Neural Networks 122, pp. 218-230. ISSN 0893-6080.

    29596.pdf - Author's Accepted Manuscript
    Restricted to Repository staff only
    Available under License Creative Commons Attribution Non-commercial No Derivatives.
    Download (1MB) | Request a copy

    29596a.pdf - Published Version of Record
    Available under License Creative Commons Attribution.
    Download (1MB) | Preview

    Abstract

    Complementary Learning Systems (CLS) theory suggests that the brain uses a 'neocortical' and a 'hippocampal' learning system to achieve complex behavior. These two systems are complementary in that the 'neocortical' system relies on slow learning of distributed representations while the 'hippocampal' system relies on fast learning of pattern-separated representations. Both of these systems project to the striatum, which is a key neural structure in the brain's implementation of Reinforcement Learning (RL). Current deep RL approaches share similarities with a 'neocortical' system because they slowly learn distributed representations through backpropagation in Deep Neural Networks (DNNs). An ongoing criticism of such approaches is that they are data inefficient and lack flexibility. CLS theory suggests that the addition of a 'hippocampal' system could address these criticisms. In the present study we propose a novel algorithm known as Complementary Temporal Difference Learning (CTDL), which combines a DNN with a Self-Organising Map (SOM) to obtain the benefits of both a 'neocortical' and a 'hippocampal' system. Key features of CTDL include the use of Temporal Difference (TD) error to update a SOM and the combination of a SOM and DNN to calculate action values. We evaluate CTDL on Grid World, Cart-Pole and Continuous Mountain Car tasks and show several benefits over the classic Deep Q-Network (DQN) approach. These results demonstrate (1) the utility of complementary learning systems for the evaluation of actions, (2) that the TD error signal is a useful form of communication between the two systems and (3) that our approach extends to both discrete and continuous state and action spaces.
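    The two key features named in the abstract — TD error gating the SOM update, and mixing SOM and network value estimates — can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: the class name `MiniCTDL`, the linear 'neocortical' approximator (standing in for the DQN), and the match-based gating weight are all assumptions made for illustration.

    ```python
    import numpy as np

    class MiniCTDL:
        """Toy sketch of the CTDL idea: a SOM ('hippocampal' system) holds
        pattern-separated prototypes with fast-learned values, a slow linear
        approximator stands in for the 'neocortical' DNN, and the TD error
        gates how strongly the SOM is updated. Illustrative only."""

        def __init__(self, state_dim, n_units, gamma=0.99, lr=0.1, seed=0):
            rng = np.random.default_rng(seed)
            self.protos = rng.random((n_units, state_dim))  # SOM prototypes
            self.som_values = np.zeros(n_units)             # fast value per unit
            self.w = np.zeros(state_dim)                    # slow 'neocortical' weights
            self.gamma, self.lr = gamma, lr

        def best_unit(self, s):
            # Winning unit = closest prototype (pattern separation)
            d = np.linalg.norm(self.protos - s, axis=1)
            return int(np.argmin(d)), d

        def value(self, s):
            # Mix the two systems: weight the SOM estimate by how well the
            # winning prototype matches the state (assumed gating scheme)
            i, d = self.best_unit(s)
            eta = np.exp(-d[i] ** 2)
            return eta * self.som_values[i] + (1.0 - eta) * float(self.w @ s)

        def update(self, s, r, s_next):
            td = r + self.gamma * self.value(s_next) - self.value(s)
            # Slow, distributed 'neocortical' update
            self.w += self.lr * td * s
            # TD error as the communication signal: a large surprise drives a
            # fast 'hippocampal' update of the winning prototype and its value
            i, _ = self.best_unit(s)
            strength = min(1.0, abs(td))
            self.protos[i] += strength * (s - self.protos[i])
            self.som_values[i] += strength * td
            return td
    ```

    The sketch uses state values rather than per-action values for brevity; the mechanism — surprise-gated fast storage alongside slow distributed learning — is the same one the abstract describes.
    
    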

    Metadata

    Item Type: Article
    Keyword(s) / Subject(s): Complementary Learning Systems, Reinforcement Learning, Hippocampus
    School: School of Science > Psychological Sciences
    Research Centres and Institutes: Brain and Cognitive Development, Centre for (CBCD)
    Depositing User: Administrator
    Date Deposited: 21 Oct 2019 13:01
    Last Modified: 22 Nov 2020 20:25
    URI: https://eprints.bbk.ac.uk/id/eprint/29596

    Statistics

    Downloads: 133
    Hits: 83

    Additional statistics are available via IRStats2.
