BIROn - Birkbeck Institutional Research Online

    Learning input features representations in deep learning

    Mosca, Alan and Magoulas, George D. (2016) Learning input features representations in deep learning. In: Angelov, P. and Gegov, A. and Jayne, C. and Shen, Q. (eds.) Advances in Computational Intelligence Systems: Contributions Presented at the 16th UK Workshop on Computational Intelligence, September 7–9, 2016, Lancaster, UK. Advances in Intelligent Systems and Computing (AISC) 513. Berlin, Germany: Springer, pp. 433-445. ISBN 9783319465616.

    Full text not available from this repository.


    Traditionally, when training supervised classifiers with Backpropagation, the training dataset is a static representation of the learning environment. The error on this training set is propagated backwards through all the layers, and the gradient of the error with respect to the classifier's parameters is used to update them. However, this process stops once the parameters between the input layer and the next layer have been updated. We note that there is a residual error that could be propagated further backwards to the feature vector(s) in order to adapt the representation of the input features, and that using this residual error can lead to improved speed of convergence towards a generalised solution. We present a methodology for applying this new technique to Deep Learning methods, such as Deep Neural Networks and Convolutional Neural Networks.
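    The idea described in the abstract can be illustrated with a minimal sketch: after the usual weight updates, the error signal is propagated one step further, past the first weight matrix, and used to adapt the input feature vector itself. This is only an illustrative toy (a one-hidden-layer sigmoid network with squared error on a single example; the learning rate, sizes, and architecture are assumptions, not the paper's actual setup for DNNs and CNNs):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy data: one learnable input representation and its target.
    x = rng.normal(size=(4,))            # input feature vector (also adapted)
    t = np.array([1.0])                  # target
    W1 = rng.normal(scale=0.5, size=(3, 4))
    W2 = rng.normal(scale=0.5, size=(1, 3))
    lr = 0.1
    losses = []

    for step in range(100):
        # Forward pass.
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)
        losses.append(float(0.5 * np.sum((y - t) ** 2)))

        # Backward pass: standard layer deltas.
        delta2 = (y - t) * y * (1.0 - y)
        delta1 = (W2.T @ delta2) * h * (1.0 - h)

        # Usual weight updates...
        W2 -= lr * np.outer(delta2, h)
        W1 -= lr * np.outer(delta1, x)

        # ...plus the extra step: propagate the residual error past the
        # first layer and adapt the input features themselves.
        grad_x = W1.T @ delta1
        x -= lr * grad_x
    ```

    The only departure from standard Backpropagation is the final pair of lines: `W1.T @ delta1` is the gradient of the error with respect to the input, and applying it updates the feature representation alongside the network parameters.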


    Item Type: Book Section
    Additional Information: Series ISSN: 2194-5357
    School: Birkbeck Faculties and Schools > Faculty of Science > School of Computing and Mathematical Sciences
    Depositing User: Administrator
    Date Deposited: 09 Mar 2017 14:33
    Last Modified: 09 Aug 2023 12:41

