Learning input features representations in deep learning
Mosca, Alan and Magoulas, George D. (2016) Learning input features representations in deep learning. In: Angelov, P. and Gegov, A. and Jayne, C. and Shen, Q. (eds.) Advances in Computational Intelligence Systems: Contributions Presented at the 16th UK Workshop on Computational Intelligence, September 7–9, 2016, Lancaster, UK. Advances in Intelligent Systems and Computing (AISC) 513. Berlin, Germany: Springer, pp. 433–445. ISBN 9783319465616.
Abstract
Traditionally, when training supervised classifiers with Backpropagation, the training dataset is a static representation of the learning environment. The error on this training set is propagated backwards through all the layers, and the gradient of the error with respect to the classifier's parameters is used to update them. However, this process stops once the parameters between the input layer and the next layer are updated. We note that there is a residual error that could be propagated further backwards to the feature vector(s) in order to adapt the representation of the input features, and that using this residual error can lead to faster convergence towards a generalised solution. We present a methodology for applying this new technique to Deep Learning methods, such as Deep Neural Networks and Convolutional Neural Networks.
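The core idea of the abstract can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration of the general principle, not the authors' exact algorithm: it treats a learnable copy of the input features as a leaf tensor, so the same backward pass that updates the network weights also delivers the residual gradient to the feature representation, which is then updated with its own optimizer. All names, layer sizes, and learning rates here are assumptions for demonstration.

```python
# Sketch: propagating the residual error past the first weight layer
# to adapt the input representation itself. Illustrative only; the
# architecture and hyperparameters are not taken from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_samples, n_features, n_classes = 100, 20, 5

# Learnable copy of the input features: because requires_grad=True,
# loss.backward() also computes d(loss)/d(features).
features = torch.randn(n_samples, n_features, requires_grad=True)
targets = torch.randint(0, n_classes, (n_samples,))

model = nn.Sequential(
    nn.Linear(n_features, 32),
    nn.ReLU(),
    nn.Linear(32, n_classes),
)
criterion = nn.CrossEntropyLoss()

# Two optimizers: one for the usual parameters (standard Backpropagation),
# one for the feature representation (the extra backward step).
param_opt = torch.optim.SGD(model.parameters(), lr=0.1)
feat_opt = torch.optim.SGD([features], lr=0.01)

for epoch in range(50):
    param_opt.zero_grad()
    feat_opt.zero_grad()
    loss = criterion(model(features), targets)
    loss.backward()   # gradients reach both the weights and `features`
    param_opt.step()  # ordinary weight update
    feat_opt.step()   # additional update: adapt the input features

print(f"final loss: {loss.item():.4f}")
```

In this sketch the feature update uses a smaller learning rate than the weight update, reflecting that the representation change is a correction driven by the residual gradient rather than the primary learning signal; the actual schedule used in the paper may differ.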
Metadata
| Item Type | Book Section |
| --- | --- |
| Additional Information | Series ISSN: 2194-5357 |
| School | Birkbeck Faculties and Schools > Faculty of Science > School of Computing and Mathematical Sciences |
| Depositing User | Administrator |
| Date Deposited | 09 Mar 2017 14:33 |
| Last Modified | 09 Aug 2023 12:41 |
| URI | https://eprints.bbk.ac.uk/id/eprint/17711 |