BIROn - Birkbeck Institutional Research Online

    Hardening against adversarial examples with the smooth gradient method

    Mosca, Alan and Magoulas, George D. (2018) Hardening against adversarial examples with the smooth gradient method. Soft Computing 22 (10), pp. 3203-3213. ISSN 1432-7643.

    SoftComp.pdf - Author's Accepted Manuscript (260kB)

    Abstract

    Commonly used methods in deep learning do not utilise transformations of the residual gradient available at the inputs to update the representation in the dataset. It has been shown that this residual gradient, which can be interpreted as the first-order gradient of the input sensitivity at a particular point, may be used to improve generalisation in feed-forward neural networks, including those with fully connected and convolutional layers. We explore how these input gradients relate to the input perturbations used to generate adversarial examples, and how networks trained with this technique are more robust to attacks generated with the fast gradient sign method.
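
    The fast gradient sign method (FGSM) referenced above perturbs each input in the direction of the sign of the gradient of the loss with respect to that input. A minimal PyTorch sketch of this attack is given below for reference; the model, data, and epsilon names are illustrative assumptions, and the sketch shows only the standard attack, not the authors' smooth gradient training method.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.1):
        # Fast gradient sign method: shift each input feature by
        # +/- epsilon along the sign of the input gradient of the loss.
        # `model`, `x`, `y`, and `epsilon` are illustrative placeholders.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()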

    Metadata

    Item Type: Article
    Additional Information: The final publication is available at Springer via the link above.
    School: Birkbeck Schools and Departments > School of Business, Economics & Informatics > Computer Science and Information Systems
    Research Centre: Birkbeck Knowledge Lab
    Depositing User: Prof George Magoulas
    Date Deposited: 23 Jan 2018 13:54
    Last Modified: 27 Jul 2019 07:50
    URI: http://eprints.bbk.ac.uk/id/eprint/20936

    Statistics

    Downloads: 74
    Hits: 115
