BIROn - Birkbeck Institutional Research Online

    Hardening against adversarial examples with the smooth gradient method

    Mosca, Alan and Magoulas, George D. (2018) Hardening against adversarial examples with the smooth gradient method. Soft Computing 22 (10), pp. 3203-3213. ISSN 1432-7643.

    SoftComp.pdf - Author's Accepted Manuscript



    Commonly used training methods in deep learning do not utilise transformations of the residual gradient available at the inputs to update the representation of the dataset. It has been shown that this residual gradient, which can be interpreted as the first-order gradient of the input sensitivity at a particular point, can improve generalisation in feed-forward neural networks with fully connected and convolutional layers. We explore how these input gradients relate to the input perturbations used to generate adversarial examples, and show that networks trained with this technique are more robust to attacks generated with the fast gradient sign method.
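    The fast gradient sign method mentioned in the abstract perturbs an input in the direction of the sign of the loss gradient with respect to that input. A minimal sketch of the idea, using a toy logistic-regression model as a stand-in (the model, weights, and epsilon value are illustrative, not from the paper):

    ```python
    import numpy as np

    def fgsm_perturb(x, grad, epsilon):
        """Fast gradient sign method: move each input coordinate by
        epsilon in the direction of the sign of the input gradient."""
        return x + epsilon * np.sign(grad)

    def input_gradient(x, w, b, y):
        """Gradient of the cross-entropy loss w.r.t. the input x for a
        single logistic unit: dL/dx = (sigmoid(w.x + b) - y) * w."""
        p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
        return (p - y) * w

    # Hypothetical model parameters and a single input/label pair.
    w = np.array([0.5, -1.0, 2.0])
    b = 0.1
    x = np.array([1.0, 2.0, -0.5])
    y = 1.0

    g = input_gradient(x, w, b, y)
    x_adv = fgsm_perturb(x, g, epsilon=0.1)
    ```

    Because only the sign of the gradient is used, every coordinate of `x_adv` differs from `x` by exactly epsilon; the paper's premise is that training which penalises these same input gradients shrinks the loss increase such a perturbation can cause.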


    Item Type: Article
    Additional Information: The final publication is available at Springer via the link above.
    School: School of Business, Economics & Informatics > Computer Science and Information Systems
    Research Centres and Institutes: Birkbeck Knowledge Lab
    Depositing User: George Magoulas
    Date Deposited: 23 Jan 2018 13:54
    Last Modified: 13 Jun 2021 05:13


