BIROn - Birkbeck Institutional Research Online

    Multimodal learning for multi-label image classification

    Pang, Y. and Ma, Z. and Yuan, Y. and Li, X. and Wang, K. (2011) Multimodal learning for multi-label image classification. In: UNSPECIFIED (ed.) IEEE International Conference on Image Processing. Institute of Electrical and Electronics Engineers, pp. 1797-1800. ISBN 9781457713040.

    Full text not available from this repository.

    Abstract

    We tackle the challenge of web image classification using additional tag information. Unlike traditional methods that only combine several low-level features, we use semantic concepts to represent images and their associated tags. We first extract latent topic information with the probabilistic latent semantic analysis (pLSA) algorithm, and then apply multi-label multiple kernel learning to combine visual and textual features for improved image classification. In experiments on the PASCAL VOC'07 and MIR Flickr datasets, we demonstrate the benefit of multimodal features for image classification. Specifically, we find that representing images and their associated tags with latent semantic features yields better classification results than approaches that integrate several low-level features.
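
    Since the full text is not available here, the sketch below is only a rough illustration of the pipeline the abstract describes: it fits pLSA with EM to toy visual-word and tag-word count matrices, then combines the resulting topic representations through a fixed equal-weight kernel sum fed to a one-vs-rest SVM. The equal-weight kernel sum is a simplified stand-in for the paper's multi-label multiple kernel learning, and all data shapes, variable names, and parameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC


def plsa_topics(counts, n_topics=10, n_iter=50, seed=0):
    """Fit pLSA with EM to a document-term count matrix (docs x terms).

    Returns P(z|d), the per-document topic mixture used here as the latent
    semantic representation of each image (visual words) or its tag set.
    """
    rng = np.random.default_rng(seed)
    n_docs, n_terms = counts.shape
    p_z_d = rng.random((n_docs, n_topics))       # P(z|d)
    p_w_z = rng.random((n_topics, n_terms))      # P(w|z)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: P(z|d,w) proportional to P(z|d) * P(w|z)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]           # (D, K, W)
        resp = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: reweight responsibilities by the observed counts
        weighted = counts[:, None, :] * resp                     # (D, K, W)
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d


def rbf_gram(a, b, gamma=1.0):
    """Precomputed RBF kernel matrix between two sets of topic vectors."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)


# Toy stand-ins for visual-word counts, tag counts, and a binary
# multi-label indicator matrix (hypothetical data, not the VOC/Flickr sets).
rng = np.random.default_rng(1)
X_visual = rng.integers(0, 5, size=(60, 200))
X_text = rng.integers(0, 3, size=(60, 120))
Y = rng.integers(0, 2, size=(60, 4))

Z_vis = plsa_topics(X_visual)
Z_txt = plsa_topics(X_text)

# Fixed equal-weight kernel combination: a simplified substitute for the
# learned kernel weights of multi-label multiple kernel learning.
K = 0.5 * rbf_gram(Z_vis, Z_vis) + 0.5 * rbf_gram(Z_txt, Z_txt)
clf = OneVsRestClassifier(SVC(kernel="precomputed"))
clf.fit(K, Y)
print(clf.predict(K[:5]))  # predicted label sets for the first five images
```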

    Metadata

    Item Type: Book Section
    School: Birkbeck Faculties and Schools > Faculty of Science > School of Computing and Mathematical Sciences
    Depositing User: Sarah Hall
    Date Deposited: 07 Jun 2013 10:19
    Last Modified: 09 Aug 2023 12:33
    URI: https://eprints.bbk.ac.uk/id/eprint/7378

    Statistics

    Activity overview (6-month trend): 0 downloads, 223 hits.

    Additional statistics are available via IRStats2.
