BIROn - Birkbeck Institutional Research Online

    Multimodal learning for multi-label image classification

    Pang, Y., Ma, Z., Yuan, Y., Li, X. and Wang, K. (2011) Multimodal learning for multi-label image classification. In: IEEE International Conference on Image Processing. Institute of Electrical and Electronics Engineers, pp. 1797-1800. ISBN 9781457713040.

    Full text not available from this repository.


    We tackle the challenge of web image classification using additional tag information. Unlike traditional methods that rely only on a combination of several low-level features, we use semantic concepts to represent images and their corresponding tags. First, we extract latent topic information with the probabilistic latent semantic analysis (pLSA) algorithm; we then use multi-label multiple kernel learning to combine visual and textual features and improve image classification. In experiments on the PASCAL VOC'07 and MIR Flickr datasets, we demonstrate the benefit of multimodal features for image classification. Specifically, we find that representing images and their associated tags with latent semantic features yields better classification results than approaches that merely integrate several low-level features.
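The pipeline the abstract describes has two stages: fit pLSA topic distributions over each modality, then combine the resulting features through a weighted sum of per-modality kernels. The sketch below is an illustration of that idea, not the authors' implementation: it fits pLSA by EM on a toy count matrix and forms a fixed-weight convex combination of linear kernels, whereas full multi-label multiple kernel learning would learn the combination weight `beta` jointly with the classifier.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit pLSA by EM on a (docs x words) count matrix.

    Returns P(z|d), shape (docs, topics), and P(w|z), shape (topics, words).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random normalized initialization of the two factor distributions.
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) ~ P(z|d) * P(w|z), shape (d, w, z).
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        joint /= joint.sum(axis=2, keepdims=True) + 1e-12
        weighted = counts[:, :, None] * joint
        # M-step: re-estimate P(w|z) and P(z|d) from expected counts.
        p_w_z = weighted.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

def combined_kernel(feat_visual, feat_text, beta=0.5):
    """Convex combination of linear kernels over the two modalities.

    beta is fixed here for illustration; MKL would optimize it.
    """
    k_visual = feat_visual @ feat_visual.T
    k_text = feat_text @ feat_text.T
    return beta * k_visual + (1 - beta) * k_text

# Toy example: 4 "images" described by 4-dimensional tag counts.
tag_counts = np.array([[3, 0, 1, 0],
                       [2, 1, 0, 0],
                       [0, 0, 2, 3],
                       [0, 1, 1, 4]], dtype=float)
topics_per_doc, words_per_topic = plsa(tag_counts, n_topics=2, n_iter=30)
```

The combined Gram matrix can then be passed to any kernel classifier that accepts precomputed kernels, one binary problem per label in the multi-label setting.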


    Item Type: Book Section
    School: School of Business, Economics & Informatics > Computer Science and Information Systems
    Depositing User: Sarah Hall
    Date Deposited: 07 Jun 2013 10:19
    Last Modified: 11 Oct 2016 15:27

