Gao, Y., Wang, M., Zha, Z.-J., Shen, J., Li, X. and Wu, X. (2013) Visual-textual joint relevance learning for tag-based social image search. IEEE Transactions on Image Processing 22 (1), pp. 363-376. ISSN 1057-7149.
Abstract
Due to the popularity of social media websites, extensive research effort has been dedicated to tag-based social image search. Both visual information and tags have been investigated in this field. However, most existing methods use tags and visual characteristics either separately or sequentially to estimate image relevance. In this paper, we propose an approach that simultaneously utilizes both visual and textual information to estimate the relevance of user-tagged images. The relevance estimation is performed with a hypergraph learning approach. In this method, a social image hypergraph is constructed, where vertices represent images and hyperedges represent visual or textual terms. Learning is driven by a set of pseudo-positive images, and the weights of the hyperedges are updated throughout the learning process. In this way, the impact of different tags and visual words is automatically modulated. Comparative results of experiments conducted on a dataset including 370+ images are presented, which demonstrate the effectiveness of the proposed approach.
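To make the construction in the abstract concrete, the sketch below builds a small image hypergraph in which vertices are images and each textual tag or visual word defines one hyperedge, then propagates relevance from pseudo-positive images using the standard normalized hypergraph regularization framework that this line of work builds on. The toy data, the query "beach", and the fixed hyperedge weights are assumptions for illustration only, not the authors' implementation or dataset.

```python
import numpy as np

# Toy collection: each image lists its tags and quantized visual words.
# All names and data here are assumed for illustration.
images = {
    "img0": {"beach", "sunset", "vw1", "vw2"},
    "img1": {"beach", "vw2", "vw3"},
    "img2": {"city", "night", "vw3"},
}

names = sorted(images)                          # vertex order
edges = sorted(set().union(*images.values()))   # one hyperedge per term

# Incidence matrix H: H[v, e] = 1 if image v contains term e.
H = np.array([[1.0 if e in images[n] else 0.0 for e in edges]
              for n in names])

w = np.ones(len(edges))   # hyperedge weights; uniform in this sketch

# Normalized hypergraph adjacency:
# Theta = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
dv = H @ w                # weighted vertex degrees
de = H.sum(axis=0)        # hyperedge degrees
Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
Theta = Dv_inv_sqrt @ H @ np.diag(w / de) @ H.T @ Dv_inv_sqrt

# Pseudo-positive initialization: images whose tags match the
# (assumed) query "beach" get label 1.
y = np.array([1.0, 1.0, 0.0])

# Relevance scores via the closed form f = (I - alpha * Theta)^{-1} y.
alpha = 0.5
f = np.linalg.solve(np.eye(len(names)) - alpha * Theta, y)
print(dict(zip(names, np.round(f, 3))))
```

In the paper's method the hyperedge weights are additionally updated during learning, so that informative tags and visual words gain influence over the relevance scores; the sketch above keeps them uniform for brevity.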
Metadata
| Item Type: | Article |
|---|---|
| School: | Birkbeck Faculties and Schools > Faculty of Science > School of Computing and Mathematical Sciences |
| Depositing User: | Sarah Hall |
| Date Deposited: | 06 Jun 2013 09:57 |
| Last Modified: | 09 Aug 2023 12:33 |
| URI: | https://eprints.bbk.ac.uk/id/eprint/7288 |