Conference Paper: Understanding tag-cloud and visual features for better annotation of concepts in NUS-WIDE dataset

Title: Understanding tag-cloud and visual features for better annotation of concepts in NUS-WIDE dataset
Authors: Gao, Shenghua; Chia, Liang Tien; Cheng, Xiangang
Keywords: Concept prediction; Large-scale data set; Tag; Visual feature
Issue Date: 2009
Citation: 1st International Workshop on Web-Scale Multimedia Corpus, WSMC'09, Co-located with the 2009 ACM International Conference on Multimedia, MM'09, 2009, p. 9-16
Abstract: Large-scale dataset construction requires a large amount of well-labeled ground truth. For the NUS-WIDE dataset, a less labor-intensive annotation process was used, and this paper focuses on improving that semi-manual annotation method. For the NUS-WIDE dataset, improving the average accuracy of the top retrievals for individual concepts effectively improves the results of the semi-manual annotation method. For web images, both tags and visual features play important roles in predicting the concept of an image. For visual features, we adopt an adaptive feature selection method to construct a middle-level feature by concatenating the k-NN results for each type of visual feature. This middle-level feature is more robust than the average combination of single features, and we show that it achieves good performance for concept prediction. For the tag cloud, we construct a concept-tag co-occurrence matrix. The co-occurrence information is used, following Bayes' theorem, to compute the probability of an image belonging to a certain concept given its annotated tags. By using the WordNet taxonomy level, which indicates whether a concept is generic or specific, and by exploring the tag-cloud distribution, we propose a selection method that uses either the tag cloud or the visual features to enhance concept annotation performance. In this way, the advantages of both tags and visual features are exploited. Experimental results show that our method achieves very high average precision for the NUS-WIDE dataset, which greatly facilitates the construction of large-scale web image datasets. Copyright 2009 ACM.
Persistent Identifierhttp://hdl.handle.net/10722/345051
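The abstract describes building a middle-level feature by concatenating per-concept k-NN results for each visual feature type. The sketch below (hypothetical code, not the authors' implementation; function names and the per-concept vote scoring are assumptions) illustrates one plausible reading: for each feature type, score concepts by the label distribution of the k nearest training images, then concatenate the scores across feature types.

```python
# Sketch (not the authors' code): middle-level feature from per-type k-NN scores.
import numpy as np

def knn_concept_scores(query, train_feats, train_labels, n_concepts, k=5):
    """For one feature type: fraction of the k nearest training images
    labeled with each concept (train_labels holds concept indices)."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    scores = np.bincount(train_labels[nearest], minlength=n_concepts)
    return scores / k

def middle_level_feature(query_by_type, train_by_type, train_labels,
                         n_concepts, k=5):
    """Concatenate the k-NN concept scores of every visual feature type
    into one middle-level feature vector."""
    parts = [knn_concept_scores(q, t, train_labels, n_concepts, k)
             for q, t in zip(query_by_type, train_by_type)]
    return np.concatenate(parts)
```

Because each feature type contributes its own score block rather than being averaged away, a feature type that discriminates well for a given concept can dominate that block, which matches the abstract's claim that concatenation is more robust than averaging single features.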

 

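The tag-cloud side of the method computes, via Bayes' theorem, the probability of an image belonging to a concept given its tags, using a concept-tag co-occurrence matrix. A minimal sketch of that idea, assuming a naive-Bayes-style factorization and Laplace smoothing (both assumptions; the paper's exact formulation may differ):

```python
# Sketch (hypothetical): P(concept | tags) from a concept-tag co-occurrence matrix.
import numpy as np

def concept_posteriors(cooc, prior, tag_ids, alpha=1.0):
    """P(c | tags) proportional to P(c) * prod_t P(t | c).

    cooc:    (n_concepts, n_tags) co-occurrence counts
    prior:   (n_concepts,) concept prior probabilities
    tag_ids: indices of the tags annotated on the image
    alpha:   Laplace smoothing constant (assumed, for zero counts)
    """
    smoothed = cooc + alpha
    p_tag_given_c = smoothed / smoothed.sum(axis=1, keepdims=True)
    log_post = np.log(prior) + np.log(p_tag_given_c[:, tag_ids]).sum(axis=1)
    post = np.exp(log_post - log_post.max())  # stabilize before normalizing
    return post / post.sum()
```

The WordNet-based selection step described in the abstract would then choose between this tag-based posterior and the visual-feature score on a per-concept basis, depending on how generic or specific the concept is.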
DC Field: Value
dc.contributor.author: Gao, Shenghua
dc.contributor.author: Chia, Liang Tien
dc.contributor.author: Cheng, Xiangang
dc.date.accessioned: 2024-08-15T09:24:53Z
dc.date.available: 2024-08-15T09:24:53Z
dc.date.issued: 2009
dc.identifier.citation: 1st International Workshop on Web-Scale Multimedia Corpus, WSMC'09, Co-located with the 2009 ACM International Conference on Multimedia, MM'09, 2009, p. 9-16
dc.identifier.uri: http://hdl.handle.net/10722/345051
dc.language: eng
dc.relation.ispartof: 1st International Workshop on Web-Scale Multimedia Corpus, WSMC'09, Co-located with the 2009 ACM International Conference on Multimedia, MM'09
dc.subject: Concept prediction
dc.subject: Large-scale data set
dc.subject: Tag
dc.subject: Visual feature
dc.title: Understanding tag-cloud and visual features for better annotation of concepts in NUS-WIDE dataset
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/1631135.1631138
dc.identifier.scopus: eid_2-s2.0-72249106177
dc.identifier.spage: 9
dc.identifier.epage: 16
