Article: Exploiting Privileged Information from Web Data for Action and Event Recognition

Title: Exploiting Privileged Information from Web Data for Action and Event Recognition
Authors: Niu, Li; Li, Wen; Xu, Dong
Keywords: Action recognition; Domain adaptation; Event recognition; Learning using privileged information; Multi-instance learning
Issue Date: 2016
Citation: International Journal of Computer Vision, 2016, v. 118, n. 2, p. 130-150
Abstract: In the conventional approaches for action and event recognition, sufficient labelled training videos are generally required to learn robust classifiers with good generalization capability on new testing videos. However, collecting labelled training videos is often time consuming and expensive. In this work, we propose new learning frameworks to train robust classifiers for action and event recognition by using freely available web videos as training data. We aim to address three challenging issues: (1) the training web videos are generally associated with rich textual descriptions, which are not available in test videos; (2) the labels of training web videos are noisy and may be inaccurate; (3) the data distributions between training and test videos are often considerably different. To address the first two issues, we propose a new framework called multi-instance learning with privileged information (MIL-PI) together with three new MIL methods, in which we not only take advantage of the additional textual descriptions of training web videos as privileged information, but also explicitly cope with noise in the loose labels of training web videos. When the training and test videos come from different data distributions, we further extend our MIL-PI as a new framework called domain adaptive MIL-PI. We also propose another three new domain adaptation methods, which can additionally reduce the data distribution mismatch between training and test videos. Comprehensive experiments for action and event recognition demonstrate the effectiveness of our proposed approaches.
Persistent Identifier: http://hdl.handle.net/10722/321652
ISSN: 0920-5691
2023 Impact Factor: 11.6
2023 SCImago Journal Rankings: 6.668
ISI Accession Number ID: WOS:000377477400003
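The abstract above describes the core MIL-PI idea: textual descriptions of web videos act as privileged information that is available only at training time, while the loose web labels may be noisy. The snippet below is a minimal, illustrative sketch of that general "learning using privileged information" setting, not the paper's actual MIL-PI formulation or its multi-instance and domain-adaptation machinery. It uses scikit-learn on synthetic data; the names and numbers (X_priv for the privileged textual features, the teacher/student split, the 20% label-noise rate) are assumptions made purely for the toy example.

```python
# Toy sketch of "learning using privileged information" (LUPI): privileged
# "textual" features exist only for training videos, never for test videos.
# This is NOT the paper's MIL-PI method; it is a simplified two-step
# confidence-transfer heuristic on synthetic data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_train, n_test = 200, 100
d_video, d_text = 20, 10

# Synthetic "video" features and ground-truth labels.
X_train = rng.normal(size=(n_train, d_video))
X_test = rng.normal(size=(n_test, d_video))
w_true = rng.normal(size=d_video)
y_train = (X_train @ w_true > 0).astype(int)
y_test = (X_test @ w_true > 0).astype(int)

# Loose web labels: assume roughly 20% of training labels are flipped.
flipped = rng.random(n_train) < 0.2
y_web = np.where(flipped, 1 - y_train, y_train)

# Privileged "textual" features: informative about the clean label,
# but only available for the training videos.
X_priv = np.hstack([
    y_train.reshape(-1, 1) + 0.3 * rng.normal(size=(n_train, 1)),
    rng.normal(size=(n_train, d_text - 1)),
])

# Step 1: a teacher on the privileged features estimates how trustworthy
# each (possibly noisy) web label is.
teacher = LogisticRegression().fit(X_priv, y_web)
confidence = teacher.predict_proba(X_priv)[np.arange(n_train), y_web]

# Step 2: the classifier on ordinary video features down-weights
# low-confidence (likely mislabelled) training videos.
student = LogisticRegression().fit(X_train, y_web, sample_weight=confidence)

baseline = LogisticRegression().fit(X_train, y_web)
print("baseline accuracy:  ", baseline.score(X_test, y_test))
print("LUPI-style accuracy:", student.score(X_test, y_test))
```

In this simplified heuristic, the teacher fitted on the privileged features flags suspect web labels and the student classifier down-weights them. The paper's MIL-PI frameworks instead treat training videos as groups of instances with loose bag-level labels and, in the domain adaptive MIL-PI extension, additionally reduce the distribution mismatch between web training videos and test videos; neither of those components is sketched here.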

 

DC Field: Value
dc.contributor.author: Niu, Li
dc.contributor.author: Li, Wen
dc.contributor.author: Xu, Dong
dc.date.accessioned: 2022-11-03T02:20:30Z
dc.date.available: 2022-11-03T02:20:30Z
dc.date.issued: 2016
dc.identifier.citation: International Journal of Computer Vision, 2016, v. 118, n. 2, p. 130-150
dc.identifier.issn: 0920-5691
dc.identifier.uri: http://hdl.handle.net/10722/321652
dc.description.abstract: In the conventional approaches for action and event recognition, sufficient labelled training videos are generally required to learn robust classifiers with good generalization capability on new testing videos. However, collecting labelled training videos is often time consuming and expensive. In this work, we propose new learning frameworks to train robust classifiers for action and event recognition by using freely available web videos as training data. We aim to address three challenging issues: (1) the training web videos are generally associated with rich textual descriptions, which are not available in test videos; (2) the labels of training web videos are noisy and may be inaccurate; (3) the data distributions between training and test videos are often considerably different. To address the first two issues, we propose a new framework called multi-instance learning with privileged information (MIL-PI) together with three new MIL methods, in which we not only take advantage of the additional textual descriptions of training web videos as privileged information, but also explicitly cope with noise in the loose labels of training web videos. When the training and test videos come from different data distributions, we further extend our MIL-PI as a new framework called domain adaptive MIL-PI. We also propose another three new domain adaptation methods, which can additionally reduce the data distribution mismatch between training and test videos. Comprehensive experiments for action and event recognition demonstrate the effectiveness of our proposed approaches.
dc.language: eng
dc.relation.ispartof: International Journal of Computer Vision
dc.subject: Action recognition
dc.subject: Domain adaptation
dc.subject: Event recognition
dc.subject: Learning using privileged information
dc.subject: Multi-instance learning
dc.title: Exploiting Privileged Information from Web Data for Action and Event Recognition
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/s11263-015-0862-5
dc.identifier.scopus: eid_2-s2.0-84946925643
dc.identifier.volume: 118
dc.identifier.issue: 2
dc.identifier.spage: 130
dc.identifier.epage: 150
dc.identifier.eissn: 1573-1405
dc.identifier.isi: WOS:000377477400003
