
Conference Paper: Relatively-paired space analysis

Title: Relatively-paired space analysis
Authors: Kuang, Z; Wong, KKY
Issue Date: 2013
Citation: The 2013 British Machine Vision Conference (BMVC), Bristol, UK, 9-13 September 2013, p. 1-12
Abstract: Discovering a latent common space between different modalities plays an important role in cross-modality pattern recognition. Existing techniques often require absolutely paired observations as training data, and are incapable of capturing more general semantic relationships between cross-modality observations. This greatly limits their applications. In this paper, we propose a general framework for learning a latent common space from relatively-paired observations (i.e., two observations from different modalities are more-likely-paired than another two). Relative-pairing information is encoded using relative proximities of observations in the latent common space. By building a discriminative model and maximizing a distance margin, a projection function that maps observations into the latent common space is learned for each modality. Cross-modality pattern recognition can then be carried out in the latent common space. To evaluate its performance, the proposed framework has been applied to cross-pose face recognition and feature fusion. Experimental results demonstrate that the proposed framework outperforms other state-of-the-art approaches.
Description: Session 11: Segmentation & Features
Persistent Identifier: http://hdl.handle.net/10722/189618
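
The abstract describes learning a projection function per modality into a latent common space by encoding relative-pairing information as relative proximities and maximizing a distance margin. The sketch below is illustrative only and is not the authors' implementation: it assumes linear projections and a simple hinge loss over relative-pairing constraints optimized by plain gradient descent, and every function name and parameter in it is hypothetical.

```python
import numpy as np

def fit_relative_paired_space(X, Y, constraints, dim=10, lr=1e-3, epochs=200, seed=0):
    """Illustrative sketch: learn linear projections Wx, Wy mapping two modalities
    into a shared latent space so that, for each constraint (i, j, k, l), the pair
    (x_i, y_j) ends up closer than (x_k, y_l) by at least a unit margin."""
    rng = np.random.default_rng(seed)
    Wx = rng.normal(scale=0.1, size=(X.shape[1], dim))  # projection for modality X
    Wy = rng.normal(scale=0.1, size=(Y.shape[1], dim))  # projection for modality Y
    for _ in range(epochs):
        gWx = np.zeros_like(Wx)
        gWy = np.zeros_like(Wy)
        for i, j, k, l in constraints:
            d_pos = X[i] @ Wx - Y[j] @ Wy   # residual of the more-likely pair
            d_neg = X[k] @ Wx - Y[l] @ Wy   # residual of the less-likely pair
            # hinge loss: 1 + ||d_pos||^2 - ||d_neg||^2, penalized when positive
            if 1.0 + d_pos @ d_pos - d_neg @ d_neg > 0:
                gWx += 2 * (np.outer(X[i], d_pos) - np.outer(X[k], d_neg))
                gWy += 2 * (np.outer(Y[l], d_neg) - np.outer(Y[j], d_pos))
        Wx -= lr * gWx
        Wy -= lr * gWy
    return Wx, Wy
```

After fitting, cross-modality recognition could be carried out by projecting new observations with Wx and Wy and comparing them by Euclidean distance in the latent space, matching the framework's description at a high level.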

 

DC Field: Value (Language)

dc.contributor.author: Kuang, Z (en_US)
dc.contributor.author: Wong, KKY (en_US)
dc.date.accessioned: 2013-09-17T14:50:21Z
dc.date.available: 2013-09-17T14:50:21Z
dc.date.issued: 2013 (en_US)
dc.identifier.citation: The 2013 British Machine Vision Conference (BMVC), Bristol, UK, 9-13 September 2013, p. 1-12 (en_US)
dc.identifier.uri: http://hdl.handle.net/10722/189618
dc.description: Session 11: Segmentation & Features
dc.description.abstract: Discovering a latent common space between different modalities plays an important role in cross-modality pattern recognition. Existing techniques often require absolutely paired observations as training data, and are incapable of capturing more general semantic relationships between cross-modality observations. This greatly limits their applications. In this paper, we propose a general framework for learning a latent common space from relatively-paired observations (i.e., two observations from different modalities are more-likely-paired than another two). Relative-pairing information is encoded using relative proximities of observations in the latent common space. By building a discriminative model and maximizing a distance margin, a projection function that maps observations into the latent common space is learned for each modality. Cross-modality pattern recognition can then be carried out in the latent common space. To evaluate its performance, the proposed framework has been applied to cross-pose face recognition and feature fusion. Experimental results demonstrate that the proposed framework outperforms other state-of-the-art approaches.
dc.language: eng (en_US)
dc.relation.ispartof: BMVC 2013 (en_US)
dc.rights: Author holds the copyright
dc.title: Relatively-paired space analysis (en_US)
dc.type: Conference_Paper (en_US)
dc.identifier.email: Wong, KKY: kykwong@cs.hku.hk (en_US)
dc.identifier.authority: Wong, KKY=rp01393 (en_US)
dc.description.nature: postprint
dc.identifier.hkuros: 221064 (en_US)
dc.identifier.spage: 1
dc.identifier.epage: 12
