# Conference Paper: Relatively-paired space analysis
| Title | Relatively-paired space analysis |
|---|---|
| Authors | Kuang, Z; Wong, KKY |
| Issue Date | 2013 |
| Citation | The 2013 British Machine Vision Conference (BMVC), Bristol, UK, 9-13 September 2013, p. 1-12 |
| Abstract | Discovering a latent common space between different modalities plays an important role in cross-modality pattern recognition. Existing techniques often require absolutely paired observations as training data, and are incapable of capturing more general semantic relationships between cross-modality observations. This greatly limits their applications. In this paper, we propose a general framework for learning a latent common space from relatively-paired observations (i.e., two observations from different modalities are more-likely-paired than another two). Relative-pairing information is encoded using relative proximities of observations in the latent common space. By building a discriminative model and maximizing a distance margin, a projection function that maps observations into the latent common space is learned for each modality. Cross-modality pattern recognition can then be carried out in the latent common space. To evaluate its performance, the proposed framework has been applied to cross-pose face recognition and feature fusion. Experimental results demonstrate that the proposed framework outperforms other state-of-the-art approaches. |
| Description | Session 11: Segmentation & Features |
| Persistent Identifier | http://hdl.handle.net/10722/189618 |
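
The abstract describes learning a linear projection per modality so that, for a relative pair "x_i is more likely paired with y_j than with y_k", the mapped x_i lies closer to y_j than to y_k by a distance margin. The sketch below is not the authors' implementation (the paper builds a discriminative max-margin model; its exact optimizer is not given here): it is a minimal illustration of the same idea using subgradient descent on a triplet hinge loss over synthetic data, with all variable names and data invented for the example.

```python
import numpy as np

def hinge_loss(Wx, Wy, X, Y, triplets, margin=1.0):
    """Sum of triplet hinge losses in the learned common space."""
    total = 0.0
    for i, j, k in triplets:
        zx, zp, zn = Wx @ X[i], Wy @ Y[j], Wy @ Y[k]
        total += max(0.0, margin + np.sum((zx - zp) ** 2) - np.sum((zx - zn) ** 2))
    return total

def train_rpsa(X, Y, triplets, d_z, lr=1e-3, margin=1.0, epochs=200, seed=0):
    """Learn one linear projection per modality so that x_i maps closer to
    its more-likely partner y_j than to the less-likely y_k, by a margin.
    (Illustrative SGD, not the paper's actual optimization scheme.)"""
    rng = np.random.default_rng(seed)
    Wx = rng.normal(scale=0.1, size=(d_z, X.shape[1]))
    Wy = rng.normal(scale=0.1, size=(d_z, Y.shape[1]))
    losses = [hinge_loss(Wx, Wy, X, Y, triplets, margin)]
    for _ in range(epochs):
        for i, j, k in triplets:
            zx, zp, zn = Wx @ X[i], Wy @ Y[j], Wy @ Y[k]
            if margin + np.sum((zx - zp) ** 2) - np.sum((zx - zn) ** 2) > 0:
                # Subgradients of the active hinge term w.r.t. Wx and Wy.
                Wx -= lr * 2 * np.outer(zn - zp, X[i])
                Wy -= lr * 2 * (np.outer(zp - zx, Y[j]) + np.outer(zx - zn, Y[k]))
        losses.append(hinge_loss(Wx, Wy, X, Y, triplets, margin))
    return Wx, Wy, losses

# Synthetic two-modality data: observations sharing a latent class are
# treated as "more likely paired" than observations from different classes.
rng = np.random.default_rng(1)
d_x, d_y, d_z, n = 8, 6, 3, 60
labels = rng.integers(0, 3, size=n)
latent = rng.normal(size=(3, d_z))[labels]          # shared semantics
X = latent @ rng.normal(size=(d_z, d_x)) + 0.1 * rng.normal(size=(n, d_x))
Y = latent @ rng.normal(size=(d_z, d_y)) + 0.1 * rng.normal(size=(n, d_y))
triplets = [(i,
             rng.choice(np.flatnonzero(labels == labels[i])),
             rng.choice(np.flatnonzero(labels != labels[i])))
            for i in range(n)]

Wx, Wy, losses = train_rpsa(X, Y, triplets, d_z)
print(f"total hinge loss: {losses[0]:.2f} -> {losses[-1]:.2f}")
```

Once `Wx` and `Wy` are learned, cross-modality recognition reduces to nearest-neighbour search in the common space, which is how tasks such as cross-pose face recognition in the paper would be carried out.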
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kuang, Z | en_US |
| dc.contributor.author | Wong, KKY | en_US |
| dc.date.accessioned | 2013-09-17T14:50:21Z | - |
| dc.date.available | 2013-09-17T14:50:21Z | - |
| dc.date.issued | 2013 | en_US |
| dc.identifier.citation | The 2013 British Machine Vision Conference (BMVC), Bristol, UK, 9-13 September 2013, p. 1-12 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10722/189618 | - |
| dc.description | Session 11: Segmentation & Features | - |
| dc.description.abstract | Discovering a latent common space between different modalities plays an important role in cross-modality pattern recognition. Existing techniques often require absolutely paired observations as training data, and are incapable of capturing more general semantic relationships between cross-modality observations. This greatly limits their applications. In this paper, we propose a general framework for learning a latent common space from relatively-paired observations (i.e., two observations from different modalities are more-likely-paired than another two). Relative-pairing information is encoded using relative proximities of observations in the latent common space. By building a discriminative model and maximizing a distance margin, a projection function that maps observations into the latent common space is learned for each modality. Cross-modality pattern recognition can then be carried out in the latent common space. To evaluate its performance, the proposed framework has been applied to cross-pose face recognition and feature fusion. Experimental results demonstrate that the proposed framework outperforms other state-of-the-art approaches. | - |
| dc.language | eng | en_US |
| dc.relation.ispartof | BMVC 2013 | en_US |
| dc.rights | Author holds the copyright | - |
| dc.title | Relatively-paired space analysis | en_US |
| dc.type | Conference_Paper | en_US |
| dc.identifier.email | Wong, KKY: kykwong@cs.hku.hk | en_US |
| dc.identifier.authority | Wong, KKY=rp01393 | en_US |
| dc.description.nature | postprint | - |
| dc.identifier.hkuros | 221064 | en_US |
| dc.identifier.spage | 1 | - |
| dc.identifier.epage | 12 | - |
