Links for fulltext (may require subscription):
- Publisher Website: 10.1016/j.patcog.2011.09.027
- Scopus: eid_2-s2.0-83655184804
- WOS: WOS:000300459000036
Article: Feature fusion at the local region using localized maximum-margin learning for scene categorization
Title | Feature fusion at the local region using localized maximum-margin learning for scene categorization |
---|---|
Authors | Qin, J; Yung, NHC |
Keywords | Feature fusion; Image-based; Local feature; Local region; Multiple features |
Issue Date | 2012 |
Publisher | Elsevier BV. The Journal's web site is located at http://www.elsevier.com/locate/pr |
Citation | Pattern Recognition, 2012, v. 45 n. 4, p. 1671-1683 |
Abstract | In the field of visual recognition such as scene categorization, representing an image based on the local feature (e.g., the bag-of-visual-word (BOVW) model and the bag-of-contextual-visual-word (BOCVW) model) has become one of the most popular and successful methods. In this paper, we propose a method that uses localized maximum-margin learning to fuse different types of features during the BOCVW modeling for eventual scene classification. The proposed method fuses multiple features at the stage when the best contextual visual word is selected to represent a local region (hard assignment) or when the probabilities of the candidate contextual visual words used to represent the unknown region are estimated (soft assignment). The merits of the proposed method are that (1) errors caused by the ambiguity of a single feature when assigning local regions to the contextual visual words can be corrected, or the probabilities of the candidate contextual visual words used to represent the region can be estimated more accurately; and (2) it offers a more flexible way of fusing these features by determining the similarity metric locally through localized maximum-margin learning. The proposed method has been evaluated experimentally and the results indicate its effectiveness. © 2011 Elsevier Ltd. All rights reserved. |
Persistent Identifier | http://hdl.handle.net/10722/152665 |
ISSN | 0031-3203 (2023 Impact Factor: 7.5; 2023 SCImago Journal Rankings: 2.732) |
ISI Accession Number ID | WOS:000300459000036 |
References | |
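The abstract describes fusing several feature types at the point where a local region is assigned to a contextual visual word, either by picking the single best word (hard assignment) or by estimating probabilities over the candidate words (soft assignment). The sketch below illustrates only that fusion-and-assignment step in generic terms: the feature types, the fixed fusion weights, and the softmax-style soft assignment are assumptions for illustration, and the hand-supplied weights merely stand in for what the paper obtains with localized maximum-margin learning.

```python
# Minimal, hypothetical sketch (not the authors' implementation) of fusing
# per-feature distances when assigning a local image region to a contextual
# visual word, with either hard or soft assignment.

import numpy as np


def fuse_distances(per_feature_dists, weights):
    """Combine distances from several feature types into one fused distance.

    per_feature_dists : (n_features, n_words) array, distance of one local
                        region to each candidate contextual visual word under
                        each feature type (e.g., appearance, context).
    weights           : (n_features,) non-negative fusion weights; in the paper
                        these would come from localized maximum-margin learning,
                        here they are simply supplied by the caller.
    """
    per_feature_dists = np.asarray(per_feature_dists, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights @ per_feature_dists          # shape (n_words,)


def hard_assign(fused_dists):
    """Hard assignment: index of the closest contextual visual word."""
    return int(np.argmin(fused_dists))


def soft_assign(fused_dists, beta=1.0):
    """Soft assignment: probabilities over candidate words via a softmax on
    negative fused distances (a common choice; the paper's estimator may differ)."""
    logits = -beta * np.asarray(fused_dists, dtype=float)
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits)
    return p / p.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical example: 2 feature types, 5 candidate contextual visual words.
    dists = rng.random((2, 5))
    weights = np.array([0.7, 0.3])              # assumed, not learned here
    fused = fuse_distances(dists, weights)
    print("hard assignment ->", hard_assign(fused))
    print("soft assignment ->", np.round(soft_assign(fused, beta=5.0), 3))
```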
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Qin, J | en_US |
dc.contributor.author | Yung, NHC | en_US |
dc.date.accessioned | 2012-07-16T09:45:52Z | - |
dc.date.available | 2012-07-16T09:45:52Z | - |
dc.date.issued | 2012 | en_US |
dc.identifier.citation | Pattern Recognition, 2012, v. 45 n. 4, p. 1671-1683 | en_US |
dc.identifier.issn | 0031-3203 | - |
dc.identifier.uri | http://hdl.handle.net/10722/152665 | - |
dc.description.abstract | In the field of visual recognition such as scene categorization, representing an image based on the local feature (e.g., the bag-of-visual-word (BOVW) model and the bag-of-contextual-visual-word (BOCVW) model) has become one of the most popular and successful methods. In this paper, we propose a method that uses localized maximum-margin learning to fuse different types of features during the BOCVW modeling for eventual scene classification. The proposed method fuses multiple features at the stage when the best contextual visual word is selected to represent a local region (hard assignment) or when the probabilities of the candidate contextual visual words used to represent the unknown region are estimated (soft assignment). The merits of the proposed method are that (1) errors caused by the ambiguity of a single feature when assigning local regions to the contextual visual words can be corrected, or the probabilities of the candidate contextual visual words used to represent the region can be estimated more accurately; and (2) it offers a more flexible way of fusing these features by determining the similarity metric locally through localized maximum-margin learning. The proposed method has been evaluated experimentally and the results indicate its effectiveness. © 2011 Elsevier Ltd. All rights reserved. | -
dc.language | eng | en_US |
dc.publisher | Elsevier BV. The Journal's web site is located at http://www.elsevier.com/locate/pr | en_US |
dc.relation.ispartof | Pattern Recognition | en_US |
dc.rights | NOTICE: this is the author’s version of a work that was accepted for publication in Pattern Recognition. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pattern Recognition, 2012, v. 45 n. 4, p. 1671-1683. DOI: 10.1016/j.patcog.2011.09.027 | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Feature fusion | - |
dc.subject | Image-based | - |
dc.subject | Local feature | - |
dc.subject | Local region | - |
dc.subject | Multiple features | - |
dc.title | Feature fusion at the local region using localized maximum-margin learning for scene categorization | en_US |
dc.type | Article | en_US |
dc.identifier.email | Qin, J: jzhqin@eee.hku.hk | en_US |
dc.identifier.email | Yung, NHC: nyung@eee.hku.hk | - |
dc.identifier.authority | Yung, NHC=rp00226 | en_US |
dc.description.nature | postprint | - |
dc.identifier.doi | 10.1016/j.patcog.2011.09.027 | - |
dc.identifier.scopus | eid_2-s2.0-83655184804 | - |
dc.identifier.hkuros | 201091 | en_US |
dc.relation.references | http://www.scopus.com/mlt/select.url?eid=2-s2.0-83655184804&selection=ref&src=s&origin=recordpage | - |
dc.identifier.volume | 45 | en_US |
dc.identifier.issue | 4 | - |
dc.identifier.spage | 1671 | en_US |
dc.identifier.epage | 1683 | en_US |
dc.identifier.isi | WOS:000300459000036 | - |
dc.publisher.place | Netherlands | - |
dc.identifier.scopusauthorid | Qin, J=24450951900 | - |
dc.identifier.scopusauthorid | Yung, NHC=7003473369 | - |
dc.identifier.issnl | 0031-3203 | - |