Conference Paper: Scale robust adaptive feature density approximation for visual object representation and tracking

Title: Scale robust adaptive feature density approximation for visual object representation and tracking
Authors: Liu, C; Yung, NHC; Fang, RG
Keywords: Bayesian Adaptation; Density Approximation; EM; Feature Scale Selection; MAP; Tracking
Issue Date: 2009
Publisher: INSTICC Press
Citation: The 4th International Conference on Computer Vision Theory and Applications (VISAPP/GRAPP 2009), Lisboa, Portugal, 5-8 February 2009. In Proceedings of the 4th International Conference on Computer Vision Theory and Applications, 2009, v. 2, p. 535-540
Abstract: Feature density approximation (FDA) based visual object appearance representation is emerging as an effective method for object tracking, but it is challenged by the object's complex motion (e.g. scaling, rotation) and the consequent variation in the object's appearance. Traditional adaptive FDA methods extract features at fixed scales, ignoring the object's scale variation, and update the FDA by sequential Maximum Likelihood estimation, which lacks robustness for sparse data. To address these challenges, this paper proposes a robust multi-scale adaptive FDA object representation for tracking, together with a robust FDA updating method. The representation achieves robustness by extracting features at the selected scale and estimating the feature density with a new likelihood function defined by both the feature set and each feature's effectiveness probability. In FDA updating, robustness is achieved by updating the FDA in a Bayesian manner with a MAP-EM algorithm, using density prior knowledge extracted from the historical density. Complex object motion (e.g. scaling and rotation) is handled by correlating the object's appearance with its spatial alignment. Experimental results show that the method is efficient under complex motion and robust in adapting to appearance variations caused by changes in scale, illumination, pose and viewing angle.
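
The Bayesian updating step described in the abstract can be pictured with a short sketch. The code below is a minimal illustration, not the authors' formulation: it runs a MAP-EM update of a diagonal-covariance Gaussian mixture, uses the previous (historical) mixture as the prior, and optionally weights each feature by an effectiveness probability. The function name `map_em_update` and the prior strengths `alpha0` and `kappa0` are hypothetical, introduced here only for illustration.

```python
# Illustrative MAP-EM update of a Gaussian-mixture appearance density.
# Sketch only: the diagonal-covariance form, the prior strengths alpha0/kappa0
# and the per-feature effectiveness weights are assumptions, not the paper's
# exact formulation.
import numpy as np


def map_em_update(X, prev_pi, prev_mu, prev_var, w=None,
                  alpha0=5.0, kappa0=5.0, n_iter=20, eps=1e-9):
    """One Bayesian (MAP-EM) update of a diagonal-covariance GMM.

    X        : (n, d) features from the current frame
    prev_pi  : (k,)   previous mixing weights (density prior)
    prev_mu  : (k, d) previous component means
    prev_var : (k, d) previous component variances
    w        : (n,)   optional per-feature effectiveness weights in [0, 1]
    """
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    k = len(prev_pi)
    w = np.ones(n) if w is None else np.asarray(w, dtype=float)
    pi = np.asarray(prev_pi, dtype=float).copy()
    mu = np.asarray(prev_mu, dtype=float).copy()
    var = np.asarray(prev_var, dtype=float).copy()

    for _ in range(n_iter):
        # E-step: responsibilities under the current mixture, weighted by
        # each feature's effectiveness probability.
        log_p = np.empty((n, k))
        for j in range(k):
            diff2 = (X - mu[j]) ** 2 / (var[j] + eps)
            log_p[:, j] = (np.log(pi[j] + eps)
                           - 0.5 * (diff2.sum(axis=1)
                                    + np.log(2.0 * np.pi * (var[j] + eps)).sum()))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        r *= w[:, None]

        # M-step (MAP): blend the statistics of the new features with the
        # previous density, which acts as the prior.
        nk = r.sum(axis=0)
        pi = (nk + alpha0 * prev_pi) / (nk.sum() + alpha0)   # Dirichlet-style smoothing
        for j in range(k):
            xbar = r[:, j] @ X / (nk[j] + eps)
            mu[j] = (nk[j] * xbar + kappa0 * prev_mu[j]) / (nk[j] + kappa0)
            dev2 = (X - mu[j]) ** 2
            var[j] = (r[:, j] @ dev2 + kappa0 * prev_var[j]) / (nk[j] + kappa0)

    return pi, mu, var


# Example: re-estimate a 3-component density from 200 new 5-D features.
# rng = np.random.default_rng(0)
# pi, mu, var = map_em_update(rng.normal(size=(200, 5)), np.full(3, 1 / 3),
#                             rng.normal(size=(3, 5)), np.ones((3, 5)))
```

In a sketch like this, the prior strengths control how strongly the historical density constrains the new estimate, which is the general mechanism by which a MAP update stays robust when the current frame yields only sparse features.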
Description: VISAPP is part of VISIGRAPP - The International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Paper no. 228
The abstracts' web site is located at http://www.visapp.visigrapp.org/Abstracts/2009/VISAPP_2009_Abstracts.htm
Persistent Identifier: http://hdl.handle.net/10722/158594
ISBN: 978-989-8111-69-2
References: http://www.scopus.com/mlt/select.url?eid=2-s2.0-70349293774&selection=ref&src=s&origin=recordpage

DC Field | Value | Language
dc.contributor.author | Liu, C | en_US
dc.contributor.author | Yung, NHC | en_US
dc.contributor.author | Fang, RG | en_US
dc.date.accessioned | 2012-08-08T09:00:23Z | -
dc.date.available | 2012-08-08T09:00:23Z | -
dc.date.issued | 2009 | en_US
dc.identifier.citation | The 4th International Conference on Computer Vision Theory and Applications (VISAPP/GRAPP 2009), Lisboa, Portugal, 5-8 February 2009. In Proceedings of the 4th International Conference on Computer Vision Theory and Applications, 2009, v. 2, p. 535-540 | en_US
dc.identifier.isbn | 978-989-8111-69-2 | -
dc.identifier.uri | http://hdl.handle.net/10722/158594 | -
dc.description | VISAPP is part of VISIGRAPP - The International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | -
dc.description | Paper no. 228 | -
dc.description | The Abstracts' web site is located at http://www.visapp.visigrapp.org/Abstracts/2009/VISAPP_2009_Abstracts.htm | -
dc.description.abstract | Feature density approximation (FDA) based visual object appearance representation is emerging as an effective method for object tracking, but it is challenged by the object's complex motion (e.g. scaling, rotation) and the consequent variation in the object's appearance. Traditional adaptive FDA methods extract features at fixed scales, ignoring the object's scale variation, and update the FDA by sequential Maximum Likelihood estimation, which lacks robustness for sparse data. To address these challenges, this paper proposes a robust multi-scale adaptive FDA object representation for tracking, together with a robust FDA updating method. The representation achieves robustness by extracting features at the selected scale and estimating the feature density with a new likelihood function defined by both the feature set and each feature's effectiveness probability. In FDA updating, robustness is achieved by updating the FDA in a Bayesian manner with a MAP-EM algorithm, using density prior knowledge extracted from the historical density. Complex object motion (e.g. scaling and rotation) is handled by correlating the object's appearance with its spatial alignment. Experimental results show that the method is efficient under complex motion and robust in adapting to appearance variations caused by changes in scale, illumination, pose and viewing angle. | en_US
dc.language | eng | en_US
dc.publisher | INSTICC Press | -
dc.relation.ispartof | Proceedings of the 4th International Conference on Computer Vision Theory and Applications | en_US
dc.subject | Bayesian Adaptation | en_US
dc.subject | Density Approximation | en_US
dc.subject | EM | en_US
dc.subject | Feature Scale Selection | en_US
dc.subject | MAP | en_US
dc.subject | Tracking | en_US
dc.title | Scale robust adaptive feature density approximation for visual object representation and tracking | en_US
dc.type | Conference_Paper | en_US
dc.identifier.email | Yung, NHC: nyung@eee.hku.hk | en_US
dc.identifier.authority | Yung, NHC=rp00226 | en_US
dc.description.nature | link_to_subscribed_fulltext | en_US
dc.identifier.doi | 10.5220/0001802805350540 | -
dc.identifier.scopus | eid_2-s2.0-70349293774 | en_US
dc.identifier.hkuros | 164713 | -
dc.relation.references | http://www.scopus.com/mlt/select.url?eid=2-s2.0-70349293774&selection=ref&src=s&origin=recordpage | en_US
dc.identifier.volume | 2 | en_US
dc.identifier.spage | 535 | en_US
dc.identifier.epage | 540 | en_US
dc.identifier.scopusauthorid | Liu, CY=26431035900 | en_US
dc.identifier.scopusauthorid | Yung, NHC=7003473369 | en_US
dc.identifier.scopusauthorid | Fang, RG=36336806400 | en_US
dc.customcontrol.immutable | sml 140725 | -
