Conference Paper: Object tracking for a class of dynamic image-based representations

Title: Object tracking for a class of dynamic image-based representations
Authors: Gan, ZF; Chan, SC; Ng, KT; Shum, HY
Keywords: Bayesian; Compression; IBR; Level-set; Matting; MPEG-4; Object; Plenoptic Video; Rendering; Tracking
Issue Date: 2005
Publisher: SPIE - International Society for Optical Engineering. The Journal's web site is located at http://spie.org/x1848.xml
Citation: Proceedings of SPIE - The International Society for Optical Engineering, 2005, v. 5960 n. 3, p. 1267-1274
Abstract: Image-based rendering (IBR) is an emerging technology for photo-realistic rendering of scenes from a collection of densely sampled images and videos. Recently, an object-based approach to the rendering and compression of a class of dynamic image-based representations called plenoptic videos was proposed. The plenoptic video is a simplified dynamic light field, obtained by capturing videos at regularly spaced locations along a series of line segments. In the object-based approach, objects at large depth differences are segmented into layers for rendering and compression. The rendering quality in large environments can be significantly improved, as demonstrated by the pop-up light fields. In addition, by coding the plenoptic video at the object level, desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects can be achieved. An important step in the object-based approach is to segment the objects in the video streams into layers or image-based objects, which is largely done by semi-automatic techniques. To reduce the time needed to segment plenoptic videos, efficient tracking techniques are highly desirable. This paper proposes a new automatic object tracking method based on the level-set method. Our method, which utilizes both local and global features of the image sequences instead of the global features exploited in previous approaches, can achieve better tracking results, especially for objects with non-uniform energy distribution. Because of possible segmentation errors around object boundaries, natural matting with a Bayesian approach is also incorporated into our system. Using the alpha map and texture so estimated, it is very convenient to composite the image-based objects onto the background of the original or other plenoptic videos.
Furthermore, an MPEG-4-like object-based algorithm is developed for compressing the plenoptic videos, which consist of the alpha maps, depth maps, and textures of the segmented image-based objects from the different plenoptic video streams. Experimental results show that satisfactory renderings can be obtained by the proposed approaches.
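The compositing step described in the abstract follows the standard alpha-compositing equation C = αF + (1 − α)B, where α is the estimated alpha map, F the foreground texture of the image-based object, and B the new background. A minimal NumPy sketch of that equation (illustrative only; the array shapes and function name are assumptions, not the paper's implementation):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Alpha-composite a segmented foreground object onto a new background.

    foreground, background: float arrays of shape (H, W, 3), values in [0, 1]
    alpha: float array of shape (H, W), the estimated alpha map in [0, 1]
    """
    a = alpha[..., np.newaxis]              # broadcast alpha over colour channels
    return a * foreground + (1.0 - a) * background

# Fully opaque pixels (alpha = 1) keep the foreground colour,
# fully transparent pixels (alpha = 0) keep the background colour,
# fractional alphas blend the two, softening object boundaries.
fg = np.full((2, 2, 3), 0.8)
bg = np.zeros((2, 2, 3))
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.25]])
out = composite(fg, bg, alpha)
```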
Description: SPIE Conference on Visual Communications and Image Processing 2005, Beijing, China, 12-15 July 2005
Persistent Identifier: http://hdl.handle.net/10722/54061
ISSN: 0277-786X
DC Field | Value | Language
dc.contributor.author | Gan, ZF | en_HK
dc.contributor.author | Chan, SC | en_HK
dc.contributor.author | Ng, KT | en_HK
dc.contributor.author | Shum, HY | en_HK
dc.date.accessioned | 2009-04-03T07:35:43Z | -
dc.date.available | 2009-04-03T07:35:43Z | -
dc.date.issued | 2005 | en_HK
dc.identifier.citation | Proceedings of SPIE - The International Society for Optical Engineering, 2005, v. 5960 n. 3, p. 1267-1274 | en_HK
dc.identifier.issn | 0277-786X | en_HK
dc.identifier.uri | http://hdl.handle.net/10722/54061 | -
dc.description | SPIE Conference on Visual Communications and Image Processing 2005, Beijing, China, 12-15 July 2005 | -
dc.description.abstract | Image-based rendering (IBR) is an emerging technology for photo-realistic rendering of scenes from a collection of densely sampled images and videos. Recently, an object-based approach to the rendering and compression of a class of dynamic image-based representations called plenoptic videos was proposed. The plenoptic video is a simplified dynamic light field, obtained by capturing videos at regularly spaced locations along a series of line segments. In the object-based approach, objects at large depth differences are segmented into layers for rendering and compression. The rendering quality in large environments can be significantly improved, as demonstrated by the pop-up light fields. In addition, by coding the plenoptic video at the object level, desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects can be achieved. An important step in the object-based approach is to segment the objects in the video streams into layers or image-based objects, which is largely done by semi-automatic techniques. To reduce the time needed to segment plenoptic videos, efficient tracking techniques are highly desirable. This paper proposes a new automatic object tracking method based on the level-set method. Our method, which utilizes both local and global features of the image sequences instead of the global features exploited in previous approaches, can achieve better tracking results, especially for objects with non-uniform energy distribution. Because of possible segmentation errors around object boundaries, natural matting with a Bayesian approach is also incorporated into our system. Using the alpha map and texture so estimated, it is very convenient to composite the image-based objects onto the background of the original or other plenoptic videos. Furthermore, an MPEG-4-like object-based algorithm is developed for compressing the plenoptic videos, which consist of the alpha maps, depth maps, and textures of the segmented image-based objects from the different plenoptic video streams. Experimental results show that satisfactory renderings can be obtained by the proposed approaches. | en_HK
dc.language | eng | en_HK
dc.publisher | SPIE - International Society for Optical Engineering. The Journal's web site is located at http://spie.org/x1848.xml | en_HK
dc.relation.ispartof | Proceedings of SPIE - The International Society for Optical Engineering | en_HK
dc.rights | SPIE - The International Society for Optical Engineering Proceedings. Copyright © SPIE - International Society for Optical Engineering. | en_HK
dc.rights | Creative Commons: Attribution 3.0 Hong Kong License | -
dc.rights | Copyright 2005 Society of Photo-Optical Instrumentation Engineers. This paper was published in SPIE Conference on Visual Communications and Image Processing 2005, Beijing, China, 12-15 July 2005, v. 5960, p. 59603Q-1 - 59603Q-8 and is made available as an electronic reprint with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. | en_HK
dc.subject | Bayesian | en_HK
dc.subject | Compression | en_HK
dc.subject | IBR | en_HK
dc.subject | Level-set | en_HK
dc.subject | Matting | en_HK
dc.subject | MPEG-4 | en_HK
dc.subject | Object | en_HK
dc.subject | Plenoptic Video | en_HK
dc.subject | Rendering | en_HK
dc.subject | Tracking | en_HK
dc.title | Object tracking for a class of dynamic image-based representations | en_HK
dc.type | Conference_Paper | en_HK
dc.identifier.openurl | http://library.hku.hk:4550/resserv?sid=HKU:IR&issn=0277-786X&volume=5960&spage=59603Q&epage=1 &date=2005&atitle=Object+tracking+for+a+class+of+dynamic+image-based+representations | en_HK
dc.identifier.email | Chan, SC: ascchan@hkucc.hku.hk | en_HK
dc.identifier.email | Ng, KT: wktng@hku.hk | en_HK
dc.identifier.authority | Chan, SC=rp00094 | en_HK
dc.identifier.authority | Ng, KT=rp00157 | en_HK
dc.description.nature | published_or_final_version | en_HK
dc.identifier.doi | 10.1117/12.632661 | en_HK
dc.identifier.scopus | eid_2-s2.0-32544435084 | en_HK
dc.relation.references | http://www.scopus.com/mlt/select.url?eid=2-s2.0-32544435084&selection=ref&src=s&origin=recordpage | en_HK
dc.identifier.volume | 5960 | en_HK
dc.identifier.issue | 3 | en_HK
dc.identifier.spage | 1267 | en_HK
dc.identifier.epage | 1274 | en_HK
dc.publisher.place | United States | en_HK
dc.identifier.scopusauthorid | Gan, ZF=7102935242 | en_HK
dc.identifier.scopusauthorid | Chan, SC=13310287100 | en_HK
dc.identifier.scopusauthorid | Ng, KT=7403178463 | en_HK
dc.identifier.scopusauthorid | Shum, HY=7006094115 | en_HK
