Conference Paper: Gaze Prediction in Dynamic 360° Immersive Videos

Title: Gaze Prediction in Dynamic 360° Immersive Videos
Authors: Xu, Yanyu; Dong, Yanbing; Wu, Junru; Sun, Zhengzhong; Shi, Zhiru; Yu, Jingyi; Gao, Shenghua
Issue Date: 2018
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, p. 5333-5342
Abstract: This paper explores gaze prediction in dynamic 360° immersive videos, i.e., based on the viewer's historical scan path and the VR content, we predict where a viewer will look at an upcoming time. To tackle this problem, we first present a large-scale eye-tracking dataset for dynamic VR scenes. Our dataset contains 208 360° videos captured in dynamic scenes, and each video is viewed by at least 31 subjects. Our analysis shows that gaze prediction depends on both the historical scan path and the image content: salient objects easily attract viewers' attention, and saliency is related to both the appearance and the motion of objects. Because saliency measured at different scales differs, we propose to compute saliency maps at three spatial scales: the sub-image patch centered at the current gaze point, the sub-image corresponding to the Field of View (FoV), and the full panorama. We feed both the saliency maps and the corresponding images into a Convolutional Neural Network (CNN) for feature extraction, and use a Long Short-Term Memory (LSTM) network to encode the historical scan path. We then combine the CNN and LSTM features to predict the gaze displacement between the gaze point at the current time and the gaze point at an upcoming time. Extensive experiments validate the effectiveness of our method for gaze prediction in dynamic VR scenes.
Persistent Identifier: http://hdl.handle.net/10722/345240
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
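
The abstract above outlines a concrete pipeline: saliency maps computed at three spatial scales (gaze-centered patch, FoV crop, full panorama) are fed with their images through a CNN, the historical scan path is encoded by an LSTM, and the two feature sets are fused to regress the gaze displacement. Below is a minimal PyTorch sketch of that kind of architecture, not the authors' implementation; the backbone, layer sizes, input resolution, history length, and the (dx, dy) displacement parameterization are all illustrative assumptions.

import torch
import torch.nn as nn

class GazeDisplacementNet(nn.Module):
    """Illustrative sketch of the CNN + LSTM fusion the abstract describes.

    A shared CNN encodes each of the three scales (gaze-centered patch,
    FoV crop, panorama), an LSTM encodes the history scan path, and a
    small head regresses the gaze displacement (dx, dy). All sizes are
    assumptions, not the paper's configuration.
    """

    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Each visual input stacks RGB with its saliency map: 4 channels.
        self.cnn = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # one 64-d vector per input
        )
        self.proj = nn.Linear(64 * 3, feat_dim)  # concat of 3 scales
        # LSTM over the history scan path of (x, y) gaze coordinates.
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_dim,
                            batch_first=True)
        # Fusion head: CNN features + LSTM state -> displacement (dx, dy).
        self.head = nn.Sequential(
            nn.Linear(feat_dim + hidden_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, patch, fov, pano, scan_path):
        # patch/fov/pano: (B, 4, H, W); scan_path: (B, T, 2), normalized.
        feats = [self.cnn(x).flatten(1) for x in (patch, fov, pano)]
        visual = self.proj(torch.cat(feats, dim=1))
        _, (h, _) = self.lstm(scan_path)          # h: (1, B, hidden_dim)
        return self.head(torch.cat([visual, h[-1]], dim=1))

# Usage sketch with dummy tensors.
net = GazeDisplacementNet()
b = 2
disp = net(torch.randn(b, 4, 64, 64), torch.randn(b, 4, 64, 64),
           torch.randn(b, 4, 64, 64), torch.randn(b, 10, 2))
print(disp.shape)  # torch.Size([2, 2])

Regressing a displacement rather than an absolute gaze position keeps the target in a small, roughly zero-centered range, which matches the abstract's framing of predicting the offset between the current and upcoming gaze points.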

 

DC Metadata
dc.contributor.author: Xu, Yanyu
dc.contributor.author: Dong, Yanbing
dc.contributor.author: Wu, Junru
dc.contributor.author: Sun, Zhengzhong
dc.contributor.author: Shi, Zhiru
dc.contributor.author: Yu, Jingyi
dc.contributor.author: Gao, Shenghua
dc.date.accessioned: 2024-08-15T09:26:06Z
dc.date.available: 2024-08-15T09:26:06Z
dc.date.issued: 2018
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, p. 5333-5342
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/345240
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.title: Gaze Prediction in Dynamic 360° Immersive Videos
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR.2018.00559
dc.identifier.scopus: eid_2-s2.0-85061647977
dc.identifier.spage: 5333
dc.identifier.epage: 5342
