Conference Paper: Deep Online Fused Video Stabilization

Links to fulltext (may require subscription):
- Publisher (DOI): 10.1109/WACV51458.2022.00094
- Scopus: eid_2-s2.0-85126129304
- Web of Science: WOS:000800471200087
Title | Deep Online Fused Video Stabilization |
---|---|
Authors | Shi, Zhenmei; Shi, Fuhao; Lai, Wei Sheng; Liang, Chia Kai; Liang, Yingyu |
Keywords | Computational Photography; Image and Video Synthesis |
Issue Date | 2022 |
Citation | Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, 2022, p. 865-873 |
Abstract | We present a deep neural network (DNN) that uses both sensor data (gyroscope) and image content (optical flow) to stabilize videos through unsupervised learning. The network fuses optical flow with real/virtual camera pose histories into a joint motion representation. Next, the LSTM cell infers the new virtual camera pose, which is used to generate a warping grid that stabilizes the video frames. We adopt a relative motion representation as well as a multi-stage training strategy to optimize our model without any supervision. To the best of our knowledge, this is the first DNN solution that adopts both sensor data and image content for video stabilization. We validate the proposed framework through ablation studies and demonstrate that the proposed method outperforms the state-of-the-art alternative solutions via quantitative evaluations and a user study. Check out our video results, code, and dataset at our website. |
Persistent Identifier | http://hdl.handle.net/10722/341347 |
ISI Accession Number ID | WOS:000800471200087 |
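The abstract describes a pipeline in three steps: fuse optical-flow features with real/virtual camera-pose histories into a joint motion representation, run an LSTM cell to infer the next virtual camera pose, and derive a warping grid from that pose. A minimal NumPy sketch of those steps follows; all function names, tensor shapes, the concatenation-based fusion, and the pose-to-offset readout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_motion(flow_feat, real_poses, virtual_poses):
    """Fuse optical-flow features with real/virtual camera-pose histories
    into one joint motion vector (simple concatenation in this sketch)."""
    return np.concatenate([flow_feat, real_poses.ravel(), virtual_poses.ravel()])

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM cell step; the four gates are split from a single
    pre-activation vector."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sig(f) * c + sig(i) * np.tanh(g)
    h_new = sig(o) * np.tanh(c_new)
    return h_new, c_new

def warp_grid(offset, height=4, width=4):
    """Toy warping grid: shift sampling coordinates by a 2-D offset derived
    from the predicted virtual pose."""
    ys, xs = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    return np.stack([xs + offset[0], ys + offset[1]], axis=-1).astype(float)

# One illustrative step: 8 flow features, 4-frame histories of 4-D poses.
flow_feat = rng.standard_normal(8)
real_hist = rng.standard_normal((4, 4))   # e.g. gyro-integrated rotations
virt_hist = rng.standard_normal((4, 4))   # previously predicted virtual poses

x = joint_motion(flow_feat, real_hist, virt_hist)   # joint motion, shape (40,)
hidden = 4
W = rng.standard_normal((4 * hidden, x.size)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

h, c = lstm_step(x, np.zeros(hidden), np.zeros(hidden), W, U, b)
grid = warp_grid(h[:2])   # read a 2-D offset from the LSTM state
```

In the paper the grid warps each frame toward the virtual camera trajectory; here the untrained weights only demonstrate the data flow from fused motion features to a per-pixel sampling grid.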
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shi, Zhenmei | - |
dc.contributor.author | Shi, Fuhao | - |
dc.contributor.author | Lai, Wei Sheng | - |
dc.contributor.author | Liang, Chia Kai | - |
dc.contributor.author | Liang, Yingyu | - |
dc.date.accessioned | 2024-03-13T08:42:06Z | - |
dc.date.available | 2024-03-13T08:42:06Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, 2022, p. 865-873 | - |
dc.identifier.uri | http://hdl.handle.net/10722/341347 | - |
dc.description.abstract | We present a deep neural network (DNN) that uses both sensor data (gyroscope) and image content (optical flow) to stabilize videos through unsupervised learning. The network fuses optical flow with real/virtual camera pose histories into a joint motion representation. Next, the LSTM cell infers the new virtual camera pose, which is used to generate a warping grid that stabilizes the video frames. We adopt a relative motion representation as well as a multi-stage training strategy to optimize our model without any supervision. To the best of our knowledge, this is the first DNN solution that adopts both sensor data and image content for video stabilization. We validate the proposed framework through ablation studies and demonstrate that the proposed method outperforms the state-of-the-art alternative solutions via quantitative evaluations and a user study. Check out our video results, code, and dataset at our website. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022 | - |
dc.subject | Computational Photography | - |
dc.subject | Image and Video Synthesis | - |
dc.title | Deep Online Fused Video Stabilization | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/WACV51458.2022.00094 | - |
dc.identifier.scopus | eid_2-s2.0-85126129304 | - |
dc.identifier.spage | 865 | - |
dc.identifier.epage | 873 | - |
dc.identifier.isi | WOS:000800471200087 | - |