Conference Paper: A Video-based Fall Detection Network by Spatio-temporal Joint-point Model on Edge Devices

Title: A Video-based Fall Detection Network by Spatio-temporal Joint-point Model on Edge Devices
Authors: Guan, Z; Li, S; Cheng, Y; Man, C; Mao, W; Wong, N; Yu, H
Keywords: fall detection; pose estimation; spatio-temporal model; joint-point features
Issue Date: 2021
Publisher: IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000198
Citation: The 24th Design, Automation and Test in Europe Conference and Exhibition (DATE 2021), Virtual Conference, Grenoble, France, 1-5 February 2021, p. 422-427
Abstract: Tripping or falling is among the top threats in elderly healthcare, and the development of automatic fall detection systems is of considerable importance. With the fast development of the Internet of Things (IoT), camera vision-based solutions have drawn much attention in recent years. The traditional fall video analysis on the cloud has significant communication overhead. This work introduces a fast and lightweight video fall detection network based on a spatio-temporal joint-point model to overcome these hurdles. Instead of detecting falling motion by traditional Convolutional Neural Networks (CNNs), we propose a Long Short-Term Memory (LSTM) model based on time-series joint-point features, extracted by a pose extractor and then filtered by a geometric joint-point filter. Experiments are conducted to verify the proposed framework, which shows a high sensitivity of 98.46% on the Multiple Cameras Fall Dataset and 100% on the UR Fall Dataset. Furthermore, our model can perform pose estimation tasks simultaneously, attaining 73.3 mAP on the COCO keypoint challenge dataset, which outperforms OpenPose by 8%.
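The pipeline the abstract describes (per-frame pose keypoints, then a geometric joint-point filter, then a time-series classifier) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the specific features (torso inclination, bounding-box aspect ratio), the assumed COCO keypoint indices, and all function names are assumptions.

```python
import numpy as np

def joint_point_features(keypoints):
    """Geometric features from one frame of COCO-style keypoints.

    keypoints: (17, 2) array of (x, y) joint positions in image
    coordinates (y grows downward). The two features here are
    illustrative stand-ins for the paper's joint-point filter:
      - torso inclination from vertical (radians, 0 when upright)
      - bounding-box aspect ratio (height / width, small when lying down)
    """
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    # Assumed COCO layout: indices 5/6 are shoulders, 11/12 are hips.
    shoulder_mid = keypoints[[5, 6]].mean(axis=0)
    hip_mid = keypoints[[11, 12]].mean(axis=0)
    dx, dy = shoulder_mid - hip_mid
    torso_angle = abs(np.arctan2(dx, -dy))  # 0 for a vertical torso
    width = xs.max() - xs.min()
    height = ys.max() - ys.min()
    aspect = height / max(width, 1e-6)
    return np.array([torso_angle, aspect])

def sequence_features(frames):
    """Stack per-frame features into the (T, F) time series that an
    LSTM-style classifier would consume."""
    return np.stack([joint_point_features(f) for f in frames])

# Toy upright pose: shoulders above hips, tall bounding box.
upright = np.zeros((17, 2))
upright[[5, 6]] = [[-10, 50], [10, 50]]      # shoulders
upright[[11, 12]] = [[-10, 100], [10, 100]]  # hips
feats = joint_point_features(upright)
```

A fall would show up as the torso angle rising toward pi/2 and the aspect ratio collapsing over consecutive frames; in the paper's framework, that feature sequence is what the LSTM classifies rather than raw video frames.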
Persistent Identifier: http://hdl.handle.net/10722/301978
ISSN: 1530-1591

DC Field: Value
dc.contributor.author: Guan, Z
dc.contributor.author: Li, S
dc.contributor.author: Cheng, Y
dc.contributor.author: Man, C
dc.contributor.author: Mao, W
dc.contributor.author: Wong, N
dc.contributor.author: Yu, H
dc.date.accessioned: 2021-08-21T03:29:46Z
dc.date.available: 2021-08-21T03:29:46Z
dc.date.issued: 2021
dc.identifier.citation: The 24th Design, Automation and Test in Europe Conference and Exhibition (DATE 2021), Virtual Conference, Grenoble, France, 1-5 February 2021, p. 422-427
dc.identifier.issn: 1530-1591
dc.identifier.uri: http://hdl.handle.net/10722/301978
dc.description.abstract: Tripping or falling is among the top threats in elderly healthcare, and the development of automatic fall detection systems is of considerable importance. With the fast development of the Internet of Things (IoT), camera vision-based solutions have drawn much attention in recent years. The traditional fall video analysis on the cloud has significant communication overhead. This work introduces a fast and lightweight video fall detection network based on a spatio-temporal joint-point model to overcome these hurdles. Instead of detecting falling motion by traditional Convolutional Neural Networks (CNNs), we propose a Long Short-Term Memory (LSTM) model based on time-series joint-point features, extracted by a pose extractor and then filtered by a geometric joint-point filter. Experiments are conducted to verify the proposed framework, which shows a high sensitivity of 98.46% on the Multiple Cameras Fall Dataset and 100% on the UR Fall Dataset. Furthermore, our model can perform pose estimation tasks simultaneously, attaining 73.3 mAP on the COCO keypoint challenge dataset, which outperforms OpenPose by 8%.
dc.language: eng
dc.publisher: IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000198
dc.relation.ispartof: Design, Automation, and Test in Europe Conference and Exhibition Proceedings
dc.rights: Design, Automation, and Test in Europe Conference and Exhibition Proceedings. Copyright © IEEE Computer Society.
dc.rights: ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: fall detection
dc.subject: pose estimation
dc.subject: spatio-temporal model
dc.subject: joint-point features
dc.title: A Video-based Fall Detection Network by Spatio-temporal Joint-point Model on Edge Devices
dc.type: Conference_Paper
dc.identifier.email: Wong, N: nwong@eee.hku.hk
dc.identifier.authority: Wong, N=rp00190
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.23919/DATE51398.2021.9474206
dc.identifier.scopus: eid_2-s2.0-85111061102
dc.identifier.hkuros: 324501
dc.identifier.spage: 422
dc.identifier.epage: 427
dc.publisher.place: United States
