File Download
There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1109/TPAMI.2021.3105387
- Scopus: eid_2-s2.0-85113236120
- Web of Science: WOS:000864325900096
Article: Widar3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi
Title | Widar3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi |
---|---|
Authors | Zhang, Yi; Zheng, Yue; Qian, Kun; Zhang, Guidong; Liu, Yunhao; Wu, Chenshu; Yang, Zheng |
Keywords | Training; COTS WiFi; Wireless sensor networks; Wireless Sensing; Feature extraction; Wireless fidelity; Gesture recognition; Wireless communication; Sensors |
Issue Date | 2021 |
Citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021 |
Abstract | With the development of signal processing technology, ubiquitous Wi-Fi devices open an unprecedented opportunity to solve the challenging human gesture recognition problem by learning motion representations from wireless signals. Although Wi-Fi-based gesture recognition systems yield good performance on specific data domains, they remain difficult to use in practice without explicit adaptation efforts for new domains. Various pioneering approaches have been proposed to resolve this contradiction, but extra training effort is still necessary for either data collection or model re-training when new data domains appear. To advance cross-domain recognition and achieve fully zero-effort recognition, we propose Widar3.0, a Wi-Fi-based zero-effort cross-domain gesture recognition system. The key insight of Widar3.0 is to derive and extract domain-independent features of human gestures at the lower signal level, which represent unique kinetic characteristics of gestures and are irrespective of domains. On this basis, we develop a one-fits-all general model that requires only one-time training but can adapt to different data domains. Experiments on various domain factors (i.e., environments, locations, and orientations of persons) demonstrate an accuracy of 92.7% for in-domain recognition and 82.6%-92.4% for cross-domain recognition without model re-training, outperforming state-of-the-art solutions. |
Persistent Identifier | http://hdl.handle.net/10722/303820 |
ISSN | 0162-8828 (2023 Impact Factor: 20.8; 2023 SCImago Journal Rankings: 6.158) |
ISI Accession Number ID | WOS:000864325900096 |
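The paper's actual pipeline (CSI processing, body-coordinate velocity profiles, and a deep one-fits-all model) is far richer than a record page can convey. Purely as a hedged toy sketch of the abstract's core idea — extract a feature that cancels domain-specific effects, train once, then classify in an unseen domain without re-training — the following uses synthetic data and a nearest-centroid classifier; every signal model, name, and number here is invented for illustration and is not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "gesture" classes: class 1 has a rising trend, class 0 is flat.
# Each domain (e.g. a different room or user orientation) adds a constant
# offset to the raw signal -- a crude stand-in for domain-specific effects.
def make_domain(domain_offset, n=200, length=16):
    labels = rng.integers(0, 2, size=n)
    base = labels[:, None] * np.linspace(0.0, 1.0, length)
    noise = 0.02 * rng.standard_normal((n, length))
    return base + domain_offset + noise, labels

def domain_independent_features(x):
    # First-order differences cancel any constant per-domain offset,
    # keeping only the "kinetic" shape of the signal.
    return np.diff(x, axis=1)

# One-time training on a single source domain ...
x_train, y_train = make_domain(domain_offset=0.0)
f_train = domain_independent_features(x_train)

# ... then zero-effort evaluation on an unseen domain (different offset).
x_test, y_test = make_domain(domain_offset=3.0)
f_test = domain_independent_features(x_test)

# Nearest-centroid classifier in the domain-independent feature space.
centroids = np.stack([f_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = ((f_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y_test).mean()
print(f"cross-domain accuracy without re-training: {accuracy:.2f}")
```

A classifier on the raw signals would fail here because the test-domain offset shifts every sample; differencing removes it exactly, which is the (vastly simplified) spirit of deriving features "irrespective of domains."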
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Yi | - |
dc.contributor.author | Zheng, Yue | - |
dc.contributor.author | Qian, Kun | - |
dc.contributor.author | Zhang, Guidong | - |
dc.contributor.author | Liu, Yunhao | - |
dc.contributor.author | Wu, Chenshu | - |
dc.contributor.author | Yang, Zheng | - |
dc.date.accessioned | 2021-09-15T08:26:05Z | - |
dc.date.available | 2021-09-15T08:26:05Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | http://hdl.handle.net/10722/303820 | - |
dc.description.abstract | With the development of signal processing technology, ubiquitous Wi-Fi devices open an unprecedented opportunity to solve the challenging human gesture recognition problem by learning motion representations from wireless signals. Although Wi-Fi-based gesture recognition systems yield good performance on specific data domains, they remain difficult to use in practice without explicit adaptation efforts for new domains. Various pioneering approaches have been proposed to resolve this contradiction, but extra training effort is still necessary for either data collection or model re-training when new data domains appear. To advance cross-domain recognition and achieve fully zero-effort recognition, we propose Widar3.0, a Wi-Fi-based zero-effort cross-domain gesture recognition system. The key insight of Widar3.0 is to derive and extract domain-independent features of human gestures at the lower signal level, which represent unique kinetic characteristics of gestures and are irrespective of domains. On this basis, we develop a one-fits-all general model that requires only one-time training but can adapt to different data domains. Experiments on various domain factors (i.e., environments, locations, and orientations of persons) demonstrate an accuracy of 92.7% for in-domain recognition and 82.6%-92.4% for cross-domain recognition without model re-training, outperforming state-of-the-art solutions. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
dc.subject | Training | - |
dc.subject | COTS WiFi | - |
dc.subject | Wireless sensor networks | - |
dc.subject | Wireless Sensing | - |
dc.subject | Feature extraction | - |
dc.subject | Wireless fidelity | - |
dc.subject | Gesture recognition | - |
dc.subject | Gesture Recognition | - |
dc.subject | Wireless communication | - |
dc.subject | Sensors | - |
dc.subject | Feature Extraction | - |
dc.title | Widar3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TPAMI.2021.3105387 | - |
dc.identifier.scopus | eid_2-s2.0-85113236120 | - |
dc.identifier.eissn | 1939-3539 | - |
dc.identifier.isi | WOS:000864325900096 | - |