Links for fulltext (may require subscription):
- Publisher Website: 10.1145/3240508.3240530
- Scopus: eid_2-s2.0-85058239628
- WOS: WOS:000509665700017
Conference Paper: Self-boosted gesture interactive system with ST-Net
Title | Self-boosted gesture interactive system with ST-Net |
---|---|
Authors | Liu, Zhengzhe; Qi, Xiaojuan; Pang, Lei |
Keywords | Interactive system Recognition Convolutional neural networks |
Issue Date | 2018 |
Citation | MM 2018 - Proceedings of the 2018 ACM Multimedia Conference, 2018, p. 145-153 |
Abstract | © 2018 Association for Computing Machinery. In this paper, we propose a self-boosted intelligent system for joint sign language recognition and automatic education. A novel Spatial-Temporal Net (ST-Net) is designed to exploit the temporal dynamics of localized hands for sign language recognition. Features from ST-Net can be deployed by our education system to detect failure modes of the learners. Moreover, the education system helps collect a vast amount of data for training ST-Net. Our sign language recognition and education systems thus improve each other step by step. On the one hand, benefiting from an accurate recognition system, the education system can detect a learner's failure parts more precisely. On the other hand, with more training data gathered from the education system, the recognition system becomes more robust and accurate. Experiments on a Hong Kong sign language dataset containing 227 commonly used words validate the effectiveness of our joint recognition and education system. |
Persistent Identifier | http://hdl.handle.net/10722/281968 |
ISI Accession Number ID | WOS:000509665700017 |
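
The record above only summarizes the approach, so the following is a minimal sketch of a spatial-temporal recognition model in the spirit of the ST-Net described in the abstract. The exact architecture is not given here; the per-frame CNN on cropped hand regions, the LSTM temporal module, and all layer sizes are illustrative assumptions, with only the 227-class output taken from the abstract.

```python
# Sketch of an ST-Net-style recognizer: a small CNN over localized hand crops
# per frame, followed by an LSTM over time. Architecture details are assumed.
import torch
import torch.nn as nn


class SpatialTemporalNet(nn.Module):
    def __init__(self, num_classes: int = 227, feat_dim: int = 256):
        super().__init__()
        # Spatial stream: applied independently to each cropped hand frame.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Temporal stream: aggregates the per-frame features over the clip.
        self.temporal = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width) hand crops.
        b, t, c, h, w = clips.shape
        feats = self.spatial(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (hidden, _) = self.temporal(feats)
        return self.classifier(hidden[-1])  # (batch, num_classes) sign logits


if __name__ == "__main__":
    model = SpatialTemporalNet()
    dummy = torch.randn(2, 16, 3, 64, 64)  # 2 clips of 16 cropped hand frames
    print(model(dummy).shape)  # torch.Size([2, 227])
```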
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Zhengzhe | - |
dc.contributor.author | Qi, Xiaojuan | - |
dc.contributor.author | Pang, Lei | - |
dc.date.accessioned | 2020-04-09T09:19:16Z | - |
dc.date.available | 2020-04-09T09:19:16Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | MM 2018 - Proceedings of the 2018 ACM Multimedia Conference, 2018, p. 145-153 | - |
dc.identifier.uri | http://hdl.handle.net/10722/281968 | - |
dc.description.abstract | © 2018 Association for Computing Machinery. In this paper, we propose a self-boosted intelligent system for joint sign language recognition and automatic education. A novel Spatial-Temporal Net (ST-Net) is designed to exploit the temporal dynamics of localized hands for sign language recognition. Features from ST-Net can be deployed by our education system to detect failure modes of the learners. Moreover, the education system helps collect a vast amount of data for training ST-Net. Our sign language recognition and education systems thus improve each other step by step. On the one hand, benefiting from an accurate recognition system, the education system can detect a learner's failure parts more precisely. On the other hand, with more training data gathered from the education system, the recognition system becomes more robust and accurate. Experiments on a Hong Kong sign language dataset containing 227 commonly used words validate the effectiveness of our joint recognition and education system. | -
dc.language | eng | - |
dc.relation.ispartof | MM 2018 - Proceedings of the 2018 ACM Multimedia Conference | - |
dc.subject | Interactive system | - |
dc.subject | Recognition | - |
dc.subject | Convolutional neural networks | - |
dc.title | Self-boosted gesture interactive system with ST-Net | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.doi | 10.1145/3240508.3240530 | - |
dc.identifier.scopus | eid_2-s2.0-85058239628 | - |
dc.identifier.spage | 145 | - |
dc.identifier.epage | 153 | - |
dc.identifier.isi | WOS:000509665700017 | - |
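
The abstract also describes a self-boosting loop in which the recognition model scores learner attempts, the education side flags likely failure parts, and confident attempts are collected as extra training data for the next round. The sketch below illustrates that loop only; the names (`recognise`, `education_round`, `FEEDBACK_THRESHOLD`) and the confidence cutoff are assumptions, not the paper's API.

```python
# Sketch of the mutual-improvement loop between recognition and education.
from typing import Callable, List, Tuple

# A recogniser maps a clip to (predicted_word, confidence in [0, 1]).
Recogniser = Callable[[object], Tuple[str, float]]

FEEDBACK_THRESHOLD = 0.8  # assumed confidence cutoff for "correct enough"


def education_round(recognise: Recogniser,
                    attempts: List[Tuple[object, str]],
                    training_pool: List[Tuple[object, str]]) -> None:
    """Give feedback to learners and harvest new training data in one round."""
    for clip, target_word in attempts:
        predicted, confidence = recognise(clip)
        if predicted != target_word or confidence < FEEDBACK_THRESHOLD:
            # Education side: point the learner at the failing word.
            print(f"Practice again: expected {target_word!r}, "
                  f"recognised {predicted!r} ({confidence:.2f})")
        else:
            # Recognition side: keep the verified attempt for retraining.
            training_pool.append((clip, target_word))


if __name__ == "__main__":
    # Dummy recogniser standing in for a trained ST-Net.
    def recognise(clip: object) -> Tuple[str, float]:
        return ("thank you", 0.9)

    pool: List[Tuple[object, str]] = []
    education_round(recognise,
                    [("clip-001", "thank you"), ("clip-002", "hello")],
                    pool)
    print(f"{len(pool)} new training samples collected")
```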