Article: Sign Language Recognition Based on R(2+1)D with Spatial-Temporal-Channel Attention

Title: Sign Language Recognition Based on R(2+1)D with Spatial-Temporal-Channel Attention
Authors: Han, Xiangzu; Lu, Fei; Yin, Jianqin; Tian, Guohui; Liu, Jun
Keywords: Attention mechanism; R(2+1)D; sign language recognition (SLR)
Issue Date: 2022
Citation: IEEE Transactions on Human-Machine Systems, 2022, v. 52, n. 4, p. 687-698
Abstract: Previous work utilized three-dimensional (3-D) convolutional neural networks (CNNs) to model the spatial appearance and temporal evolution concurrently for sign language recognition (SLR) and exhibited impressive performance. However, there are still challenges for 3-D CNN-based methods. First, motion information plays a more significant role than spatial content in sign language. Therefore, it is still questionable whether to treat space and time equally and model them jointly by heavy 3-D convolutions in a unified approach. Second, because of the interference from the highly redundant information in sign videos, it is still nontrivial to effectively extract discriminative spatiotemporal features related to sign language. In this study, deep R(2+1)D was adopted for separate spatial and temporal modeling, and it was demonstrated that decomposing 3-D convolution filters into independent spatial and temporal convolutions facilitates the optimization process in SLR. A lightweight spatial-temporal-channel attention module, comprising two submodules called channel-temporal attention and spatial-temporal attention, was proposed to make the network concentrate on the significant information along the spatial, temporal, and channel dimensions by combining squeeze-and-excitation attention with self-attention. By embedding this module into R(2+1)D, superior or comparable results to the state-of-the-art methods were obtained on the CSL-500, Jester, and EgoGesture datasets, which demonstrated the effectiveness of the proposed method.
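The factorization the abstract refers to can be illustrated with a short sketch. An R(2+1)D block replaces each t x d x d 3-D convolution with a 1 x d x d spatial convolution into M intermediate channels followed by a t x 1 x 1 temporal convolution, where M is chosen so the factorized pair keeps roughly the same parameter budget as the original 3-D filter. The function names below are illustrative, and the channel-matching rule is the one from the original R(2+1)D formulation (Tran et al., CVPR 2018) on which this paper's backbone is based, not code from the paper itself.

```python
# Sketch of the (2+1)D parameter-matching rule (illustrative names).
# A t x d x d 3-D kernel mapping n_in -> n_out channels is split into:
#   spatial:  1 x d x d, n_in -> M channels
#   temporal: t x 1 x 1, M   -> n_out channels

def matched_mid_channels(t, d, n_in, n_out):
    """Intermediate channel count M that matches the 3-D parameter count."""
    return (t * d * d * n_in * n_out) // (d * d * n_in + t * n_out)

def param_counts(t, d, n_in, n_out):
    """Return (3-D params, factorized (2+1)D params, M)."""
    m = matched_mid_channels(t, d, n_in, n_out)
    params_3d = t * d * d * n_in * n_out              # full 3-D kernel
    params_2p1d = d * d * n_in * m + t * m * n_out    # spatial + temporal
    return params_3d, params_2p1d, m

p3d, p21d, m = param_counts(t=3, d=3, n_in=64, n_out=64)
print(p3d, p21d, m)  # → 110592 110592 144
```

With the budget held constant, the factorized block doubles the number of nonlinearities per 3-D convolution and, per the abstract, makes the optimization easier for SLR.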
Persistent Identifier: http://hdl.handle.net/10722/349686
ISSN: 2168-2291
2023 Impact Factor: 3.5
2023 SCImago Journal Rankings: 1.139


DC Field | Value | Language
dc.contributor.authorHan, Xiangzu-
dc.contributor.authorLu, Fei-
dc.contributor.authorYin, Jianqin-
dc.contributor.authorTian, Guohui-
dc.contributor.authorLiu, Jun-
dc.date.accessioned2024-10-17T07:00:08Z-
dc.date.available2024-10-17T07:00:08Z-
dc.date.issued2022-
dc.identifier.citationIEEE Transactions on Human-Machine Systems, 2022, v. 52, n. 4, p. 687-698-
dc.identifier.issn2168-2291-
dc.identifier.urihttp://hdl.handle.net/10722/349686-
dc.description.abstractPrevious work utilized three-dimensional (3-D) convolutional neural networks (CNNs) to model the spatial appearance and temporal evolution concurrently for sign language recognition (SLR) and exhibited impressive performance. However, there are still challenges for 3-D CNN-based methods. First, motion information plays a more significant role than spatial content in sign language. Therefore, it is still questionable whether to treat space and time equally and model them jointly by heavy 3-D convolutions in a unified approach. Second, because of the interference from the highly redundant information in sign videos, it is still nontrivial to effectively extract discriminative spatiotemporal features related to sign language. In this study, deep R(2+1)D was adopted for separate spatial and temporal modeling, and it was demonstrated that decomposing 3-D convolution filters into independent spatial and temporal convolutions facilitates the optimization process in SLR. A lightweight spatial-temporal-channel attention module, comprising two submodules called channel-temporal attention and spatial-temporal attention, was proposed to make the network concentrate on the significant information along the spatial, temporal, and channel dimensions by combining squeeze-and-excitation attention with self-attention. By embedding this module into R(2+1)D, superior or comparable results to the state-of-the-art methods were obtained on the CSL-500, Jester, and EgoGesture datasets, which demonstrated the effectiveness of the proposed method.-
dc.languageeng-
dc.relation.ispartofIEEE Transactions on Human-Machine Systems-
dc.subjectAttention mechanism-
dc.subjectR(2+1)D-
dc.subjectsign language recognition (SLR)-
dc.titleSign Language Recognition Based on R(2+1)D with Spatial-Temporal-Channel Attention-
dc.typeArticle-
dc.description.naturelink_to_subscribed_fulltext-
dc.identifier.doi10.1109/THMS.2022.3144000-
dc.identifier.scopuseid_2-s2.0-85124184340-
dc.identifier.volume52-
dc.identifier.issue4-
dc.identifier.spage687-
dc.identifier.epage698-
dc.identifier.eissn2168-2305-
