Conference Paper: Segmental recurrent neural networks
Title | Segmental recurrent neural networks |
---|---|
Authors | Kong, Lingpeng; Dyer, Chris; Smith, Noah A. |
Issue Date | 2016 |
Citation | 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, 2016 |
Abstract | © ICLR 2016: San Juan, Puerto Rico. All Rights Reserved. We introduce segmental recurrent neural networks (SRNNs) which define, given an input sequence, a joint probability distribution over segmentations of the input and labelings of the segments. Representations of the input segments (i.e., contiguous subsequences of the input) are computed by encoding their constituent tokens using bidirectional recurrent neural nets, and these “segment embeddings” are used to define compatibility scores with output labels. These local compatibility scores are integrated using a global semi-Markov conditional random field. Both fully supervised training—in which segment boundaries and labels are observed—as well as partially supervised training—in which segment boundaries are latent—are straightforward. Experiments on handwriting recognition and joint Chinese word segmentation/POS tagging show that, compared to models that do not explicitly represent segments such as BIO tagging schemes and connectionist temporal classification (CTC), SRNNs obtain substantially higher accuracies. |
Persistent Identifier | http://hdl.handle.net/10722/296009 |
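The core computation the abstract describes — integrating local segment/label compatibility scores into a global distribution over all segmentations and labelings — is a semi-Markov dynamic program. A minimal sketch of its forward (log-partition) recursion is below; the `score(i, j, y)` function stands in for the paper's BiRNN segment-embedding compatibility score and is purely illustrative, as are the function and argument names.

```python
import math

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == float("-inf"):
        return b
    if b == float("-inf"):
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def forward_log_partition(score, n, labels, max_len):
    """Semi-Markov forward algorithm.

    alpha[j] accumulates the log-sum of scores of every way to
    segment and label the prefix x[0:j]; each segment is a span
    (i, j] of length at most max_len with a single label y, scored
    by score(i, j, y).
    """
    alpha = [float("-inf")] * (n + 1)
    alpha[0] = 0.0  # empty prefix has one (empty) segmentation
    for j in range(1, n + 1):
        # last segment covers positions (i, j]
        for i in range(max(0, j - max_len), j):
            for y in labels:
                alpha[j] = logaddexp(alpha[j], alpha[i] + score(i, j, y))
    return alpha[n]
```

With a constant zero score, the partition function just counts (segmentation, labeling) pairs: for a length-3 input with two labels there are 18 such pairs, so `forward_log_partition(lambda i, j, y: 0.0, 3, [0, 1], 3)` returns `log 18`. Training maximizes the score of the observed segmentation (or, when boundaries are latent, the log-sum over all segmentations consistent with the observed labels) minus this log-partition term.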
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kong, Lingpeng | - |
dc.contributor.author | Dyer, Chris | - |
dc.contributor.author | Smith, Noah A. | - |
dc.date.accessioned | 2021-02-11T04:52:38Z | - |
dc.date.available | 2021-02-11T04:52:38Z | - |
dc.date.issued | 2016 | - |
dc.identifier.citation | 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, 2016 | - |
dc.identifier.uri | http://hdl.handle.net/10722/296009 | - |
dc.description.abstract | © ICLR 2016: San Juan, Puerto Rico. All Rights Reserved. We introduce segmental recurrent neural networks (SRNNs) which define, given an input sequence, a joint probability distribution over segmentations of the input and labelings of the segments. Representations of the input segments (i.e., contiguous subsequences of the input) are computed by encoding their constituent tokens using bidirectional recurrent neural nets, and these “segment embeddings” are used to define compatibility scores with output labels. These local compatibility scores are integrated using a global semi-Markov conditional random field. Both fully supervised training—in which segment boundaries and labels are observed—as well as partially supervised training—in which segment boundaries are latent—are straightforward. Experiments on handwriting recognition and joint Chinese word segmentation/POS tagging show that, compared to models that do not explicitly represent segments such as BIO tagging schemes and connectionist temporal classification (CTC), SRNNs obtain substantially higher accuracies. | - |
dc.language | eng | - |
dc.relation.ispartof | 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings | - |
dc.title | Segmental recurrent neural networks | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85083953994 | - |