Conference Paper: Segmental recurrent neural networks for end-to-end speech recognition
Field | Value
---|---
Title | Segmental recurrent neural networks for end-to-end speech recognition
Authors | Lu, Liang; Kong, Lingpeng; Dyer, Chris; Smith, Noah A.; Renals, Steve
Keywords | End-to-end speech recognition; Segmental CRF; Recurrent neural networks
Issue Date | 2016
Citation | INTERSPEECH 2016, San Francisco, CA, 8-12 September 2016. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016), 2016, p. 385-389
Abstract | Copyright © 2016 ISCA. We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained together with the segmental CRF. Essentially, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as the method to speed up the training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved 17.3% phone error rate (PER) from first-pass decoding, the best reported result using CRFs, despite using only a zeroth-order CRF and no language model.
Persistent Identifier | http://hdl.handle.net/10722/296137
ISSN | 2308-457X (2020 SCImago Journal Rankings: 0.689)
ISI Accession Number ID | WOS:000409394400082
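The abstract is the only technical content in this record, but it describes the model concretely enough for a brief reconstruction. Below is a minimal sketch of the zeroth-order segmental CRF objective as we read it from the abstract; the notation (acoustics X, labels y, segmentation E, segment score f, partition function Z) is ours rather than quoted from the paper:

```latex
% Joint model over a label sequence y = (y_1, ..., y_J) and a
% segmentation E = (e_1, ..., e_J), e_j = <s_j, t_j>, of the acoustic
% frames X = (x_1, ..., x_T). Zeroth-order means each potential
% depends on a single label, not on its neighbours.
P(y, E \mid X) = \frac{1}{Z(X)} \prod_{j=1}^{J} \exp f(y_j, e_j, X),
\qquad
Z(X) = \sum_{y', E'} \prod_{j} \exp f(y'_j, e'_j, X).

% The segmentation is unobserved at training time, so the loss
% marginalises it out, as the abstract states:
\mathcal{L} = -\log P(y \mid X) = -\log \sum_{E} P(y, E \mid X).
```

Both sums factorise over segment boundaries, so they can be computed by forward dynamic programming; bounding the segment duration by L cuts the cost from O(T^2) to O(TL) per utterance, which is the usual way to make such training tractable. The following is a log-space sketch of that recursion, assuming the segment log-potentials have already been computed (in this model they come from an RNN run over the frames of each segment). `srnn_loss`, `log_phi`, and `max_seg_len` are illustrative names, not identifiers from the paper or its code:

```python
import numpy as np
from scipy.special import logsumexp

def srnn_loss(log_phi, labels, max_seg_len):
    """Negative log-likelihood of a zeroth-order segmental CRF.

    log_phi[s, t, k] : log-potential of a segment covering frames
                       s..t (inclusive, 0-indexed) with label k;
                       assumed precomputed from the RNN encoder.
    labels           : reference label sequence y_1..y_J (ints).
    max_seg_len      : maximum segment duration L.
    """
    T, _, K = log_phi.shape
    J = len(labels)

    # Partition function Z(X): alpha[t] accumulates, in log space, all
    # (label, segmentation) pairs that exactly cover frames 0..t-1.
    alpha = np.full(T + 1, -np.inf)
    alpha[0] = 0.0
    for t in range(1, T + 1):
        terms = []
        for s in range(max(0, t - max_seg_len), t):
            # last segment covers frames s..t-1 with any of the K labels
            terms.append(alpha[s] + logsumexp(log_phi[s, t - 1, :]))
        alpha[t] = logsumexp(terms)

    # Numerator: marginalise only the segmentations consistent with the
    # reference labels. gamma[j, t] sums segmentations of frames 0..t-1
    # into exactly the first j reference segments.
    gamma = np.full((J + 1, T + 1), -np.inf)
    gamma[0, 0] = 0.0
    for j in range(1, J + 1):
        for t in range(j, T + 1):
            terms = [gamma[j - 1, s] + log_phi[s, t - 1, labels[j - 1]]
                     for s in range(max(j - 1, t - max_seg_len), t)]
            gamma[j, t] = logsumexp(terms)

    # -log P(y | X) = log Z(X) - log sum over consistent segmentations
    return alpha[T] - gamma[J, T]
```

Decoding follows the same recursion with each logsumexp replaced by a max (plus back-pointers), recovering the best labelling and segmentation jointly.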
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lu, Liang | - |
dc.contributor.author | Kong, Lingpeng | - |
dc.contributor.author | Dyer, Chris | - |
dc.contributor.author | Smith, Noah A. | - |
dc.contributor.author | Renals, Steve | - |
dc.date.accessioned | 2021-02-11T04:52:55Z | - |
dc.date.available | 2021-02-11T04:52:55Z | - |
dc.date.issued | 2016 | - |
dc.identifier.citation | INTERSPEECH 2016, San Francisco, CA, 8-12 September 2016. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016), 2016, p. 385-389 | - |
dc.identifier.issn | 2308-457X | - |
dc.identifier.uri | http://hdl.handle.net/10722/296137 | - |
dc.description.abstract | Copyright © 2016 ISCA. We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained together with the segmental CRF. Essentially, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as the method to speed up the training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved 17.3% phone error rate (PER) from first-pass decoding, the best reported result using CRFs, despite using only a zeroth-order CRF and no language model. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016) | - |
dc.subject | End-to-end speech recognition | - |
dc.subject | Segmental CRF | - |
dc.subject | Recurrent neural networks | - |
dc.title | Segmental recurrent neural networks for end-to-end speech recognition | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.doi | 10.21437/Interspeech.2016-40 | - |
dc.identifier.scopus | eid_2-s2.0-84994242299 | - |
dc.identifier.spage | 385 | - |
dc.identifier.epage | 389 | - |
dc.identifier.eissn | 1990-9772 | - |
dc.identifier.isi | WOS:000409394400082 | - |