Conference Paper: Testing autotrace

Title: Testing autotrace
Authors: Hahn-Powell, GV; Archangeli, D
Issue Date: 2014
Publisher: Acoustical Society of America. The Journal's web site is located at http://asa.aip.org/jasa.html
Citation: The 2014 Fall Meeting of the Acoustical Society of America, Indianapolis, IN, 27–31 October 2014. In Journal of the Acoustical Society of America, 2014, v. 136, n. 4, p. 2082
Abstract: While ultrasound provides a remarkable tool for tracking the tongue's movements during speech, it has yet to emerge as the powerful research tool it could be. A major roadblock is that appropriately labeling images is a laborious, time-intensive undertaking. In earlier work, Fasel and Berry (2010) introduced a 'translational' deep belief network (tDBN) approach to automated labeling of ultrasound images of the tongue, and tested it against a single-speaker set of 3209 images. This study tests the same methodology against a much larger data set (about 40,000 images), using data collected for different studies with multiple speakers and multiple languages. Retraining a "generic" network with a small set of the most erroneously labeled images from language-specific development sets resulted in an almost three-fold increase in precision in the three test cases examined. © 2014 Acoustical Society of America
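The retraining step the abstract describes, i.e. selecting the most erroneously labeled images from a development set to fine-tune a generic network, could be sketched as below. This is an illustrative sketch only: the error metric (per-image mean squared deviation between automatic and hand-traced contours) and the function name are assumptions, not details taken from the paper.

```python
import numpy as np

def select_worst_labeled(pred_contours, gold_contours, k):
    """Return indices of the k images whose automatically traced contours
    deviate most from the hand-traced gold contours, worst first.

    Both arrays have shape (n_images, n_points, 2): one (x, y) contour
    per image. Error is the per-image mean squared point deviation."""
    errors = np.mean((pred_contours - gold_contours) ** 2, axis=(1, 2))
    # argsort is ascending; take the k largest errors and reverse
    # so the worst-labeled image comes first
    return np.argsort(errors)[-k:][::-1]

# Toy example: 10 images, contours of 32 (x, y) points each
rng = np.random.default_rng(0)
gold = rng.normal(size=(10, 32, 2))
pred = gold.copy()
pred[3] += 5.0  # image 3 is badly mislabeled
pred[7] += 2.0  # image 7 is moderately mislabeled

worst = select_worst_labeled(pred, gold, k=2)
print(worst)  # [3 7]
```

The images picked out this way would then be hand-corrected and added to the training set before retraining, which is the loop the abstract credits with the near three-fold precision gain.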
Persistent Identifier: http://hdl.handle.net/10722/211047
ISSN: 0001-4966
2015 Impact Factor: 1.572
2015 SCImago Journal Rankings: 0.938


DC Field: Value
dc.contributor.author: Hahn-Powell, GV
dc.contributor.author: Archangeli, D
dc.date.accessioned: 2015-07-03T08:55:52Z
dc.date.available: 2015-07-03T08:55:52Z
dc.date.issued: 2014
dc.identifier.citation: The 2014 Fall Meeting of the Acoustical Society of America, Indianapolis, IN, 27–31 October 2014. In Journal of the Acoustical Society of America, 2014, v. 136, n. 4, p. 2082
dc.identifier.issn: 0001-4966
dc.identifier.uri: http://hdl.handle.net/10722/211047
dc.description.abstract: While ultrasound provides a remarkable tool for tracking the tongue's movements during speech, it has yet to emerge as the powerful research tool it could be. A major roadblock is that appropriately labeling images is a laborious, time-intensive undertaking. In earlier work, Fasel and Berry (2010) introduced a 'translational' deep belief network (tDBN) approach to automated labeling of ultrasound images of the tongue, and tested it against a single-speaker set of 3209 images. This study tests the same methodology against a much larger data set (about 40,000 images), using data collected for different studies with multiple speakers and multiple languages. Retraining a "generic" network with a small set of the most erroneously labeled images from language-specific development sets resulted in an almost three-fold increase in precision in the three test cases examined. © 2014 Acoustical Society of America
dc.language: eng
dc.publisher: Acoustical Society of America. The Journal's web site is located at http://asa.aip.org/jasa.html
dc.relation.ispartof: Journal of the Acoustical Society of America
dc.rights: Journal of the Acoustical Society of America. Copyright © Acoustical Society of America.
dc.title: Testing autotrace
dc.type: Conference_Paper
dc.identifier.email: Archangeli, D: darchang@hku.hk
dc.identifier.authority: Archangeli, D=rp01748
dc.identifier.doi: 10.1121/1.4899478
dc.identifier.hkuros: 244581
dc.identifier.volume: 136
dc.identifier.issue: 4
dc.identifier.spage: 2082
dc.identifier.epage: 2082
dc.publisher.place: United States
