File Download: There are no files associated with this item.
Links for fulltext (may require subscription)
Article: Knowledge-guided inference for voice-enabled CAD

Title: Knowledge-guided inference for voice-enabled CAD
Authors: Kou, XY; Xue, SK; Tan, ST
Keywords: Knowledge; Semantic Inference; Speech; Synonym Expansion; Template Matching; Voice; Voice-Enabled CAD
Issue Date: 2010
Publisher: Elsevier Ltd. The Journal's web site is located at http://www.elsevier.com/locate/cad
Citation: CAD Computer Aided Design, 2010, v. 42, n. 6, p. 545-557
Abstract: Voice-based human-computer interaction has attracted much interest and found various applications. Many existing voice-based interfaces support only commands with fixed vocabularies or preset expressions. This paper investigates an approach to implementing a flexible voice-enabled CAD system in which users are no longer constrained by predefined commands: designers can communicate with CAD modelers far more flexibly, using natural language conversations. To accomplish this, a knowledge-guided approach is proposed to infer the semantics of voice input. The semantic inference is formulated as a template matching problem, where the semantic units parsed from voice input are the "samples" to be inspected and the semantic units in the predefined library are the feature templates. The proposed behavioral glosses, together with CAD-specific synonyms, hyponyms and hypernyms, are used extensively in the parsing of semantic units and the subsequent template matching. Using these sources of knowledge, all semantically equivalent expressions can be mapped to the same command set, so the Voice-enabled Computer Aided Design (VeCAD) system can process new expressions it has never encountered and infer their semantics at runtime. Experiments show that this knowledge-guided approach enhances the robustness of semantic inference and can effectively eliminate overestimation and underestimation in design intent interpretation. © 2010 Elsevier Ltd. All rights reserved.
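The synonym-expansion and template-matching scheme outlined in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the synonym table, command templates, and function names below are all invented for demonstration.

```python
# Illustrative sketch of knowledge-guided template matching for a
# voice-enabled CAD front end. The knowledge tables below are
# hypothetical examples, not the VeCAD system's actual library.

# CAD-specific synonym knowledge: each canonical semantic unit maps to
# surface words a designer might say in a natural-language utterance.
SYNONYMS = {
    "create": {"create", "make", "build", "generate", "draw"},
    "delete": {"delete", "remove", "erase"},
    "cylinder": {"cylinder", "tube", "rod"},
    "block": {"block", "box", "cube", "cuboid"},
}

# Command templates: sequences of canonical semantic units, each
# mapped to a modeling command.
TEMPLATES = {
    ("create", "cylinder"): "CMD_CREATE_CYLINDER",
    ("create", "block"): "CMD_CREATE_BLOCK",
    ("delete", "block"): "CMD_DELETE_BLOCK",
}

def canonicalize(word):
    """Map a surface word to its canonical semantic unit, if any."""
    for canonical, surface_forms in SYNONYMS.items():
        if word in surface_forms:
            return canonical
    return None  # filler words ("a", "the", ...) carry no semantics here

def infer_command(utterance):
    """Parse semantic units from the utterance and match them against
    the template library; return the matched command, or None."""
    units = tuple(u for u in (canonicalize(w) for w in utterance.lower().split()) if u)
    return TEMPLATES.get(units)

# Semantically equivalent expressions map to the same command:
print(infer_command("make a tube"))      # CMD_CREATE_CYLINDER
print(infer_command("draw a cylinder"))  # CMD_CREATE_CYLINDER
```

The point of the sketch is the mapping direction: expressions the system has never seen ("make a tube") still resolve to a known command, because matching happens over canonical semantic units rather than raw words.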
Persistent Identifier: http://hdl.handle.net/10722/157066
ISSN: 0010-4485
2023 Impact Factor: 3.0
2023 SCImago Journal Rankings: 0.791
ISI Accession Number ID: WOS:000278174600007
Funding Agency | Grant Number
University of Hong Kong | 200707176061
Research Grants Council of Hong Kong SAR government | 716808E
Funding Information:

We would like to thank the Department of Mechanical Engineering and the Committee on Research and Conference Grants of the University of Hong Kong for supporting this project (Project No. 200707176061). Thanks also go to the Research Grants Council of Hong Kong SAR government for supporting the project (Project Code: 716808E).

DC Field: Value [Language]
dc.contributor.author: Kou, XY [en_US]
dc.contributor.author: Xue, SK [en_US]
dc.contributor.author: Tan, ST [en_US]
dc.date.accessioned: 2012-08-08T08:45:11Z
dc.date.available: 2012-08-08T08:45:11Z
dc.date.issued: 2010 [en_US]
dc.identifier.citation: Cad Computer Aided Design, 2010, v. 42 n. 6, p. 545-557 [en_US]
dc.identifier.issn: 0010-4485 [en_US]
dc.identifier.uri: http://hdl.handle.net/10722/157066
dc.description.abstract: Voice based human-computer interactions have raised much interest and found various applications. Some extant voice based interactions only support voice commands with fixed vocabularies or preset expressions. This paper is motivated to investigate an approach to implement a flexible voice-enabled CAD system, where users are no longer constrained by predefined commands. Designers can, to a much more flexible degree, communicate with CAD modelers using natural language conversations. To accomplish this, a knowledge-guided approach is proposed to infer the semantics of voice input. The semantic inference is formulated as a template matching problem, where the semantic units parsed from voice input are the "samples" to be inspected and the semantic units in the predefined library are the feature templates. The proposed behavioral glosses, together with CAD-specific synonyms, hyponyms and hypernyms are extensively used in the parsing of semantic units and the subsequent template matching. Using such sources of knowledge, all the semantically equivalent expressions can be mapped to the same command set, and the Voice-enabled Computer Aided Design (VeCAD) system is then capable of processing new expressions it has never encountered and inferring/understanding the semantics at runtime. Experiments show that this knowledge-guided approach is helpful to enhance the robustness of semantic inference and can effectively eliminate the chance of overestimations and underestimations in design intent interpretation. © 2010 Elsevier Ltd. All rights reserved. [en_US]
dc.language: eng [en_US]
dc.publisher: Elsevier Ltd. The Journal's web site is located at http://www.elsevier.com/locate/cad [en_US]
dc.relation.ispartof: CAD Computer Aided Design [en_US]
dc.subject: Knowledge [en_US]
dc.subject: Semantic Inference [en_US]
dc.subject: Speech [en_US]
dc.subject: Synonym Expansion [en_US]
dc.subject: Template Matching [en_US]
dc.subject: Voice [en_US]
dc.subject: Voice-Enabled Cad [en_US]
dc.title: Knowledge-guided inference for voice-enabled CAD [en_US]
dc.type: Article [en_US]
dc.identifier.email: Xue, SK: anx@pdx.edu [en_US]
dc.identifier.email: Tan, ST: sttan@hkucc.hku.hk [en_US]
dc.identifier.authority: Xue, SK=rp00977 [en_US]
dc.identifier.authority: Tan, ST=rp00174 [en_US]
dc.description.nature: link_to_subscribed_fulltext [en_US]
dc.identifier.doi: 10.1016/j.cad.2010.02.002 [en_US]
dc.identifier.scopus: eid_2-s2.0-77951114079 [en_US]
dc.identifier.hkuros: 170880
dc.relation.references: http://www.scopus.com/mlt/select.url?eid=2-s2.0-77951114079&selection=ref&src=s&origin=recordpage [en_US]
dc.identifier.volume: 42 [en_US]
dc.identifier.issue: 6 [en_US]
dc.identifier.spage: 545 [en_US]
dc.identifier.epage: 557 [en_US]
dc.identifier.eissn: 1879-2685
dc.identifier.isi: WOS:000278174600007
dc.publisher.place: United Kingdom [en_US]
dc.relation.project: Voice assisted CAD modeling and system design
dc.identifier.scopusauthorid: Kou, XY=7005662507 [en_US]
dc.identifier.scopusauthorid: Xue, SK=7202791296 [en_US]
dc.identifier.scopusauthorid: Tan, ST=7403366758 [en_US]
dc.identifier.issnl: 0010-4485

Export: this record is available via the OAI-PMH interface in XML formats, or in other non-XML formats.
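Records like this one can typically be harvested with a standard OAI-PMH GetRecord request. The sketch below builds such a request URL and extracts a Dublin Core field; note that the endpoint URL and the record identifier are assumptions for illustration (DSpace repositories usually expose an `/oai/request` endpoint), and the parsing runs against a canned sample response rather than a live request.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Hypothetical OAI-PMH endpoint; the real repository endpoint may differ.
BASE_URL = "https://hub.hku.hk/oai/request"

def get_record_url(identifier, metadata_prefix="oai_dc"):
    """Build a standard OAI-PMH GetRecord request URL."""
    query = urlencode({
        "verb": "GetRecord",
        "identifier": identifier,
        "metadataPrefix": metadata_prefix,
    })
    return f"{BASE_URL}?{query}"

# In practice the XML would come from urllib.request.urlopen(url);
# here we parse a minimal canned response instead of going over the network.
SAMPLE_RESPONSE = """<record>
  <metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Knowledge-guided inference for voice-enabled CAD</dc:title>
      <dc:identifier>10.1016/j.cad.2010.02.002</dc:identifier>
    </oai_dc:dc>
  </metadata>
</record>"""

ns = {"dc": "http://purl.org/dc/elements/1.1/"}
title = ET.fromstring(SAMPLE_RESPONSE).find(".//dc:title", ns).text
print(title)  # Knowledge-guided inference for voice-enabled CAD
```

The `verb`, `identifier`, and `metadataPrefix` query parameters are the ones defined by the OAI-PMH specification for GetRecord; only their values here are invented.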