Article: Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input

Title: Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input
Authors: Tan, Frank Lihui; Do, Youngah
Keywords: attention; autoencoder; language learning; neural network modeling; phonotactics
Issue Date: 8-Sep-2025
Publisher: De Gruyter
Citation: Linguistics Vanguard, 2025
Abstract: This paper presents a learning simulation of phonotactics using an attention-based long short-term memory autoencoder trained on raw audio input. Unlike previous models that use abstract phonological representations, the current method imitates early phonotactic acquisition stages by processing continuous acoustic signals. Focusing on an English phonotactic pattern, specifically the distribution of aspirated and unaspirated voiceless stops, the model implicitly acquires phonotactic knowledge through reconstruction tasks. The results demonstrate the model’s ability to acquire essential phonotactic relations through attention mechanisms, exhibiting increased attention to phonological contexts that show higher phonotactic predictability. The learning trajectory begins with a strong reliance on contextual cues to identify phonotactic patterns. Over time, the system internalizes these constraints, leading to a decreased reliance on specific phonotactic cues. This study suggests the feasibility of early phonotactic learning models based on raw auditory input and provides insights into both computational modeling and infants’ phonotactic acquisition.
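The attention mechanism the abstract refers to can be illustrated with a minimal, stdlib-only sketch of generic scaled dot-product attention (an illustrative formulation, not the paper's actual model or code): each context frame is weighted by its similarity to a query frame, so frames that are more predictive of the query receive larger attention weights, mirroring the increased attention to predictable phonological contexts described above.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot_attention(query, keys, values):
    """Scaled dot-product attention: score each context frame by its
    similarity to the query, normalize the scores into weights, and
    return the weights plus the weighted average of the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return weights, context

# Toy "acoustic frames" (hypothetical 2-d features): the second key
# aligns with the query, so attention concentrates on that frame.
query = [1.0, 0.0]
keys = [[0.0, 1.0], [1.0, 0.0], [0.0, -1.0]]
values = [[1.0], [2.0], [3.0]]
weights, context = dot_attention(query, keys, values)
```

In the paper's setting, inspecting such weights over training is what reveals the learning trajectory: early on, high weight falls on the cue-bearing context; later, the reliance on those cues decreases.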
Persistent Identifier: http://hdl.handle.net/10722/363802
ISSN: 2199-174X
2023 Impact Factor: 1.1
2023 SCImago Journal Rankings: 0.572

 

DC Field: Value
dc.contributor.author: Tan, Frank Lihui
dc.contributor.author: Do, Youngah
dc.date.accessioned: 2025-10-12T00:30:12Z
dc.date.available: 2025-10-12T00:30:12Z
dc.date.issued: 2025-09-08
dc.identifier.citation: Linguistics Vanguard, 2025
dc.identifier.issn: 2199-174X
dc.identifier.uri: http://hdl.handle.net/10722/363802
dc.description.abstract: This paper presents a learning simulation of phonotactics using an attention-based long short-term memory autoencoder trained on raw audio input. Unlike previous models that use abstract phonological representations, the current method imitates early phonotactic acquisition stages by processing continuous acoustic signals. Focusing on an English phonotactic pattern, specifically the distribution of aspirated and unaspirated voiceless stops, the model implicitly acquires phonotactic knowledge through reconstruction tasks. The results demonstrate the model’s ability to acquire essential phonotactic relations through attention mechanisms, exhibiting increased attention to phonological context which shows higher phonotactic predictability. The learning trajectory begins with a strong reliance on contextual cues to identify phonotactic patterns. Over time, the system internalizes these constraints, leading to a decreased reliance on specific phonotactic cues. This study suggests the feasibility of early phonotactic learning models based on raw auditory input and provides insights into both computational modeling and infants’ phonotactic acquisition.
dc.language: eng
dc.publisher: De Gruyter
dc.relation.ispartof: Linguistics Vanguard
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: attention
dc.subject: autoencoder
dc.subject: language learning
dc.subject: neural network modeling
dc.subject: phonotactics
dc.title: Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input
dc.type: Article
dc.identifier.doi: 10.1515/lingvan-2024-0210
dc.identifier.scopus: eid_2-s2.0-105016363495
dc.identifier.eissn: 2199-174X
dc.identifier.issnl: 2199-174X
