File Download
There are no files associated with this item.

Links for Fulltext (may require subscription)
- Publisher Website: https://doi.org/10.1515/lingvan-2024-0210
- Scopus: eid_2-s2.0-105016363495

Citations
- Scopus: 0
Article: Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input
| Title | Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input |
|---|---|
| Authors | Tan, Frank Lihui; Do, Youngah |
| Keywords | attention; autoencoder; language learning; neural network modeling; phonotactics |
| Issue Date | 8-Sep-2025 |
| Publisher | De Gruyter |
| Citation | Linguistics Vanguard, 2025 |
| Abstract | This paper presents a learning simulation of phonotactics using an attention-based long short-term memory autoencoder trained on raw audio input. Unlike previous models that use abstract phonological representations, the current method imitates early phonotactic acquisition stages by processing continuous acoustic signals. Focusing on an English phonotactic pattern, specifically the distribution of aspirated and unaspirated voiceless stops, the model implicitly acquires phonotactic knowledge through reconstruction tasks. The results demonstrate the model’s ability to acquire essential phonotactic relations through attention mechanisms, exhibiting increased attention to phonological context which shows higher phonotactic predictability. The learning trajectory begins with a strong reliance on contextual cues to identify phonotactic patterns. Over time, the system internalizes these constraints, leading to a decreased reliance on specific phonotactic cues. This study suggests the feasibility of early phonotactic learning models based on raw auditory input and provides insights into both computational modeling and infants’ phonotactic acquisition. |
| Persistent Identifier | http://hdl.handle.net/10722/363802 |
| ISSN | 2199-174X |
| Journal Metrics | 2023 Impact Factor: 1.1; 2023 SCImago Journal Rankings: 0.572 |
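The abstract describes an attention mechanism whose weights over encoder states are inspected to track what phonological context the model relies on. No code accompanies this record; the following is a minimal NumPy sketch of dot-product attention over encoder hidden states, the general mechanism the abstract names. All function names, dimensions, and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dot_product_attention(query, encoder_states):
    """Return a context vector and attention weights over time steps.

    query:          (d,)   a decoder state at the current step
    encoder_states: (T, d) encoder hidden states for T audio frames
    """
    scores = encoder_states @ query    # (T,) similarity score per frame
    weights = softmax(scores)          # (T,) non-negative, sums to 1
    context = weights @ encoder_states # (d,) weighted summary of frames
    return context, weights

# Toy example: 5 audio frames with 4-dimensional hidden states.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))
q = rng.normal(size=4)
ctx, w = dot_product_attention(q, H)
```

In an encoder-decoder autoencoder of the kind sketched in the abstract, `w` is the quantity one would log per reconstruction step: larger weights on frames of the surrounding phonological context would correspond to the "increased attention" the study reports.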
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Tan, Frank Lihui | - |
| dc.contributor.author | Do, Youngah | - |
| dc.date.accessioned | 2025-10-12T00:30:12Z | - |
| dc.date.available | 2025-10-12T00:30:12Z | - |
| dc.date.issued | 2025-09-08 | - |
| dc.identifier.citation | Linguistics Vanguard, 2025 | - |
| dc.identifier.issn | 2199-174X | - |
| dc.identifier.uri | http://hdl.handle.net/10722/363802 | - |
| dc.description.abstract | This paper presents a learning simulation of phonotactics using an attention-based long short-term memory autoencoder trained on raw audio input. Unlike previous models that use abstract phonological representations, the current method imitates early phonotactic acquisition stages by processing continuous acoustic signals. Focusing on an English phonotactic pattern, specifically the distribution of aspirated and unaspirated voiceless stops, the model implicitly acquires phonotactic knowledge through reconstruction tasks. The results demonstrate the model’s ability to acquire essential phonotactic relations through attention mechanisms, exhibiting increased attention to phonological context which shows higher phonotactic predictability. The learning trajectory begins with a strong reliance on contextual cues to identify phonotactic patterns. Over time, the system internalizes these constraints, leading to a decreased reliance on specific phonotactic cues. This study suggests the feasibility of early phonotactic learning models based on raw auditory input and provides insights into both computational modeling and infants’ phonotactic acquisition. | - |
| dc.language | eng | - |
| dc.publisher | De Gruyter | - |
| dc.relation.ispartof | Linguistics Vanguard | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | attention | - |
| dc.subject | autoencoder | - |
| dc.subject | language learning | - |
| dc.subject | neural network modeling | - |
| dc.subject | phonotactics | - |
| dc.title | Attention-LSTM autoencoder simulation for phonotactic learning from raw audio input | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1515/lingvan-2024-0210 | - |
| dc.identifier.scopus | eid_2-s2.0-105016363495 | - |
| dc.identifier.eissn | 2199-174X | - |
| dc.identifier.issnl | 2199-174X | - |
