File Download
There are no files associated with this item.

Links for fulltext (may require subscription):
- Publisher website: 10.1145/1816123.1816146
- Scopus: eid_2-s2.0-77955115533

Citations:
- Scopus: 0

Appears in Collections:
Conference Paper: Improving mood classification in music digital libraries by combining lyrics and audio
Title | Improving mood classification in music digital libraries by combining lyrics and audio |
---|---|
Authors | Hu, X; Downie, JS |
Keywords | Experimentation; Measurement; Performance; Access points; Classification accuracy; Data sets; Fusion methods; Learning curves; Music digital libraries; Online music; Sentiment analysis; Text feature; Training sample; Audio acoustics; Audio systems; Experiments; Metadata; Text processing; Digital libraries |
Issue Date | 2010 |
Citation | The 10th Annual Joint Conference on Digital Libraries (JCDL2010), Gold Coast, Australia, 21-25 June 2010. In Proceedings of the ACM International Conference on Digital Libraries, 2010, p. 159-168 |
Abstract | Mood is an emerging metadata type and access point in music digital libraries (MDL) and online music repositories. In this study, we present a comprehensive investigation of the usefulness of lyrics in music mood classification by evaluating and comparing a wide range of lyric text features including linguistic and text stylistic features. We then combine the best lyric features with features extracted from music audio using two fusion methods. The results show that combining lyrics and audio significantly outperformed systems using audio-only features. In addition, the examination of learning curves shows that the hybrid lyric + audio system needed fewer training samples to achieve the same or better classification accuracies than systems using lyrics or audio singularly. These experiments were conducted on a unique large-scale dataset of 5,296 songs (with both audio and lyrics for each) representing 18 mood categories derived from social tags. The findings push forward the state-of-the-art on lyric sentiment analysis and automatic music mood classification and will help make mood a practical access point in music digital libraries. © 2010 ACM. |
Persistent Identifier | http://hdl.handle.net/10722/180711 |
ISBN | 9781450300858 |
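
The abstract describes combining lyric and audio features with two fusion methods, but this record does not name them. As a purely illustrative sketch, one common scheme in this line of work is late fusion: train separate lyric-only and audio-only mood classifiers, then blend their per-category probabilities. The function names, weights, and toy scores below are hypothetical, not taken from the paper.

```python
# Hypothetical late-fusion sketch for lyric + audio mood classification.
# Each modality-specific classifier is assumed to output a dict mapping
# mood categories (the paper uses 18, derived from social tags) to scores.

def late_fusion(lyric_probs, audio_probs, weight=0.5):
    """Blend two per-mood probability dicts; `weight` favours the lyric model."""
    moods = set(lyric_probs) | set(audio_probs)
    return {m: weight * lyric_probs.get(m, 0.0)
               + (1.0 - weight) * audio_probs.get(m, 0.0)
            for m in moods}

def predict_mood(lyric_probs, audio_probs, weight=0.5):
    """Return the mood category with the highest fused score."""
    fused = late_fusion(lyric_probs, audio_probs, weight)
    return max(fused, key=fused.get)

# Toy scores for one song over three illustrative mood categories.
lyric = {"calm": 0.2, "sad": 0.5, "aggressive": 0.3}
audio = {"calm": 0.7, "sad": 0.3, "aggressive": 0.0}
print(predict_mood(lyric, audio))  # the audio model pulls the decision to "calm"
```

The weight parameter makes the relative trust in each modality explicit; the abstract's finding that the hybrid system needs fewer training samples than either single-modality system is an empirical result of the paper, not something this sketch demonstrates.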
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hu, X | en_US |
dc.contributor.author | Downie, JS | en_US |
dc.date.accessioned | 2013-01-28T01:41:32Z | - |
dc.date.available | 2013-01-28T01:41:32Z | - |
dc.date.issued | 2010 | en_US |
dc.identifier.citation | The 10th Annual Joint Conference on Digital Libraries (JCDL2010), Gold Coast, Australia, 21-25 June 2010. In Proceedings of the ACM International Conference on Digital Libraries, 2010, p. 159-168 | en_US |
dc.identifier.isbn | 9781450300858 | en_US |
dc.identifier.uri | http://hdl.handle.net/10722/180711 | - |
dc.description.abstract | Mood is an emerging metadata type and access point in music digital libraries (MDL) and online music repositories. In this study, we present a comprehensive investigation of the usefulness of lyrics in music mood classification by evaluating and comparing a wide range of lyric text features including linguistic and text stylistic features. We then combine the best lyric features with features extracted from music audio using two fusion methods. The results show that combining lyrics and audio significantly outperformed systems using audio-only features. In addition, the examination of learning curves shows that the hybrid lyric + audio system needed fewer training samples to achieve the same or better classification accuracies than systems using lyrics or audio singularly. These experiments were conducted on a unique large-scale dataset of 5,296 songs (with both audio and lyrics for each) representing 18 mood categories derived from social tags. The findings push forward the state-of-the-art on lyric sentiment analysis and automatic music mood classification and will help make mood a practical access point in music digital libraries. © 2010 ACM. | en_US |
dc.language | eng | en_US |
dc.relation.ispartof | Proceedings of the ACM International Conference on Digital Libraries | - |
dc.subject | Experimentation | en_US |
dc.subject | Measurement | en_US |
dc.subject | Performance | en_US |
dc.subject | Access points | en_US |
dc.subject | Classification accuracy | en_US |
dc.subject | Data sets | en_US |
dc.subject | Fusion methods | en_US |
dc.subject | Learning curves | en_US |
dc.subject | Music digital libraries | en_US |
dc.subject | Online music | en_US |
dc.subject | Sentiment analysis | en_US |
dc.subject | Text feature | en_US |
dc.subject | Training sample | en_US |
dc.subject | Audio acoustics | en_US |
dc.subject | Audio systems | en_US |
dc.subject | Experiments | en_US |
dc.subject | Metadata | en_US |
dc.subject | Text processing | en_US |
dc.subject | Digital libraries | en_US |
dc.title | Improving mood classification in music digital libraries by combining lyrics and audio | en_US |
dc.type | Conference_Paper | en_US |
dc.identifier.email | Hu, X: xiaoxhu@hku.hk | en_US |
dc.identifier.authority | Hu, X=rp01711 | en_US |
dc.description.nature | link_to_subscribed_fulltext | en_US |
dc.identifier.doi | 10.1145/1816123.1816146 | en_US |
dc.identifier.scopus | eid_2-s2.0-77955115533 | - |
dc.identifier.spage | 159 | en_US |
dc.identifier.epage | 168 | en_US |
dc.customcontrol.immutable | sml 160129 - amend | - |