Article: Music emotion recognition: Toward new, robust standards in personalized and context-sensitive applications

Title: Music emotion recognition: Toward new, robust standards in personalized and context-sensitive applications
Authors: Gómez-Cañón, JS; Cano, E; Eerola, T; Herrera, P; Hu, X; Yang, YH; Gómez, E
Issue Date: 2021
Publisher: IEEE
Citation: IEEE Signal Processing Magazine, 2021, v. 38 n. 6, p. 106-114
Abstract: Emotion is one of the main reasons why people engage and interact with music [1]. Songs can express our inner feelings, produce goosebumps, bring us to tears, share an emotional state with a composer or performer, or trigger specific memories. Interest in a deeper understanding of the relationship between music and emotion has motivated researchers from various areas of knowledge for decades [2], including computational researchers. Imagine an algorithm capable of predicting the emotions that a listener perceives in a musical piece, or one that dynamically generates music that adapts to the mood of a conversation in a film—a particularly fascinating and provocative idea. These algorithms typify music emotion recognition (MER), a computational task that attempts to automatically recognize either the emotional content in music or the emotions induced by music in the listener [3]. To do so, emotionally relevant features are extracted from music. The features are processed, evaluated, and then associated with certain emotions. MER is one of the most challenging high-level music description problems in music information retrieval (MIR), an interdisciplinary research field that focuses on the development of computational systems to help humans better understand music collections. MIR integrates concepts and methodologies from several disciplines, including music theory, music psychology, neuroscience, signal processing, and machine learning.
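The pipeline sketched in the abstract (extract emotionally relevant features from audio, then associate them with emotions) can be illustrated with a minimal toy example. The specific features (RMS energy and zero-crossing rate), the thresholds, and the four-quadrant emotion labels below are illustrative assumptions for this sketch, not the method described in the article:

```python
import numpy as np

def extract_features(signal):
    """Toy feature extraction: RMS energy (loudness proxy) and
    zero-crossing rate (brightness/frequency proxy)."""
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal))) > 0)
    return rms, zcr

def associate_emotion(rms, zcr):
    """Toy rule-based mapping from features to an arousal/valence
    quadrant label; thresholds are arbitrary, for illustration only."""
    arousal = "high" if zcr > 0.05 else "low"
    valence = "positive" if rms > 0.3 else "negative"
    return {("high", "positive"): "happy",
            ("high", "negative"): "angry",
            ("low", "positive"): "calm",
            ("low", "negative"): "sad"}[(arousal, valence)]

# Two synthetic one-second "songs" at a 22.05 kHz sample rate.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
bright = 0.8 * np.sin(2 * np.pi * 880 * t)   # loud, high-pitched tone
mellow = 0.1 * np.sin(2 * np.pi * 110 * t)   # quiet, low-pitched tone

for name, sig in [("bright", bright), ("mellow", mellow)]:
    rms, zcr = extract_features(sig)
    print(name, "->", associate_emotion(rms, zcr))
```

Real MER systems replace the hand-set rules with learned models and use far richer features (spectral, rhythmic, harmonic), but the two-stage structure — feature extraction followed by emotion association — is the same.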
Persistent Identifier: http://hdl.handle.net/10722/305468
ISSN: 1053-5888
2023 Impact Factor: 9.4
2023 SCImago Journal Rankings: 4.896
ISI Accession Number: WOS:000711718500019


DC Field: Value
dc.contributor.author: Gómez-Cañón, JS
dc.contributor.author: Cano, E
dc.contributor.author: Eerola, T
dc.contributor.author: Herrera, P
dc.contributor.author: Hu, X
dc.contributor.author: Yang, YH
dc.contributor.author: Gómez, E
dc.date.accessioned: 2021-10-20T10:09:49Z
dc.date.available: 2021-10-20T10:09:49Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Signal Processing Magazine, 2021, v. 38 n. 6, p. 106-114
dc.identifier.issn: 1053-5888
dc.identifier.uri: http://hdl.handle.net/10722/305468
dc.language: eng
dc.publisher: IEEE
dc.relation.ispartof: IEEE Signal Processing Magazine
dc.title: Music emotion recognition: Toward new, robust standards in personalized and context-sensitive applications
dc.type: Article
dc.identifier.email: Hu, X: xiaoxhu@hku.hk
dc.identifier.authority: Hu, X=rp01711
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/MSP.2021.3106232
dc.identifier.scopus: eid_2-s2.0-85118593358
dc.identifier.hkuros: 327854
dc.identifier.volume: 38
dc.identifier.issue: 6
dc.identifier.spage: 106
dc.identifier.epage: 114
dc.identifier.isi: WOS:000711718500019
dc.publisher.place: United States