Fulltext links (may require subscription):
- Publisher (DOI): 10.3389/fpsyg.2018.00837
- Scopus: 2-s2.0-85047667969
- PMID: 29896144
- Web of Science: WOS:000433394300001
Article: (Dis)agreement on sight-singing assessment of undergraduate musicians

| Field | Value |
|---|---|
| Title | (Dis)agreement on sight-singing assessment of undergraduate musicians |
| Authors | Bortz, Graziela; Germano, Nayana G.; Cogo-Moreira, Hugo |
| Keywords | Sight-singing assessment; Inter-judge validity and reliability; Conservatoire training; Music evaluation; Music performance assessment |
| Issue Date | 2018 |
| Citation | Frontiers in Psychology, 2018, v. 9, article no. 837 |
| Abstract | © 2018 Bortz, Germano and Cogo-Moreira. Assessment criteria for sight-singing abilities are similar to those used to judge music performances across music school programs. However, little evidence of agreement among judges has been provided in the literature. Fifty out of 152 participants were randomly selected and blindly assessed by three judges, who evaluated students based on given criteria. Participants were recorded while sight-singing 19 intervals and 10 tonal melodies. Interjudge agreement on melodic sight-singing was tested considering four items in a five-point Likert scale format as follows: (1) Intonation and pitch accuracy; (2) Tonal sense and memory; (3) Rhythmic precision, regularity of pulse and subdivisions; (4) Fluency and music direction. Intervals were scored on a 3-point Likert scale. Agreement was assessed using weighted kappa. For melodic sight-singing across the ten tonal melodies, the average weighted kappas (κw) were κ1w = 0.296, κ2w = 0.487, κ3w = 0.224, and κ4w = 0.244, ranging from fair to moderate. For intervals, the lowest agreement was kappa = 0.406 and the highest was kappa = 0.792 (average kappa = 0.637). These findings open up the discussion on the validity and reliability of models that have been taken for granted in assessing music performance in auditions and contests, and illustrate the need for better-defined evaluation criteria. |
| Persistent Identifier | http://hdl.handle.net/10722/288572 |
| PubMed Central ID | PMC5987045 |
| ISI Accession Number ID | WOS:000433394300001 |
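The abstract above reports inter-judge agreement via weighted Cohen's kappa on Likert-scale ratings. As an illustrative sketch only, assuming two judges and linear weights (the ratings below are invented for demonstration and are not the study's data), the statistic can be computed like this:

```python
# Linearly weighted Cohen's kappa for two raters on an ordinal scale,
# the agreement statistic named in the abstract. Hypothetical data only.

def weighted_kappa(r1, r2, n_cat=5):
    """Linearly weighted Cohen's kappa for two raters' ratings in 1..n_cat."""
    n = len(r1)
    # Observed disagreement, with linear weights w(i, j) = |i - j| / (n_cat - 1).
    obs = sum(abs(a - b) for a, b in zip(r1, r2)) / (n * (n_cat - 1))
    # Marginal rating distributions of each rater.
    p1 = [sum(1 for a in r1 if a == k) / n for k in range(1, n_cat + 1)]
    p2 = [sum(1 for b in r2 if b == k) / n for k in range(1, n_cat + 1)]
    # Expected disagreement under independence of the two raters.
    exp = sum(p1[i] * p2[j] * abs(i - j) / (n_cat - 1)
              for i in range(n_cat) for j in range(n_cat))
    return 1 - obs / exp

# Two hypothetical judges scoring ten performances on a 5-point scale.
judge_a = [5, 4, 3, 4, 2, 5, 3, 1, 4, 2]
judge_b = [4, 4, 3, 5, 2, 4, 3, 2, 4, 1]
print(round(weighted_kappa(judge_a, judge_b), 3))  # → 0.632
```

By the conventional Landis–Koch labels used in the abstract, a value like 0.632 would fall in the "substantial" band, whereas the study's melodic-item kappas (0.224–0.487) sit in the "fair" to "moderate" bands.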
DC Field | Value | Language
---|---|---
dc.contributor.author | Bortz, Graziela | - |
dc.contributor.author | Germano, Nayana G. | - |
dc.contributor.author | Cogo-Moreira, Hugo | - |
dc.date.accessioned | 2020-10-12T08:05:18Z | - |
dc.date.available | 2020-10-12T08:05:18Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | Frontiers in Psychology, 2018, v. 9, article no. 837 | - |
dc.identifier.uri | http://hdl.handle.net/10722/288572 | - |
dc.description.abstract | © 2018 Bortz, Germano and Cogo-Moreira. Assessment criteria for sight-singing abilities are similar to those used to judge music performances across music school programs. However, little evidence of agreement among judges has been provided in the literature. Fifty out of 152 participants were randomly selected and blindly assessed by three judges, who evaluated students based on given criteria. Participants were recorded while sight-singing 19 intervals and 10 tonal melodies. Interjudge agreement on melodic sight-singing was tested considering four items in a five-point Likert scale format as follows: (1) Intonation and pitch accuracy; (2) Tonal sense and memory; (3) Rhythmic precision, regularity of pulse and subdivisions; (4) Fluency and music direction. Intervals were scored on a 3-point Likert scale. Agreement was assessed using weighted kappa. For melodic sight-singing across the ten tonal melodies, the average weighted kappas (κw) were κ1w = 0.296, κ2w = 0.487, κ3w = 0.224, and κ4w = 0.244, ranging from fair to moderate. For intervals, the lowest agreement was kappa = 0.406 and the highest was kappa = 0.792 (average kappa = 0.637). These findings open up the discussion on the validity and reliability of models that have been taken for granted in assessing music performance in auditions and contests, and illustrate the need for better-defined evaluation criteria. | -
dc.language | eng | - |
dc.relation.ispartof | Frontiers in Psychology | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Sight-singing assessment | - |
dc.subject | Inter-judge validity and reliability | - |
dc.subject | Conservatoire training | - |
dc.subject | Music evaluation | - |
dc.subject | Music performance assessment | - |
dc.title | (Dis)agreement on sight-singing assessment of undergraduate musicians | - |
dc.type | Article | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.doi | 10.3389/fpsyg.2018.00837 | - |
dc.identifier.pmid | 29896144 | - |
dc.identifier.pmcid | PMC5987045 | - |
dc.identifier.scopus | eid_2-s2.0-85047667969 | - |
dc.identifier.volume | 9 | - |
dc.identifier.spage | article no. 837 | - |
dc.identifier.epage | article no. 837 | - |
dc.identifier.eissn | 1664-1078 | - |
dc.identifier.isi | WOS:000433394300001 | - |
dc.identifier.issnl | 1664-1078 | - |