There are no files associated with this item.

Conference Paper: Criteria, standards and judgment practices in assessing performance-based tasks in higher education: opportunities from professional programmes

Title: Criteria, standards and judgment practices in assessing performance-based tasks in higher education: opportunities from professional programmes
Authors: Bridges, S; Botelho, M; Wyatt-Smith, C
Issue Date: 2015
Citation: The 2015 International Conference on Assessment for Learning in Higher Education, The University of Hong Kong, Hong Kong, 14-15 May 2015.
Abstract: Outcomes-based models in higher education recognize the centrality of standards-based assessment in fulfilling the goal of curriculum alignment. This workshop aims to take this mission forward by examining one assessment type: performance-based tasks. We define these tasks as in-the-moment performances by students that may be assessed in real time or video recorded for post-performance assessment. Examples include professional practicum performances, clinical performances in simulated treatments or real patient care, demonstrations of skills, teaching practicums, and oral performances such as moot courts, vivas, dramas and debates. We will first examine the tensions between validity and reliability in performance-based tasks when considering their placement within an overarching, course- or programme-level assessment strategy. Second, in considering in situ assessment of performance-based tasks, the notion of examiner judgment is central. Key to validity and reliability is making such judgments defensible, visible and accessible to students and examiners alike. Articulation of latent expertise and ‘connoisseur’ use of task performance criteria are key to this notion of accessibility. One widely adopted approach is the use of ‘rubric’ formats for the denotation of standards and the explication of task-specific criteria. However, the standard table-format matrix used as a template for assessing tasks holds potential limitations for application and interpretation. ‘Boxing in’ multiple descriptors for a single criterion may constrain views of student performance. Such rubrics have the potential to limit what an assessor ‘sees’ in the act of assessing performances, specifically, what the performance calls the assessor to see that may not have been previously identified in the published criteria. The use of grading intervals, whether pass/fail or A to E, affects interpretation, reliability and the nature of feedback to students.
Likewise, the ability to make ‘on-balance’ judgments may be limited by wholly pre-specified features of quality. The writing of clear yet nuanced descriptors or specifications therefore proves to be a continuing challenge in higher education, especially for performance-based tasks. Various models and approaches will be shared and developed in this workshop. We will also problematize the use of scalar attributes such as ‘excellent’, ‘good’ and ‘unsatisfactory’ in denoting criteria, and explore methods to best capture salient features considered by assessors to be central to task performance across levels.
Description: Pre-Conference Workshop 3
Persistent Identifier: http://hdl.handle.net/10722/213670

 

DC Field / Value
dc.contributor.author: Bridges, S
dc.contributor.author: Botelho, M
dc.contributor.author: Wyatt-Smith, C
dc.date.accessioned: 2015-08-11T04:14:03Z
dc.date.available: 2015-08-11T04:14:03Z
dc.date.issued: 2015
dc.identifier.citation: The 2015 International Conference on Assessment for Learning in Higher Education, The University of Hong Kong, Hong Kong, 14-15 May 2015.
dc.identifier.uri: http://hdl.handle.net/10722/213670
dc.description: Pre-Conference Workshop 3
dc.description.abstract: Outcomes-based models in higher education recognize the centrality of standards-based assessment in fulfilling the goal of curriculum alignment. This workshop aims to take this mission forward by examining one assessment type: performance-based tasks. We define these tasks as in-the-moment performances by students that may be assessed in real time or video recorded for post-performance assessment. Examples include professional practicum performances, clinical performances in simulated treatments or real patient care, demonstrations of skills, teaching practicums, and oral performances such as moot courts, vivas, dramas and debates. We will first examine the tensions between validity and reliability in performance-based tasks when considering their placement within an overarching, course- or programme-level assessment strategy. Second, in considering in situ assessment of performance-based tasks, the notion of examiner judgment is central. Key to validity and reliability is making such judgments defensible, visible and accessible to students and examiners alike. Articulation of latent expertise and ‘connoisseur’ use of task performance criteria are key to this notion of accessibility. One widely adopted approach is the use of ‘rubric’ formats for the denotation of standards and the explication of task-specific criteria. However, the standard table-format matrix used as a template for assessing tasks holds potential limitations for application and interpretation. ‘Boxing in’ multiple descriptors for a single criterion may constrain views of student performance. Such rubrics have the potential to limit what an assessor ‘sees’ in the act of assessing performances, specifically, what the performance calls the assessor to see that may not have been previously identified in the published criteria. The use of grading intervals, whether pass/fail or A to E, affects interpretation, reliability and the nature of feedback to students. Likewise, the ability to make ‘on-balance’ judgments may be limited by wholly pre-specified features of quality. The writing of clear yet nuanced descriptors or specifications therefore proves to be a continuing challenge in higher education, especially for performance-based tasks. Various models and approaches will be shared and developed in this workshop. We will also problematize the use of scalar attributes such as ‘excellent’, ‘good’ and ‘unsatisfactory’ in denoting criteria, and explore methods to best capture salient features considered by assessors to be central to task performance across levels.
dc.language: eng
dc.relation.ispartof: Assessment for Learning in Higher Education International Conference
dc.title: Criteria, standards and judgment practices in assessing performance-based tasks in higher education: opportunities from professional programmes
dc.type: Conference_Paper
dc.identifier.email: Bridges, S: sbridges@hku.hk
dc.identifier.email: Botelho, M: botelho@hkucc.hku.hk
dc.identifier.authority: Bridges, S=rp00048
dc.identifier.authority: Botelho, M=rp00033
dc.identifier.hkuros: 247061
