
Conference Paper: A Note on the Equivalence of the Unidimensional Item Response Theory and Cognitive Diagnosis Models

Title: A Note on the Equivalence of the Unidimensional Item Response Theory and Cognitive Diagnosis Models
Authors: de la Torre, J
Issue Date: 2018
Publisher: Columbia University
Citation: Conference on Statistical Methods for Innovative Testing and Learning, Department of Statistics, Columbia University, New York, USA, 7-8 July 2018
Abstract: At present, most existing educational assessments are developed and analyzed using unidimensional item response theory (IRT) models. To obtain information that can be used for diagnostic purposes, the same assessments have also been retrofitted with cognitive diagnosis models (CDMs). However, the extent to which two such disparate psychometric frameworks can simultaneously be used to analyze the same assessment remains unclear. To address this issue, we propose a framework for relating the two classes of psychometric models, as well as boundaries on when this can be done. Specifically, we impose certain conditions on the higher-order generalized deterministic inputs, noisy "and" gate (HO-GDINA) model and reformulate its success probability as a function of the higher-order ability. It can be shown that, with appropriate constraints, the HO-GDINA model reduces to the four unidimensional IRT models when only a single attribute is required. When two or more attributes are required, the item response function of the HO-GDINA model can be well approximated by unidimensional IRT models. We investigate a number of factors (e.g., the slope and intercept of the higher-order structure, the item guessing and slip parameters, and the sample size) to determine their impact on the quality of the item parameter approximation, as well as on ability estimation. Preliminary results show that a wider range of intercept values leads to lower bias in the IRT parameter estimates and a slightly higher correlation between the true and estimated abilities. Additionally, more discriminating items and larger sample sizes lead to a higher correlation between the true and estimated abilities. (See the illustrative sketch below.)
Description: Invited Presentation
Persistent Identifier: http://hdl.handle.net/10722/270294
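
As a minimal sketch of the single-attribute reduction described in the abstract, assume a DINA-type parameterization with guessing g_j and slip s_j and a logistic higher-order structure linking attribute mastery α to the ability θ. The notation (g_j, s_j, λ₀, λ₁) is illustrative and not taken from the paper itself:

```latex
% Illustrative sketch (assumed notation): single-attribute case of a
% higher-order DINA-type model. The higher-order structure links attribute
% mastery alpha to the ability theta, and the item carries guessing (g_j)
% and slip (s_j) parameters.
\[
  P(\alpha = 1 \mid \theta)
    = \frac{\exp(\lambda_0 + \lambda_1 \theta)}{1 + \exp(\lambda_0 + \lambda_1 \theta)},
  \qquad
  P(X_j = 1 \mid \alpha) = g_j^{\,1-\alpha} \, (1 - s_j)^{\alpha}.
\]
% Marginalizing over the two mastery states gives an item response
% function in theta:
\[
  P(X_j = 1 \mid \theta)
    = g_j + (1 - s_j - g_j)\,
      \frac{\exp(\lambda_0 + \lambda_1 \theta)}{1 + \exp(\lambda_0 + \lambda_1 \theta)}.
\]
% This is a four-parameter-logistic-type curve with lower asymptote g_j
% and upper asymptote 1 - s_j; fixing s_j = 0 yields a 3PL-type form, and
% g_j = s_j = 0 a 2PL-type form.
```

Under these assumptions, marginalizing over mastery yields a four-parameter-logistic-type curve, consistent with the abstract's claim that, with appropriate constraints, the HO-GDINA model reduces to standard unidimensional IRT models when only a single attribute is required.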


DC Field | Value | Language
dc.contributor.author | de la Torre, J | -
dc.date.accessioned | 2019-05-24T09:09:04Z | -
dc.date.available | 2019-05-24T09:09:04Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | Conference on Statistical Methods for Innovative Testing and Learning, Department of Statistics, Columbia University, New York, USA, 7-8 July 2018 | -
dc.identifier.uri | http://hdl.handle.net/10722/270294 | -
dc.description | Invited Presentation | -
dc.description.abstract | At present, most existing educational assessments are developed and analyzed using unidimensional item response theory (IRT) models. To obtain information that can be used for diagnostic purposes, the same assessments have also been retrofitted with cognitive diagnosis models (CDMs). However, the extent to which two such disparate psychometric frameworks can simultaneously be used to analyze the same assessment remains unclear. To address this issue, we propose a framework for relating the two classes of psychometric models, as well as boundaries on when this can be done. Specifically, we impose certain conditions on the higher-order generalized deterministic inputs, noisy "and" gate (HO-GDINA) model and reformulate its success probability as a function of the higher-order ability. It can be shown that, with appropriate constraints, the HO-GDINA model reduces to the four unidimensional IRT models when only a single attribute is required. When two or more attributes are required, the item response function of the HO-GDINA model can be well approximated by unidimensional IRT models. We investigate a number of factors (e.g., the slope and intercept of the higher-order structure, the item guessing and slip parameters, and the sample size) to determine their impact on the quality of the item parameter approximation, as well as on ability estimation. Preliminary results show that a wider range of intercept values leads to lower bias in the IRT parameter estimates and a slightly higher correlation between the true and estimated abilities. Additionally, more discriminating items and larger sample sizes lead to a higher correlation between the true and estimated abilities. | -
dc.language | eng | -
dc.publisher | Columbia University | -
dc.relation.ispartof | Conference on Statistical Methods for Innovative Testing and Learning, Department of Statistics, Columbia University, New York | -
dc.title | A Note on the Equivalence of the Unidimensional Item Response Theory and Cognitive Diagnosis Models | -
dc.type | Conference_Paper | -
dc.identifier.email | de la Torre, J: jdltorre@hku.hk | -
dc.identifier.authority | de la Torre, J=rp02159 | -
dc.identifier.hkuros | 288959 | -
dc.publisher.place | New York, NY | -
