Article: Item response theory modeling for examinee-selected items with rater effect

Title: Item response theory modeling for examinee-selected items with rater effect
Authors: Liu, C; Qiu, X; Wang, W
Keywords: examinee-selected items
missing not at random
rater severity
Issue Date: 2019
Publisher: Sage Publications, Inc. The Journal's web site is located at http://www.sagepub.com/journal.aspx?pid=184
Citation: Applied Psychological Measurement, 2019, v. 43 n. 6, p. 435-448
Abstract: Some large-scale testing programs require examinees to select and answer a fixed number of items from a given set (e.g., select one out of three items). Usually, these are constructed-response items that are marked by human raters. In this examinee-selected item (ESI) design, some examinees may benefit more than others from choosing easier items to answer, so the missing data induced by the design are missing not at random (MNAR). Although item response theory (IRT) models have recently been developed to account for MNAR data in the ESI design, they do not consider the rater effect; thus, their utility is seriously restricted. In this study, two methods are developed: the first is a new IRT model that accounts for both MNAR data and rater severity simultaneously, and the second adapts conditional maximum likelihood estimation and pairwise estimation methods to the ESI design with a rater effect. A series of simulations was conducted to compare their performance with that of conventional IRT models that ignore MNAR data or rater severity. The results indicated good parameter recovery for the new model. The conditional maximum likelihood and pairwise estimation methods were applicable when the Rasch models fit the data, whereas the conventional IRT models yielded biased parameter estimates. An empirical example illustrates these new methods. © The Author(s) 2018.
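To make the design concrete, the following is a minimal, hypothetical Python sketch, not the authors' model or code. It combines the dichotomous many-facet Rasch probability, in which rater severity enters alongside item difficulty, with a toy choice rule under which examinees tend to pick whichever item looks easier, so the unobserved response on the rejected item is missing not at random. The names p_correct and simulate_esi and the noisy-utility choice rule are illustrative assumptions.

```python
import math
import random

def p_correct(theta: float, b_item: float, c_rater: float) -> float:
    """Dichotomous many-facet Rasch probability: logistic of person
    ability minus item difficulty minus rater severity."""
    return 1.0 / (1.0 + math.exp(-(theta - b_item - c_rater)))

def simulate_esi(theta: float, b_items: list, c_rater: float, noise: float = 1.0):
    """Hypothetical ESI choice rule: the examinee prefers whichever item
    looks easier, judged with some noise; only the chosen item is answered
    and scored, so the other response stays missing. Because the choice
    depends on item difficulty, the missingness is MNAR by construction."""
    utilities = [-b + random.gauss(0.0, noise) for b in b_items]
    chosen = max(range(len(b_items)), key=lambda j: utilities[j])
    score = int(random.random() < p_correct(theta, b_items[chosen], c_rater))
    return chosen, score

# Example: an examinee of average ability chooses between two items,
# scored by a slightly severe rater.
print(simulate_esi(theta=0.0, b_items=[-0.5, 0.8], c_rater=0.3))
```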
Persistent Identifier: http://hdl.handle.net/10722/274075
ISSN: 0146-6216
2023 Impact Factor: 1.0
2023 SCImago Journal Rankings: 1.061
ISI Accession Number ID: WOS:000481462100002

 

DC Field: Value
dc.contributor.author: Liu, C
dc.contributor.author: Qiu, X
dc.contributor.author: Wang, W
dc.date.accessioned: 2019-08-18T14:54:34Z
dc.date.available: 2019-08-18T14:54:34Z
dc.date.issued: 2019
dc.identifier.citation: Applied Psychological Measurement, 2019, v. 43 n. 6, p. 435-448
dc.identifier.issn: 0146-6216
dc.identifier.uri: http://hdl.handle.net/10722/274075
dc.description.abstract: Some large-scale testing programs require examinees to select and answer a fixed number of items from a given set (e.g., select one out of three items). Usually, these are constructed-response items that are marked by human raters. In this examinee-selected item (ESI) design, some examinees may benefit more than others from choosing easier items to answer, so the missing data induced by the design are missing not at random (MNAR). Although item response theory (IRT) models have recently been developed to account for MNAR data in the ESI design, they do not consider the rater effect; thus, their utility is seriously restricted. In this study, two methods are developed: the first is a new IRT model that accounts for both MNAR data and rater severity simultaneously, and the second adapts conditional maximum likelihood estimation and pairwise estimation methods to the ESI design with a rater effect. A series of simulations was conducted to compare their performance with that of conventional IRT models that ignore MNAR data or rater severity. The results indicated good parameter recovery for the new model. The conditional maximum likelihood and pairwise estimation methods were applicable when the Rasch models fit the data, whereas the conventional IRT models yielded biased parameter estimates. An empirical example illustrates these new methods. © The Author(s) 2018.
dc.language: eng
dc.publisher: Sage Publications, Inc. The Journal's web site is located at http://www.sagepub.com/journal.aspx?pid=184
dc.relation.ispartof: Applied Psychological Measurement
dc.rights: Applied Psychological Measurement. Copyright © Sage Publications, Inc.
dc.subject: examinee-selected items
dc.subject: missing not at random
dc.subject: rater severity
dc.title: Item response theory modeling for examinee-selected items with rater effect
dc.type: Article
dc.identifier.email: Qiu, X: xlqiu@hku.hk
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1177/0146621618798667
dc.identifier.scopus: eid_2-s2.0-85059681803
dc.identifier.hkuros: 301986
dc.identifier.volume: 43
dc.identifier.issue: 6
dc.identifier.spage: 435
dc.identifier.epage: 448
dc.identifier.isi: WOS:000481462100002
dc.publisher.place: United States
dc.identifier.issnl: 0146-6216
