Article: A comparison of four methods of IRT subscoring

Title: A comparison of four methods of IRT subscoring
Authors: de la Torre, Jimmy; Song, Hao; Hong, Yuan
Keywords: ability estimation
Issue Date: 2011
Citation: Applied Psychological Measurement, 2011, v. 35, n. 4, p. 296-316
Abstract: Lack of sufficient reliability is the primary impediment to generating and reporting subtest scores. Several current methods of subscore estimation do so either by incorporating the correlational structure among the subtest abilities or by using the examinee's performance on the overall test. This article conducted a systematic comparison of four subscoring methods, namely multidimensional scoring (MS), augmented scoring (AS), higher order item response model scoring (HO), and objective performance index scoring (OPI), by examining how test length, number of subtests or domains, and correlation between the abilities affect subtest ability estimation. The correlation-based methods (i.e., MS, AS, and HO) provided largely similar results and performed best under conditions involving multiple short subtests and highly correlated abilities. In most of the conditions considered, the OPI method performed more poorly than the other methods on both ability estimates and proportion-correct scores. Real data analysis further underscored the similarities and differences between the four subscoring methods. © The Author(s) 2011.
Persistent Identifier: http://hdl.handle.net/10722/228120
ISSN: 0146-6216
2023 Impact Factor: 1.0
2023 SCImago Journal Rankings: 1.061
ISI Accession Number ID: WOS:000290300000003
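
The abstract above describes a simulation design that varies test length, the number of subtests or domains, and the correlation between subtest abilities. The sketch below is only a rough illustration of that kind of setup and is not the authors' code: it draws correlated subtest abilities from a multivariate normal distribution, generates dichotomous 2PL item responses per subtest, and computes simple proportion-correct subscores. All condition values, sample sizes, and parameter distributions here are assumptions chosen for illustration.

    # Illustrative sketch (not the authors' code or conditions).
    import numpy as np

    rng = np.random.default_rng(0)

    n_examinees = 1000
    n_subtests = 3          # "number of subtests or domains" (assumed value)
    items_per_subtest = 10  # "test length" per subtest (assumed value)
    rho = 0.8               # "correlation between the abilities" (assumed value)

    # Correlated subtest abilities: multivariate normal with common correlation rho.
    cov = np.full((n_subtests, n_subtests), rho)
    np.fill_diagonal(cov, 1.0)
    theta = rng.multivariate_normal(np.zeros(n_subtests), cov, size=n_examinees)

    # 2PL item parameters per subtest: discrimination a, difficulty b.
    a = rng.lognormal(mean=0.0, sigma=0.3, size=(n_subtests, items_per_subtest))
    b = rng.normal(0.0, 1.0, size=(n_subtests, items_per_subtest))

    # Response probabilities P(X = 1) = 1 / (1 + exp(-a * (theta - b)))
    logits = a[None, :, :] * (theta[:, :, None] - b[None, :, :])
    prob = 1.0 / (1.0 + np.exp(-logits))
    responses = rng.binomial(1, prob)   # shape: (examinees, subtests, items)

    # Simple proportion-correct subscores, one per subtest per examinee.
    prop_correct = responses.mean(axis=2)
    print(prop_correct[:5].round(2))
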

 

DC Field                      Value
dc.contributor.author         de la Torre, Jimmy
dc.contributor.author         Song, Hao
dc.contributor.author         Hong, Yuan
dc.date.accessioned           2016-08-01T06:45:14Z
dc.date.available             2016-08-01T06:45:14Z
dc.date.issued                2011
dc.identifier.citation        Applied Psychological Measurement, 2011, v. 35, n. 4, p. 296-316
dc.identifier.issn            0146-6216
dc.identifier.uri             http://hdl.handle.net/10722/228120
dc.description.abstract       Lack of sufficient reliability is the primary impediment to generating and reporting subtest scores. Several current methods of subscore estimation do so either by incorporating the correlational structure among the subtest abilities or by using the examinee's performance on the overall test. This article conducted a systematic comparison of four subscoring methods, namely multidimensional scoring (MS), augmented scoring (AS), higher order item response model scoring (HO), and objective performance index scoring (OPI), by examining how test length, number of subtests or domains, and correlation between the abilities affect subtest ability estimation. The correlation-based methods (i.e., MS, AS, and HO) provided largely similar results and performed best under conditions involving multiple short subtests and highly correlated abilities. In most of the conditions considered, the OPI method performed more poorly than the other methods on both ability estimates and proportion-correct scores. Real data analysis further underscored the similarities and differences between the four subscoring methods. © The Author(s) 2011.
dc.language                   eng
dc.relation.ispartof          Applied Psychological Measurement
dc.subject                    ability estimation
dc.title                      A comparison of four methods of IRT subscoring
dc.type                       Article
dc.description.nature         link_to_subscribed_fulltext
dc.identifier.doi             10.1177/0146621610378653
dc.identifier.scopus          eid_2-s2.0-79955711648
dc.identifier.volume          35
dc.identifier.issue           4
dc.identifier.spage           296
dc.identifier.epage           316
dc.identifier.eissn           1552-3497
dc.identifier.isi             WOS:000290300000003
dc.identifier.issnl           0146-6216
