Article: [A robust statistic AC for assessing inter-observer agreement in reliability studies].

Title: [A robust statistic AC for assessing inter-observer agreement in reliability studies].
Original title: 観察者間の診断の一致性を評価する頑健な統計量AC1について (On the robust statistic AC1 for assessing inter-observer diagnostic agreement)
Authors: Nishiura, H
Issue Date: 2010
Publisher: Nihon Hoshasen Gijutsu Gakkai (Japanese Society of Radiological Technology)
Citation: Nippon Hoshasen Gijutsu Gakkai Zasshi, 2010, v. 66, n. 11, p. 1485-1491
Abstract: Understanding inter-observer variability in clinical diagnosis is crucial for reliability studies. As statistical measures of reliability, the kappa statistic and its extensions have been widely adopted in medical research, but kappa is known to be vulnerable to prevalence and to the presence of bias. The AC1 statistic has recently attracted attention as a robust alternative. This article describes the fundamental ideas and quantitative features of AC1. As an application, the reliability of infrared thermoscanners in detecting febrile patients during pandemic influenza is examined by means of Monte Carlo simulation. AC1 adjusts for chance agreement more appropriately than kappa and is regarded as a more useful measure for assessing inter-observer agreement, especially when prevalence is low.
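The contrast the abstract draws between kappa and AC1 can be sketched numerically. The following is a minimal illustration, not the article's own code or data: for two hypothetical observers classifying 100 patients as febrile or afebrile at low prevalence, Cohen's kappa and Gwet's AC1 are computed from the same 2×2 agreement table.

```python
# Hypothetical 2x2 agreement table for two observers screening 100 patients
# for fever (rows: observer 1, columns: observer 2). Low prevalence: only
# about 10% of patients are rated febrile by either observer.
a, b, c, d = 5, 5, 5, 85  # both-yes, yes/no, no/yes, both-no
n = a + b + c + d

p_obs = (a + d) / n  # observed agreement

# Cohen's kappa: chance agreement from the product of each observer's marginals.
p1, p2 = (a + b) / n, (a + c) / n  # proportion rated "febrile" by each observer
pe_kappa = p1 * p2 + (1 - p1) * (1 - p2)
kappa = (p_obs - pe_kappa) / (1 - pe_kappa)

# Gwet's AC1: chance agreement from the average propensity to rate "febrile".
pi = (p1 + p2) / 2
pe_ac1 = 2 * pi * (1 - pi)
ac1 = (p_obs - pe_ac1) / (1 - pe_ac1)

print(f"observed agreement = {p_obs:.2f}")  # 0.90
print(f"kappa = {kappa:.3f}")               # ~0.444, depressed by low prevalence
print(f"AC1   = {ac1:.3f}")                 # ~0.878
```

The design difference is visible in the chance terms: kappa's expected agreement grows toward 1 when nearly all ratings fall in one category (here 0.82), which depresses kappa despite 90% observed agreement; AC1's chance term 2π(1−π) peaks at π = 0.5 and shrinks at extreme prevalence, so AC1 stays close to the observed agreement.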
Persistent Identifier: http://hdl.handle.net/10722/134189
ISSN: 0369-4305
2019 SCImago Journal Rankings: 0.110

 

DC Field | Value | Language
dc.contributor.author | Nishiura, H | en_HK
dc.date.accessioned | 2011-06-13T07:20:44Z | -
dc.date.available | 2011-06-13T07:20:44Z | -
dc.date.issued | 2010 | en_HK
dc.identifier.citation | Nippon Hoshasen Gijutsu Gakkai Zasshi, 2010, v. 66 n. 11, p. 1485-1491 | en_HK
dc.identifier.issn | 0369-4305 | en_HK
dc.identifier.uri | http://hdl.handle.net/10722/134189 | -
dc.description.abstract | Understanding inter-observer variability in clinical diagnosis is crucial for reliability studies. As statistical measures of reliability, the kappa statistic and its extensions have been widely adopted in medical research, but kappa is known to be vulnerable to prevalence and to the presence of bias. The AC1 statistic has recently attracted attention as a robust alternative. This article describes the fundamental ideas and quantitative features of AC1. As an application, the reliability of infrared thermoscanners in detecting febrile patients during pandemic influenza is examined by means of Monte Carlo simulation. AC1 adjusts for chance agreement more appropriately than kappa and is regarded as a more useful measure for assessing inter-observer agreement, especially when prevalence is low. | en_HK
dc.language | jpn | en_US
dc.publisher | Nihon Hoshasen Gijutsu Gakkai (日本放射線技術学会) | -
dc.relation.ispartof | Nippon Hoshasen Gijutsu Gakkai Zasshi | en_HK
dc.subject.mesh | Fever - diagnosis | en_HK
dc.subject.mesh | Humans | en_HK
dc.subject.mesh | Influenza, Human - diagnosis | en_HK
dc.subject.mesh | Monte Carlo Method | en_HK
dc.subject.mesh | Observer Variation | en_HK
dc.subject.mesh | Reproducibility of Results | en_HK
dc.subject.mesh | Statistics as Topic | en_HK
dc.subject.mesh | Thermosensing | en_HK
dc.title | [A robust statistic AC for assessing inter-observer agreement in reliability studies]. | en_HK
dc.title | 観察者間の診断の一致性を評価する頑健な統計量AC1について | -
dc.type | Article | en_HK
dc.identifier.email | Nishiura, H: nishiura@hku.hk | en_HK
dc.identifier.authority | Nishiura, H=rp01488 | en_HK
dc.description.nature | link_to_subscribed_fulltext | en_US
dc.identifier.doi | 10.6009/jjrt.66.1485 | -
dc.identifier.pmid | 21099180 | -
dc.identifier.scopus | eid_2-s2.0-79952277372 | en_HK
dc.identifier.volume | 66 | en_HK
dc.identifier.issue | 11 | en_HK
dc.identifier.spage | 1485 | en_HK
dc.identifier.epage | 1491 | en_HK
dc.publisher.place | Japan | -
dc.identifier.scopusauthorid | Nishiura, H=7005501836 | en_HK
dc.identifier.issnl | 0369-4305 | -
