
Article: Intentional Control of Type I Error over Unconscious Data Distortion: a Neyman-Pearson Approach to Text Classification

Title: Intentional Control of Type I Error over Unconscious Data Distortion: a Neyman-Pearson Approach to Text Classification
Authors: Xia, L; Zhao, R; Wu, Y; Tong, X
Keywords: Censorship; Data distortion; Neyman–Pearson classification paradigm; Social media; Text classification
Issue Date: 2021
Publisher: American Statistical Association. The Journal's web site is located at http://www.amstat.org/publications/jasa/index.cfm?fuseaction=main
Citation: Journal of the American Statistical Association, 2021, v. 116 n. 533, p. 68-81
Abstract: This article addresses the challenges in classifying textual data obtained from open online platforms, which are vulnerable to distortion. Most existing classification methods minimize the overall classification error and may yield an undesirably large Type I error (relevant textual messages are classified as irrelevant), particularly when available data exhibit an asymmetry between relevant and irrelevant information. Data distortion exacerbates this situation and often leads to fallacious prediction. To deal with inestimable data distortion, we propose the use of the Neyman–Pearson (NP) classification paradigm, which minimizes Type II error under a user-specified Type I error constraint. Theoretically, we show that the NP oracle is unaffected by data distortion when the class conditional distributions remain the same. Empirically, we study a case of classifying posts about worker strikes obtained from a leading Chinese microblogging platform, which are frequently prone to extensive, unpredictable and inestimable censorship. We demonstrate that, even though the training and test data are susceptible to different distortion and therefore potentially follow different distributions, our proposed NP methods control the Type I error on test data at the targeted level. The methods and implementation pipeline proposed in our case study are applicable to many other problems involving data distortion. Supplementary materials for this article, including a standardized description of the materials available for reproducing the work, are available as an online supplement.
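The abstract's central idea — minimize Type II error subject to a user-specified cap on Type I error — can be illustrated with a minimal empirical-quantile threshold rule. This is a hypothetical sketch, not the paper's method: the authors' actual NP procedures (the NP umbrella algorithm) pick an order statistic that controls the Type I error with high probability, whereas the function below simply caps the empirical Type I error on one sample. All names and data here are illustrative.

```python
def np_threshold(scores_class0, alpha=0.05):
    """Choose a score threshold so that at most ~alpha of class-0
    (relevant) examples score above it, i.e. the empirical Type I
    error on this sample is capped at alpha."""
    s = sorted(scores_class0)
    # index of the empirical (1 - alpha) quantile
    k = min(len(s) - 1, int((1 - alpha) * len(s)))
    return s[k]

# Illustrative scores: higher score => classifier leans "irrelevant" (class 1).
relevant_scores = [i / 100 for i in range(100)]  # class-0 sample
t = np_threshold(relevant_scores, alpha=0.05)
type1 = sum(x > t for x in relevant_scores) / len(relevant_scores)
print(t, type1)  # threshold 0.95; empirical Type I error 0.04 <= alpha
```

Predicting "irrelevant" only when a score exceeds `t` keeps the (empirical) Type I error at or below `alpha` by construction; the classifier is then trained or tuned to make the Type II error as small as possible under that constraint.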
Persistent Identifier: http://hdl.handle.net/10722/281985
ISSN: 0162-1459
2021 Impact Factor: 4.369
2020 SCImago Journal Rankings: 4.976
ISI Accession Number: WOS:000627050500006


DC Field: Value

dc.contributor.author: Xia, L
dc.contributor.author: Zhao, R
dc.contributor.author: Wu, Y
dc.contributor.author: Tong, X
dc.date.accessioned: 2020-04-19T03:33:46Z
dc.date.available: 2020-04-19T03:33:46Z
dc.date.issued: 2021
dc.identifier.citation: Journal of the American Statistical Association, 2021, v. 116 n. 533, p. 68-81
dc.identifier.issn: 0162-1459
dc.identifier.uri: http://hdl.handle.net/10722/281985
dc.language: eng
dc.publisher: American Statistical Association. The Journal's web site is located at http://www.amstat.org/publications/jasa/index.cfm?fuseaction=main
dc.relation.ispartof: Journal of the American Statistical Association
dc.subject: Censorship
dc.subject: Data distortion
dc.subject: Neyman–Pearson classification paradigm
dc.subject: Social media
dc.subject: Text classification
dc.title: Intentional Control of Type I Error over Unconscious Data Distortion: a Neyman-Pearson Approach to Text Classification
dc.type: Article
dc.identifier.email: Wu, Y: yanhuiwu@hku.hk
dc.identifier.authority: Wu, Y=rp02644
dc.description.nature: postprint
dc.identifier.doi: 10.1080/01621459.2020.1740711
dc.identifier.scopus: eid_2-s2.0-85083525386
dc.identifier.hkuros: 309711
dc.identifier.volume: 116
dc.identifier.issue: 533
dc.identifier.spage: 68
dc.identifier.epage: 81
dc.identifier.isi: WOS:000627050500006
dc.publisher.place: United States
dc.identifier.issnl: 0162-1459
