Conference Paper: Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles

Title: Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles
Authors: Chen, Jiefeng; Liu, Frederick; Avci, Besim; Wu, Xi; Liang, Yingyu; Jha, Somesh
Issue Date: 2021
Citation: Advances in Neural Information Processing Systems, 2021, v. 18, p. 14980-14992
Abstract: When a deep learning model is deployed in the wild, it can encounter test data drawn from distributions different from the training data distribution and suffer a drop in performance. For safe deployment, it is essential to estimate the accuracy of the pre-trained model on the test data. However, the labels for the test inputs are usually not immediately available in practice, and obtaining them can be expensive. This observation leads to two challenging tasks: (1) unsupervised accuracy estimation, which aims to estimate the accuracy of a pre-trained classifier on a set of unlabeled test inputs; (2) error detection, which aims to identify mis-classified test inputs. In this paper, we propose a principled and practically effective framework that simultaneously addresses the two tasks. The proposed framework iteratively learns an ensemble of models to identify mis-classified data points and performs self-training to improve the ensemble with the identified points. Theoretical analysis demonstrates that our framework enjoys provable guarantees for both accuracy estimation and error detection under mild conditions readily satisfied by practical deep learning models. Along with the framework, we propose and experiment with two instantiations and achieve state-of-the-art results on 59 tasks. For example, on iWildCam, one instantiation reduces the estimation error for unsupervised accuracy estimation by at least 70% and improves the F1 score for error detection by at least 4.7% compared to existing methods.
Persistent Identifier: http://hdl.handle.net/10722/341362
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
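The abstract above outlines an iterative loop: train an ensemble of models, flag target points where the ensemble disagrees with the pre-trained classifier as suspected errors, and self-train the ensemble on pseudo-labeled points. The sketch below is not the authors' algorithm or code; it is a loose illustration of that loop under stated assumptions: scikit-learn logistic-regression models stand in for deep networks, the source/target data are synthetic with a simulated distribution shift, ensemble diversity comes from bootstrap resampling, and errors are flagged by a simple majority-vote disagreement rule. All of these choices are hypothetical.

```python
# Illustrative sketch only (not the paper's method): a simplified
# self-training-ensemble loop for unsupervised accuracy estimation
# and error detection on unlabeled, distribution-shifted data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic labeled "source" data and shifted unlabeled "target" data.
X_src = rng.normal(0.0, 1.0, size=(2000, 20))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.normal(0.5, 1.2, size=(1000, 20))        # simulated distribution shift
y_tgt = (X_tgt[:, 0] + X_tgt[:, 1] > 0).astype(int)  # held out, used only to check the estimate

# The pre-trained classifier f whose accuracy on X_tgt we want to estimate.
f = LogisticRegression(max_iter=1000).fit(X_src, y_src)
f_pred = f.predict(X_tgt)

# Iterative loop: pseudo-label unflagged target points with f, train bootstrapped
# ensemble members on source + pseudo-labeled data, and flag target points where
# the ensemble majority disagrees with f as suspected errors.
n_rounds, n_members = 3, 5
suspected_error = np.zeros(len(X_tgt), dtype=bool)
for _ in range(n_rounds):
    keep = ~suspected_error                       # drop currently flagged points from self-training
    X_aug = np.vstack([X_src, X_tgt[keep]])
    y_aug = np.concatenate([y_src, f_pred[keep]])
    votes = np.zeros((n_members, len(X_tgt)), dtype=int)
    for m in range(n_members):
        idx = rng.integers(0, len(X_aug), size=len(X_aug))          # bootstrap resample
        g = LogisticRegression(max_iter=1000).fit(X_aug[idx], y_aug[idx])
        votes[m] = g.predict(X_tgt)
    ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)          # majority vote
    suspected_error = ensemble_pred != f_pred                       # error detection
    estimated_accuracy = 1.0 - suspected_error.mean()               # unsupervised accuracy estimate

true_accuracy = (f_pred == y_tgt).mean()
print(f"estimated accuracy: {estimated_accuracy:.3f}  true accuracy: {true_accuracy:.3f}")
```

Here the accuracy estimate is simply one minus the fraction of flagged points; the paper's actual instantiations, pseudo-labeling rules, and theoretical guarantees are more involved than this toy loop.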

 

DC Field: Value
dc.contributor.author: Chen, Jiefeng
dc.contributor.author: Liu, Frederick
dc.contributor.author: Avci, Besim
dc.contributor.author: Wu, Xi
dc.contributor.author: Liang, Yingyu
dc.contributor.author: Jha, Somesh
dc.date.accessioned: 2024-03-13T08:42:14Z
dc.date.available: 2024-03-13T08:42:14Z
dc.date.issued: 2021
dc.identifier.citation: Advances in Neural Information Processing Systems, 2021, v. 18, p. 14980-14992
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/341362
dc.description.abstract: When a deep learning model is deployed in the wild, it can encounter test data drawn from distributions different from the training data distribution and suffer a drop in performance. For safe deployment, it is essential to estimate the accuracy of the pre-trained model on the test data. However, the labels for the test inputs are usually not immediately available in practice, and obtaining them can be expensive. This observation leads to two challenging tasks: (1) unsupervised accuracy estimation, which aims to estimate the accuracy of a pre-trained classifier on a set of unlabeled test inputs; (2) error detection, which aims to identify mis-classified test inputs. In this paper, we propose a principled and practically effective framework that simultaneously addresses the two tasks. The proposed framework iteratively learns an ensemble of models to identify mis-classified data points and performs self-training to improve the ensemble with the identified points. Theoretical analysis demonstrates that our framework enjoys provable guarantees for both accuracy estimation and error detection under mild conditions readily satisfied by practical deep learning models. Along with the framework, we propose and experiment with two instantiations and achieve state-of-the-art results on 59 tasks. For example, on iWildCam, one instantiation reduces the estimation error for unsupervised accuracy estimation by at least 70% and improves the F1 score for error detection by at least 4.7% compared to existing methods.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85132037955
dc.identifier.volume: 18
dc.identifier.spage: 14980
dc.identifier.epage: 14992
