File Download
There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website: 10.1007/978-3-030-86523-8_26
- Scopus: eid_2-s2.0-85115723261
- WOS: WOS:000713413200026
Conference Paper: ATOM: Robustifying Out-of-Distribution Detection Using Outlier Mining
Field | Value
---|---
Title | ATOM: Robustifying Out-of-Distribution Detection Using Outlier Mining
Authors | Chen, Jiefeng; Li, Yixuan; Wu, Xi; Liang, Yingyu; Jha, Somesh
Keywords | Out-of-distribution detection; Outlier mining; Robustness
Issue Date | 2021
Citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2021, v. 12977 LNAI, p. 430-445
Abstract | Detecting out-of-distribution (OOD) inputs is critical for safely deploying deep learning models in an open-world setting. However, existing OOD detection solutions can be brittle in the open world, facing various types of adversarial OOD inputs. While methods leveraging auxiliary OOD data have emerged, our analysis on illuminative examples reveals a key insight that the majority of auxiliary OOD examples may not meaningfully improve or even hurt the decision boundary of the OOD detector, which is also observed in empirical results on real data. In this paper, we provide a theoretically motivated method, Adversarial Training with informative Outlier Mining (ATOM), which improves the robustness of OOD detection. We show that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks. ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks. For example, on the CIFAR-10 in-distribution dataset, ATOM reduces the FPR (at TPR 95%) by up to 57.99% under adversarial OOD inputs, surpassing the previous best baseline by a large margin.
Persistent Identifier | http://hdl.handle.net/10722/341328
ISSN | 0302-9743 (2023 SCImago Journal Rankings: 0.606)
ISI Accession Number ID | WOS:000713413200026
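The abstract above centers on mining "informative" auxiliary OOD examples rather than using the whole auxiliary pool. The sketch below is a rough illustration of that idea only, not the paper's reference implementation: the (K+1)-way classifier with an extra outlier class, the quantile-style selection rule, and all names (`mine_informative_outliers`, `q`, `num_selected`) are assumptions introduced here for illustration.

```python
# Hedged sketch of the informative-outlier-mining idea described in the abstract.
# The (K+1)-way "outlier class" head and the quantile-based selection rule are
# assumptions about the method's structure, not the authors' released code.
import torch


@torch.no_grad()
def mine_informative_outliers(model, aux_pool, q=0.125, num_selected=50_000,
                              batch_size=256, device="cpu"):
    """Score a large auxiliary OOD pool and keep samples whose OOD score
    suggests they lie near the detector's decision boundary."""
    model.eval()
    scores, samples = [], []
    loader = torch.utils.data.DataLoader(aux_pool, batch_size=batch_size)
    for x, *_ in loader:
        logits = model(x.to(device))                      # (K+1)-way classifier
        ood_score = torch.softmax(logits, dim=1)[:, -1]   # prob. of the outlier class
        scores.append(ood_score.cpu())
        samples.append(x)
    scores = torch.cat(scores)
    samples = torch.cat(samples)
    # Sort ascending: low-scoring outliers look "in-distribution" to the current
    # detector, so they are the hard, informative ones. Skip the lowest q-fraction,
    # then keep a contiguous block of the next num_selected samples.
    order = torch.argsort(scores)
    start = int(q * len(order))
    keep = order[start:start + num_selected]
    return samples[keep]
```

In the spirit of the method's name, the mined subset would then feed an adversarial-training loop (e.g., perturbing the selected outliers before each update), with mining repeated as the detector improves; those training details are not specified by the abstract and are left out of the sketch.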
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Jiefeng | - |
dc.contributor.author | Li, Yixuan | - |
dc.contributor.author | Wu, Xi | - |
dc.contributor.author | Liang, Yingyu | - |
dc.contributor.author | Jha, Somesh | - |
dc.date.accessioned | 2024-03-13T08:41:57Z | - |
dc.date.available | 2024-03-13T08:41:57Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2021, v. 12977 LNAI, p. 430-445 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10722/341328 | - |
dc.description.abstract | Detecting out-of-distribution (OOD) inputs is critical for safely deploying deep learning models in an open-world setting. However, existing OOD detection solutions can be brittle in the open world, facing various types of adversarial OOD inputs. While methods leveraging auxiliary OOD data have emerged, our analysis on illuminative examples reveals a key insight that the majority of auxiliary OOD examples may not meaningfully improve or even hurt the decision boundary of the OOD detector, which is also observed in empirical results on real data. In this paper, we provide a theoretically motivated method, Adversarial Training with informative Outlier Mining (ATOM), which improves the robustness of OOD detection. We show that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks. ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks. For example, on the CIFAR-10 in-distribution dataset, ATOM reduces the FPR (at TPR 95%) by up to 57.99% under adversarial OOD inputs, surpassing the previous best baseline by a large margin. | - |
dc.language | eng | - |
dc.relation.ispartof | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | - |
dc.subject | Out-of-distribution detection | - |
dc.subject | Outlier mining | - |
dc.subject | Robustness | - |
dc.title | ATOM: Robustifying Out-of-Distribution Detection Using Outlier Mining | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/978-3-030-86523-8_26 | - |
dc.identifier.scopus | eid_2-s2.0-85115723261 | - |
dc.identifier.volume | 12977 LNAI | - |
dc.identifier.spage | 430 | - |
dc.identifier.epage | 445 | - |
dc.identifier.eissn | 1611-3349 | - |
dc.identifier.isi | WOS:000713413200026 | - |