Conference Paper: Back to the Source: Diffusion-Driven Test-Time Adaptation

Title: Back to the Source: Diffusion-Driven Test-Time Adaptation
Authors: Gao, Jin; Zhang, Jialing; Liu, Xihui; Darrell, Trevor; Shelhamer, Evan; Wang, Dequan
Issue Date: 18-Jun-2023
Abstract

Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data. Existing methods update the source model by (re-)training on each target domain. While effective, re-training is sensitive to the amount and order of the data and to the hyperparameters for optimization. We instead update the target data by projecting all test inputs toward the source domain with a generative diffusion model. Our diffusion-driven adaptation method, DDA, shares its models for classification and generation across all domains. Both models are trained on the source domain, then fixed during testing. We augment diffusion with image guidance and self-ensembling to automatically decide how much to adapt. Input adaptation by DDA is more robust than prior model adaptation approaches across a variety of corruptions, architectures, and data regimes on the ImageNet-C benchmark. With its input-wise updates, DDA succeeds where model adaptation degrades on too little data in small batches, dependent data in non-uniform order, or mixed data with multiple corruptions.
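To make the recipe in the abstract concrete, the PyTorch sketch below illustrates its two ingredients: projecting a test input toward the source domain by partially noising it and then denoising it with a source-trained diffusion model, and self-ensembling the classifier's predictions on the original and adapted inputs. This is a minimal sketch, not the authors' released code: `denoiser`, `classifier`, `alphas_cumprod`, and `t_star` are hypothetical stand-ins, the reverse update is a deterministic DDIM-style step, and DDA's image guidance and confidence-based ensembling are simplified to a comment and a plain average.

```python
# Minimal sketch of diffusion-driven input adaptation (not the authors' code).
# Assumptions: denoiser(x_t, t) predicts the noise eps for a source-trained
# diffusion model, classifier(x) returns logits, and alphas_cumprod is the
# cumulative product of the noise schedule's alphas, with alphas_cumprod[0] = 1.
import torch


@torch.no_grad()
def dda_adapt(x, denoiser, alphas_cumprod, t_star):
    """Project a batch of shifted test inputs x toward the source domain."""
    batch = x.shape[0]
    # Forward process: diffuse the corrupted input up to timestep t_star,
    # which controls how much of the image the model regenerates.
    a_bar = alphas_cumprod[t_star]
    x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * torch.randn_like(x)
    # Reverse process: deterministic DDIM-style denoising back to t = 0.
    for t in range(t_star, 0, -1):
        eps = denoiser(x_t, torch.full((batch,), t, device=x.device))
        a_bar_t, a_bar_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        # Predicted clean image, then the step toward timestep t - 1.
        x0_hat = (x_t - (1 - a_bar_t).sqrt() * eps) / a_bar_t.sqrt()
        x_t = a_bar_prev.sqrt() * x0_hat + (1 - a_bar_prev).sqrt() * eps
        # DDA additionally applies image guidance here (e.g. steering the
        # update to preserve the low-frequency content of x); omitted.
    return x_t


@torch.no_grad()
def self_ensemble_predict(x, x_adapted, classifier):
    """Combine predictions on the original and the adapted input.

    A plain average stands in for DDA's confidence-based self-ensembling,
    which decides automatically how much to trust the adapted input.
    """
    probs = classifier(x).softmax(dim=-1)
    probs_adapted = classifier(x_adapted).softmax(dim=-1)
    return (probs + probs_adapted) / 2
```

Because both models stay frozen and each input is adapted independently, there is nothing to retrain at test time, which is why the small-batch, data-order, and mixed-corruption failure modes of model adaptation described in the abstract do not apply.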


Persistent Identifier: http://hdl.handle.net/10722/333874

 

DC Field: Value

dc.contributor.author: Gao, Jin
dc.contributor.author: Zhang, Jialing
dc.contributor.author: Liu, Xihui
dc.contributor.author: Darrell, Trevor
dc.contributor.author: Shelhamer, Evan
dc.contributor.author: Wang, Dequan
dc.date.accessioned: 2023-10-06T08:39:48Z
dc.date.available: 2023-10-06T08:39:48Z
dc.date.issued: 2023-06-18
dc.identifier.uri: http://hdl.handle.net/10722/333874
dc.description.abstract: Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data. Existing methods update the source model by (re-)training on each target domain. While effective, re-training is sensitive to the amount and order of the data and to the hyperparameters for optimization. We instead update the target data by projecting all test inputs toward the source domain with a generative diffusion model. Our diffusion-driven adaptation method, DDA, shares its models for classification and generation across all domains. Both models are trained on the source domain, then fixed during testing. We augment diffusion with image guidance and self-ensembling to automatically decide how much to adapt. Input adaptation by DDA is more robust than prior model adaptation approaches across a variety of corruptions, architectures, and data regimes on the ImageNet-C benchmark. With its input-wise updates, DDA succeeds where model adaptation degrades on too little data in small batches, dependent data in non-uniform order, or mixed data with multiple corruptions.
dc.language: eng
dc.relation.ispartof: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (17/06/2023-24/06/2023, Vancouver, BC, Canada)
dc.title: Back to the Source: Diffusion-Driven Test-Time Adaptation
dc.type: Conference_Paper
dc.identifier.doi: 10.48550/arXiv.2207.03442
