Conference Paper: Universal Adaptive Data Augmentation

Title: Universal Adaptive Data Augmentation
Authors: Xu, Xiaogang; Zhao, Hengshuang
Issue Date: 2023
Citation: IJCAI International Joint Conference on Artificial Intelligence, 2023, v. 2023-August, p. 1596-1603
Abstract: Existing automatic data augmentation (DA) methods either ignore updating DA's parameters according to the target model's state during training or adopt update strategies that are not effective enough. In this work, we design a novel data augmentation strategy called “Universal Adaptive Data Augmentation” (UADA). Different from existing methods, UADA would adaptively update DA's parameters according to the target model's gradient information during training: given a pre-defined set of DA operations, we randomly decide types and magnitudes of DA operations for every data batch during training, and adaptively update DA's parameters along the gradient direction of the loss concerning DA's parameters. In this way, UADA can increase the training loss of the target networks, and the target networks would learn features from harder samples to improve the generalization. Moreover, UADA is very general and can be utilized in numerous tasks, e.g., image classification, semantic segmentation and object detection. Extensive experiments with various models are conducted on CIFAR-10, CIFAR-100, ImageNet, tiny-ImageNet, Cityscapes, and VOC07+12 to prove the significant performance improvements brought by UADA.
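The per-batch update described in the abstract can be sketched in a few lines. This is a toy illustration, not the authors' implementation: it assumes a linear model, a single additive "brightness" operation whose magnitude `m` plays the role of the DA parameter, and hand-derived gradients; the step sizes and the sampling range for `m` are arbitrary choices.

```python
import numpy as np

# Illustrative sketch of a UADA-style update (not the authors' code): the
# augmentation magnitude m is sampled at random for each batch, then nudged
# one step *along* the gradient of the loss w.r.t. m, making the augmented
# batch harder before the model takes its usual descent step.

rng = np.random.default_rng(0)

def loss(w, x, y):
    """Squared-error loss of the linear model y_hat = w * x."""
    return 0.5 * np.mean((w * x - y) ** 2)

w = 0.5                      # target-model parameter (ground truth is 2.0)
lr_w, lr_m = 0.1, 0.05       # step sizes (illustrative choices)

for _ in range(300):
    x = rng.normal(size=64)  # one data batch
    y = 2.0 * x
    # 1) randomly decide the DA magnitude for this batch
    m = rng.uniform(-0.3, 0.3)
    # 2) gradient *ascent* on m: dL/dm = w * mean(w*(x+m) - y)
    m += lr_m * w * np.mean(w * (x + m) - y)
    # 3) ordinary gradient *descent* on w, using the harder augmented batch
    xa = x + m               # toy DA op: additive brightness shift
    w -= lr_w * np.mean((w * xa - y) * xa)
```

Because this toy loss is quadratic in `m` with non-negative curvature, a small step along the gradient of `m` never decreases the batch loss, which is the "harder samples" effect the abstract describes, while `w` still converges close to the ground-truth weight.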
Persistent Identifier: http://hdl.handle.net/10722/333643
ISSN: 1045-0823
2020 SCImago Journal Rankings: 0.649


DC Field                   Value
dc.contributor.author      Xu, Xiaogang
dc.contributor.author      Zhao, Hengshuang
dc.date.accessioned        2023-10-06T05:21:15Z
dc.date.available          2023-10-06T05:21:15Z
dc.date.issued             2023
dc.identifier.citation     IJCAI International Joint Conference on Artificial Intelligence, 2023, v. 2023-August, p. 1596-1603
dc.identifier.issn         1045-0823
dc.identifier.uri          http://hdl.handle.net/10722/333643
dc.description.abstract    [abstract as given above]
dc.language                eng
dc.relation.ispartof       IJCAI International Joint Conference on Artificial Intelligence
dc.title                   Universal Adaptive Data Augmentation
dc.type                    Conference_Paper
dc.description.nature      link_to_subscribed_fulltext
dc.identifier.scopus       eid_2-s2.0-85170387066
dc.identifier.volume       2023-August
dc.identifier.spage        1596
dc.identifier.epage        1603
