Conference Paper: GLeaD: Improving GANs with A Generator-Leading Task

Title: GLeaD: Improving GANs with A Generator-Leading Task
Authors: Bai, Qingyan; Yang, Ceyuan; Xu, Yinghao; Liu, Xihui; Yang, Yujiu; Shen, Yujun
Issue Date: 18 Jun 2023
Abstract

A generative adversarial network (GAN) is formulated as a two-player game between a generator (G) and a discriminator (D), where D is asked to differentiate whether an image comes from real data or is produced by G. Under such a formulation, D acts as the rule maker and hence tends to dominate the competition. Towards a fairer game in GANs, we propose a new paradigm for adversarial training, which makes G assign a task to D as well. Specifically, given an image, we expect D to extract representative features that can be adequately decoded by G to reconstruct the input. That way, instead of learning freely, D is urged to align with the view of G for domain classification. Experimental results on various datasets demonstrate the substantial superiority of our approach over the baselines. For instance, we improve the FID of StyleGAN2 from 4.30 to 2.55 on LSUN Bedroom and from 4.04 to 2.82 on LSUN Church. We believe that the pioneering attempt presented in this work could inspire the community with better-designed generator-leading tasks for GAN improvement. The project page is at https://ezioby.github.io/glead/.
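The generator-leading task described above can be sketched in a few lines: D extracts features from an image, G decodes those features back to image space, and the reconstruction error is the extra objective imposed on D. The snippet below is a minimal illustrative sketch only, using random linear maps (`W_d`, `W_g`) as hypothetical stand-ins for the deep discriminator and generator networks; none of these names come from the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-ins for the networks (illustrative only):
# D maps a 64-dim "image" to a 16-dim feature; G decodes it back.
W_d = rng.normal(size=(16, 64))
W_g = rng.normal(size=(64, 16))

def d_features(x):
    """Discriminator-side feature extraction."""
    return W_d @ x

def g_decode(f):
    """Generator-side decoding of D's features back to image space."""
    return W_g @ f

def glead_reconstruction_loss(x):
    """Generator-leading task: D is penalized when G cannot
    reconstruct the input from D's features (mean squared error)."""
    x_rec = g_decode(d_features(x))
    return float(np.mean((x - x_rec) ** 2))

x = rng.normal(size=64)  # a stand-in "image"
loss = glead_reconstruction_loss(x)
```

In training, this reconstruction term would be added to D's usual adversarial loss, so D's features must stay decodable by G rather than being learned freely.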


Persistent Identifier: http://hdl.handle.net/10722/333873

 

DC Field: Value
dc.contributor.author: Bai, Qingyan
dc.contributor.author: Yang, Ceyuan
dc.contributor.author: Xu, Yinghao
dc.contributor.author: Liu, Xihui
dc.contributor.author: Yang, Yujiu
dc.contributor.author: Shen, Yujun
dc.date.accessioned: 2023-10-06T08:39:47Z
dc.date.available: 2023-10-06T08:39:47Z
dc.date.issued: 2023-06-18
dc.identifier.uri: http://hdl.handle.net/10722/333873
dc.description.abstract: <p>A generative adversarial network (GAN) is formulated as a two-player game between a generator (G) and a discriminator (D), where D is asked to differentiate whether an image comes from real data or is produced by G. Under such a formulation, D acts as the rule maker and hence tends to dominate the competition. Towards a fairer game in GANs, we propose a new paradigm for adversarial training, which makes G assign a task to D as well. Specifically, given an image, we expect D to extract representative features that can be adequately decoded by G to reconstruct the input. That way, instead of learning freely, D is urged to align with the view of G for domain classification. Experimental results on various datasets demonstrate the substantial superiority of our approach over the baselines. For instance, we improve the FID of StyleGAN2 from 4.30 to 2.55 on LSUN Bedroom and from 4.04 to 2.82 on LSUN Church. We believe that the pioneering attempt presented in this work could inspire the community with better-designed generator-leading tasks for GAN improvement. The project page is at https://ezioby.github.io/glead/.<br></p>
dc.language: eng
dc.relation.ispartof: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (17/06/2023-24/06/2023, Vancouver, BC, Canada)
dc.title: GLeaD: Improving GANs with A Generator-Leading Task
dc.type: Conference_Paper
dc.identifier.doi: 10.1109/CVPR52729.2023.01164
