Conference Paper: LinkGAN: Linking GAN Latents to Pixels for Controllable Image Synthesis

Title: LinkGAN: Linking GAN Latents to Pixels for Controllable Image Synthesis
Authors: Zhu, Jiapeng; Yang, Ceyuan; Shen, Yujun; Shi, Zifan; Dai, Bo; Zhao, Deli; Chen, Qifeng
Issue Date: 2023
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 7622-7632
Abstract: This work presents an easy-to-use regularizer for GAN training, which helps explicitly link some axes of the latent space to a set of pixels in the synthesized image. Establishing such a connection facilitates a more convenient local control of GAN generation, where users can alter the image content only within a spatial area simply by partially resampling the latent code. Experimental results confirm four appealing properties of our regularizer, which we call LinkGAN. (1) The latent-pixel linkage is applicable to either a fixed region (i.e., same for all instances) or a particular semantic category (i.e., varying across instances), like the sky. (2) Two or multiple regions can be independently linked to different latent axes, which further supports joint control. (3) Our regularizer can improve the spatial controllability of both 2D and 3D-aware GAN models, barely sacrificing the synthesis performance. (4) The models trained with our regularizer are compatible with GAN inversion techniques and maintain editability on real images. Project page can be found here. (An illustrative sketch of the partial-resampling idea follows the record summary below.)
Persistent Identifier: http://hdl.handle.net/10722/352395
ISSN: 1550-5499
2023 SCImago Journal Rankings: 12.263
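The local-control recipe described in the abstract, resampling only the latent axes that have been linked to a spatial region, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: DummyGenerator, LINKED_AXES, and resample_linked_axes are placeholder names standing in for a LinkGAN-regularized generator and its learned latent-pixel linkage.

    # Illustrative sketch (not the paper's code): redraw only the latent axes
    # assumed to be linked to a region, so that, with a LinkGAN-regularized
    # generator, only that region of the synthesized image would change.
    import torch
    import torch.nn as nn

    LATENT_DIM = 512
    LINKED_AXES = list(range(64))  # hypothetical axes linked to a region (e.g., the sky)


    class DummyGenerator(nn.Module):
        """Placeholder generator mapping a latent code to a 3x64x64 image."""

        def __init__(self, latent_dim: int = LATENT_DIM):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 1024),
                nn.ReLU(),
                nn.Linear(1024, 3 * 64 * 64),
                nn.Tanh(),
            )

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            return self.net(z).view(-1, 3, 64, 64)


    def resample_linked_axes(z: torch.Tensor, axes: list[int]) -> torch.Tensor:
        """Return a copy of z with only the given axes redrawn from N(0, I)."""
        z_new = z.clone()
        z_new[:, axes] = torch.randn(z.shape[0], len(axes))
        return z_new


    if __name__ == "__main__":
        G = DummyGenerator()
        z = torch.randn(1, LATENT_DIM)      # original latent code
        img_before = G(z)                   # original synthesis
        z_edit = resample_linked_axes(z, LINKED_AXES)
        img_after = G(z_edit)               # ideally, only the linked region changes
        print(img_before.shape, img_after.shape)

With a generator actually trained under the LinkGAN regularizer, only the linked region (e.g., the sky) would differ between the two outputs; the untrained placeholder here only demonstrates the resampling mechanics.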


DC Field | Value | Language
dc.contributor.author | Zhu, Jiapeng | -
dc.contributor.author | Yang, Ceyuan | -
dc.contributor.author | Shen, Yujun | -
dc.contributor.author | Shi, Zifan | -
dc.contributor.author | Dai, Bo | -
dc.contributor.author | Zhao, Deli | -
dc.contributor.author | Chen, Qifeng | -
dc.date.accessioned | 2024-12-16T03:58:40Z | -
dc.date.available | 2024-12-16T03:58:40Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 7622-7632 | -
dc.identifier.issn | 1550-5499 | -
dc.identifier.uri | http://hdl.handle.net/10722/352395 | -
dc.description.abstract | This work presents an easy-to-use regularizer for GAN training, which helps explicitly link some axes of the latent space to a set of pixels in the synthesized image. Establishing such a connection facilitates a more convenient local control of GAN generation, where users can alter the image content only within a spatial area simply by partially resampling the latent code. Experimental results confirm four appealing properties of our regularizer, which we call LinkGAN. (1) The latent-pixel linkage is applicable to either a fixed region (i.e., same for all instances) or a particular semantic category (i.e., varying across instances), like the sky. (2) Two or multiple regions can be independently linked to different latent axes, which further supports joint control. (3) Our regularizer can improve the spatial controllability of both 2D and 3D-aware GAN models, barely sacrificing the synthesis performance. (4) The models trained with our regularizer are compatible with GAN inversion techniques and maintain editability on real images. Project page can be found here. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | -
dc.title | LinkGAN: Linking GAN Latents to Pixels for Controllable Image Synthesis | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/ICCV51070.2023.00704 | -
dc.identifier.scopus | eid_2-s2.0-85180113215 | -
dc.identifier.spage | 7622 | -
dc.identifier.epage | 7632 | -
