Links for fulltext (may require subscription):
- Publisher Website: 10.1109/ICCV51070.2023.00704
- Scopus: eid_2-s2.0-85180113215
Citations:
- Scopus: 0
Appears in Collections: Conference Paper

LinkGAN: Linking GAN Latents to Pixels for Controllable Image Synthesis
Title | LinkGAN: Linking GAN Latents to Pixels for Controllable Image Synthesis |
---|---|
Authors | Zhu, Jiapeng; Yang, Ceyuan; Shen, Yujun; Shi, Zifan; Dai, Bo; Zhao, Deli; Chen, Qifeng |
Issue Date | 2023 |
Citation | Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 7622-7632 |
Abstract | This work presents an easy-to-use regularizer for GAN training, which helps explicitly link some axes of the latent space to a set of pixels in the synthesized image. Establishing such a connection facilitates a more convenient local control of GAN generation, where users can alter the image content only within a spatial area simply by partially resampling the latent code. Experimental results confirm four appealing properties of our regularizer, which we call LinkGAN. (1) The latent-pixel linkage is applicable to either a fixed region (i.e., same for all instances) or a particular semantic category (i.e., varying across instances), like the sky. (2) Two or multiple regions can be independently linked to different latent axes, which further supports joint control. (3) Our regularizer can improve the spatial controllability of both 2D and 3D-aware GAN models, barely sacrificing the synthesis performance. (4) The models trained with our regularizer are compatible with GAN inversion techniques and maintain editability on real images. Project page can be found here. |
Persistent Identifier | http://hdl.handle.net/10722/352395 |
ISSN | 1550-5499 |
2023 SCImago Journal Rankings | 12.263 |
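The abstract's core idea of local control, changing image content only within a region by resampling just the latent axes linked to that region, can be sketched as follows. This is a minimal NumPy illustration under assumptions: the latent dimensionality (512) and the particular set of linked axes are hypothetical placeholders, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 512                      # hypothetical latent dimensionality
linked_axes = np.arange(0, 64)        # axes assumed linked to one region (e.g., the sky)

# Original latent code that produces some image.
z = rng.standard_normal(latent_dim)

# Partial resampling: redraw only the linked axes, leave the rest intact.
z_edit = z.copy()
z_edit[linked_axes] = rng.standard_normal(linked_axes.size)

# All unlinked axes are untouched, so (by the trained linkage) content
# outside the region would stay fixed while the linked region changes.
unlinked = np.setdiff1d(np.arange(latent_dim), linked_axes)
assert np.allclose(z[unlinked], z_edit[unlinked])
```

Feeding `z_edit` through a generator trained with the LinkGAN regularizer would, per the abstract, alter only the pixels tied to `linked_axes`; independent axis groups could be resampled separately for joint control of multiple regions.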
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhu, Jiapeng | - |
dc.contributor.author | Yang, Ceyuan | - |
dc.contributor.author | Shen, Yujun | - |
dc.contributor.author | Shi, Zifan | - |
dc.contributor.author | Dai, Bo | - |
dc.contributor.author | Zhao, Deli | - |
dc.contributor.author | Chen, Qifeng | - |
dc.date.accessioned | 2024-12-16T03:58:40Z | - |
dc.date.available | 2024-12-16T03:58:40Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 7622-7632 | - |
dc.identifier.issn | 1550-5499 | - |
dc.identifier.uri | http://hdl.handle.net/10722/352395 | - |
dc.description.abstract | This work presents an easy-to-use regularizer for GAN training, which helps explicitly link some axes of the latent space to a set of pixels in the synthesized image. Establishing such a connection facilitates a more convenient local control of GAN generation, where users can alter the image content only within a spatial area simply by partially resampling the latent code. Experimental results confirm four appealing properties of our regularizer, which we call LinkGAN. (1) The latent-pixel linkage is applicable to either a fixed region (i.e., same for all instances) or a particular semantic category (i.e., varying across instances), like the sky. (2) Two or multiple regions can be independently linked to different latent axes, which further supports joint control. (3) Our regularizer can improve the spatial controllability of both 2D and 3D-aware GAN models, barely sacrificing the synthesis performance. (4) The models trained with our regularizer are compatible with GAN inversion techniques and maintain editability on real images. Project page can be found here. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | - |
dc.title | LinkGAN: Linking GAN Latents to Pixels for Controllable Image Synthesis | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICCV51070.2023.00704 | - |
dc.identifier.scopus | eid_2-s2.0-85180113215 | - |
dc.identifier.spage | 7622 | - |
dc.identifier.epage | 7632 | - |