
Conference Paper: PLACE: Adaptive Layout-Semantic Fusion for Semantic Image Synthesis

Title: PLACE: Adaptive Layout-Semantic Fusion for Semantic Image Synthesis
Authors: Lv, Zhengyao; Wei, Yuxiang; Zuo, Wangmeng; Wong, Kwan-Yee K
Issue Date: 17-Jun-2024
Abstract

Recent advancements in large-scale pre-trained text-to-image models have led to remarkable progress in semantic image synthesis. Nevertheless, synthesizing high-quality images with consistent semantics and layout remains a challenge. In this paper, we propose the adaPtive LAyout-semantiC fusion modulE (PLACE) that harnesses pre-trained models to alleviate the aforementioned issues. Specifically, we first employ the layout control map to faithfully represent layouts in the feature space. Subsequently, we combine the layout and semantic features in a timestep-adaptive manner to synthesize images with realistic details. During fine-tuning, we propose the Semantic Alignment (SA) loss to further enhance layout alignment. Additionally, we introduce the Layout-Free Prior Preservation (LFP) loss, which leverages unlabeled data to maintain the priors of pre-trained models, thereby improving the visual quality and semantic consistency of synthesized images. Extensive experiments demonstrate that our approach performs favorably in terms of visual quality, semantic consistency, and layout alignment. The source code and model are available at https://github.com/cszy98/PLACE
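As a rough illustration of the timestep-adaptive fusion described in the abstract, the sketch below blends a layout feature map and a semantic feature map with a weight that depends on the diffusion timestep. This is a minimal, hypothetical reading of the abstract, not the authors' implementation: the function name, the linear weighting schedule, and all tensor shapes are assumptions; the actual PLACE module lives in the linked repository.

```python
# Hypothetical sketch of timestep-adaptive layout-semantic fusion.
# Not the authors' code; names and the linear schedule are illustrative.
import torch

def fuse_layout_semantics(layout_feat: torch.Tensor,
                          semantic_feat: torch.Tensor,
                          t: torch.Tensor,
                          T: int = 1000) -> torch.Tensor:
    """Blend layout and semantic features with a timestep-dependent weight.

    At the noisiest timesteps (t near T-1) the layout map dominates, so the
    global structure is laid down first; as denoising proceeds the weight
    shifts toward semantic features for fine detail. The linear schedule
    here is an assumption for illustration only.
    """
    # t: per-sample diffusion timesteps in [0, T); alpha -> 1 at t = T-1
    alpha = (t.float() / (T - 1)).view(-1, 1, 1, 1)  # broadcast over (B, C, H, W)
    return alpha * layout_feat + (1.0 - alpha) * semantic_feat

# Toy usage: a batch of 2 feature maps at different denoising stages
layout = torch.randn(2, 64, 32, 32)
semantic = torch.randn(2, 64, 32, 32)
t = torch.tensor([999, 250])
fused = fuse_layout_semantics(layout, semantic, t)
print(fused.shape)  # torch.Size([2, 64, 32, 32])
```

The point of such a schedule is that layout should constrain the early, structure-defining denoising steps while semantics refine later ones; the paper's module makes this trade-off adaptively rather than via a fixed linear ramp as above.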


Persistent Identifier: http://hdl.handle.net/10722/345750

DC Field / Value
dc.contributor.author: Lv, Zhengyao
dc.contributor.author: Wei, Yuxiang
dc.contributor.author: Zuo, Wangmeng
dc.contributor.author: Wong, Kwan-Yee K
dc.date.accessioned: 2024-08-27T09:10:56Z
dc.date.available: 2024-08-27T09:10:56Z
dc.date.issued: 2024-06-17
dc.identifier.uri: http://hdl.handle.net/10722/345750
dc.description.abstract: Recent advancements in large-scale pre-trained text-to-image models have led to remarkable progress in semantic image synthesis. Nevertheless, synthesizing high-quality images with consistent semantics and layout remains a challenge. In this paper, we propose the adaPtive LAyout-semantiC fusion modulE (PLACE) that harnesses pre-trained models to alleviate the aforementioned issues. Specifically, we first employ the layout control map to faithfully represent layouts in the feature space. Subsequently, we combine the layout and semantic features in a timestep-adaptive manner to synthesize images with realistic details. During fine-tuning, we propose the Semantic Alignment (SA) loss to further enhance layout alignment. Additionally, we introduce the Layout-Free Prior Preservation (LFP) loss, which leverages unlabeled data to maintain the priors of pre-trained models, thereby improving the visual quality and semantic consistency of synthesized images. Extensive experiments demonstrate that our approach performs favorably in terms of visual quality, semantic consistency, and layout alignment. The source code and model are available at https://github.com/cszy98/PLACE
dc.language: eng
dc.relation.ispartof: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (17/06/2024-21/06/2024)
dc.title: PLACE: Adaptive Layout-Semantic Fusion for Semantic Image Synthesis
dc.type: Conference_Paper
dc.identifier.spage: 9264
dc.identifier.epage: 9274
