Article: StyleAdapter: A Unified Stylized Image Generation Model

Title: StyleAdapter: A Unified Stylized Image Generation Model
Authors: Wang, Zhouxia; Wang, Xintao; Xie, Liangbin; Qi, Zhongang; Shan, Ying; Wang, Wenping; Luo, Ping
Keywords: Artificial intelligence generated content (AIGC); Computer vision; Diffusion model; Stylized image generation
Issue Date: 1-Apr-2025
Publisher: Springer
Citation: International Journal of Computer Vision, 2025, v. 133, n. 4, p. 1894-1911
Abstract

This work focuses on generating high-quality images that combine the specific style of reference images with the content of a given textual description. Current leading methods, such as DreamBooth and LoRA, require fine-tuning for each style, a time-consuming and computationally expensive process. In this work, we propose StyleAdapter, a unified stylized image generation model capable of producing a variety of stylized images that match both the content of a given prompt and the style of reference images, without per-style fine-tuning. It introduces a two-path cross-attention (TPCA) module that processes style information and the textual prompt separately, and cooperates with a semantic suppressing vision model (SSVM) to suppress the semantic content of style images. In this way, the prompt maintains control over the content of the generated images, while the negative impact of semantic information in the style references is mitigated. As a result, the content of the generated image adheres to the prompt, and its style aligns with the style references. Moreover, StyleAdapter can be integrated with existing controllable synthesis methods, such as T2I-Adapter and ControlNet, for a more controllable and stable generation process. Extensive experiments demonstrate the superiority of our method over previous works.
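The abstract describes a two-path cross-attention module that attends to text and style embeddings independently and fuses the results. The following is a minimal single-head NumPy sketch of that idea; the function names, the single-head simplification, and the fusion weight `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # scaled dot-product cross-attention: queries come from the
    # diffusion features, keys/values from an external embedding sequence
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def two_path_cross_attention(latent, text_emb, style_emb, lam=1.0):
    # TPCA sketch (hypothetical fusion): text and style paths are
    # computed independently, then combined by a weighted sum so the
    # prompt keeps control of content while style is injected separately
    text_out = cross_attention(latent, text_emb, text_emb)
    style_out = cross_attention(latent, style_emb, style_emb)
    return text_out + lam * style_out

# toy shapes: 16 latent tokens, 77 text tokens, 8 style tokens, dim 64
rng = np.random.default_rng(0)
latent = rng.standard_normal((16, 64))
text_emb = rng.standard_normal((77, 64))
style_emb = rng.standard_normal((8, 64))
out = two_path_cross_attention(latent, text_emb, style_emb, lam=0.5)
print(out.shape)  # (16, 64)
```

With `lam=0` the style path is switched off entirely and the output reduces to plain text-conditioned cross-attention, which mirrors the abstract's claim that the prompt retains control over content.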


Persistent Identifier: http://hdl.handle.net/10722/362108
ISSN: 0920-5691
2023 Impact Factor: 11.6
2023 SCImago Journal Rankings: 6.668

 

DC Field: Value
dc.contributor.author: Wang, Zhouxia
dc.contributor.author: Wang, Xintao
dc.contributor.author: Xie, Liangbin
dc.contributor.author: Qi, Zhongang
dc.contributor.author: Shan, Ying
dc.contributor.author: Wang, Wenping
dc.contributor.author: Luo, Ping
dc.date.accessioned: 2025-09-19T00:32:06Z
dc.date.available: 2025-09-19T00:32:06Z
dc.date.issued: 2025-04-01
dc.identifier.citation: International Journal of Computer Vision, 2025, v. 133, n. 4, p. 1894-1911
dc.identifier.issn: 0920-5691
dc.identifier.uri: http://hdl.handle.net/10722/362108
dc.description.abstract: <p>This work focuses on generating high-quality images that combine the specific style of reference images with the content of a given textual description. Current leading methods, such as DreamBooth and LoRA, require fine-tuning for each style, a time-consuming and computationally expensive process. In this work, we propose StyleAdapter, a unified stylized image generation model capable of producing a variety of stylized images that match both the content of a given prompt and the style of reference images, without per-style fine-tuning. It introduces a two-path cross-attention (TPCA) module that processes style information and the textual prompt separately, and cooperates with a semantic suppressing vision model (SSVM) to suppress the semantic content of style images. In this way, the prompt maintains control over the content of the generated images, while the negative impact of semantic information in the style references is mitigated. As a result, the content of the generated image adheres to the prompt, and its style aligns with the style references. Moreover, StyleAdapter can be integrated with existing controllable synthesis methods, such as T2I-Adapter and ControlNet, for a more controllable and stable generation process. Extensive experiments demonstrate the superiority of our method over previous works.</p>
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: International Journal of Computer Vision
dc.subject: Artificial intelligence generated content (AIGC)
dc.subject: Computer vision
dc.subject: Diffusion model
dc.subject: Stylized image generation
dc.title: StyleAdapter: A Unified Stylized Image Generation Model
dc.type: Article
dc.identifier.doi: 10.1007/s11263-024-02253-x
dc.identifier.scopus: eid_2-s2.0-105001549092
dc.identifier.volume: 133
dc.identifier.issue: 4
dc.identifier.spage: 1894
dc.identifier.epage: 1911
dc.identifier.eissn: 1573-1405
dc.identifier.issnl: 0920-5691
