Article: Cross-Modal Generative Semantic Communications for Mobile AIGC: Joint Semantic Encoding and Prompt Engineering

Title: Cross-Modal Generative Semantic Communications for Mobile AIGC: Joint Semantic Encoding and Prompt Engineering
Authors: Liu, Yinqiu; Du, Hongyang; Niyato, Dusit; Kang, Jiawen; Xiong, Zehui; Mao, Shiwen; Zhang, Ping; Shen, Xuemin
Keywords: Cross-Modal attention; diffusion; generative semantic communications; mobile AIGC
Issue Date: 2024
Citation: IEEE Transactions on Mobile Computing, 2024, v. 23, n. 12, p. 14871-14888
Abstract: With massive Mobile AI-Generated Content (AIGC) Service Providers (MASPs) hosting powerful models, high-quality AIGC services become accessible to resource-constrained end users. However, this advancement, referred to as mobile AIGC, also introduces a significant challenge: users must download large AIGC outputs from the MASPs, leading to substantial bandwidth consumption and potential transmission failures. In this paper, we apply cross-modal Generative Semantic Communications (G-SemCom) to mobile AIGC to overcome wireless bandwidth constraints. Specifically, we utilize cross-modal attention maps to indicate the correlation between user prompts and each part of the AIGC outputs. In this way, the MASP can analyze the prompt context and efficiently filter the most semantically important content. Only the semantic information is transmitted, from which users can recover the entire AIGC output with high quality while saving mobile bandwidth. Since the transmitted information not only preserves the semantics but also prompts the recovery, we formulate a joint semantic encoding and prompt engineering problem to optimize the bandwidth allocation among users. In particular, we present a human-perceptual metric named Joint Perceptual Similarity and Quality (JPSQ), which fuses two learning-based measurements of semantic similarity and aesthetic quality, respectively. Furthermore, we develop the Attention-aware Deep Diffusion (ADD) algorithm, which learns attention maps and leverages the diffusion process to enhance the environment-exploration ability of traditional deep reinforcement learning (DRL). Extensive experiments demonstrate that our proposal reduces the bandwidth consumption of mobile users by 49.4% on average, with almost no perceptual difference in AIGC output quality. Moreover, the ADD algorithm outperforms baseline DRL methods, achieving a 1.74x higher overall reward.
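As a rough illustration of the kind of fusion JPSQ performs: the abstract states only that two learning-based scores (semantic similarity and aesthetic quality) are fused, not how. The sketch below is therefore a toy stand-in, not the paper's metric; the [0, 1] normalization and the convex weight alpha are assumptions.

    # Toy sketch of a JPSQ-style fusion; NOT the paper's actual formula.
    # Both inputs are assumed to be learned scores normalized to [0, 1],
    # and the convex weighting by `alpha` is a hypothetical choice.
    def jpsq_sketch(semantic_similarity: float,
                    aesthetic_quality: float,
                    alpha: float = 0.5) -> float:
        """Fuse a semantic-similarity score and an aesthetic-quality score."""
        return alpha * semantic_similarity + (1.0 - alpha) * aesthetic_quality

    # Example: a recovery that is very faithful but slightly less aesthetic.
    print(jpsq_sketch(semantic_similarity=0.93, aesthetic_quality=0.81))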
Persistent Identifier: http://hdl.handle.net/10722/353211
ISSN: 1536-1233
2023 Impact Factor: 7.7
2023 SCImago Journal Rankings: 2.755
ISI Accession Number: WOS:001359244600283

 

DC Field: Value
dc.contributor.author: Liu, Yinqiu
dc.contributor.author: Du, Hongyang
dc.contributor.author: Niyato, Dusit
dc.contributor.author: Kang, Jiawen
dc.contributor.author: Xiong, Zehui
dc.contributor.author: Mao, Shiwen
dc.contributor.author: Zhang, Ping
dc.contributor.author: Shen, Xuemin
dc.date.accessioned: 2025-01-13T03:02:39Z
dc.date.available: 2025-01-13T03:02:39Z
dc.date.issued: 2024
dc.identifier.citation: IEEE Transactions on Mobile Computing, 2024, v. 23, n. 12, p. 14871-14888
dc.identifier.issn: 1536-1233
dc.identifier.uri: http://hdl.handle.net/10722/353211
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Mobile Computing
dc.subject: Cross-Modal attention
dc.subject: diffusion
dc.subject: generative semantic communications
dc.subject: mobile AIGC
dc.title: Cross-Modal Generative Semantic Communications for Mobile AIGC: Joint Semantic Encoding and Prompt Engineering
dc.type: Article
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.1109/TMC.2024.3449645
dc.identifier.scopus: eid_2-s2.0-85202773311
dc.identifier.volume: 23
dc.identifier.issue: 12
dc.identifier.spage: 14871
dc.identifier.epage: 14888
dc.identifier.eissn: 1558-0660
dc.identifier.isi: WOS:001359244600283

Export: available via the OAI-PMH interface in XML formats, or in other non-XML formats.
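For example, a record like this one can typically be harvested with a standard OAI-PMH GetRecord request. The endpoint URL and item identifier below are assumptions based on common DSpace defaults (only the handle 10722/353211 comes from this record); the repository's Identify response gives the actual values.

    # Hypothetical OAI-PMH GetRecord request; the endpoint and identifier are
    # assumed DSpace-style values, not confirmed by this page.
    import urllib.parse
    import urllib.request

    BASE_URL = "https://hub.hku.hk/oai/request"  # assumed OAI-PMH endpoint
    params = {
        "verb": "GetRecord",
        "metadataPrefix": "oai_dc",  # standard Dublin Core metadata prefix
        "identifier": "oai:hub.hku.hk:10722/353211",  # assumed from the handle
    }
    url = f"{BASE_URL}?{urllib.parse.urlencode(params)}"
    with urllib.request.urlopen(url) as response:
        print(response.read().decode("utf-8"))  # raw OAI-PMH XML record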