Article: Contract-Inspired Contest Theory for Controllable Image Generation in Mobile Edge Metaverse

Title: Contract-Inspired Contest Theory for Controllable Image Generation in Mobile Edge Metaverse
Authors: Liu, Guangyuan; Du, Hongyang; Wang, Jiacheng; Niyato, Dusit; Kim, Dong In
Keywords: Contest Theory; Generative AI; Image Generation
Issue Date: 1-Jan-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Mobile Computing, 2025, v. 24, n. 8, p. 7389-7405
Abstract: The rapid advancement of immersive technologies has propelled the development of the Metaverse, where the convergence of virtual and physical realities necessitates the generation of high-quality, photorealistic images to enhance user experience. However, generating these images, especially through Generative Diffusion Models (GDMs), in mobile edge computing environments presents significant challenges due to the limited computing resources of edge devices and the dynamic nature of wireless networks. This paper proposes a novel framework that integrates contract-inspired contest theory, Deep Reinforcement Learning (DRL), and GDMs to optimize image generation in these resource-constrained environments. The framework addresses the critical challenges of resource allocation and semantic data transmission quality by incentivizing edge devices to efficiently transmit high-quality semantic data, which is essential for creating realistic and immersive images. The use of contest and contract theory ensures that edge devices are motivated to allocate resources effectively, while DRL dynamically adjusts to network conditions, optimizing the overall image generation process. Experimental results demonstrate that the proposed approach not only improves the quality of generated images but also achieves superior convergence speed and stability compared to traditional methods. This makes the framework particularly effective for optimizing complex resource allocation tasks in mobile edge Metaverse applications, offering enhanced performance and efficiency in creating immersive virtual environments.
Persistent Identifier: http://hdl.handle.net/10722/362005
ISSN: 1536-1233
2023 Impact Factor: 7.7
2023 SCImago Journal Rankings: 2.755

DC Field: Value
dc.contributor.author: Liu, Guangyuan
dc.contributor.author: Du, Hongyang
dc.contributor.author: Wang, Jiacheng
dc.contributor.author: Niyato, Dusit
dc.contributor.author: Kim, Dong In
dc.date.accessioned: 2025-09-18T00:36:14Z
dc.date.available: 2025-09-18T00:36:14Z
dc.date.issued: 2025-01-01
dc.identifier.citation: IEEE Transactions on Mobile Computing, 2025, v. 24, n. 8, p. 7389-7405
dc.identifier.issn: 1536-1233
dc.identifier.uri: http://hdl.handle.net/10722/362005
dc.description.abstract: The rapid advancement of immersive technologies has propelled the development of the Metaverse, where the convergence of virtual and physical realities necessitates the generation of high-quality, photorealistic images to enhance user experience. However, generating these images, especially through Generative Diffusion Models (GDMs), in mobile edge computing environments presents significant challenges due to the limited computing resources of edge devices and the dynamic nature of wireless networks. This paper proposes a novel framework that integrates contract-inspired contest theory, Deep Reinforcement Learning (DRL), and GDMs to optimize image generation in these resource-constrained environments. The framework addresses the critical challenges of resource allocation and semantic data transmission quality by incentivizing edge devices to efficiently transmit high-quality semantic data, which is essential for creating realistic and immersive images. The use of contest and contract theory ensures that edge devices are motivated to allocate resources effectively, while DRL dynamically adjusts to network conditions, optimizing the overall image generation process. Experimental results demonstrate that the proposed approach not only improves the quality of generated images but also achieves superior convergence speed and stability compared to traditional methods. This makes the framework particularly effective for optimizing complex resource allocation tasks in mobile edge Metaverse applications, offering enhanced performance and efficiency in creating immersive virtual environments.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Mobile Computing
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Contest Theory
dc.subject: Generative AI
dc.subject: Image Generation
dc.title: Contract-Inspired Contest Theory for Controllable Image Generation in Mobile Edge Metaverse
dc.type: Article
dc.identifier.doi: 10.1109/TMC.2025.3550815
dc.identifier.scopus: eid_2-s2.0-105000066834
dc.identifier.volume: 24
dc.identifier.issue: 8
dc.identifier.spage: 7389
dc.identifier.epage: 7405
dc.identifier.eissn: 1558-0660
dc.identifier.issnl: 1536-1233
