Article: Decentralized Unlearning for Trustworthy AI-Generated Content (AIGC) Services

Title: Decentralized Unlearning for Trustworthy AI-Generated Content (AIGC) Services
Authors: Lin, Yijing; Du, Hongyang; Gao, Zhipeng; Yao, Jing; Jiang, Bingting; Niyato, Dusit; Li, Ruidong; Zhang, Ping
Keywords: Coded Computing
Computational modeling
Data models
Decentralized Unlearning
Differential privacy
Privacy
Security
Servers
Training
Trustworthy
Issue Date: 2024
Citation: IEEE Network, 2024
Abstract: The widespread adoption of AI-Generated Content (AIGC) has attracted remarkable interest from academia and industry. However, this advancement brings forth challenges in data ownership, as emphasized by the General Data Protection Regulation’s right to be forgotten. The right to be forgotten allows participants to erase their data contributions from well-trained AIGC models. Training AIGC models demands an extensive amount of data and computation, often surpassing the capabilities of individual participants. As a solution, Federated Learning (FL) is employed to collaboratively train an AIGC model without sharing raw data directly. To protect the right to be forgotten in FL, “federated unlearning” has been introduced to eradicate the data contributions of unlearning participants from fully trained AIGC models, achieved by deleting historical model updates. However, given that historical updates are distributed among various participants, those participants may still hold contributions from the unlearning participants. This presents a challenge to the trustworthiness of federated unlearning. In this paper, we first provide an overview of both centralized and federated learning and unlearning methods. We survey relevant works and classify federated unlearning approaches into four main categories. Subsequently, we introduce a comprehensive decentralized unlearning architecture designed for trustworthy AIGC services. To address the challenge of participants retaining historical contributions from unlearning participants, we integrate a coded computing-based interaction mechanism, ensuring both scalability and security. We further present a case study using NanoGPT to demonstrate the effectiveness of the proposed framework for decentralized unlearning. We conclude by identifying potential future research directions.
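The abstract's core mechanism — erasing a participant's contribution by deleting its historical model updates and re-aggregating — can be sketched in a toy form. The client names, the two-parameter "model", and the FedAvg-style mean aggregation below are illustrative assumptions for exposition, not the paper's actual protocol (which adds a decentralized, coded computing-based interaction mechanism on top of this basic idea).

```python
def fed_avg(updates):
    # Element-wise mean of one round's client update vectors (FedAvg-style).
    vecs = list(updates.values())
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def train(history):
    # Model = running element-wise sum of per-round aggregated updates.
    model = [0.0, 0.0]
    for round_updates in history:
        agg = fed_avg(round_updates)
        model = [m + a for m, a in zip(model, agg)]
    return model

def unlearn(history, client):
    # "Deleting historical model updates": drop the unlearning client from
    # every round, then re-aggregate the remaining history.
    scrubbed = [{c: u for c, u in r.items() if c != client} for r in history]
    return train(scrubbed)

# Toy history: two rounds, three clients, 2-parameter "model" updates.
history = [
    {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [1.0, 1.0]},
    {"A": [0.5, 0.5], "B": [1.0, 0.0], "C": [0.0, 1.0]},
]

full_model = train(history)           # trained with all clients
model_without_A = unlearn(history, "A")  # client A's contribution erased
```

The trustworthiness challenge the abstract raises is visible even in this sketch: the `unlearn` step only works if every holder of `history` actually discards client A's entries, which is exactly what the paper's coded computing-based mechanism is designed to enforce.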
Persistent Identifier: http://hdl.handle.net/10722/353203
ISSN: 0890-8044
2023 Impact Factor: 6.8
2023 SCImago Journal Rankings: 3.896

DC Field: Value
dc.contributor.author: Lin, Yijing
dc.contributor.author: Du, Hongyang
dc.contributor.author: Gao, Zhipeng
dc.contributor.author: Yao, Jing
dc.contributor.author: Jiang, Bingting
dc.contributor.author: Niyato, Dusit
dc.contributor.author: Li, Ruidong
dc.contributor.author: Zhang, Ping
dc.date.accessioned: 2025-01-13T03:02:36Z
dc.date.available: 2025-01-13T03:02:36Z
dc.date.issued: 2024
dc.identifier.citation: IEEE Network, 2024
dc.identifier.issn: 0890-8044
dc.identifier.uri: http://hdl.handle.net/10722/353203
dc.description.abstract: (abstract as above)
dc.language: eng
dc.relation.ispartof: IEEE Network
dc.subject: Coded Computing
dc.subject: Computational modeling
dc.subject: Data models
dc.subject: Decentralized Unlearning
dc.subject: Differential privacy
dc.subject: Privacy
dc.subject: Security
dc.subject: Servers
dc.subject: Training
dc.subject: Trustworthy
dc.title: Decentralized Unlearning for Trustworthy AI-Generated Content (AIGC) Services
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/MNET.2024.3439411
dc.identifier.scopus: eid_2-s2.0-85200806307
dc.identifier.eissn: 1558-156X
