File Download
There are no files associated with this item.
Links for fulltext (may require subscription)
- Publisher Website: 10.1109/MNET.2024.3439411
- Scopus: eid_2-s2.0-85200806307
Citations:
- Scopus: 0
Article: Decentralized Unlearning for Trustworthy AI-Generated Content (AIGC) Services
Field | Value |
---|---|
Title | Decentralized Unlearning for Trustworthy AI-Generated Content (AIGC) Services |
Authors | Lin, Yijing; Du, Hongyang; Gao, Zhipeng; Yao, Jing; Jiang, Bingting; Niyato, Dusit; Li, Ruidong; Zhang, Ping |
Keywords | Coded Computing; Computational modeling; Data models; Decentralized Unlearning; Differential privacy; Privacy; Security; Servers; Training; Trustworthy |
Issue Date | 2024 |
Citation | IEEE Network, 2024 |
Abstract | The widespread adoption of AI-Generated Content (AIGC) has attracted remarkable interest from academia and industry. However, this advancement brings challenges in data ownership, as emphasized by the General Data Protection Regulation’s right to be forgotten, which allows participants to erase their data contributions from well-trained AIGC models. Training AIGC models demands extensive data and computation, often surpassing the capabilities of individual participants. As a solution, Federated Learning (FL) is employed to collaboratively train an AIGC model without sharing raw data directly. To protect the right to be forgotten in FL, “federated unlearning” has been introduced to eradicate the data contributions of unlearning participants from fully trained AIGC models by deleting their historical model updates. However, because historical updates are distributed among various participants, those participants may still hold contributions from the unlearning participants. This presents a challenge to the trustworthiness of federated unlearning. In this paper, we first provide an overview of both centralized and federated learning and unlearning methods. We survey relevant works and classify federated unlearning methods into four main categories. We then introduce a comprehensive decentralized unlearning architecture designed for trustworthy AIGC services. To address the challenge of participants retaining historical contributions from unlearning participants, we integrate a coded computing-based interaction mechanism that ensures both scalability and security. We further present a case study using NanoGPT to demonstrate the effectiveness of the proposed framework for decentralized unlearning. We conclude by identifying potential future research directions. |
Persistent Identifier | http://hdl.handle.net/10722/353203 |
ISSN | 0890-8044 (print); 1558-156X (electronic) |
Journal Metrics | 2023 Impact Factor: 6.8; 2023 SCImago Journal Rankings: 3.896 |
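
The abstract describes federated unlearning as deleting an unlearning participant's historical model updates from a fully trained model. As a loose illustration only (not the paper's algorithm: the `Server` class, FedAvg-style averaging, and toy updates below are all assumptions of this sketch), a coordinator that retains per-round, per-client updates could rebuild the global model with one participant's contributions skipped:

```python
# Illustrative sketch only: "unlearning by deleting historical updates".
# Everything here (Server, mean aggregation, toy updates) is a hypothetical
# stand-in, not the paper's decentralized scheme.
import numpy as np

class Server:
    def __init__(self, dim):
        self.w0 = np.zeros(dim)   # initial global model parameters
        self.history = []         # list of {client_id: update} per round

    def record_round(self, updates):
        # Keep per-client updates so contributions can be removed later.
        self.history.append(updates)

    def model(self, forget=frozenset()):
        # Rebuild the global model, skipping clients listed in `forget`.
        w = self.w0.copy()
        for round_updates in self.history:
            kept = [u for cid, u in round_updates.items() if cid not in forget]
            if kept:
                w = w + np.mean(kept, axis=0)  # FedAvg-style mean of updates
        return w

rng = np.random.default_rng(0)
server = Server(dim=4)
for _ in range(3):  # three training rounds from clients A, B, C
    server.record_round({c: rng.normal(size=4) for c in ("A", "B", "C")})

w_full = server.model()                   # model with all contributions
w_unlearned = server.model(forget={"B"})  # B's historical updates deleted
```

In the decentralized setting the paper targets, no single server holds this history; the updates are spread across participants, which is precisely why deletion alone does not guarantee trustworthy unlearning.
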
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lin, Yijing | - |
dc.contributor.author | Du, Hongyang | - |
dc.contributor.author | Gao, Zhipeng | - |
dc.contributor.author | Yao, Jing | - |
dc.contributor.author | Jiang, Bingting | - |
dc.contributor.author | Niyato, Dusit | - |
dc.contributor.author | Li, Ruidong | - |
dc.contributor.author | Zhang, Ping | - |
dc.date.accessioned | 2025-01-13T03:02:36Z | - |
dc.date.available | 2025-01-13T03:02:36Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | IEEE Network, 2024 | - |
dc.identifier.issn | 0890-8044 | - |
dc.identifier.uri | http://hdl.handle.net/10722/353203 | - |
dc.description.abstract | The widespread adoption of AI-Generated Content (AIGC) has attracted remarkable interest from academia and industry. However, this advancement brings challenges in data ownership, as emphasized by the General Data Protection Regulation’s right to be forgotten, which allows participants to erase their data contributions from well-trained AIGC models. Training AIGC models demands extensive data and computation, often surpassing the capabilities of individual participants. As a solution, Federated Learning (FL) is employed to collaboratively train an AIGC model without sharing raw data directly. To protect the right to be forgotten in FL, “federated unlearning” has been introduced to eradicate the data contributions of unlearning participants from fully trained AIGC models by deleting their historical model updates. However, because historical updates are distributed among various participants, those participants may still hold contributions from the unlearning participants. This presents a challenge to the trustworthiness of federated unlearning. In this paper, we first provide an overview of both centralized and federated learning and unlearning methods. We survey relevant works and classify federated unlearning methods into four main categories. We then introduce a comprehensive decentralized unlearning architecture designed for trustworthy AIGC services. To address the challenge of participants retaining historical contributions from unlearning participants, we integrate a coded computing-based interaction mechanism that ensures both scalability and security. We further present a case study using NanoGPT to demonstrate the effectiveness of the proposed framework for decentralized unlearning. We conclude by identifying potential future research directions. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Network | - |
dc.subject | Coded Computing | - |
dc.subject | Computational modeling | - |
dc.subject | Data models | - |
dc.subject | Decentralized Unlearning | - |
dc.subject | Differential privacy | - |
dc.subject | Privacy | - |
dc.subject | Security | - |
dc.subject | Servers | - |
dc.subject | Training | - |
dc.subject | Trustworthy | - |
dc.title | Decentralized Unlearning for Trustworthy AI-Generated Content (AIGC) Services | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/MNET.2024.3439411 | - |
dc.identifier.scopus | eid_2-s2.0-85200806307 | - |
dc.identifier.eissn | 1558-156X | - |
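
The record also lists a coded computing-based interaction mechanism as the ingredient that makes decentralized unlearning scalable and secure. The paper's exact coding scheme is not reproduced here, so the sketch below substitutes a standard Shamir secret-sharing construction over a prime field (an assumption, not the authors' method) to convey the general idea: a quantized update is encoded into shares so that no single holder learns it, any sufficiently large subset can reconstruct it, and all shares can be discarded consistently when their owner invokes the right to be forgotten.

```python
# Sketch only: Shamir secret sharing stands in for the paper's
# coded-computing scheme (an assumption, not the authors' construction).
import random

P = 2**61 - 1  # prime modulus defining the finite field

def share(secret, n, t):
    """Encode `secret` into n shares; any t+1 shares reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

update = 123456789                 # a quantized model update (toy value)
shares = share(update, n=5, t=2)   # distribute across five participants
assert reconstruct(shares[:3]) == update  # any three shares suffice
assert reconstruct(shares[2:]) == update
```
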