Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1016/j.neucom.2024.127630
- Scopus: eid_2-s2.0-85189935124
Article: A federated learning incentive mechanism in a non-monopoly market
| Field | Value |
|---|---|
| Title | A federated learning incentive mechanism in a non-monopoly market |
| Authors | Na, Shijie; Liang, Yuzhi; Yiu, Siu Ming |
| Keywords | Federated learning; Incentive mechanism; Non-monopoly |
| Issue Date | 14-Jun-2024 |
| Publisher | Elsevier |
| Citation | Neurocomputing, 2024, v. 586 |
| Abstract | Federated learning, a privacy-preserving collaborative machine learning paradigm, has led to the proposal of various incentive mechanisms to encourage the active participation of data owners. However, most existing mechanisms focus on the monopsony market scenario, where only one server-side entity (buyer) is involved. In real-world scenarios, multiple server parties may express simultaneous interest in the data of a client (seller), leading to a non-monopoly market. This paper aims to bridge this gap by introducing the concept of incentivizing federated learning in a non-monopoly market and presents a non-monopoly federated learning incentive mechanism, coined NmFLI. NmFLI employs a double-auction mechanism to implement federated learning incentives and utilizes the Vickrey–Clarke–Groves (VCG) mechanism to ensure client trustworthiness. Additionally, NmFLI devises a method for measuring data quality by calculating the value of clients based on their historical performance, which effectively balances accuracy and computational complexity. We demonstrate that NmFLI possesses properties such as individual rationality and strategy-proofness. Experimental results indicate that NmFLI can effectively incentivize federated learning and achieves higher accuracy than baseline models across various scenarios. For example, when the objectives of various tasks overlap, NmFLI outperforms the best baseline by 3.09% with imbalanced client data while maintaining the same data size. Moreover, NmFLI surpasses the best baseline by 6.12% with different amounts of client data. |
| Persistent Identifier | http://hdl.handle.net/10722/351788 |
| ISSN | 0925-2312 (2023 Impact Factor: 5.5; 2023 SCImago Journal Rankings: 1.815) |
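The abstract describes matching multiple servers (buyers) with clients (sellers) via a double auction with Vickrey–Clarke–Groves payments. The paper's actual algorithm is not reproduced in this record; as a rough illustration of the general idea only, the sketch below implements a textbook single-unit VCG double auction (the function name, prices, and all numbers are illustrative assumptions, not NmFLI's implementation). Each winning trader's price is independent of its own bid, which is what gives the mechanism its strategy-proofness.

```python
def vcg_double_auction(bids, asks):
    """Textbook VCG double auction (illustrative, not NmFLI's algorithm).

    bids: buyers' bid prices; asks: sellers' ask prices (one unit each).
    Returns (k, buyer_price, seller_price): the number of trades, the
    uniform price each winning buyer pays, and the uniform amount each
    winning seller receives. Strategy-proof, but buyer_price can fall
    below seller_price, so the mechanism may run a budget deficit.
    """
    b = sorted(bids, reverse=True)   # highest bids first
    s = sorted(asks)                 # lowest asks first
    # k = largest number of trades where the k-th bid covers the k-th ask
    k = 0
    while k < min(len(b), len(s)) and b[k] >= s[k]:
        k += 1
    if k == 0:
        return 0, None, None
    # VCG payment = externality imposed on others; it never depends on
    # the trader's own bid/ask, only on the marginal losing offers.
    buyer_price = max(s[k - 1], b[k]) if k < len(b) else s[k - 1]
    seller_price = min(b[k - 1], s[k]) if k < len(s) else b[k - 1]
    return k, buyer_price, seller_price
```

For example, with bids `[10, 8, 6, 2]` and asks `[1, 3, 5, 9]`, three trades clear; each buyer pays 5 while each seller receives 6, exhibiting the classic VCG deficit.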
DC Field | Value | Language
---|---|---
dc.contributor.author | Na, Shijie | - |
dc.contributor.author | Liang, Yuzhi | - |
dc.contributor.author | Yiu, Siu Ming | - |
dc.date.accessioned | 2024-11-29T00:35:11Z | - |
dc.date.available | 2024-11-29T00:35:11Z | - |
dc.date.issued | 2024-06-14 | - |
dc.identifier.citation | Neurocomputing, 2024, v. 586 | - |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.uri | http://hdl.handle.net/10722/351788 | - |
dc.description.abstract | Federated learning, a privacy-preserving collaborative machine learning paradigm, has led to the proposal of various incentive mechanisms to encourage the active participation of data owners. However, most existing mechanisms focus on the monopsony market scenario, where only one server-side entity (buyer) is involved. In real-world scenarios, multiple server parties may express simultaneous interest in the data of a client (seller), leading to a non-monopoly market. This paper aims to bridge this gap by introducing the concept of incentivizing federated learning in a non-monopoly market and presents a non-monopoly federated learning incentive mechanism, coined NmFLI. NmFLI employs a double-auction mechanism to implement federated learning incentives and utilizes the Vickrey–Clarke–Groves (VCG) mechanism to ensure client trustworthiness. Additionally, NmFLI devises a method for measuring data quality by calculating the value of clients based on their historical performance, which effectively balances accuracy and computational complexity. We demonstrate that NmFLI possesses properties such as individual rationality and strategy-proofness. Experimental results indicate that NmFLI can effectively incentivize federated learning and achieves higher accuracy than baseline models across various scenarios. For example, when the objectives of various tasks overlap, NmFLI outperforms the best baseline by 3.09% with imbalanced client data while maintaining the same data size. Moreover, NmFLI surpasses the best baseline by 6.12% with different amounts of client data. | -
dc.language | eng | - |
dc.publisher | Elsevier | - |
dc.relation.ispartof | Neurocomputing | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Federated learning | - |
dc.subject | Incentive mechanism | - |
dc.subject | Non-monopoly | - |
dc.title | A federated learning incentive mechanism in a non-monopoly market | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.neucom.2024.127630 | - |
dc.identifier.scopus | eid_2-s2.0-85189935124 | - |
dc.identifier.volume | 586 | - |
dc.identifier.eissn | 1872-8286 | - |
dc.identifier.issnl | 0925-2312 | - |