Article: A federated learning incentive mechanism in a non-monopoly market

Title: A federated learning incentive mechanism in a non-monopoly market
Authors: Na, Shijie; Liang, Yuzhi; Yiu, Siu Ming
Keywords: Federated learning; Incentive mechanism; Non-monopoly
Issue Date: 14-Jun-2024
Publisher: Elsevier
Citation: Neurocomputing, 2024, v. 586
Abstract: Federated learning, a privacy-preserving collaborative machine learning paradigm, has led to the proposal of various incentive mechanisms to encourage active participation of data owners. However, most existing mechanisms focus on the monopsony market scenario, where only one server-side entity (buyer) is involved. In real-world scenarios, multiple server parties may express simultaneous interest in the data of a client (seller), leading to a non-monopoly market. This paper aims to bridge this gap by introducing the concept of incentivizing federated learning in a non-monopoly market and presents a non-monopoly federated learning incentive mechanism, coined as NmFLI. NmFLI employs a double-auction mechanism to implement federated learning incentives and utilizes the Vickrey–Clarke–Groves (VCG) mechanism to ensure client trustworthiness. Additionally, NmFLI devises a method for measuring data quality by calculating the value of clients based on their historical performance, which effectively balances accuracy and computational complexity. We demonstrate that NmFLI possesses properties such as individual rationality and strategy-proofness. Experimental results indicate that NmFLI can effectively incentivize federated learning and achieve higher accuracy than baseline models across various scenarios. For example, when the objectives of various tasks overlap, NmFLI outperforms the best baseline by 3.09% with imbalanced client data while maintaining the same data size. Moreover, NmFLI surpasses the best baseline by 6.12% with different amounts of client data.
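The abstract's core idea can be illustrated with the textbook VCG principle the paper builds on: each winning buyer pays the externality it imposes, i.e. the welfare the other participants would have obtained without it, minus the welfare they obtain with it. The sketch below is NOT the paper's NmFLI algorithm (which adds data-quality valuation and a full double-auction design); it is a minimal, self-contained illustration of a VCG payment in a toy double auction where several server parties (buyers) bid for clients' (sellers') data, with all names and numbers hypothetical.

```python
def allocate(bids, asks):
    """Greedy surplus-maximizing matching for a sealed-bid double auction:
    pair the highest buyer bids with the lowest seller asks, as long as
    the bid still covers the ask. Returns a list of (buyer, seller) pairs."""
    buyers = sorted(bids.items(), key=lambda kv: -kv[1])
    sellers = sorted(asks.items(), key=lambda kv: kv[1])
    return [(b, s) for (b, bv), (s, sv) in zip(buyers, sellers) if bv >= sv]

def welfare(matches, bids, asks, exclude=None):
    """Reported social welfare (sum of bid minus ask over matched pairs),
    counting every matched agent except `exclude`."""
    total = 0.0
    for b, s in matches:
        if b != exclude:
            total += bids[b]
        if s != exclude:
            total -= asks[s]
    return total

def vcg_payment(buyer, bids, asks):
    """VCG payment for `buyer`: welfare others would get without it,
    minus the welfare others get when it participates (its externality)."""
    others_bids = {b: v for b, v in bids.items() if b != buyer}
    w_without = welfare(allocate(others_bids, asks), others_bids, asks)
    w_with = welfare(allocate(bids, asks), bids, asks, exclude=buyer)
    return w_without - w_with

# Two server parties bid for one client's data. VCG charges the winner
# the displaced second-highest bid, so truthful bidding is a dominant
# strategy -- the strategy-proofness property the abstract mentions.
bids = {"server_A": 10.0, "server_B": 8.0}
asks = {"client_1": 3.0}
print(vcg_payment("server_A", bids, asks))  # → 8.0
```

With a single bidder the same formula charges the winner exactly the client's ask, and with competing bidders it reduces to a second-price rule, which is why the VCG mechanism makes misreporting one's valuation unprofitable.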
Persistent Identifier: http://hdl.handle.net/10722/351788
ISSN: 0925-2312
2023 Impact Factor: 5.5
2023 SCImago Journal Rankings: 1.815

 

DC Field: Value
dc.contributor.author: Na, Shijie
dc.contributor.author: Liang, Yuzhi
dc.contributor.author: Yiu, Siu Ming
dc.date.accessioned: 2024-11-29T00:35:11Z
dc.date.available: 2024-11-29T00:35:11Z
dc.date.issued: 2024-06-14
dc.identifier.citation: Neurocomputing, 2024, v. 586
dc.identifier.issn: 0925-2312
dc.identifier.uri: http://hdl.handle.net/10722/351788
dc.description.abstract: Federated learning, a privacy-preserving collaborative machine learning paradigm, has led to the proposal of various incentive mechanisms to encourage active participation of data owners. However, most existing mechanisms focus on the monopsony market scenario, where only one server-side entity (buyer) is involved. In real-world scenarios, multiple server parties may express simultaneous interest in the data of a client (seller), leading to a non-monopoly market. This paper aims to bridge this gap by introducing the concept of incentivizing federated learning in a non-monopoly market and presents a non-monopoly federated learning incentive mechanism, coined as NmFLI. NmFLI employs a double-auction mechanism to implement federated learning incentives and utilizes the Vickrey–Clarke–Groves (VCG) mechanism to ensure client trustworthiness. Additionally, NmFLI devises a method for measuring data quality by calculating the value of clients based on their historical performance, which effectively balances accuracy and computational complexity. We demonstrate that NmFLI possesses properties such as individual rationality and strategy-proofness. Experimental results indicate that NmFLI can effectively incentivize federated learning and achieve higher accuracy than baseline models across various scenarios. For example, when the objectives of various tasks overlap, NmFLI outperforms the best baseline by 3.09% with imbalanced client data while maintaining the same data size. Moreover, NmFLI surpasses the best baseline by 6.12% with different amounts of client data.
dc.language: eng
dc.publisher: Elsevier
dc.relation.ispartof: Neurocomputing
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Federated learning
dc.subject: Incentive mechanism
dc.subject: Non-monopoly
dc.title: A federated learning incentive mechanism in a non-monopoly market
dc.type: Article
dc.identifier.doi: 10.1016/j.neucom.2024.127630
dc.identifier.scopus: eid_2-s2.0-85189935124
dc.identifier.volume: 586
dc.identifier.eissn: 1872-8286
dc.identifier.issnl: 0925-2312
