Conference Paper: STDLens: Model Hijacking-Resilient Federated Learning for Object Detection
| Title | STDLens: Model Hijacking-Resilient Federated Learning for Object Detection |
|---|---|
| Authors | Chow, Ka-Ho; Liu, Ling; Wei, Wenqi; Ilhan, Fatih; Wu, Yanzhao |
| Keywords | accountability; ethics in vision; fairness; privacy; Transparency |
| Issue Date | 17-Jun-2023 |
| Publisher | IEEE |
| Abstract | Federated Learning (FL) has been gaining popularity as a collaborative learning framework to train deep learning-based object detection models over a distributed population of clients. Despite its advantages, FL is vulnerable to model hijacking. The attacker can control how the object detection system should misbehave by implanting Trojaned gradients using only a small number of compromised clients in the collaborative learning process. This paper introduces STDLens, a principled approach to safeguarding FL against such attacks. We first investigate existing mitigation mechanisms and analyze their failures caused by the inherent errors in spatial clustering analysis on gradients. Based on the insights, we introduce a three-tier forensic framework to identify and expel Trojaned gradients and reclaim the performance over the course of FL. We consider three types of adaptive attacks and demonstrate the robustness of STDLens against advanced adversaries. Extensive experiments show that STDLens can protect FL against different model hijacking attacks and outperform existing methods in identifying and removing Trojaned gradients with significantly higher precision and much lower false-positive rates. The source code is available at https://github.com/git-disl/STDLens. |
| Persistent Identifier | http://hdl.handle.net/10722/359008 |
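The abstract attributes the failures of existing defenses to inherent errors in spatial clustering analysis on gradients, and describes STDLens as a three-tier forensic framework that identifies and expels Trojaned gradients over the course of FL. As a rough illustration of the underlying clustering idea only (not the STDLens method itself; see the paper and the linked repository for the actual framework), the sketch below clusters flattened per-client gradient updates and flags the minority cluster as suspicious. The two-cluster count, Euclidean k-means, and minority-cluster heuristic are all illustrative assumptions.

```python
# Illustrative sketch of clustering-based Trojaned-gradient filtering in FL.
# NOT the STDLens algorithm: the cluster count, distance metric, and the
# minority-cluster heuristic are assumptions made for this example only.
import numpy as np
from sklearn.cluster import KMeans

def flag_suspicious_clients(client_grads: np.ndarray) -> np.ndarray:
    """Cluster per-client gradient updates and flag the minority cluster.

    client_grads has shape (num_clients, grad_dim): one flattened gradient
    update per client in the current FL round. Returns a boolean mask where
    True marks a client whose update looks Trojaned.
    """
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(client_grads)
    # The threat model assumes only a small number of compromised clients,
    # so treat the smaller cluster as the suspicious one.
    minority = int(np.argmin(np.bincount(labels, minlength=2)))
    return labels == minority

# Toy usage: 18 benign clients plus 2 clients submitting shifted updates.
rng = np.random.default_rng(0)
grads = np.vstack([rng.normal(0.0, 1.0, (18, 64)),
                   rng.normal(5.0, 1.0, (2, 64))])
mask = flag_suspicious_clients(grads)
aggregate = grads[~mask].mean(axis=0)  # average only the retained updates
print("flagged clients:", np.flatnonzero(mask))
```

Plain two-way clustering of this sort is exactly what the paper argues is error-prone, and it offers no defense against the adaptive attacks the paper considers; STDLens layers additional forensic analysis on top to reduce false positives and reclaim model performance.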
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Chow, Ka-Ho | - |
| dc.contributor.author | Liu, Ling | - |
| dc.contributor.author | Wei, Wenqi | - |
| dc.contributor.author | Ilhan, Fatih | - |
| dc.contributor.author | Wu, Yanzhao | - |
| dc.date.accessioned | 2025-08-19T00:32:04Z | - |
| dc.date.available | 2025-08-19T00:32:04Z | - |
| dc.date.issued | 2023-06-17 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/359008 | - |
| dc.description.abstract | Federated Learning (FL) has been gaining popularity as a collaborative learning framework to train deep learning-based object detection models over a distributed population of clients. Despite its advantages, FL is vulnerable to model hijacking. The attacker can control how the object detection system should misbehave by implanting Trojaned gradients using only a small number of compromised clients in the collaborative learning process. This paper introduces STDLens, a principled approach to safeguarding FL against such attacks. We first investigate existing mitigation mechanisms and analyze their failures caused by the inherent errors in spatial clustering analysis on gradients. Based on the insights, we introduce a three-tier forensic framework to identify and expel Trojaned gradients and reclaim the performance over the course of FL. We consider three types of adaptive attacks and demonstrate the robustness of STDLens against advanced adversaries. Extensive experiments show that STDLens can protect FL against different model hijacking attacks and outperform existing methods in identifying and removing Trojaned gradients with significantly higher precision and much lower false-positive rates. The source code is available at https://github.com/git-disl/STDLens. | - |
| dc.language | eng | - |
| dc.publisher | IEEE | - |
| dc.relation.ispartof | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (17/06/2023-24/06/2023, Vancouver, BC, Canada) | - |
| dc.subject | accountability | - |
| dc.subject | ethics in vision | - |
| dc.subject | fairness | - |
| dc.subject | privacy | - |
| dc.subject | Transparency | - |
| dc.title | STDLens: Model Hijacking-Resilient Federated Learning for Object Detection | - |
| dc.type | Conference_Paper | - |
| dc.identifier.doi | 10.1109/CVPR52729.2023.01568 | - |
| dc.identifier.scopus | eid_2-s2.0-85172429434 | - |
| dc.identifier.spage | 16343 | - |
| dc.identifier.epage | 16351 | - |
