Conference Paper: FOOLING DETECTION ALONE IS NOT ENOUGH: ADVERSARIAL ATTACK AGAINST MULTIPLE OBJECT TRACKING
Field | Value |
---|---|
Title | FOOLING DETECTION ALONE IS NOT ENOUGH: ADVERSARIAL ATTACK AGAINST MULTIPLE OBJECT TRACKING |
Authors | Jia, Yunhan; Lu, Yantao; Shen, Junjie; Chen, Qi Alfred; Chen, Hao; Zhong, Zhenyu; Wei, Tao |
Issue Date | 2020 |
Citation | 8th International Conference on Learning Representations, ICLR 2020, 2020 |
Abstract | Recent work in adversarial machine learning has started to focus on visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models. However, in such a visual perception pipeline, the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories of surrounding obstacles. Since MOT is designed to be robust against errors in object detection, it poses a general challenge to existing attack techniques that blindly target object detection: we find that a success rate of over 98% is needed for them to actually affect the tracking results, a requirement that no existing attack technique can satisfy. In this paper, we are the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, and discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection. Using our technique, successful AEs on as few as a single frame can move an existing object into or out of the headway of an autonomous vehicle to cause potential safety hazards. We evaluate our attack using the Berkeley Deep Drive dataset and find that, on average, when 3 frames are attacked, our attack achieves a nearly 100% success rate, while attacks that blindly target object detection reach at most 25%. |
Persistent Identifier | http://hdl.handle.net/10722/346900 |
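The abstract's core observation can be illustrated with a toy single-object tracker. This is a minimal sketch under assumptions (1-D boxes, a constant-velocity motion model, an assumed IoU association gate of 0.3, and an assumed "reserve age" that lets lost tracks coast), not the paper's actual MOT pipeline: erasing a detection for one frame leaves the track intact, while a single carefully shifted fake detection poisons the velocity estimate and hijacks the track away from the real object.

```python
def iou(a, b):
    """IoU of 1-D intervals (x_left, x_right) -- enough for a toy example."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

class Track:
    MATCH_THRESH = 0.3   # assumed IoU gate for detection-track association

    def __init__(self, box):
        self.box, self.vel, self.lost = box, 0.0, 0

    def step(self, detections):
        # Predict the next position with the current velocity estimate.
        pred = (self.box[0] + self.vel, self.box[1] + self.vel)
        best = max(detections, key=lambda d: iou(pred, d), default=None)
        if best is not None and iou(pred, best) >= self.MATCH_THRESH:
            self.vel = best[0] - self.box[0]   # velocity from displacement
            self.box, self.lost = best, 0
        else:
            # No match: coast on the prediction (MOT's robustness to
            # detection errors -- the track survives missed frames).
            self.box, self.lost = pred, self.lost + 1
        return self.box

def run(frame5_dets):
    """A 20 px-wide car moving +2 px/frame; frame 5 is the attacked frame."""
    t = Track((0.0, 20.0))
    for x in (2, 4, 6, 8):                  # frames 1-4: clean detections
        t.step([(float(x), float(x + 20))])
    t.step(frame5_dets)                     # frame 5: attacker controls this
    t.step([(12.0, 32.0)])                  # frame 6: real detection returns
    return t.box

# Blindly erasing the detection for one frame: the track coasts and
# re-associates with the real car a frame later.
print(run([]))             # -> (12.0, 32.0)
# A single fake detection shifted by 8 px still passes the IoU gate but
# poisons the velocity; the track then drifts away and the real car no
# longer matches it.
print(run([(16.0, 36.0)])) # -> (24.0, 44.0)
```

The fake box must overlap the predicted position enough to win association while displacing it as far as possible, which is why the abstract's attack needs only a few frames rather than a near-perfect per-frame detector fooling rate.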
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jia, Yunhan | - |
dc.contributor.author | Lu, Yantao | - |
dc.contributor.author | Shen, Junjie | - |
dc.contributor.author | Chen, Qi Alfred | - |
dc.contributor.author | Chen, Hao | - |
dc.contributor.author | Zhong, Zhenyu | - |
dc.contributor.author | Wei, Tao | - |
dc.date.accessioned | 2024-09-17T04:14:03Z | - |
dc.date.available | 2024-09-17T04:14:03Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | 8th International Conference on Learning Representations, ICLR 2020, 2020 | - |
dc.identifier.uri | http://hdl.handle.net/10722/346900 | - |
dc.description.abstract | Recent work in adversarial machine learning has started to focus on visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models. However, in such a visual perception pipeline, the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories of surrounding obstacles. Since MOT is designed to be robust against errors in object detection, it poses a general challenge to existing attack techniques that blindly target object detection: we find that a success rate of over 98% is needed for them to actually affect the tracking results, a requirement that no existing attack technique can satisfy. In this paper, we are the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, and discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection. Using our technique, successful AEs on as few as a single frame can move an existing object into or out of the headway of an autonomous vehicle to cause potential safety hazards. We evaluate our attack using the Berkeley Deep Drive dataset and find that, on average, when 3 frames are attacked, our attack achieves a nearly 100% success rate, while attacks that blindly target object detection reach at most 25%. | - |
dc.language | eng | - |
dc.relation.ispartof | 8th International Conference on Learning Representations, ICLR 2020 | - |
dc.title | FOOLING DETECTION ALONE IS NOT ENOUGH: ADVERSARIAL ATTACK AGAINST MULTIPLE OBJECT TRACKING | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85093853014 | - |