Links for fulltext (May Require Subscription):
- Publisher Website (DOI): 10.1109/TPAMI.2024.3380604
- Scopus: eid_2-s2.0-85188896489
Citations:
- Scopus: 0

Appears in Collections: Article

Gradient-Based Instance-Specific Visual Explanations for Object Specification and Object Discrimination
Title | Gradient-Based Instance-Specific Visual Explanations for Object Specification and Object Discrimination |
---|---|
Authors | Zhao, Chenyang; Hsiao, Janet H; Chan, Antoni B |
Keywords | Deep learning explainable AI explaining object detection gradient-based explanation human eye gaze instance-level explanation knowledge distillation non-maximum suppression object discrimination object specification |
Issue Date | 22-Mar-2024 |
Publisher | Institute of Electrical and Electronics Engineers |
Citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, v. 46, n. 9, p. 5967-5985 |
Abstract | We propose the gradient-weighted Object Detector Activation Maps (ODAM), a visual explanation technique for interpreting the predictions of object detectors. Utilizing the gradients of detector targets flowing into the intermediate feature maps, ODAM produces heat maps that show the influence of regions on the detector's decision for each predicted attribute. Compared to previous works on classification activation maps (CAM), ODAM generates instance-specific explanations rather than class-specific ones. We show that ODAM is applicable to one-stage, two-stage, and transformer-based detectors with different types of detector backbones and heads, and produces higher-quality visual explanations than the state-of-the-art in terms of both effectiveness and efficiency. We discuss two explanation tasks for object detection: 1) object specification: what is the important region for the prediction? 2) object discrimination: which object is detected? Aiming at these two aspects, we present a detailed analysis of the visual explanations of detectors and carry out extensive experiments to validate the effectiveness of the proposed ODAM. Furthermore, we investigate user trust in the explanation maps, how well the visual explanations of object detectors agree with human explanations, as measured through human eye gaze, and whether this agreement is related to user trust. Finally, we also propose two applications, ODAM-KD and ODAM-NMS, based on these two abilities of ODAM. ODAM-KD utilizes the object specification of ODAM to generate top-down attention for key predictions and instruct the knowledge distillation of object detection. ODAM-NMS considers the location of the model's explanation for each prediction to distinguish duplicate detected objects. A training scheme, ODAM-Train, is proposed to improve the quality of object discrimination and help with ODAM-NMS. |
Persistent Identifier | http://hdl.handle.net/10722/351192 |
ISSN | 0162-8828 |
2023 Impact Factor | 20.8 |
2023 SCImago Journal Rankings | 6.158 |
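The abstract describes ODAM as a gradient-weighted explanation: gradients of a predicted attribute are propagated to an intermediate feature map and used to weight the activations, giving an instance-specific heat map. The sketch below illustrates this idea in NumPy under simplifying assumptions (element-wise gradient weighting followed by a channel sum and ReLU); the function name, tensor shapes, and normalisation are illustrative, not the paper's exact formulation, and obtaining the real gradients requires a detector and autograd framework.

```python
import numpy as np

def gradient_weighted_heatmap(feature_maps, gradients):
    """Instance-specific heat map sketch (Grad-CAM/ODAM-style).

    feature_maps: (C, H, W) activations from an intermediate layer.
    gradients:    (C, H, W) gradients of one predicted attribute
                  (e.g. the instance's class score) w.r.t. those activations.
    """
    # Element-wise gradient weighting keeps per-location detail, which is
    # what makes the explanation instance-specific rather than class-wide.
    weighted = gradients * feature_maps              # (C, H, W)
    heatmap = np.maximum(weighted.sum(axis=0), 0.0)  # ReLU over channel sum
    if heatmap.max() > 0:
        heatmap /= heatmap.max()                     # normalise to [0, 1]
    return heatmap

# Toy example with random activations and gradients.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 4))
G = rng.standard_normal((3, 4, 4))
H = gradient_weighted_heatmap(A, G)
print(H.shape)  # (4, 4)
```

In practice the heat map would be upsampled to the input image size and overlaid on the detected instance's region.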
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhao, Chenyang | - |
dc.contributor.author | Hsiao, Janet H | - |
dc.contributor.author | Chan, Antoni B | - |
dc.date.accessioned | 2024-11-13T00:36:06Z | - |
dc.date.available | 2024-11-13T00:36:06Z | - |
dc.date.issued | 2024-03-22 | - |
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, v. 46, n. 9, p. 5967-5985 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | http://hdl.handle.net/10722/351192 | - |
dc.description.abstract | <p>We propose the gradient-weighted Object Detector Activation Maps (ODAM), a visual explanation technique for interpreting the predictions of object detectors. Utilizing the gradients of detector targets flowing into the intermediate feature maps, ODAM produces heat maps that show the influence of regions on the detector's decision for each predicted attribute. Compared to previous works on classification activation maps (CAM), ODAM generates instance-specific explanations rather than class-specific ones. We show that ODAM is applicable to one-stage, two-stage, and transformer-based detectors with different types of detector backbones and heads, and produces higher-quality visual explanations than the state-of-the-art in terms of both effectiveness and efficiency. We discuss two explanation tasks for object detection: 1) object specification: what is the important region for the prediction? 2) object discrimination: which object is detected? Aiming at these two aspects, we present a detailed analysis of the visual explanations of detectors and carry out extensive experiments to validate the effectiveness of the proposed ODAM. Furthermore, we investigate user trust in the explanation maps, how well the visual explanations of object detectors agree with human explanations, as measured through human eye gaze, and whether this agreement is related to user trust. Finally, we also propose two applications, ODAM-KD and ODAM-NMS, based on these two abilities of ODAM. ODAM-KD utilizes the object specification of ODAM to generate top-down attention for key predictions and instruct the knowledge distillation of object detection. ODAM-NMS considers the location of the model's explanation for each prediction to distinguish duplicate detected objects. A training scheme, ODAM-Train, is proposed to improve the quality of object discrimination and help with ODAM-NMS.</p> | - |
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Deep learning | - |
dc.subject | explainable AI | - |
dc.subject | explaining object detection | - |
dc.subject | gradient-based explanation | - |
dc.subject | human eye gaze | - |
dc.subject | instance-level explanation | - |
dc.subject | knowledge distillation | - |
dc.subject | non-maximum suppression | - |
dc.subject | object discrimination | - |
dc.subject | object specification | - |
dc.title | Gradient-Based Instance-Specific Visual Explanations for Object Specification and Object Discrimination | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/TPAMI.2024.3380604 | - |
dc.identifier.scopus | eid_2-s2.0-85188896489 | - |
dc.identifier.volume | 46 | - |
dc.identifier.issue | 9 | - |
dc.identifier.spage | 5967 | - |
dc.identifier.epage | 5985 | - |
dc.identifier.eissn | 1939-3539 | - |
dc.identifier.issnl | 0162-8828 | - |