
Conference Paper: Human Attention-Guided Explainable AI for Object Detection

Title: Human Attention-Guided Explainable AI for Object Detection
Authors: Liu, G; Zhang, J; Chan, A; Hsiao, J H
Issue Date: 26-Jul-2023
Abstract

Although object detection AI plays an important role in many critical systems, corresponding Explainable AI (XAI) methods remain very limited. Here we first developed FullGrad-CAM and FullGrad-CAM++ by extending traditional gradient-based methods to generate object-specific explanations with higher plausibility. Since human attention may reflect features more interpretable to humans, we explored the possibility of using it as guidance to learn how to combine the explanatory information in the detector model to best present it as an XAI saliency map that is interpretable (plausible) to humans. Interestingly, we found that human attention maps had higher faithfulness for explaining the detector model than existing saliency-based XAI methods. By using trainable activation functions and smoothing kernels to maximize the XAI saliency map similarity to human attention maps, the generated map had higher faithfulness and plausibility than both existing XAI methods and human attention maps. The learned functions were model-specific but generalized well to other databases.
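The presentation step described in the abstract — passing a raw saliency map through a trainable activation function and a smoothing kernel, with parameters chosen to maximize similarity to a human attention map — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the power-function activation, the Gaussian kernel, the Pearson-correlation similarity score, the grid search in place of gradient-based training, and all function names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius=3):
    # 1-D Gaussian kernel, normalized to sum to 1 (illustrative smoothing kernel)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth(m, sigma):
    # Separable Gaussian blur: convolve each row, then each column
    k = gaussian_kernel(sigma)
    m = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, m)
    m = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, m)
    return m

def plausibility(xai_map, human_map):
    # Pearson correlation between flattened maps, a common plausibility proxy
    a = xai_map.ravel() - xai_map.mean()
    b = human_map.ravel() - human_map.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def fit_presentation(raw_map, human_map, alphas, sigmas):
    # Grid search over (activation exponent, smoothing width) — a stand-in
    # for the paper's gradient-based training of these components
    best_params, best_score = None, -np.inf
    for alpha in alphas:
        for sigma in sigmas:
            cand = smooth(np.maximum(raw_map, 0) ** alpha, sigma)
            score = plausibility(cand, human_map)
            if score > best_score:
                best_params, best_score = (alpha, sigma), score
    return best_params, best_score
```

Once the parameters are fit on one set of images, the same learned presentation can be applied to raw saliency maps from other databases, which is the generalization the abstract reports.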


Persistent Identifier: http://hdl.handle.net/10722/337707

 

DC Field | Value | Language
dc.contributor.author | Liu, G | -
dc.contributor.author | Zhang, J | -
dc.contributor.author | Chan, A | -
dc.contributor.author | Hsiao, J H | -
dc.date.accessioned | 2024-03-11T10:23:15Z | -
dc.date.available | 2024-03-11T10:23:15Z | -
dc.date.issued | 2023-07-26 | -
dc.identifier.uri | http://hdl.handle.net/10722/337707 | -
dc.description.abstract | <p>Although object detection AI plays an important role in many critical systems, corresponding Explainable AI (XAI) methods remain very limited. Here we first developed FullGrad-CAM and FullGrad-CAM++ by extending traditional gradient-based methods to generate object-specific explanations with higher plausibility. Since human attention may reflect features more interpretable to humans, we explored the possibility of using it as guidance to learn how to combine the explanatory information in the detector model to best present it as an XAI saliency map that is interpretable (plausible) to humans. Interestingly, we found that human attention maps had higher faithfulness for explaining the detector model than existing saliency-based XAI methods. By using trainable activation functions and smoothing kernels to maximize the XAI saliency map similarity to human attention maps, the generated map had higher faithfulness and plausibility than both existing XAI methods and human attention maps. The learned functions were model-specific but generalized well to other databases.</p> | -
dc.language | eng | -
dc.relation.ispartof | 44th Annual Meeting of the Cognitive Science Society (26/07/2023-29/07/2023, Sydney) | -
dc.title | Human Attention-Guided Explainable AI for Object Detection | -
dc.type | Conference_Paper | -
dc.identifier.issue | 45 | -
dc.identifier.spage | 2573 | -
dc.identifier.epage | 2580 | -
