Identifying Confusion Trends in Concept-based XAI for Multi-Label Classification

Authors:
Haadia Amjad
Kilian Göller
Steffen Seitz
Carsten Knoll
Ronald Tetzlaff

Keywords: Concept-based XAI; Multi-Label Classification; Concept Distinctiveness.

Abstract:
Deep Neural Networks (DNNs) deployed in high-risk domains, such as healthcare and autonomous driving, must be not only accurate but also understandable to ensure user trust. In real-world computer vision tasks, these models often operate on complex, heavily annotated images containing background noise. To make such models explainable, Concept-based Explainable AI (CXAI) methods need to be assessed for their applicability and problem-solving capacity. In this work, we explore CXAI use cases in multi-label classification by training two DNNs, VGG16 and ResNet50, on the 20 most annotated labels in the MS-COCO (Microsoft Common Objects in Context) dataset. We apply two CXAI methods, Concept Relevance Propagation (CRP) and Concept Recursive Activation FacTorization (CRAFT), to generate concept-level explanations and analyze the resulting evaluations. Our analysis reveals three key findings: (1) CXAI highlights learning weaknesses in DNNs, (2) higher concept distinctiveness reduces label and concept confusion, and (3) environmental concepts expose dataset-induced biases. Our results demonstrate the potential of CXAI to enhance the understanding of model generalizability and to diagnose bias introduced by the dataset.
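
A minimal sketch of the multi-label training setup described above, assuming PyTorch/torchvision with a pretrained ResNet50; the choice of framework, pretrained weights, optimizer, learning rate, and 0.5 decision threshold are illustrative assumptions, not details taken from the paper:

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_LABELS = 20  # the 20 most annotated MS-COCO categories

    # Swap the 1000-way ImageNet head for a 20-way multi-label head
    # (pretrained weights are an assumption; the paper does not specify).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)

    # Multi-label targets are independent 0/1 indicators per class, so
    # sigmoid + binary cross-entropy replaces softmax cross-entropy.
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative

    def train_step(images, targets):
        # images: (batch, 3, H, W); targets: (batch, 20) float 0/1 tensor
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Inference: threshold each class probability independently
    # (0.5 is a common but illustrative cutoff).
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(torch.randn(1, 3, 224, 224)))
        predicted_labels = (probs > 0.5).nonzero(as_tuple=True)[1]

Unlike single-label classification, each of the 20 labels is predicted independently, which is what makes the per-label and per-concept confusion analysis in the paper possible.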

Pages: 1 to 7

Copyright: (c) IARIA, 2025

Publication date: October 26, 2025

Published in: EXPLAINABILITY 2025, The Second International Conference on Systems Explainability

ISBN: 978-1-68558-318-7

Location: Barcelona, Spain

Dates: from October 26, 2025 to October 30, 2025