On Robustifying Concept Explanations
EasyChair Preprint 11000, version 2
7 pages • Date: October 4, 2023

Abstract

With the increasing use of deep learning models, understanding and diagnosing their predictions is becoming increasingly important. A common approach for understanding the predictions of deep networks is concept explanations. Concept explanations are a form of global interpretability that aims to interpret a deep network's output using human-understandable concepts. However, prevailing concept explanation methods are not robust to the concepts or datasets chosen for explanation computation.

Keyphrases: Explainable AI, concept bottleneck, concept explanations, uncertainty