Estimation of Concept Explanations Should Be Uncertainty Aware

EasyChair Preprint 11000, version 1
10 pages • Date: October 1, 2023

Abstract

As deep learning models see increasing use, understanding and diagnosing their predictions is becoming ever more important. A common approach for understanding the predictions of deep nets is Concept Explanations. Concept explanations are a form of global explanation that aims to interpret a deep network's output using human-understandable concepts. However, prevailing concept explanation methods are not robust to the concepts or datasets chosen for explanation computation.

Keyphrases: Explainable AI, concept bottleneck, concept explanations, uncertainty