Friday, October 20, 2023

Study presents new method for explainable AI: Concept Relevance Propagation (CRP)

In their paper "From attribution maps to human-understandable explanations through concept relevance propagation," researchers from the Fraunhofer Heinrich-Hertz-Institut (HHI) and the Berlin Institute for the Foundations of Learning and Data (BIFOLD) introduce concept relevance propagation (CRP), a new method that explains individual AI decisions in terms of concepts understandable to humans.


The paper has now been published in Nature Machine Intelligence.


AI systems are largely black boxes: it is usually not comprehensible to humans how an AI arrives at a given decision. CRP is a state-of-the-art explanatory method for deep neural networks that complements and extends existing explanatory models. CRP reveals not only which characteristics of the input are relevant to the decision, but also the concepts the AI used, where those concepts are represented in the input, and which parts of the neural network are responsible for them.
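The core idea can be illustrated with a toy sketch. Classic layer-wise relevance propagation (LRP) redistributes a prediction score backward through the network onto the input; CRP additionally conditions this backward pass on a chosen hidden unit, so the resulting heatmap shows where that single "concept" contributed. The code below is a minimal, hypothetical illustration using a tiny random two-layer network and the LRP-epsilon rule, not the authors' implementation; all weights, shapes, and the choice of concept neuron are made-up assumptions.

```python
import numpy as np

def lrp_eps(a, W, R_out, eps=1e-6):
    """One LRP-epsilon backward step: redistribute the relevance R_out of
    a layer's outputs onto its inputs a, where outputs = W @ a."""
    z = W @ a
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilize near-zero terms
    s = R_out / z
    return a * (W.T @ s)

# Tiny 2-layer ReLU network with random weights (illustrative only).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden layer: 4 candidate "concept" units
W2 = rng.normal(size=(2, 4))   # output layer: 2 classes

x = np.array([1.0, -0.5, 2.0])
h = np.maximum(W1 @ x, 0.0)    # hidden activations
y = W2 @ h                     # class scores

# Standard LRP: start the backward pass from the predicted class score.
R_y = np.zeros_like(y)
R_y[np.argmax(y)] = y[np.argmax(y)]
R_h = lrp_eps(h, W2, R_y)      # relevance of each hidden unit

# CRP-style conditioning: keep relevance for one chosen hidden unit (the
# "concept") and zero out the rest before propagating to the input. The
# resulting heatmap attributes the decision to that concept alone.
concept = 2                    # arbitrary choice for illustration
R_h_cond = np.zeros_like(R_h)
R_h_cond[concept] = R_h[concept]
R_x = lrp_eps(x, W1, R_h_cond) # input-space heatmap for this concept

print(R_x)
```

Running the same conditioned pass for each hidden unit in turn decomposes the ordinary LRP heatmap into per-concept heatmaps, which is the sense in which CRP localizes concepts in the input.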


Further reading and full article: "From attribution maps to human-understandable explanations through Concept Relevance Propagation"