Artificial intelligence (AI) is increasingly used in high-stakes decision-making areas such as criminal justice, autonomous vehicles, food safety, and radiology. In radiology, the current state of the art in AI is deep learning, which relies on complex neural networks. These networks are often called “black boxes” because their decision-making process is not fully understandable, which raises concerns about potential biases.
To address this, there is a growing demand for explainable artificial intelligence (XAI). Notable XAI initiatives include DARPA’s XAI program and the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). In medical imaging, the annual iMIMIC workshop at the MICCAI conference is dedicated to improving the interpretability of machine intelligence.
Current XAI Status
Current XAI techniques in radiology typically provide a visual explanation, a textual explanation, an example-based explanation, or a combination of these. Visual explanations usually take the form of a “heatmap” or “saliency map” that highlights the image regions on which the algorithm based its decision; they are currently by far the most widely used XAI technique in radiology. Textual explanations provide written descriptions, ranging from relatively simple phrases such as “hyperintense lesion” up to entire medical reports. Example-based explanations present relevant examples to show how a neural network reached its decision, similar to how a radiologist leverages past cases to analyze the case at hand.
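To make the idea of a saliency map concrete, here is a minimal sketch of a gradient-based heatmap (“vanilla gradients”) in PyTorch. The model file, input shape, and class index are illustrative assumptions; real radiology pipelines often use more elaborate attribution methods such as Grad-CAM.

```python
# Minimal sketch: gradient-based saliency map for a trained PyTorch classifier.
# Assumes a preprocessed image tensor of shape (1, C, H, W); names are placeholders.
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a per-pixel importance map for `target_class`, normalized to [0, 1]."""
    model.eval()
    image = image.clone().requires_grad_(True)   # track gradients w.r.t. the input pixels
    scores = model(image)                        # forward pass: (1, num_classes) logits
    scores[0, target_class].backward()           # backprop the score of the class of interest
    # Importance = gradient magnitude, reduced over the channel dimension
    saliency = image.grad.detach().abs().max(dim=1).values   # shape (1, H, W)
    return saliency / (saliency.max() + 1e-8)

# Example usage (hypothetical chest X-ray classifier):
# model = torch.load("cxr_classifier.pt")
# heatmap = saliency_map(model, xray_tensor, target_class=1)
```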
Future XAI Potential
An important step is to evaluate how well an XAI technique performs. Several evaluation methods exist in computer vision, but these do not translate directly to radiology. Therefore, “Clinical XAI Guidelines” have recently been proposed to evaluate XAI techniques in medical imaging based on five criteria: (1) understandability, (2) clinical relevance, (3) truthfulness, (4) informative plausibility, and (5) computational efficiency. Sixteen commonly used visual explanation techniques were evaluated against these criteria on radiological tasks; none of them met all five. This further reinforces the need for explainable-by-design methods, which integrate explainability into AI models from their initial development stages.
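As a rough illustration of how one of these criteria might be quantified, the sketch below scores a heatmap’s agreement with a radiologist-annotated lesion mask, one possible proxy for informative plausibility. The threshold and the intersection-over-union metric are illustrative assumptions, not the procedure prescribed by the guidelines.

```python
# Sketch: overlap between the highly salient region and a lesion annotation,
# one possible proxy for the "informative plausibility" of a visual explanation.
import numpy as np

def plausibility_iou(saliency: np.ndarray, lesion_mask: np.ndarray, threshold: float = 0.5) -> float:
    """Intersection-over-union between the binarized heatmap and the annotation.

    saliency: float array in [0, 1], same shape as lesion_mask.
    lesion_mask: boolean array marking the clinically relevant region.
    """
    highlighted = saliency >= threshold                       # keep strongly highlighted pixels
    intersection = np.logical_and(highlighted, lesion_mask).sum()
    union = np.logical_or(highlighted, lesion_mask).sum()
    return float(intersection / union) if union > 0 else 0.0

# Example usage:
# iou = plausibility_iou(heatmap.squeeze(), mask.astype(bool))
```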
In summary, explainable artificial intelligence (XAI) is a young, rapidly evolving, and exciting field. It is essential for us as a community to actively contribute to the direction of XAI in radiology. By deciding together on the criteria and aspects that should be prioritized, we can shape the future development of XAI techniques in radiology. This involvement ensures that the emphasis is placed on the specific needs and challenges of the radiology domain, enabling us to create personalized XAI that aligns with the needs of clinicians, radiologists, and patients, while complying with regulatory standards.
Read the original study by Bas van der Velden: Explainable AI: current status and future potential
Article originally posted on Medium.