Enhancing interpretability and bias control in deep learning models for medical image analysis using generative AI
Carlos Minutti-Martinez, Boris Escalante-Ramírez, and Jimena Olveres
Presentation + Paper, 18 June 2024
Abstract
Explainability and bias mitigation are crucial aspects of deep learning (DL) models for medical image analysis. Generative AI, particularly autoencoders, can enhance explainability by analyzing the latent space to identify and control variables that contribute to biases. By manipulating the latent space, biases can be mitigated in the classification layer. Furthermore, the latent space can be visualized to provide a more intuitive understanding of the model's decision-making process. In our work, we demonstrate how the proposed approach enhances the explainability of the decision-making process, surpassing the capabilities of traditional methods such as Grad-CAM. Our approach identifies and mitigates biases in a straightforward manner, without requiring model retraining or dataset modification. This shows how generative AI can play a pivotal role in addressing explainability and bias-mitigation challenges, enhancing the trustworthiness and clinical utility of DL-powered medical image analysis tools.
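The abstract describes probing an autoencoder's latent space for variables linked to bias and suppressing them before classification. The sketch below illustrates that general idea in PyTorch; the architecture, the correlation-based probe, and the binary bias attribute (e.g., acquisition site) are hypothetical choices for illustration, not the authors' implementation.

```python
# Minimal, hypothetical sketch: find latent dimensions correlated with a known
# bias attribute and neutralize them before the classification head sees them.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=64 * 64, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def bias_correlated_dims(z, bias_attr, threshold=0.3):
    """Flag latent dimensions whose correlation with a known bias attribute
    (e.g., scanner/site encoded as 0/1) exceeds a threshold."""
    z_c = z - z.mean(dim=0)
    b_c = bias_attr.float() - bias_attr.float().mean()
    corr = (z_c * b_c.unsqueeze(1)).mean(dim=0) / (z_c.std(dim=0) * b_c.std() + 1e-8)
    return torch.nonzero(corr.abs() > threshold).flatten()

def neutralize(z, dims):
    """Replace bias-correlated dimensions with their batch mean so the
    downstream classifier cannot exploit them."""
    z = z.clone()
    z[:, dims] = z[:, dims].mean(dim=0)
    return z

# Toy usage with random tensors standing in for medical images.
if __name__ == "__main__":
    ae = Autoencoder()
    classifier = nn.Linear(32, 2)            # classification head on the latent code
    x = torch.rand(128, 64 * 64)             # stand-in for flattened images
    bias_attr = torch.randint(0, 2, (128,))  # stand-in bias variable (e.g., scanner)

    _, z = ae(x)
    dims = bias_correlated_dims(z.detach(), bias_attr)
    logits = classifier(neutralize(z, dims))
    print("bias-correlated latent dims:", dims.tolist(), "| logits:", tuple(logits.shape))
```

Because the adjustment acts on latent codes rather than on the training data or the model weights, it requires no retraining and no dataset modification, mirroring the claim in the abstract.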
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Carlos Minutti-Martinez, Boris Escalante-Ramírez, and Jimena Olveres "Enhancing interpretability and bias control in deep learning models for medical image analysis using generative AI", Proc. SPIE 12998, Optics, Photonics, and Digital Technologies for Imaging Applications VIII, 1299806 (18 June 2024); https://doi.org/10.1117/12.3022263
KEYWORDS: Data modeling, Medical imaging, Visualization, Artificial intelligence, Deep learning, Visual process modeling, Image restoration