The increasing deployment of AI in critical sectors necessitates advances in explainable AI (XAI) to ensure the transparency and trustworthiness of AI decisions. This paper introduces a novel methodology that leverages the Virtual Environmental Simulation for Physics-based Analysis (VESPA) framework in conjunction with Randomized Input Sampling for Explanation (RISE) to provide enhanced explainability for AI models, particularly in complex simulated environments. VESPA, known for its high-fidelity, physics-based simulations across diverse conditions, generates a vast dataset encompassing various sensor configurations, environmental factors, and material responses. This dataset serves as the foundation for applying RISE, a model-agnostic approach that generates pixel-level importance maps by probing the AI model with randomly masked versions of the input images. Through this integration, we offer a systematic way to visualize and understand how different environmental elements influence AI decisions. Our approach not only sheds light on the "black box" of AI decision-making but also provides a scalable framework for evaluating the robustness and reliability of AI models under a wide array of simulated scenarios.
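To make the RISE step concrete, the following is a minimal NumPy sketch of the core idea described above: probe a black-box scoring function with randomly masked inputs and average the masks weighted by the resulting scores to obtain a pixel-level importance map. The function name `rise_saliency` and the simple block-upsampling of low-resolution masks (the original method uses smoothed, bilinearly upsampled masks with random shifts) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rise_saliency(image, score_fn, n_masks=1000, cell=8, p=0.5, seed=0):
    """Hypothetical RISE-style saliency sketch.

    image    : H x W grayscale array (H, W divisible by `cell` here).
    score_fn : callable mapping a masked image to a scalar class score
               (stands in for the black-box AI model being explained).
    Random low-resolution binary masks (keep probability p) are
    block-upsampled, applied to the image, and the saliency map is the
    score-weighted average of the masks, normalized by n_masks * p.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # Low-resolution binary mask, upsampled by block repetition.
        grid = (rng.random((cell, cell)) < p).astype(float)
        mask = np.kron(grid, np.ones((H // cell, W // cell)))
        # Weight each mask by the model's score on the masked input.
        saliency += score_fn(image * mask) * mask
    return saliency / (n_masks * p)
```

As a usage sketch, a toy `score_fn` that returns the mean brightness of a known image region yields a map with higher importance over that region than elsewhere, mirroring how the full method localizes the evidence a model relies on.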