
XAI Framework: Explainable AI with SHAP, Grad-CAM, and LRP

Overview

This project presents an Explainable AI (XAI) framework that integrates SHAP, Grad-CAM, and Layer-wise Relevance Propagation (LRP) to provide detailed insights into the decision-making process of machine learning models. By leveraging these complementary XAI techniques, the framework highlights the features that drive model predictions and helps verify that the model is learning meaningful patterns.

The framework has been tested on two datasets:

  1. Brain Tumor Dataset: Medical images to predict and classify brain tumors.
  2. ImageNet Dataset: General-purpose dataset for image classification tasks.

The results confirm that the framework effectively identifies correct and relevant features, demonstrating that the model is learning from meaningful and interpretable patterns.


Features

  • SHAP (SHapley Additive exPlanations):
    Provides global and local interpretability by assigning feature importance values to input data.

  • Grad-CAM (Gradient-weighted Class Activation Mapping):
    Visualizes class-specific discriminative regions in an input image using gradient-based heatmaps.

  • LRP (Layer-wise Relevance Propagation):
    Explains individual predictions by distributing the model output back to its input features.

  • Multi-Dataset Evaluation:
    Validated on diverse datasets (Brain Tumor and ImageNet) to ensure robustness across domains.
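
To make these descriptions concrete, the three snippets below sketch each technique in a minimal form. They are illustrative only and are independent of the repository's own implementations in explainers/; the models, layers, and data are stand-ins. For SHAP on an image classifier, the shap library's GradientExplainer attributes predictions to input pixels relative to a background sample:

```python
import torch
import torch.nn as nn
import shap

# Tiny stand-in CNN and random "images"; in practice these would be the trained
# classifier and real dataset batches.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4),
)
background = torch.randn(20, 3, 32, 32)   # reference distribution for the expectations
test_images = torch.randn(3, 3, 32, 32)   # inputs to explain

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(test_images)   # per-pixel attributions per output class
```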

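Grad-CAM can be sketched with a forward hook (to capture feature maps) and a backward hook (to capture their gradients) on the last convolutional block. The example below uses a torchvision ResNet-50 and a random tensor as a stand-in for a preprocessed image; the layer choice and post-processing are illustrative, not necessarily what this repository does:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1").eval()
image = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed input image

activations, gradients = {}, {}
target_layer = model.layer4[-1]            # last convolutional block
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o.detach()))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0].detach()))

logits = model(image)
logits[0, logits.argmax()].backward()      # gradient of the top-class score

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = F.relu((weights * activations["value"]).sum(dim=1))     # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalised [0, 1] heatmap
```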

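LRP redistributes the output score backwards through the network, layer by layer, in proportion to each neuron's contribution to the next layer's pre-activations, matching the "distributing the model output back to its input features" description above. A minimal epsilon-rule sketch for a small fully connected stack (the convolutional case follows the same idea):

```python
import torch
import torch.nn as nn

def lrp_epsilon(layers, x, eps=1e-6):
    """Epsilon-rule LRP for a stack of Linear/ReLU layers (illustrative only)."""
    activations = [x]
    for layer in layers:                        # forward pass, keeping each layer's input
        activations.append(layer(activations[-1]))

    relevance = activations[-1]                 # start from the output scores
                                                # (in practice, mask to the class of interest)
    for layer, a in zip(reversed(layers), reversed(activations[:-1])):
        if isinstance(layer, nn.Linear):
            z = layer(a) + eps                  # stabilised pre-activations
            s = relevance / z                   # relevance share per output neuron
            relevance = a * (s @ layer.weight)  # redistribute in proportion to a_j * w_kj
        # ReLU layers pass relevance through unchanged under this rule
    return relevance

layers = [nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)]
with torch.no_grad():
    print(lrp_epsilon(layers, torch.rand(1, 4)))   # relevance per input feature
```

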
Results

  1. Brain Tumor Dataset:

    • The framework successfully highlighted tumor regions as critical features in predictions.
    • Interpretations aligned with domain expert expectations, ensuring trustworthiness in medical applications.
  2. ImageNet Dataset:

    • Demonstrated correct feature extraction for object classification tasks, validating model generalization.

Across both datasets, the explanations showed that the underlying models rely on relevant and interpretable features, demonstrating the utility of XAI techniques for assessing model reliability.


Installation

  1. Clone the repository:

    ```bash
    git clone https://github.com/username/XAI-Framework.git
    cd XAI-Framework
    ```

  2. Install dependencies:

    ```bash
    pip install -r requirements.txt
    ```


Usage

  1. Prepare your datasets in the required format (see datasets/ directory for examples).

  2. Run the framework:

    ```bash
    python run_xai.py --dataset brain_tumor --technique gradcam
    ```

    Set --dataset to imagenet, or --technique to shap or lrp, to explore the other options.

  3. Output explanations and visualizations will be saved in the outputs/ directory.


Project Structure

```
XAI-Framework/
├── datasets/           # Sample datasets and preprocessing scripts
├── explainers/         # SHAP, Grad-CAM, and LRP implementations
├── outputs/            # Visualizations and generated explanations
├── utils/              # Utility functions
├── requirements.txt    # Dependencies
├── run_xai.py          # Main execution script
└── README.md           # Project description
```


Contributing

We welcome contributions to improve this framework! Feel free to submit a pull request or open an issue for suggestions or bug reports.


Acknowledgments

  • SHAP: Lundberg and Lee, "A Unified Approach to Interpreting Model Predictions."
  • Grad-CAM: Selvaraju et al., "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization."
  • LRP: Montavon et al., "Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition."

License

This project is licensed under the MIT License. See the LICENSE file for details.
