Welcome to the repository for exploring adversarial attacks and defenses in computer vision models! This project investigates how adversarial examples can manipulate machine learning models and evaluates various simple defenses to improve robustness.
This is a companion piece to the main post here!
This repository contains the Jupyter notebook used to conduct experiments on adversarial attacks, specifically targeting computer vision models like ResNet and Vision Transformers (ViTs). Here, we delve into:
- How adversarial noise impacts model predictions.
- Different defense methods (e.g., compression, blurring, noise addition).
- Comparisons between models of different sizes and architectures.
- Insights into effective mitigation techniques for adversarial attacks.
- `adversarial_experiments.ipynb`: The main notebook containing all code, visualizations, and explanations for:
  - Implementing Basic Iterative Method (BIM) attacks (see the sketch after this list).
  - Evaluating the impact of adversarial examples on different models.
  - Testing various defense methods and analyzing their effectiveness.
- `data/`: Sample images and datasets used for running adversarial attacks and defense tests.
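For reference, here is a minimal sketch of a BIM attack loop similar to what the notebook implements. The function name and the `eps`, `alpha`, and `steps` values are illustrative assumptions rather than the notebook's exact settings, and the sketch assumes the model handles any input normalization internally and that `image` is a `(1, 3, H, W)` tensor in `[0, 1]`.

```python
import torch
import torch.nn.functional as F

def bim_attack(model, image, label, eps=8/255, alpha=2/255, steps=10):
    """Basic Iterative Method: repeated small FGSM steps, projected to an eps-ball."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        # Step in the direction that increases the loss.
        adv = adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the original image and the valid pixel range.
        adv = torch.min(torch.max(adv, image - eps), image + eps).clamp(0.0, 1.0)
    return adv
```

In the notebook's workflow, the adversarial image produced this way is then fed back through the model, and through each defense, so predictions can be compared against the clean input.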
To run the notebook, you'll need the following Python packages:
- numpy
- pytorch
- matplotlib
- torchvision
- scikit-image
- jupyterlab or notebook
Install dependencies using:
`pip install -r requirements.yaml`
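If you prefer to install the packages individually, note that PyTorch is published on PyPI as `torch`, so `pip install numpy torch torchvision matplotlib scikit-image jupyterlab` covers the list above.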
To run the experiments:
- Clone this repository: `git clone https://github.com/yourusername/adversarial-attacks-defense.git`
- Navigate to the project directory: `cd adversarial-attacks-defense`
- Launch Jupyter Lab or Notebook: `jupyter lab`
- Open and run the `adversarial_experiments.ipynb` notebook.
The notebook is structured to guide you through each stage of the experiments: loading data, applying adversarial attacks, running different defense strategies, and visualizing the results.
- Sections to Explore:
  - Introduction to Adversarial Attacks: Basic concepts and examples of adversarial noise.
  - Experiment Setup: Details on the models used, the data, and the attack implementation.
  - Defense Methods Evaluation: Testing simple defenses like blurring, compression, and adding noise (a sketch of these transforms follows this list).
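To give a concrete sense of the defenses named above, here is a hedged sketch of the three input transformations. The blur sigma, JPEG quality, and noise scale are illustrative assumptions, images are assumed to be `H x W x 3` float arrays in `[0, 1]`, and Pillow (installed alongside torchvision) is assumed to be available for the JPEG round-trip.

```python
import io

import numpy as np
from PIL import Image
from skimage.filters import gaussian

def blur_defense(img, sigma=1.0):
    """Smooth the image with a Gaussian filter to wash out high-frequency noise."""
    return gaussian(img, sigma=sigma, channel_axis=-1)

def jpeg_defense(img, quality=75):
    """Round-trip the image through lossy JPEG compression."""
    pil = Image.fromarray((img * 255).astype(np.uint8))
    buf = io.BytesIO()
    pil.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf)).astype(np.float32) / 255.0

def noise_defense(img, scale=0.05):
    """Add small random Gaussian noise, then clip back to the valid range."""
    return np.clip(img + np.random.normal(0.0, scale, img.shape), 0.0, 1.0)
```

Each transform is applied to the adversarial image before it is passed to the model, the idea being that the crafted perturbation is more fragile than the underlying image content.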
- **Can I apply these techniques to other models?** Yes! You can modify the notebook to use other pretrained models from PyTorch; the structure is flexible for adapting to different architectures (see the sketch below).
- **Are there any limitations?** These experiments are computationally intensive, especially when dealing with larger models like ResNet-101. Running them on a GPU is highly recommended.
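As mentioned in the answers above, swapping in a different pretrained model and using a GPU when one is available takes only a few lines; the specific model below is just an example, and the `weights` API assumes a recent torchvision (older releases use `pretrained=True`).

```python
import torch
from torchvision import models

# Use a GPU if one is available; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Any pretrained torchvision model can be substituted here, e.g. resnet101 or vit_b_16.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.eval().to(device)
```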
Contributions are welcome! If you have ideas for new defense methods or ways to improve the experiments, feel free to:
- Fork the repository.
- Create a feature branch.
- Submit a pull request.
This project is licensed under the MIT License; see the LICENSE file for details.
Thanks to the creators of the datasets and pretrained models used in this project. Special thanks to Neuralception for providing insights into BIM attack implementation.