Team Members: Vamsi Krishna Bunga (vb2279), Sakshi Kulkarni (smk8939), Amrutha Patil (ap7982), Charmee Mehta (cm6389)
NOTE: We used a Kaggle notebook instead of Google Colab because of Colab's resource restrictions.
This project addresses the susceptibility of deep neural networks (DNNs) to trojan attacks, in which inputs stamped with a hidden trigger deceive the network into a deliberate misclassification. Detecting such poisoned inputs is difficult, especially once a model is deployed. To combat this, our project proposes a two-fold strategy that combines Fine-Pruning with STRong Intentional Perturbation (STRIP). Together, these techniques harden the network against embedded backdoors and enable real-time detection of trojaned inputs, improving overall model security.
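As a rough illustration of the STRIP side of this strategy, the sketch below shows the core entropy test: superimpose clean images onto a suspect input and measure the entropy of the model's predictions over the perturbed copies. A trojaned input keeps predicting the attacker's target label despite perturbation, so its mean entropy stays low. This is a minimal sketch, not the notebook's actual code; `model_predict`, the blending weight `alpha`, and the detection threshold are assumptions for illustration.

```python
import numpy as np

def strip_entropy(model_predict, x, clean_samples, alpha=0.5):
    """Mean prediction entropy of an input blended with clean samples.

    model_predict: callable mapping a batch of inputs to softmax
                   probabilities (hypothetical interface).
    x:             the suspect input (array).
    clean_samples: iterable of held-out clean inputs.
    alpha:         blending weight for the suspect input.
    """
    entropies = []
    for clean in clean_samples:
        # Superimpose a clean image onto the suspect input.
        blended = alpha * x + (1.0 - alpha) * clean
        probs = model_predict(blended[np.newaxis, ...])[0]
        probs = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
        entropies.append(-np.sum(probs * np.log2(probs)))
    # Low mean entropy (below a calibrated threshold) flags a trojan.
    return float(np.mean(entropies))
```

In practice the threshold is calibrated on clean validation inputs (e.g. a low percentile of their entropy distribution), so that clean inputs, whose predictions flip under perturbation and thus have high entropy, are rarely rejected.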
- All the code produced for this project is in the notebook file in the base directory.
- You can download the dataset files from the Drive link given below and place them inside the CSAW-HackML-2020/data folder, or download them directly inside the notebook using the gdown module.
- Get the file_id of each file from its shareable link and substitute it into the gdown command to download the file into the notebook environment.
- Once all the data and models are in place, run the notebook on Kaggle with internet access enabled.
Google Drive link: https://drive.google.com/drive/folders/1N0rXiI9aMYqwwyi4cCtIw1zh3meuwLf6?usp=share_link
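For reference, fetching a file with the gdown module looks roughly like the snippet below. `<file_id>` is a placeholder you replace with the id taken from the shareable link, and the output filename is illustrative, not the dataset's actual name.

```python
import gdown

# Replace <file_id> with the id from the file's shareable Drive link.
file_id = "<file_id>"
url = f"https://drive.google.com/uc?id={file_id}"

# Save into the data folder expected by the notebook.
gdown.download(url, "CSAW-HackML-2020/data/dataset.h5", quiet=False)
```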