
Commit 42b1128

Update assignment3.md
1 parent d278335 commit 42b1128

File tree: 1 file changed (+39 −55 lines)

assignments/2021/assignment3.md

Lines changed: 39 additions & 55 deletions
@@ -15,68 +15,46 @@ This assignment is due on **Wednesday, May 27 2020** at 11:59pm PDT.
 <li><a href="{{ site.hw_3_jupyter }}">Option B: Jupyter starter code</a></li>
 </ul>
 </details>
-
-- [Goals](#goals)
 - [Setup](#setup)
-- [Option A: Google Colaboratory (Recommended)](#option-a-google-colaboratory-recommended)
-- [Option B: Local Development](#option-b-local-development)
+- [Goals](#goals)
+- [Google Colaboratory](#option-a-google-colaboratory-recommended)
 - [Q1: Image Captioning with Vanilla RNNs (29 points)](#q1-image-captioning-with-vanilla-rnns-29-points)
 - [Q2: Image Captioning with LSTMs (23 points)](#q2-image-captioning-with-lstms-23-points)
-- [Q3: Network Visualization: Saliency maps, Class Visualization, and Fooling Images (15 points)](#q3-network-visualization-saliency-maps-class-visualization-and-fooling-images-15-points)
-- [Q4: Style Transfer (15 points)](#q4-style-transfer-15-points)
+- [Q3: Image Captioning with Transformers (18 points)](#q3-image-captioning-with-transformers-18-points)
+- [Q4: Network Visualization: Saliency maps, Class Visualization, and Fooling Images (15 points)](#q4-network-visualization-saliency-maps-class-visualization-and-fooling-images-15-points)
 - [Q5: Generative Adversarial Networks (15 points)](#q5-generative-adversarial-networks-15-points)
+- [Q6: Self-Supervised Learning for Image Classification (16 points)](#q6-self-supervised-learning-16-points)
+- [Optional: Style Transfer (15 points)](#optional-style-transfer-15-points)
 - [Submitting your work](#submitting-your-work)
 
-### Goals
-
-In this assignment, you will implement recurrent neural networks and apply them to image captioning on the Microsoft COCO data. You will also explore methods for visualizing the features of a pretrained model on ImageNet, and use this model to implement Style Transfer. Finally, you will train a Generative Adversarial Network to generate images that look like a training dataset!
-
-The goals of this assignment are as follows:
-
-- Understand the architecture of recurrent neural networks (RNNs) and how they operate on sequences by sharing weights over time.
-- Understand and implement both Vanilla RNNs and Long-Short Term Memory (LSTM) networks.
-- Understand how to combine convolutional neural nets and recurrent nets to implement an image captioning system.
-- Explore various applications of image gradients, including saliency maps, fooling images, class visualizations.
-- Understand and implement techniques for image style transfer.
-- Understand how to train and implement a Generative Adversarial Network (GAN) to produce images that resemble samples from a dataset.
 
 ### Setup
 
-You should be able to use your setup from assignments 1 and 2.
-
-You can work on the assignment in one of two ways: **remotely** on Google Colaboratory or **locally** on your own machine.
-
-**Regardless of the method chosen, ensure you have followed the [setup instructions](/setup-instructions) before proceeding.**
-
-#### Option A: Google Colaboratory (Recommended)
-
-**Download.** Starter code containing Colab notebooks can be downloaded [here]({{site.hw_3_colab}}).
-
-If you choose to work with Google Colab, please familiarize yourself with the [recommended workflow]({{site.baseurl}}/setup-instructions/#working-remotely-on-google-colaboratory).
+Please familiarize yourself with the [recommended workflow]({{site.baseurl}}/setup-instructions/#working-remotely-on-google-colaboratory) before starting the assignment. You should also watch the Colab walkthrough tutorial below.
 
 <iframe style="display: block; margin: auto;" width="560" height="315" src="https://www.youtube.com/embed/IZUz4pRYlus" frameborder="0" allowfullscreen></iframe>
 
 **Note**. Ensure you are periodically saving your notebook (`File -> Save`) so that you don't lose your progress if you step away from the assignment and the Colab VM disconnects.
 
-Once you have completed all Colab notebooks **except `collect_submission.ipynb`**, proceed to the [submission instructions](#submitting-your-work).
-
-#### Option B: Local Development
+While we don't officially support local development, we've added a `requirements.txt` file that you can use to set up a virtual environment.
 
-**Download.** Starter code containing jupyter notebooks can be downloaded [here]({{site.hw_3_jupyter}}).
+Once you have completed all Colab notebooks **except `collect_submission.ipynb`**, proceed to the [submission instructions](#submitting-your-work).
 
-**Install Packages**. Once you have the starter code, activate your environment (the one you installed in the [Software Setup]({{site.baseurl}}/setup-instructions/) page) and run `pip install -r requirements.txt`.
+### Goals
 
-**Download data**. Next, you will need to download the COCO captioning data, a pretrained SqueezeNet model (for TensorFlow), and a few ImageNet validation images. Run the following from the `assignment3` directory:
+In this assignment, you will implement recurrent neural networks and apply them to image captioning on the Microsoft COCO data. You will also explore methods for visualizing the features of a pretrained model on ImageNet, and use this model to implement Style Transfer. Finally, you will train a Generative Adversarial Network to generate images that look like a training dataset!
 
-```bash
-cd cs231n/datasets
-./get_datasets.sh
-```
-**Start Jupyter Server**. After you've downloaded the data, you can start the Jupyter server from the `assignment3` directory by executing `jupyter notebook` in your terminal.
+The goals of this assignment are as follows:
 
-Complete each notebook, then once you are done, go to the [submission instructions](#submitting-your-work).
+- Understand the architecture of recurrent neural networks (RNNs) and how they operate on sequences by sharing weights over time.
+- Understand and implement Vanilla RNNs, Long Short-Term Memory (LSTM), and Transformer networks for image captioning.
+- Understand how to combine convolutional neural nets and recurrent nets to implement an image captioning system.
+- Explore various applications of image gradients, including saliency maps, fooling images, and class visualizations.
+- Understand how to train and implement a Generative Adversarial Network (GAN) to produce images that resemble samples from a dataset.
+- Understand how to leverage self-supervised learning techniques to help with image classification tasks.
+- *(Optional)* Understand and implement techniques for image style transfer.
 
-**You can do Questions 3, 4, and 5 in TensorFlow or PyTorch. There are two versions of each of these notebooks, one for TensorFlow and one for PyTorch. No extra credit will be awarded if you do a question in both TensorFlow and PyTorch**
+**You will use PyTorch for the majority of this homework.**
 
 ### Q1: Image Captioning with Vanilla RNNs (29 points)
 
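For reference, the recurrence that the vanilla RNN notebook asks you to implement can be sketched in NumPy. The function name and shapes here are illustrative, not the starter code's actual API: a single step computes `h_t = tanh(x_t·Wx + h_{t-1}·Wh + b)`, with the same weights shared across all timesteps.

```python
import numpy as np

def rnn_step_forward(x, prev_h, Wx, Wh, b):
    # One timestep of a vanilla RNN with a tanh nonlinearity.
    # x: (N, D) inputs, prev_h: (N, H) previous hidden state.
    return np.tanh(x @ Wx + prev_h @ Wh + b)

# Smoke run with tiny illustrative shapes (N=2, D=3, H=4).
N, D, H = 2, 3, 4
rng = np.random.default_rng(0)
h = rnn_step_forward(rng.standard_normal((N, D)), np.zeros((N, H)),
                     rng.standard_normal((D, H)), rng.standard_normal((H, H)),
                     np.zeros(H))
```

Unrolling this step over a caption's timesteps, and backpropagating through the shared weights, is the bulk of the Q1 work.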
@@ -86,37 +64,43 @@ The notebook `RNN_Captioning.ipynb` will walk you through the implementation of
 
 The notebook `LSTM_Captioning.ipynb` will walk you through the implementation of Long-Short Term Memory (LSTM) RNNs, and apply them to image captioning on MS-COCO.
 
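The LSTM step that notebook builds adds gating on top of the vanilla recurrence. A minimal NumPy sketch (illustrative names and shapes, not the assignment's exact interface) of one timestep:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b):
    # One LSTM timestep. The four gates come from a single affine map whose
    # (N, 4H) output is split into input (i), forget (f), output (o),
    # and candidate (g) blocks.
    H = prev_h.shape[1]
    a = x @ Wx + prev_h @ Wh + b
    i = sigmoid(a[:, :H])
    f = sigmoid(a[:, H:2 * H])
    o = sigmoid(a[:, 2 * H:3 * H])
    g = np.tanh(a[:, 3 * H:])
    next_c = f * prev_c + i * g        # gated cell-state update
    next_h = o * np.tanh(next_c)       # hidden state exposed downstream
    return next_h, next_c

# Smoke run: with all-zero weights every gate sigmoid is 0.5,
# so the cell state is simply halved.
N, D, H = 2, 3, 4
next_h, next_c = lstm_step_forward(np.zeros((N, D)), np.zeros((N, H)),
                                   np.ones((N, H)), np.zeros((D, 4 * H)),
                                   np.zeros((H, 4 * H)), np.zeros(4 * H))
```

The additive cell-state path is what lets gradients flow over longer caption lengths than the vanilla RNN manages.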
-### Q3: Network Visualization: Saliency maps, Class Visualization, and Fooling Images (15 points)
+### Q3: Image Captioning with Transformers (18 points)
 
-The notebooks `NetworkVisualization-TensorFlow.ipynb`, and `NetworkVisualization-PyTorch.ipynb` will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awardeded if you complete both notebooks.
+The notebook `Transformer_Captioning.ipynb` will walk you through the implementation of the Transformer model, and apply it to image captioning on MS-COCO.
 
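At the core of the Transformer is scaled dot-product attention, `softmax(QKᵀ/√d_k)·V`. The sketch below is a single-head, unbatched, unmasked version for intuition; the notebook's implementation adds multiple heads, masking, and batching:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

# With all-zero queries every key scores equally, so each output row
# is the uniform average of the value rows.
V = np.arange(12.0).reshape(3, 4)
out = scaled_dot_product_attention(np.zeros((2, 4)), np.ones((3, 4)), V)
```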
-### Q4: Style Transfer (15 points)
+### Q4: Network Visualization: Saliency maps, Class Visualization, and Fooling Images (15 points)
 
-In thenotebooks `StyleTransfer-TensorFlow.ipynb` or `StyleTransfer-PyTorch.ipynb` you will learn how to create images with the content of one image but the style of another. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awardeded if you complete both notebooks.
+The notebook `NetworkVisualization-PyTorch.ipynb` will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images.
 
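A saliency map is just the magnitude of the class score's gradient with respect to the input pixels. In the notebook that gradient comes from one backward pass through the network; this self-contained sketch approximates it with finite differences on a toy linear "model" (names and the toy model are illustrative, not the assignment's code):

```python
import numpy as np

def saliency_map(score_fn, x, eps=1e-4):
    # |d score / d x|, approximated with central finite differences.
    grad = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        xp, xm = x.copy(), x.copy()
        xp[idx] += eps
        xm[idx] -= eps
        grad[idx] = (score_fn(xp) - score_fn(xm)) / (2 * eps)
    return np.abs(grad)

# For a linear "model" w.x the saliency is exactly |w|.
w = np.array([1.0, -2.0, 3.0])
sal = saliency_map(lambda x: float(w @ x), np.zeros(3))
```

Fooling images use the same gradient the other way around: ascend the score of a wrong class with respect to the image until the model misclassifies it.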
 ### Q5: Generative Adversarial Networks (15 points)
 
-In the notebooks `GANS-TensorFlow.ipynb` or `GANS-PyTorch.ipynb` you will learn how to generate images that match a training dataset, and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.
+In the notebook `GANS-PyTorch.ipynb` you will learn how to generate images that match a training dataset, and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data.
+
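The classic GAN objective alternates two binary cross-entropy losses: the discriminator labels real samples 1 and fakes 0, while the generator pushes the discriminator's output on fakes toward 1. A NumPy sketch of those losses (the notebook's exact formulation, e.g. least-squares variants, may differ):

```python
import numpy as np

def bce(probs, labels):
    # Binary cross-entropy on discriminator output probabilities.
    return -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

def discriminator_loss(d_real, d_fake):
    # Real samples should score 1, generated samples 0.
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake):
    # The generator tries to make the discriminator output 1 on fakes.
    return bce(d_fake, np.ones_like(d_fake))
```

At the undecided point `d = 0.5` both losses reduce to multiples of `log 2`, which is a handy sanity check when debugging training.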
+### Q6: Self-Supervised Learning (16 points)
+
+In the notebook `self_supervised_learning.ipynb`, you will learn how to leverage self-supervised learning techniques to help with image classification tasks.
+
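The diff does not show which self-supervised objective the notebook uses; one common choice is a SimCLR-style contrastive loss, where embeddings of two augmented views of the same image attract and all other pairs repel. An illustrative NumPy sketch under that assumption:

```python
import numpy as np

def simclr_style_loss(z1, z2, tau=0.5):
    # Contrastive loss between two view batches: row i of z1 should match
    # row i of z2 and repel every other row. (Illustrative sketch only;
    # not necessarily the notebook's exact objective.)
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                   # scaled cosine sims
    logits = sim - sim.max(axis=1, keepdims=True)           # stable softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                     # NLL of matched pairs
```

Features pretrained this way on unlabeled images can then be reused by a small supervised classifier head.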
+### Optional: Style Transfer (15 points)
+
+In the notebooks `StyleTransfer-TensorFlow.ipynb` or `StyleTransfer-PyTorch.ipynb` you will learn how to create images with the content of one image but the style of another. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.
 
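Style transfer represents an image's "style" with Gram matrices of convolutional feature maps, i.e. the channel-to-channel correlations pooled over spatial positions. A minimal sketch (the `(C, H, W)` layout and the normalization constant are illustrative conventions):

```python
import numpy as np

def gram_matrix(features):
    # features: (C, H, W) feature map from one conv layer.
    # Returns the (C, C) matrix of channel correlations, averaged
    # over the H*W spatial positions.
    C = features.shape[0]
    F = features.reshape(C, -1)
    return F @ F.T / F.shape[1]

G = gram_matrix(np.ones((2, 3, 3)))
```

Matching Gram matrices of a style image while matching raw features of a content image is what produces the blended result.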
 ### Submitting your work
 
 **Important**. Please make sure that the submitted notebooks have been run and the cell outputs are visible.
 
-Once you have completed all notebooks and filled out the necessary code, there are **_two_** steps you must follow to submit your assignment:
+Once you have completed all notebooks and filled out the necessary code, follow the instructions below to submit your work:
 
-**1.** If you selected Option A and worked on the assignment in Colab, open `collect_submission.ipynb` in Colab and execute the notebook cells. If you selected Option B and worked on the assignment locally, run the bash script in `assignment3` by executing `bash collectSubmission.sh`.
+**1.** Open `collect_submission.ipynb` in Colab and execute the notebook cells.
 
 This notebook/script will:
 
-* Generate a zip file of your code (`.py` and `.ipynb`) called `a3.zip`.
+* Generate a zip file of your code (`.py` and `.ipynb`) called `a2.zip`.
 * Convert all notebooks into a single PDF file.
 
-**Note for Option B users**. You must have (a) `nbconvert` installed with Pandoc and Tex support and (b) `PyPDF2` installed to successfully convert your notebooks to a PDF file. Please follow these [installation instructions](https://nbconvert.readthedocs.io/en/latest/install.html#installing-nbconvert) to install (a) and run `pip install PyPDF2` to install (b). If you are, for some inexplicable reason, unable to successfully install the above dependencies, you can manually convert each jupyter notebook to HTML (`File -> Download as -> HTML (.html)`), save the HTML page as a PDF, then concatenate all the PDFs into a single PDF submission using your favorite PDF viewer.
-
 If your submission for this step was successful, you should see the following display message:
 
-`### Done! Please submit a3.zip and the pdfs to Gradescope. ###`
+`### Done! Please submit a2.zip and the pdfs to Gradescope. ###`
 
-**2.** Submit the PDF and the zip file to [Gradescope](https://www.gradescope.com/courses/103764).
+**2.** Submit the PDF and the zip file to [Gradescope](https://www.gradescope.com/courses/257661).
 
-**Note for Option A users**. Remember to download `a3.zip` and `assignment.pdf` locally before submitting to Gradescope.
+Remember to download `a2.zip` and `assignment.pdf` locally before submitting to Gradescope.
