
Commit ae2d4d0

Update assignment3.md
1 parent e125dad commit ae2d4d0

File tree

1 file changed: +11 −11 lines changed


assignments/2021/assignment3.md

Lines changed: 11 additions & 11 deletions
```diff
@@ -42,7 +42,7 @@ Once you have completed all Colab notebooks **except `collect_submission.ipynb`**
 
 ### Goals
 
-In this assignment, you will implement recurrent neural networks and apply them to image captioning on the Microsoft COCO data. You will also explore methods for visualizing the features of a pretrained model on ImageNet, and use this model to implement Style Transfer. Finally, you will train a Generative Adversarial Network to generate images that look like a training dataset!
+In this assignment, you will implement language networks and apply them to image captioning on the COCO dataset. Then you will explore methods for visualizing the features of a pretrained model on ImageNet and train a Generative Adversarial Network to generate images that look like a training dataset. Finally, you will be introduced to self-supervised learning to automatically learn the visual representations of an unlabeled dataset.
 
 The goals of this assignment are as follows:
```
```diff
@@ -58,31 +58,31 @@ The goals of this assignment are as follows:
 
 ### Q1: Image Captioning with Vanilla RNNs (29 points)
 
-The notebook `RNN_Captioning.ipynb` will walk you through the implementation of an image captioning system on MS-COCO using vanilla recurrent networks.
+The notebook `RNN_Captioning.ipynb` will walk you through the implementation of vanilla recurrent neural networks and apply them to image captioning on COCO.
 
```
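For context on this hunk: the core forward step that a vanilla-RNN captioning notebook implements can be sketched in a few lines of NumPy. This is a minimal illustration; the assignment's actual function names, signatures, and shapes may differ.

```python
import numpy as np

def rnn_step_forward(x, prev_h, Wx, Wh, b):
    """One vanilla RNN step: next_h = tanh(x @ Wx + prev_h @ Wh + b)."""
    return np.tanh(x @ Wx + prev_h @ Wh + b)

# Toy shapes: batch N=2, input dim D=3, hidden dim H=4 (illustrative only).
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3))
prev_h = np.zeros((2, 4))
Wx = rng.standard_normal((3, 4))
Wh = rng.standard_normal((4, 4))
b = np.zeros(4)
next_h = rnn_step_forward(x, prev_h, Wx, Wh, b)
```

Captioning then unrolls this step over the caption tokens, typically seeding the initial hidden state from image features.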
```diff
 ### Q2: Image Captioning with LSTMs (23 points)
 
-The notebook `LSTM_Captioning.ipynb` will walk you through the implementation of Long-Short Term Memory (LSTM) RNNs, and apply them to image captioning on MS-COCO.
+The notebook `LSTM_Captioning.ipynb` will walk you through the implementation of Long-Short Term Memory (LSTM) RNNs and apply them to image captioning on COCO.
 
```
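The LSTM step that replaces the vanilla recurrence can likewise be sketched in NumPy. A common convention, assumed here, packs the four gate pre-activations into one (N, 4H) matrix; the notebook's real layout may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b):
    """One LSTM step. Gates i/f/o and candidate g are sliced from a single
    (N, 4H) pre-activation; the cell state c carries long-range information."""
    H = prev_h.shape[1]
    a = x @ Wx + prev_h @ Wh + b
    i = sigmoid(a[:, 0*H:1*H])      # input gate
    f = sigmoid(a[:, 1*H:2*H])      # forget gate
    o = sigmoid(a[:, 2*H:3*H])      # output gate
    g = np.tanh(a[:, 3*H:4*H])      # candidate cell update
    next_c = f * prev_c + i * g     # gated cell-state update
    next_h = o * np.tanh(next_c)    # hidden state exposed to the next layer
    return next_h, next_c

# Toy call with N=2, D=3, H=4 (illustrative shapes only).
rng = np.random.default_rng(1)
h, c = lstm_step_forward(rng.standard_normal((2, 3)),
                         np.zeros((2, 4)), np.zeros((2, 4)),
                         rng.standard_normal((3, 16)),
                         rng.standard_normal((4, 16)),
                         np.zeros(16))
```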
```diff
 ### Q3: Image Captioning with Transformers (18 points)
 
-The notebook `Transformer_Captioning.ipynb` will walk you through the implementation of Transformer Model, and apply them to image captioning on MS-COCO.
+The notebook `Transformer_Captioning.ipynb` will walk you through the implementation of a Transformer model and apply it to image captioning on COCO. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
 
```
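The heart of any Transformer model is scaled dot-product attention. A single-head, single-example NumPy sketch (shapes and names are illustrative, not the notebook's API):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted sum of values

# One query that matches the first key more strongly than the second.
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0]])
out = scaled_dot_product_attention(Q, K, V)
```

Because the attention weights sum to one, the output is a convex combination of the value rows, pulled toward the value whose key best matches the query.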
```diff
 ### Q4: Network Visualization: Saliency maps, Class Visualization, and Fooling Images (15 points)
 
-The notebook `NetworkVisualization-PyTorch.ipynb` will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awardeded if you complete both notebooks.
+The notebook `Network_Visualization.ipynb` will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images.
 
```
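A saliency map is built from the gradient of the class score with respect to the input image. With a real network that gradient comes from autograd (e.g. `x.grad` in PyTorch); the reduction step itself is just a couple of NumPy operations, sketched here with a hypothetical linear scorer whose gradient is known in closed form.

```python
import numpy as np

def saliency_from_grad(grad):
    """Turn a (C, H, W) gradient of the class score w.r.t. the input image
    into an (H, W) saliency map: absolute value, then max over channels."""
    return np.abs(grad).max(axis=0)

# For a linear scorer s(x) = (w * x).sum(), d s / d x is simply w, so we can
# feed w in directly instead of running autograd (illustration only).
w = np.random.default_rng(2).standard_normal((3, 4, 4))
smap = saliency_from_grad(w)
```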
```diff
 ### Q5: Generative Adversarial Networks (15 points)
 
-In the notebook `GANS-PyTorch.ipynb` you will learn how to generate images that match a training dataset, and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awarded if you complete both notebooks.
+In the notebook `Generative_Adversarial_Networks.ipynb` you will learn how to generate images that match a training dataset and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
 
```
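The classic GAN minimax objective reduces to two binary cross-entropy losses on the discriminator's raw logits. A NumPy sketch (function names are mine, not necessarily the notebook's):

```python
import numpy as np

def bce(logits, labels):
    """Numerically stable binary cross-entropy on raw logits."""
    return np.mean(np.maximum(logits, 0) - logits * labels
                   + np.log1p(np.exp(-np.abs(logits))))

def discriminator_loss(logits_real, logits_fake):
    # Real images should be classified as 1, generated images as 0.
    return (bce(logits_real, np.ones_like(logits_real))
            + bce(logits_fake, np.zeros_like(logits_fake)))

def generator_loss(logits_fake):
    # The generator wants the discriminator to output 1 on its samples.
    return bce(logits_fake, np.ones_like(logits_fake))
```

Training alternates: one step minimizing `discriminator_loss`, one step minimizing `generator_loss`, until generated samples become hard to distinguish from real ones.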
```diff
 ### Q6: Self-Supervised Learning (16 points)
 
-In the notebook `self_supervised_learning,ipynb`, you will learn how to
+In the notebook `Self_Supervised_Learning.ipynb`, you will learn how to ... **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
 
```
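Self-supervised visual representation learning is commonly taught through a contrastive objective such as SimCLR's NT-Xent loss, which pulls two augmented views of the same image together and pushes other images away. Whether this notebook uses exactly that objective is an assumption; the sketch below is a simplified single-pair version with hypothetical names.

```python
import numpy as np

def nt_xent_pair(z_i, z_j, temperature=0.5):
    """Contrastive loss for ONE anchor against in-batch candidates
    (simplified NT-Xent). z_i: (D,) anchor embedding; z_j: (N, D)
    candidate embeddings, where row 0 is the positive (the other view)."""
    z_i = z_i / np.linalg.norm(z_i)                        # cosine similarity
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    sims = z_j @ z_i / temperature
    sims -= sims.max()                                     # stable softmax
    p = np.exp(sims) / np.exp(sims).sum()
    return -np.log(p[0])  # cross-entropy with the positive at index 0

# Anchor aligned with the positive, orthogonal to the negatives -> low loss.
loss = nt_xent_pair(np.array([1.0, 0.0]),
                    np.array([[1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]),
                    temperature=0.1)
```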
```diff
 ### Optional: Style Transfer (15 points)
 
-In thenotebooks `StyleTransfer-TensorFlow.ipynb` or `StyleTransfer-PyTorch.ipynb` you will learn how to create images with the content of one image but the style of another. Please complete only one of the notebooks (TensorFlow or PyTorch). No extra credit will be awardeded if you complete both notebooks.
+In the notebook `Style_Transfer.ipynb`, you will learn how to create images with the content of one image but the style of another.
 
```
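Style transfer's "style" term compares Gram matrices of convolutional feature maps between the generated image and the style image. A NumPy sketch of the Gram computation (the normalization constant varies between implementations, so treat this one as an assumption):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: channel-by-channel
    correlations, which capture texture/style independent of spatial layout."""
    C = features.shape[0]
    F = features.reshape(C, -1)      # flatten to (C, H*W)
    return (F @ F.T) / F.shape[1]    # (C, C), normalized by positions

feats = np.random.default_rng(3).standard_normal((5, 8, 8))
G = gram_matrix(feats)
```

The style loss is then a sum of squared differences between Gram matrices at several layers, while a separate content loss compares raw feature maps at one layer.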
```diff
 ### Submitting your work
```
```diff
@@ -94,13 +94,13 @@ Once you have completed all notebooks and filled out the necessary code, you nee
 
 This notebook/script will:
 
-* Generate a zip file of your code (`.py` and `.ipynb`) called `a2.zip`.
+* Generate a zip file of your code (`.py` and `.ipynb`) called `a3.zip`.
 * Convert all notebooks into a single PDF file.
 
 If your submission for this step was successful, you should see the following display message:
 
-`### Done! Please submit a2.zip and the pdfs to Gradescope. ###`
+`### Done! Please submit a3.zip and the pdfs to Gradescope. ###`
 
 **2.** Submit the PDF and the zip file to [Gradescope](https://www.gradescope.com/courses/257661).
 
-Remember to download `a2.zip` and `assignment.pdf` locally before submitting to Gradescope.
+Remember to download `a3.zip` and `assignment.pdf` locally before submitting to Gradescope.
```
