assignments/2021/assignment3.md
### Goals
In this assignment, you will implement language networks and apply them to image captioning on the COCO dataset. Then you will explore methods for visualizing the features of a pretrained model on ImageNet and train a Generative Adversarial Network to generate images that look like a training dataset. Finally, you will be introduced to self-supervised learning to automatically learn the visual representations of an unlabeled dataset.
The goals of this assignment are as follows:
### Q1: Image Captioning with Vanilla RNNs (29 points)
The notebook `RNN_Captioning.ipynb` will walk you through the implementation of vanilla recurrent neural networks and apply them to image captioning on COCO.
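At the heart of a vanilla RNN is a single recurrence: the new hidden state is a `tanh` of an affine function of the current input and the previous hidden state. The sketch below illustrates that step in numpy; the function name and argument layout mirror a common convention and are assumptions here, not necessarily the notebook's exact API.

```python
import numpy as np

def rnn_step_forward(x, prev_h, Wx, Wh, b):
    """One vanilla-RNN step: next_h = tanh(x @ Wx + prev_h @ Wh + b).

    Shapes (illustrative): x (N, D), prev_h (N, H),
    Wx (D, H), Wh (H, H), b (H,).
    """
    return np.tanh(x @ Wx + prev_h @ Wh + b)

# Unroll over a short sequence: each timestep's input updates the hidden state.
rng = np.random.default_rng(0)
N, T, D, H = 2, 3, 4, 5
x_seq = rng.standard_normal((N, T, D))
h = np.zeros((N, H))
Wx = rng.standard_normal((D, H)) * 0.1
Wh = rng.standard_normal((H, H)) * 0.1
b = np.zeros(H)
for t in range(T):
    h = rnn_step_forward(x_seq[:, t], h, Wx, Wh, b)
print(h.shape)  # (2, 5)
```

For captioning, `x` would be a word embedding and the final hidden states feed a vocabulary-sized affine layer to score the next word.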
### Q2: Image Captioning with LSTMs (23 points)
The notebook `LSTM_Captioning.ipynb` will walk you through the implementation of Long Short-Term Memory (LSTM) RNNs and apply them to image captioning on COCO.
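An LSTM step differs from the vanilla recurrence by maintaining a separate cell state, updated through input, forget, and output gates. A minimal numpy sketch follows; stacking all four gate pre-activations into one `(N, 4H)` matrix and the i/f/o/g ordering are common conventions assumed here, and the notebook's exact signatures may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b):
    """One LSTM step with gates stacked along the last axis.

    Shapes (illustrative): Wx (D, 4H), Wh (H, 4H), b (4H,).
    """
    H = prev_h.shape[1]
    a = x @ Wx + prev_h @ Wh + b      # (N, 4H) stacked gate pre-activations
    i = sigmoid(a[:, :H])             # input gate
    f = sigmoid(a[:, H:2 * H])        # forget gate
    o = sigmoid(a[:, 2 * H:3 * H])    # output gate
    g = np.tanh(a[:, 3 * H:])         # candidate cell value
    next_c = f * prev_c + i * g       # gated cell update
    next_h = o * np.tanh(next_c)      # hidden state exposes gated cell
    return next_h, next_c

rng = np.random.default_rng(0)
N, D, H = 2, 4, 3
h, c = np.zeros((N, H)), np.zeros((N, H))
h, c = lstm_step_forward(rng.standard_normal((N, D)), h, c,
                         rng.standard_normal((D, 4 * H)) * 0.1,
                         rng.standard_normal((H, 4 * H)) * 0.1,
                         np.zeros(4 * H))
print(h.shape, c.shape)  # (2, 3) (2, 3)
```

The additive cell update (`f * prev_c + i * g`) is what lets gradients flow over long captions without vanishing as quickly as in the vanilla RNN.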
### Q3: Image Captioning with Transformers (18 points)
The notebook `Transformer_Captioning.ipynb` will walk you through the implementation of a Transformer model and apply it to image captioning on COCO. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
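The core operation inside every Transformer layer is scaled dot-product attention: each query forms a softmax-weighted average of the values, weighted by query-key similarity. A self-contained numpy sketch (shapes and the single-head, unmasked form are simplifying assumptions; the notebook will add heads, masking, and learned projections):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d)   # (T_q, T_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8) -- one output row per query
```

The `1/sqrt(d)` scaling keeps the dot products from growing with dimension, which would otherwise saturate the softmax.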
### Q4: Network Visualization: Saliency Maps, Class Visualization, and Fooling Images (15 points)
The notebook `Network_Visualization.ipynb` will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images.
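Once you have the gradient of a class score with respect to the input pixels, the saliency map is simply the absolute gradient, reduced over channels. A tiny numpy sketch of that last reduction step (the gradient here comes from a made-up linear scorer purely for illustration; in the notebook it comes from backprop through SqueezeNet):

```python
import numpy as np

def saliency_map(grad):
    """Reduce d(score)/d(pixel) for a (C, H, W) image to a (H, W) saliency
    map: take the absolute value, then the max over color channels."""
    return np.abs(grad).max(axis=0)

# For a linear scorer s = sum(W * img), the gradient w.r.t. the image is W.
rng = np.random.default_rng(0)
C, H, W = 3, 4, 4
score_weights = rng.standard_normal((C, H, W))  # stand-in for a real gradient
sal = saliency_map(score_weights)
print(sal.shape)  # (4, 4) -- one non-negative value per pixel
```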
In the notebook `Generative_Adversarial_Networks.ipynb` you will learn how to generate images that match a training dataset and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
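A GAN trains two networks against each other: the discriminator learns to tell real images from generated ones, and the generator learns to fool it. The classic losses can be sketched directly on the discriminator's output probabilities (the non-saturating generator loss shown here is one standard variant; the notebook may use others, such as least-squares losses):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """-E[log D(real)] - E[log(1 - D(fake))]; inputs are probabilities
    in (0, 1) that the discriminator assigns to 'real'."""
    return -(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())

def generator_loss(d_fake):
    """Non-saturating generator loss -E[log D(fake)]: the generator is
    rewarded when the discriminator believes its samples are real."""
    return -np.log(d_fake).mean()

# At the game's equilibrium the discriminator outputs 0.5 everywhere:
d_half = np.full(8, 0.5)
print(discriminator_loss(d_half, d_half))  # 2*log(2) ~= 1.386
print(generator_loss(d_half))              # log(2)   ~= 0.693
```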
### Q6: Self-Supervised Learning (16 points)
In the notebook `Self_Supervised_Learning.ipynb`, you will learn how to ... **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
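The notebook's exact contents are elided above, but a common self-supervised approach is contrastive learning in the style of SimCLR: embeddings of two augmented views of the same image are pulled together while embeddings of other images are pushed apart. A minimal, assumption-laden numpy sketch of that loss for a single positive pair:

```python
import numpy as np

def contrastive_pair_loss(z_i, z_j, negatives, tau=0.5):
    """NT-Xent-style loss for one positive pair: z_i, z_j are embeddings of
    two augmented views of the same image; `negatives` (M, d) holds other
    images' embeddings. All vectors are L2-normalized first. tau is an
    illustrative temperature, not a value taken from the notebook."""
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    z_i, z_j, negatives = norm(z_i), norm(z_j), norm(negatives)
    pos = z_i @ z_j / tau              # similarity to the positive view
    neg = negatives @ z_i / tau        # similarity to each negative
    logits = np.concatenate(([pos], neg))
    logits -= logits.max()             # stable log-softmax
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
z_i = rng.standard_normal(16)
loss_aligned = contrastive_pair_loss(z_i, z_i.copy(),
                                     rng.standard_normal((8, 16)))
loss_random = contrastive_pair_loss(z_i, rng.standard_normal(16),
                                    rng.standard_normal((8, 16)))
print(loss_aligned, loss_random)  # matching views should score the lower loss
```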
### Optional: Style Transfer (15 points)
In the notebook `Style_Transfer.ipynb`, you will learn how to create images with the content of one image but the style of another.
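Style transfer typically represents "style" as channel-channel correlations of a convolutional feature map, captured by a Gram matrix, and penalizes the distance between the generated image's Gram matrix and the style image's. A numpy sketch of those two pieces (the normalization constant is one common choice and an assumption here):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: G[c1, c2] correlates channel
    c1 with channel c2, discarding spatial layout."""
    C = features.shape[0]
    F = features.reshape(C, -1)        # (C, H*W)
    return F @ F.T / F.shape[1]        # normalize by number of positions

def style_loss(feat_gen, feat_style):
    """Squared Frobenius distance between the Gram matrices of the
    generated and style images' feature maps."""
    return np.sum((gram_matrix(feat_gen) - gram_matrix(feat_style)) ** 2)

rng = np.random.default_rng(0)
style = rng.standard_normal((3, 4, 4))
print(style_loss(style, style))                           # 0.0 for identical features
print(style_loss(rng.standard_normal((3, 4, 4)), style))  # positive otherwise
```

Minimizing this loss over the pixels of the generated image (together with a content loss on deeper features) produces the stylized result.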
### Submitting your work
This notebook/script will:
* Generate a zip file of your code (`.py` and `.ipynb`) called `a3.zip`.
* Convert all notebooks into a single PDF file.
If your submission for this step was successful, you should see the following display message:
`### Done! Please submit a3.zip and the pdfs to Gradescope. ###`
**2.** Submit the PDF and the zip file to [Gradescope](https://www.gradescope.com/courses/257661).
Remember to download `a3.zip` and `assignment.pdf` locally before submitting to Gradescope.