- [Q5: Self-Supervised Learning for Image Classification (15 points)](#q5-self-supervised-learning-15-points)
- [Optional (Extra Credit): Image Captioning with Vanilla RNNs (tbd points)](#optional-image-captioning-with-vanilla-rnns-29-points)
- [Optional (Extra Credit): Style Transfer (tbd points)](#optional-style-transfer-15-points)
- [Submitting your work](#submitting-your-work)

**You will use PyTorch for the majority of this homework.**

### Q1: Image Captioning with LSTMs (23 points)

The notebook `LSTM_Captioning.ipynb` will walk you through the implementation of Long Short-Term Memory (LSTM) RNNs and apply them to image captioning on COCO.
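
As a preview of the gate equations you will implement, here is a minimal, self-contained PyTorch sketch of a single LSTM step. The function name, shapes, and gate ordering are illustrative, not the notebook's API:

```python
import torch

def lstm_step(x, prev_h, prev_c, Wx, Wh, b):
    """One LSTM time step (illustrative gate ordering: i, f, o, g).

    x:      (N, D)  input at this time step
    prev_h: (N, H)  previous hidden state
    prev_c: (N, H)  previous cell state
    Wx:     (D, 4H) input-to-hidden weights
    Wh:     (H, 4H) hidden-to-hidden weights
    b:      (4H,)   biases
    """
    H = prev_h.shape[1]
    a = x @ Wx + prev_h @ Wh + b           # (N, 4H) pre-activations for all gates
    i = torch.sigmoid(a[:, 0 * H:1 * H])   # input gate
    f = torch.sigmoid(a[:, 1 * H:2 * H])   # forget gate
    o = torch.sigmoid(a[:, 2 * H:3 * H])   # output gate
    g = torch.tanh(a[:, 3 * H:4 * H])      # candidate cell values
    next_c = f * prev_c + i * g            # update the cell state
    next_h = o * torch.tanh(next_c)        # expose part of it as the hidden state
    return next_h, next_c

# Toy usage with random tensors.
N, D, H = 2, 5, 4
x, h, c = torch.randn(N, D), torch.randn(N, H), torch.randn(N, H)
Wx, Wh, b = torch.randn(D, 4 * H), torch.randn(H, 4 * H), torch.randn(4 * H)
h, c = lstm_step(x, h, c, Wx, Wh, b)
print(h.shape, c.shape)  # torch.Size([2, 4]) torch.Size([2, 4])
```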

### Q2: Image Captioning with Transformers (18 points)

The notebook `Transformer_Captioning.ipynb` will walk you through the implementation of a Transformer model and apply it to image captioning on COCO. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
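
At the heart of the Transformer is scaled dot-product attention. The sketch below is a minimal, illustrative single-head version with a causal mask (the kind used when decoding captions autoregressively); it is not the notebook's multi-head implementation:

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Minimal attention: softmax(Q K^T / sqrt(d)) V.

    q, k, v: (N, T, D) query/key/value tensors.
    mask:    optional (T, T) boolean mask; True marks positions to hide,
             e.g. future tokens during autoregressive decoding.
    """
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)    # (N, T, T) similarity scores
    if mask is not None:
        scores = scores.masked_fill(mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)                # attention weights sum to 1 per query
    return weights @ v                                 # (N, T, D) weighted sum of values

# Toy usage: a causal mask so each position attends only to itself and earlier positions.
N, T, D = 2, 6, 8
q = k = v = torch.randn(N, T, D)
causal = torch.triu(torch.ones(T, T), diagonal=1).bool()
out = scaled_dot_product_attention(q, k, v, causal)
print(out.shape)  # torch.Size([2, 6, 8])
```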

### Q3: Network Visualization: Saliency Maps, Class Visualization, and Fooling Images (15 points)

The notebook `Network_Visualization.ipynb` will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images.
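
A saliency map is simply the per-pixel magnitude of the gradient of the correct class score with respect to the input image. A rough sketch, assuming a recent torchvision and using random tensors in place of real images and labels:

```python
import torch
import torchvision

# Illustrative saliency-map computation; the notebook's data loading and
# preprocessing differ. Loading the model downloads pretrained weights once.
model = torchvision.models.squeezenet1_1(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)            # only the image needs gradients

X = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed image
y = torch.tensor([243])                               # stand-in class label

scores = model(X)                                     # (1, 1000) class scores
correct_score = scores.gather(1, y.view(-1, 1)).sum() # score of the correct class
correct_score.backward()                              # gradient w.r.t. the image pixels

# Saliency = max over color channels of the absolute image gradient.
saliency = X.grad.abs().max(dim=1).values             # (1, 224, 224)
print(saliency.shape)
```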

### Q4: Generative Adversarial Networks

In the notebook `Generative_Adversarial_Networks.ipynb`, you will learn how to generate images that match a training dataset and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
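
The core of a GAN is a pair of objectives: the discriminator learns to separate real images from generated ones, while the generator learns to fool it. A minimal sketch of the basic losses with made-up toy architectures (the notebook's models, data, and semi-supervised extensions differ):

```python
import torch
import torch.nn as nn

# Toy discriminator and generator for 28x28 grayscale images (illustrative only).
D = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.LeakyReLU(0.01), nn.Linear(256, 1))
G = nn.Sequential(nn.Linear(96, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
bce = nn.BCEWithLogitsLoss()

real = torch.rand(64, 1, 28, 28) * 2 - 1   # stand-in batch of real images in [-1, 1]
z = torch.rand(64, 96) * 2 - 1             # uniform noise fed to the generator
fake = G(z).view(64, 1, 28, 28)

# Discriminator objective: real images should score 1, generated images 0.
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))

# Generator objective: make the discriminator score generated images as real.
g_loss = bce(D(fake), torch.ones(64, 1))
print(d_loss.item(), g_loss.item())
```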

### Q5: Self-Supervised Learning (16 points)

In the notebook `Self_Supervised_Learning.ipynb`, you will learn how to leverage self-supervised pretraining to obtain better performance on an image classification task. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
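
One common self-supervised setup is contrastive pretraining, where two augmentations of the same image are pulled together in embedding space and all other images are pushed apart. The following is an illustrative SimCLR-style loss, not necessarily the exact formulation used in the notebook:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    """SimCLR-style (NT-Xent) loss for two augmented views of each image.

    z1, z2: (N, D) embeddings; row i of z1 and row i of z2 come from the
    same source image and therefore form a positive pair.
    """
    N = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D), unit length
    sim = z @ z.t() / temperature                            # (2N, 2N) cosine similarities
    sim = sim.masked_fill(torch.eye(2 * N, dtype=torch.bool), float("-inf"))  # ignore self-pairs
    # The positive for view i is its counterpart in the other half of the batch.
    targets = torch.cat([torch.arange(N) + N, torch.arange(N)])
    return F.cross_entropy(sim, targets)

# Toy usage with random embeddings standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(contrastive_loss(z1, z2).item())
```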

### Optional (Extra Credit): Image Captioning with Vanilla RNNs (tbd points)

The notebook `RNN_Captioning.ipynb` will walk you through the implementation of vanilla recurrent neural networks and apply them to image captioning on COCO.
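
For comparison with the LSTM step above, a vanilla RNN step is a single affine transform followed by a tanh. A minimal sketch with illustrative names and shapes:

```python
import torch

def rnn_step(x, prev_h, Wx, Wh, b):
    """One step of a vanilla RNN with a tanh nonlinearity.

    x:      (N, D) input at this time step
    prev_h: (N, H) previous hidden state
    Wx:     (D, H) input-to-hidden weights
    Wh:     (H, H) hidden-to-hidden weights
    b:      (H,)   bias
    """
    return torch.tanh(x @ Wx + prev_h @ Wh + b)

# Unroll over a short toy sequence, carrying the hidden state forward.
N, T, D, H = 2, 3, 5, 4
xs = torch.randn(T, N, D)
h = torch.zeros(N, H)
Wx, Wh, b = torch.randn(D, H), torch.randn(H, H), torch.randn(H)
for t in range(T):
    h = rnn_step(xs[t], h, Wx, Wh, b)
print(h.shape)  # torch.Size([2, 4])
```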

### Optional (Extra Credit): Style Transfer (tbd points)

In the notebook `Style_Transfer.ipynb`, you will learn how to create images with the content of one image but the style of another.
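
Style is typically captured by Gram matrices of CNN feature maps: channel-to-channel correlations that discard spatial layout. A minimal sketch of a Gram-matrix-based style loss (illustrative; the layer choices and loss weighting in the notebook will differ):

```python
import torch

def gram_matrix(features, normalize=True):
    """Gram matrix of convolutional features.

    features: (N, C, H, W) activations from some layer of a CNN.
    Returns:  (N, C, C) channel-to-channel correlations, which capture
              texture/style while ignoring where things are in the image.
    """
    N, C, H, W = features.shape
    f = features.view(N, C, H * W)
    gram = f @ f.transpose(1, 2)
    if normalize:
        gram = gram / (C * H * W)
    return gram

def style_loss(current_features, target_features):
    """Squared distance between Gram matrices of the generated and style images."""
    return ((gram_matrix(current_features) - gram_matrix(target_features)) ** 2).sum()

# Toy usage with random activations standing in for real CNN features.
cur, style = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(style_loss(cur, style).item())
```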