Commit 37d11ad

fixing links
1 parent b3c8869 commit 37d11ad
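The diff below updates the notebook badge links from the stale `blob/2023/...` paths (with the old `Lab3_*` filenames) to the current `blob/master/...` paths. As a hypothetical sketch of the kind of rewrite this commit performs, such link fixes can be applied mechanically with a small path-mapping script (the mapping below is taken from the diff; the `fix_links` helper itself is illustrative, not part of the repo):

```python
# Hypothetical helper illustrating the link fix in this commit.
# Old branch/filename paths -> current paths (taken from the diff below).
LINK_FIXES = {
    "2023/lab3/solutions/Lab3_Part_1_Introduction_to_CAPSA.ipynb":
        "master/lab3/solutions/Part1_IntroductionCapsa.ipynb",
    "2023/lab3/solutions/Lab3_Bias_And_Uncertainty.ipynb":
        "master/lab3/solutions/Part2_BiasAndUncertainty.ipynb",
}

def fix_links(text: str) -> str:
    """Rewrite stale branch/filename references to their current locations."""
    for old, new in LINK_FIXES.items():
        text = text.replace(old, new)
    return text

url = ("https://github.com/aamini/introtodeeplearning/"
       "blob/2023/lab3/solutions/Lab3_Bias_And_Uncertainty.ipynb")
print(fix_links(url))
# -> https://github.com/aamini/introtodeeplearning/blob/master/lab3/solutions/Part2_BiasAndUncertainty.ipynb
```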

2 files changed, +85 -93 lines changed


lab3/solutions/Part1_IntroductionCapsa.ipynb

+45-54
@@ -1,36 +1,32 @@
 {
 "cells": [
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
-"id": "view-in-github",
-"colab_type": "text"
+"id": "SWa-rLfIlTaf"
 },
-"source": [
-"<a href=\"https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab3/solutions/Part1_IntroductionCapsa.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
-]
-},
-{
-"cell_type": "markdown",
 "source": [
 "<table align=\"center\">\n",
 " <td align=\"center\"><a target=\"_blank\" href=\"http://introtodeeplearning.com\">\n",
 " <img src=\"https://i.ibb.co/Jr88sn2/mit.png\" style=\"padding-bottom:5px;\" />\n",
 " Visit MIT Deep Learning</a></td>\n",
-" <td align=\"center\"><a target=\"_blank\" href=\"https://colab.research.google.com/github/aamini/introtodeeplearning/blob/2023/lab3/solutions/Lab3_Part_1_Introduction_to_CAPSA.ipynb\">\n",
+" <td align=\"center\"><a target=\"_blank\" href=\"https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab3/solutions/Part1_IntroductionCapsa.ipynb\">\n",
 " <img src=\"https://i.ibb.co/2P3SLwK/colab.png\" style=\"padding-bottom:5px;\" />Run in Google Colab</a></td>\n",
-" <td align=\"center\"><a target=\"_blank\" href=\"https://github.com/aamini/introtodeeplearning/blob/2023/lab3/solutions/Lab3_Part_1_Introduction_to_CAPSA.ipynb\">\n",
+" <td align=\"center\"><a target=\"_blank\" href=\"https://github.com/aamini/introtodeeplearning/blob/master/lab3/solutions/Part1_IntroductionCapsa.ipynb\">\n",
 " <img src=\"https://i.ibb.co/xfJbPmL/github.png\" height=\"70px\" style=\"padding-bottom:5px;\" />View Source on GitHub</a></td>\n",
 "</table>\n",
 "\n",
 "# Copyright Information"
-],
-"metadata": {
-"id": "SWa-rLfIlTaf"
-}
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "-LohleBMlahL"
+},
+"outputs": [],
 "source": [
 "# Copyright 2023 MIT Introduction to Deep Learning. All Rights Reserved.\n",
 "# \n",
@@ -41,12 +37,7 @@
 "# © MIT Introduction to Deep Learning\n",
 "# http://introtodeeplearning.com\n",
 "#"
-],
-"metadata": {
-"id": "-LohleBMlahL"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "markdown",
@@ -167,6 +158,9 @@
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "Fz3UxT8vuN95"
+},
 "source": [
 "In the plot above, the blue points are the training data, which will be used as inputs to train the neural network model. The red line is the ground truth data, which will be used to evaluate the performance of the model.\n",
 "\n",
@@ -177,10 +171,7 @@
 "1. What are your observations about where the train data and test data lie relative to each other?\n",
 "2. What, if any, areas do you expect to have high/low aleatoric (data) uncertainty?\n",
 "3. What, if any, areas do you expect to have high/low epistemic (model) uncertainty?"
-],
-"metadata": {
-"id": "Fz3UxT8vuN95"
-}
+]
 },
 {
 "cell_type": "markdown",
@@ -259,6 +250,9 @@
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "7Vktjwfu0ReH"
+},
 "source": [
 "\n",
 "#### **TODO: Analyzing the performance of standard regression model**\n",
@@ -267,10 +261,7 @@
 "\n",
 "1. Where does the model perform well?\n",
 "2. Where does the model perform poorly?"
-],
-"metadata": {
-"id": "7Vktjwfu0ReH"
-}
+]
 },
 {
 "cell_type": "markdown",
@@ -377,17 +368,17 @@
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "HpDMT_1FERQE"
+},
 "source": [
 "#### **TODO: Evaluating bias with wrapped regression model**\n",
 "\n",
 "Write short (~1 sentence) answers to the questions below to complete the `TODO`s:\n",
 "\n",
 "1. How does the bias score relate to the train/test data density from the first plot?\n",
 "2. What is one limitation of the Histogram approach that simply bins the data based on frequency?"
-],
-"metadata": {
-"id": "HpDMT_1FERQE"
-}
+]
 },
 {
 "cell_type": "markdown",
@@ -436,6 +427,11 @@
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "dT2Rx8JCg3NR"
+},
+"outputs": [],
 "source": [
 "# Capsa makes the aleatoric uncertainty an attribute of the prediction!\n",
 "pred = np.array(prediction.y_hat).flatten()\n",
@@ -448,12 +444,7 @@
 "plt.fill_between(x_test_clipped.flatten(), pred-2*unc, pred+2*unc, \n",
 " color='b', alpha=0.2, label='aleatoric')\n",
 "plt.legend()"
-],
-"metadata": {
-"id": "dT2Rx8JCg3NR"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "markdown",
@@ -513,6 +504,11 @@
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "eauNoKDOj_ZT"
+},
+"outputs": [],
 "source": [
 "# Capsa makes the epistemic uncertainty an attribute of the prediction!\n",
 "pred = np.array(prediction.y_hat).flatten()\n",
@@ -524,15 +520,13 @@
 "plt.plot(x_test, y_test, c='r', zorder=-1, label='ground truth')\n",
 "plt.fill_between(x_test.flatten(), pred-20*unc, pred+20*unc, color='b', alpha=0.2, label='epistemic')\n",
 "plt.legend()"
-],
-"metadata": {
-"id": "eauNoKDOj_ZT"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "N4LMn2tLPBdg"
+},
 "source": [
 "#### **TODO: Estimating epistemic uncertainty**\n",
 "\n",
@@ -541,10 +535,7 @@
 "1. For what values of $x$ is the epistemic uncertainty high or increasing suddenly?\n",
 "2. How does your answer in (1) relate to how the $x$ values are distributed (refer back to original plot)? Think about both the train and test data.\n",
 "3. How could you reduce the epistemic uncertainty in regions where it is high?"
-],
-"metadata": {
-"id": "N4LMn2tLPBdg"
-}
+]
 },
 {
 "cell_type": "markdown",
@@ -563,18 +554,18 @@
 },
 {
 "cell_type": "code",
-"source": [],
+"execution_count": null,
 "metadata": {
 "id": "nIpfPcpjlsKK"
 },
-"execution_count": null,
-"outputs": []
+"outputs": [],
+"source": []
 }
 ],
 "metadata": {
 "colab": {
-"provenance": [],
-"include_colab_link": true
+"include_colab_link": true,
+"provenance": []
 },
 "kernelspec": {
 "display_name": "Python 3",
@@ -586,4 +577,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
-}
+}

lab3/solutions/Part2_BiasAndUncertainty.ipynb

+40-39
@@ -1,26 +1,32 @@
 {
 "cells": [
 {
+"attachments": {},
 "cell_type": "markdown",
+"metadata": {
+"id": "Kxl9-zNYhxlQ"
+},
 "source": [
 "<table align=\"center\">\n",
 " <td align=\"center\"><a target=\"_blank\" href=\"http://introtodeeplearning.com\">\n",
 " <img src=\"https://i.ibb.co/Jr88sn2/mit.png\" style=\"padding-bottom:5px;\" />\n",
 " Visit MIT Deep Learning</a></td>\n",
-" <td align=\"center\"><a target=\"_blank\" href=\"https://colab.research.google.com/github/aamini/introtodeeplearning/blob/2023/lab3/solutions/Lab3_Bias_And_Uncertainty.ipynb\">\n",
+" <td align=\"center\"><a target=\"_blank\" href=\"https://colab.research.google.com/github/aamini/introtodeeplearning/blob/master/lab3/solutions/Part2_BiasAndUncertainty.ipynb\">\n",
 " <img src=\"https://i.ibb.co/2P3SLwK/colab.png\" style=\"padding-bottom:5px;\" />Run in Google Colab</a></td>\n",
-" <td align=\"center\"><a target=\"_blank\" href=\"https://github.com/aamini/introtodeeplearning/blob/2023/lab3/solutions/Lab3_Bias_And_Uncertainty.ipynb\">\n",
+" <td align=\"center\"><a target=\"_blank\" href=\"https://github.com/aamini/introtodeeplearning/blob/master/lab3/solutions/Part2_BiasAndUncertainty.ipynb\">\n",
 " <img src=\"https://i.ibb.co/xfJbPmL/github.png\" height=\"70px\" style=\"padding-bottom:5px;\" />View Source on GitHub</a></td>\n",
 "</table>\n",
 "\n",
 "# Copyright Information"
-],
-"metadata": {
-"id": "Kxl9-zNYhxlQ"
-}
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "aAcJJN3Xh3S1"
+},
+"outputs": [],
 "source": [
 "# Copyright 2023 MIT Introduction to Deep Learning. All Rights Reserved.\n",
 "# \n",
@@ -31,12 +37,7 @@
 "# © MIT Introduction to Deep Learning\n",
 "# http://introtodeeplearning.com\n",
 "#"
-],
-"metadata": {
-"id": "aAcJJN3Xh3S1"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "markdown",
@@ -269,6 +270,11 @@
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "YqsBHBf3yUlm"
+},
+"outputs": [],
 "source": [
 "### Estimating representation bias with Capsa HistogramVAEWrapper ###\n",
 "\n",
@@ -299,23 +305,18 @@
 "\n",
 "# Call the Capsa-wrapped classifier to generate outputs: predictions, uncertainty, and bias!\n",
 "predictions, uncertainty, bias = wrapped_model.predict(test_imgs, batch_size=512)"
-],
-"metadata": {
-"id": "YqsBHBf3yUlm"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "629ng-_H6WOk"
+},
 "source": [
 "# 3.3 Analyzing representation bias with Capsa\n",
 "\n",
 "From the above output, we have an estimate for the representation bias score! We can analyze the representation scores to start to think about manifestations of bias in the facial detection dataset. Before you run the next code block, which faces would you expect to be underrepresented in the dataset? Which ones do you think will be overrepresented?"
-],
-"metadata": {
-"id": "629ng-_H6WOk"
-}
+]
 },
 {
 "cell_type": "code",
@@ -377,15 +378,15 @@
 },
 {
 "cell_type": "code",
-"source": [
-"fig, ax = plt.subplots(figsize=(15,5))\n",
-"ax.imshow(mdl.util.create_grid_of_images(averaged_imgs, (1,10)))"
-],
+"execution_count": null,
 "metadata": {
 "id": "kn9IpPKYSECg"
 },
-"execution_count": null,
-"outputs": []
+"outputs": [],
+"source": [
+"fig, ax = plt.subplots(figsize=(15,5))\n",
+"ax.imshow(mdl.util.create_grid_of_images(averaged_imgs, (1,10)))"
+]
 },
 {
 "cell_type": "markdown",
@@ -420,15 +421,15 @@
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "NEfeWo2p7wKm"
+},
 "source": [
 "Since we've already used the `HistogramVAEWrapper` to calculate the histograms for representation bias quantification, we can use the exact same VAE wrapper to shed insight into epistemic uncertainty! Capsa helps us do exactly that. When we called the model, we returned the classification prediction, uncertainty, and bias for every sample:\n",
 "`predictions, uncertainty, bias = wrapped_model.predict(test_imgs, batch_size=512)`.\n",
 "\n",
 "Let's analyze these estimated uncertainties:"
-],
-"metadata": {
-"id": "NEfeWo2p7wKm"
-}
+]
 },
 {
 "cell_type": "code",
@@ -660,6 +661,11 @@
 }
 ],
 "metadata": {
+"accelerator": "GPU",
+"colab": {
+"provenance": []
+},
+"gpuClass": "standard",
 "kernelspec": {
 "display_name": "Python 3",
 "language": "python",
@@ -676,13 +682,8 @@
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.8.10"
-},
-"colab": {
-"provenance": []
-},
-"accelerator": "GPU",
-"gpuClass": "standard"
+}
 },
 "nbformat": 4,
 "nbformat_minor": 0
-}
+}
