 "In the plot above, the blue points are the training data, which will be used as inputs to train the neural network model. The red line is the ground truth data, which will be used to evaluate the performance of the model.\n",
 "\n",
@@ -177,10 +171,7 @@
 "1. What are your observations about where the train data and test data lie relative to each other?\n",
 "2. What, if any, areas do you expect to have high/low aleatoric (data) uncertainty?\n",
 "3. What, if any, areas do you expect to have high/low epistemic (model) uncertainty?"
-],
-"metadata": {
-"id": "Fz3UxT8vuN95"
-}
+]
 },
 {
 "cell_type": "markdown",
@@ -259,6 +250,9 @@
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "7Vktjwfu0ReH"
+},
 "source": [
 "\n",
 "#### **TODO: Analyzing the performance of standard regression model**\n",
@@ -267,10 +261,7 @@
 "\n",
 "1. Where does the model perform well?\n",
 "2. Where does the model perform poorly?"
-],
-"metadata": {
-"id": "7Vktjwfu0ReH"
-}
+]
 },
 {
 "cell_type": "markdown",
@@ -377,17 +368,17 @@
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "HpDMT_1FERQE"
+},
 "source": [
 "#### **TODO: Evaluating bias with wrapped regression model**\n",
 "\n",
 "Write short (~1 sentence) answers to the questions below to complete the `TODO`s:\n",
 "\n",
 "1. How does the bias score relate to the train/test data density from the first plot?\n",
 "2. What is one limitation of the Histogram approach that simply bins the data based on frequency?"
-],
-"metadata": {
-"id": "HpDMT_1FERQE"
-}
+]
 },
 {
 "cell_type": "markdown",
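The Histogram approach named in the second question can be illustrated without any library. The sketch below is a minimal NumPy construction, not Capsa's implementation: the data distribution, bin count, and score definition are all illustrative assumptions. It bins a 1-D feature by frequency and reads each sample's bin density back as its representation score, so samples in sparse regions get low scores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D feature values: dense near 0, sparse in the tails.
x = rng.normal(0.0, 1.0, 5000)

# Frequency-based representation score: bin the data, then look up each
# sample's bin density. Low density = underrepresented sample.
counts, edges = np.histogram(x, bins=20, range=(-4, 4))
density = counts / counts.sum()
bin_idx = np.clip(np.digitize(x, edges) - 1, 0, len(counts) - 1)
representation_score = density[bin_idx]

# Samples far from the bulk of the data land in low-frequency bins.
center = representation_score[np.abs(x) < 0.5].mean()
tail = representation_score[np.abs(x) > 2.5].mean()
print(center, tail)
```

Central samples receive a much higher score than tail samples, mirroring how the bias score should track train-data density in the first plot.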
@@ -436,6 +427,11 @@
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "dT2Rx8JCg3NR"
+},
+"outputs": [],
 "source": [
 "# Capsa makes the aleatoric uncertainty an attribute of the prediction!\n",
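Aleatoric (data) uncertainty is noise in the labels themselves, so it can be estimated from the spread of residuals around a fitted model. The following is a library-free NumPy sketch of that idea; the synthetic heteroscedastic dataset, the polynomial point-estimate model, and the per-bin residual statistic are all illustrative assumptions, not how Capsa computes its estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic heteroscedastic data: labels are noisier in the band |x| < 1.
x = rng.uniform(-3, 3, 4000)
noise_std = np.where(np.abs(x) < 1, 0.6, 0.1)
y = np.sin(x) + rng.normal(0.0, noise_std)

# Fit a point-estimate model, then measure the residual spread per x-bin.
coeffs = np.polyfit(x, y, deg=7)
resid = y - np.polyval(coeffs, x)
edges = np.linspace(-3, 3, 13)
bin_idx = np.clip(np.digitize(x, edges) - 1, 0, 11)
aleatoric = np.array([resid[bin_idx == b].std() for b in range(12)])

# The estimate tracks the true noise level: large in the central bins,
# small in the outer ones.
print(aleatoric.round(2))
```

Unlike epistemic uncertainty, this noise floor does not shrink with more data from the same distribution; more samples only estimate it more precisely.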
 "1. For what values of $x$ is the epistemic uncertainty high or increasing suddenly?\n",
 "2. How does your answer in (1) relate to how the $x$ values are distributed (refer back to original plot)? Think about both the train and test data.\n",
 "3. How could you reduce the epistemic uncertainty in regions where it is high?"
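Question 3 above concerns reducing epistemic uncertainty. A library-free sketch of the mechanism, assuming a bootstrap polynomial ensemble as the uncertainty proxy and synthetic data with a coverage gap (none of this is the lab's actual model): ensemble disagreement at a query point shrinks once training data is collected there, because epistemic uncertainty, unlike aleatoric, is reducible.

```python
import numpy as np

rng = np.random.default_rng(3)

def ensemble_std(x_train, y_train, x_query, n_members=30, deg=5):
    # Epistemic proxy: spread of bootstrap polynomial fits at x_query.
    preds = []
    for _ in range(n_members):
        idx = rng.integers(0, len(x_train), len(x_train))
        c = np.polyfit(x_train[idx], y_train[idx], deg)
        preds.append(np.polyval(c, x_query))
    return float(np.std(preds))

# Training data with a gap around x = 2: the model is unconstrained there.
x = np.concatenate([rng.uniform(-3, 1, 300), rng.uniform(3, 4, 50)])
y = np.sin(x) + rng.normal(0, 0.1, len(x))
before = ensemble_std(x, y, 2.0)

# Collect more data in the sparse region, then re-measure disagreement.
x_extra = rng.uniform(1.5, 2.5, 100)
y_extra = np.sin(x_extra) + rng.normal(0, 0.1, 100)
after = ensemble_std(np.concatenate([x, x_extra]),
                     np.concatenate([y, y_extra]), 2.0)

print(before, after)   # disagreement at x = 2 typically drops sharply
```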
 "# 3.3 Analyzing representation bias with Capsa\n",
 "\n",
 "From the above output, we have an estimate for the representation bias score! We can analyze the representation scores to start to think about manifestations of bias in the facial detection dataset. Before you run the next code block, which faces would you expect to be underrepresented in the dataset? Which ones do you think will be overrepresented?"
 "Since we've already used the `HistogramVAEWrapper` to calculate the histograms for representation bias quantification, we can use the exact same VAE wrapper to shed insight into epistemic uncertainty! Capsa helps us do exactly that. When we called the model, we returned the classification prediction, uncertainty, and bias for every sample:\n",