<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<title>Transparency in Algorithmic Fairness</title>
<!-- <link rel="stylesheet" href="dist/reset.css"> -->
<link rel="stylesheet" href="dist/reveal.css">
<link rel="stylesheet" href="custom_themes/sussex_dark.css" id="theme">
<!-- Theme used for syntax highlighting of code -->
<link rel="stylesheet" href="plugin/highlight/monokai.css">
</head>
<style>
.citeme {
float: right;
color: #fff;
font-size: 12pt;
font-style: italic;
margin: 0;
}
</style>
<body>
<div id="hidden" style="display:none;"><div id="static-content">
<footer></footer>
</div> </div>
<div class="reveal"><div class="slides">
<section data-background-color="#000">
<h1><span class="highlight">Transparency in Algorithmic Fairness</span></h1>
<p><font size="10">Novi Quadrianto</font></p>
<p>Reader in Machine Learning</p>
<p>Department of Informatics, University of Sussex</p>
<img src="images/sussex_logo.png" style="width: 12%; border: none;"/>
</section>
<section data-background-color="#000" data-markdown><script type="text/template">
## Talk outline
- Importance and challenges of algorithmic fairness
- Algorithmic fairness <span class="highlight">with transparency at its heart</span>
- in-processing approach
- pre-processing approach
- Future directions
</script></section>
<section data-background-color="#000">
<h2>Machine learning systems</h2>
<p align="left">Machine learning systems are being implemented in all walks of life</p>
<table>
<tbody>
<tr>
<td style="border: none;" width="30%"> <img src="images/scs.jpg" width=65% title="Social Credit System" style="margin:-20px 0px"/> <p style="font-size:10px">Picture credit: Kevin Hong</p></td>
<td style="border: none;" width="30%"> <img src="images/algowatch.jpg" width=65% title="Automating Society" style="margin:-20px 0px"/><p style="font-size:10px">Picture credit: AlgorithmWatch</p></td>
<td style="border: none;" width="30%"> <img src="images/CDEI.png" width=65% title="Automating Society" style="margin:-20px 0px"/><p style="font-size:10px">Picture credit: Centre for Data Ethics and Innovation, UK</p></td>
</tr>
<tr>
<td style="border: none;" width="30%">Social credit system, China</td>
<td style="border: none;" width="30%">Personal budget calculation, UK</td>
<td style="border: none;" width="30%">Financial services, Crime and justice, </td>
</tr>
<tr>
<td style="border: none;" width="30%"></td>
<td style="border: none;" width="30%">Loan decision, Finland</td>
<td style="border: none;" width="30%">Recruitment,</td>
</tr>
<tr>
<td style="border: none;" width="30%"></td>
<td style="border: none;" width="30%">etc.</td>
<td style="border: none;" width="30%">Local government</td>
</tr>
</tbody>
</table>
</section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Fairness in machine learning
- A fair machine learning system takes biased datasets and outputs non-discriminatory decisions to people with differing <span class="highlight">protected attributes</span> such as race, gender, and age
- For example, ensuring a classifier is equally accurate on male and female populations
<img width=90% src="images/fairML.png" title="Fair ML"/>
</textarea></section>
<section>
<p> </p>
<p> </p>
<h1>Algorithmic fairness definitions</h1>
</section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Fairness definitions
- Discrimination in the law:
- <span class="highlight">Direct discrimination</span> with respect to intent
- <span class="highlight">Indirect discrimination</span> with respect to consequences
<span class="citeme">Article 21, EU Charter of Fundamental Rights</span>
- From the legal context to algorithmic fairness, e.g.:
- Removing direct discrimination by <span class="highlight">not using group information at test time</span>
- Removing indirect discrimination by enforcing <span class="highlight">equality on the outcomes</span> between groups
</textarea></section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Several notions of group fairness:
When the protected attribute $s$, the target label $y$, and the predicted label $\widehat{y}$ are all binary:
- Equality of acceptance rate (AR):
$
\text{Pr}(\widehat{y}=1|s=1)=\text{Pr}(\widehat{y}=1|s=0)
$
- Equality of true positive rate (TPR):
$
\text{Pr}(\widehat{y}=1|s=1,y=1)=\text{Pr}(\widehat{y}=1|s=0,y=1)
$
- Equality of positive predicted value (PPV):
$
\text{Pr}(y=1|s=1,\widehat{y}=1)=\text{Pr}(y=1|s=0,\widehat{y}=1)
$
- <span class="highlight">Let's have all the fairness metrics</span>! If we are fair with regards to all notions of fair, then we're fair... right?
</textarea></section>
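<section data-markdown data-background-color="#000"><textarea data-template>
## Group fairness metrics: a small sketch
A minimal illustration (not code from any of the cited papers), assuming binary numpy arrays `y`, `y_hat` and `s`, of how the three rates on the previous slide can be estimated:

```python
import numpy as np

def group_rates(y, y_hat, s, group):
    """Acceptance rate, TPR and PPV for one protected group."""
    m = (s == group)
    ar = y_hat[m].mean()                 # Pr(y_hat=1 | s=group)
    tpr = y_hat[m & (y == 1)].mean()     # Pr(y_hat=1 | s=group, y=1)
    ppv = y[m & (y_hat == 1)].mean()     # Pr(y=1 | s=group, y_hat=1)
    return ar, tpr, ppv

# equality of AR / TPR / PPV asks for the two tuples below to coincide
# print(group_rates(y, y_hat, s, 0), group_rates(y, y_hat, s, 1))
```
</textarea></section>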
<section data-markdown data-background-color="#000"><textarea data-template>
## Mutual exclusivity by Bayes' rule
<p align="left">For each protected attribute $s$, we have:</p>
$
\underbrace{\text{Pr}(y=1|\widehat{y}=1)}_{\text{Positive Predicted Value (PPV)}} = \frac{\overbrace{\text{Pr}(\widehat{y}=1|y=1)}^{\text{True Positive Rate (TPR)}}\times\overbrace{\text{Pr}(y=1)}^{\text{Base Rate (BR)}}}{\text{Pr}(\widehat{y}=1|y=1)\text{Pr}(y=1) + \underbrace{\text{Pr}(\widehat{y}=1|y=-1)}_{\text{False Positive Rate (FPR)}}(1-\text{Pr}(y=1))}
$
<p align="left">• Suppose we have FPR<sub>s=1</sub> = FPR<sub>s=0</sub> and TPR<sub>s=1</sub> = TPR<sub>s=0</sub> (<span class="highlight">equality of TPR/FPR</span>), can we have PPV<sub>s=1</sub> = PPV<sub>s=0</sub> (<span class="highlight">equality of PPV</span>)?</p>
<p align="left">• YES! But only if we have a <span class="highlight">perfect dataset</span> (i.e. BR<sub>s=1</sub> = BR<sub>s=0</sub>) or a <span class="highlight">perfect predictor</span> (i.e. FPR=0 and TPR=1 for s=1 and s=0)</p>
<span class="citeme">Kehrenberg, Chen, NQ: Tuning fairness with quantities that matter, under review.</span>
</textarea></section>
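<section data-markdown data-background-color="#000"><textarea data-template>
## A quick numeric check
Plugging illustrative numbers (assumed, not taken from any dataset) into the Bayes' rule relation on the previous slide: equal TPR and FPR for both groups, combined with different base rates, force different PPVs.

```python
def ppv(tpr, fpr, base_rate):
    """PPV from TPR, FPR and the base rate, via Bayes' rule."""
    return tpr * base_rate / (tpr * base_rate + fpr * (1 - base_rate))

tpr, fpr = 0.8, 0.2           # identical across groups: equality of TPR/FPR holds
print(ppv(tpr, fpr, 0.5))     # group with base rate 0.5 -> PPV = 0.80
print(ppv(tpr, fpr, 0.2))     # group with base rate 0.2 -> PPV = 0.50
# equal PPVs would need equal base rates (perfect dataset) or TPR=1, FPR=0 (perfect predictor)
```
</textarea></section>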
<section data-markdown data-background-color="#000"><textarea data-template>
## Which fairness notion to use?
- There's no right answer; all of the above are "fair"
- It's important to consult <span class="highlight">domain experts</span> to find which is the best fit for each problem
<!--- There is no one-size fits all fairness metric-->
- Importantly, if we want to encourage <span class="highlight">public conversations</span>, <span class="highlight">transparency</span> in how fairness is met is an integral ingredient
<!--- There is also a notion of <span class="highlight">individual fairness</span> (similar individuals to be treated similarly), and
metrics <span class="highlight">in-between</span> group and individual fairness.-->
</textarea></section>
<section>
<p> </p>
<p> </p>
<h1>Algorithmic fairness methods with <span class="highlight">transparency</span></h1>
</section>
<section data-markdown data-background-color="#000"><textarea data-template>
## How to enforce fairness?
<span class="highlight">Three</span> different ways to enforce fairness:
<img src="images/fairmethods.png" width=75% title="Fair Methods"/>
</textarea></section>
<section data-markdown data-background-color="#000"><textarea data-template>
## In-processing
- Specify fairness metrics as constraints on learning
- Optimise for accuracy under those constraints
- No free lunch: additional constraints lower accuracy
<img src="images/pareto.png" width=33% style="margin:0px 0px"/>
<span class="citeme">NQ and Sharmanska, Recycling privileged learning and distribution matching for fairness, NIPS 2017</span>
</textarea></section>
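<section data-markdown data-background-color="#000"><textarea data-template>
## In-processing: a generic sketch
A minimal sketch of the in-processing idea (a generic penalised logistic regression, not the method of the NIPS 2017 paper): the acceptance-rate gap between groups is added to the loss, weighted by a hyperparameter `lam`.

```python
import torch

def fair_logreg(X, y, s, lam=1.0, epochs=500, lr=0.1):
    """Logistic regression with a soft acceptance-rate (demographic parity) penalty.
    X: float tensor (n, d); y, s: float tensors of 0s and 1s."""
    theta = torch.zeros(X.shape[1], requires_grad=True)
    opt = torch.optim.SGD([theta], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        p = torch.sigmoid(X @ theta)
        loss = torch.nn.functional.binary_cross_entropy(p, y)
        gap = p[s == 1].mean() - p[s == 0].mean()   # soft acceptance-rate difference
        (loss + lam * gap.abs()).backward()
        opt.step()
    return theta.detach()
```
Note that `lam` is exactly the kind of intermediate control parameter discussed on the next slide.
</textarea></section>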
<section data-markdown data-background-color="#000"><textarea data-template>
## Challenges?
- The fairness-accuracy trade-off is controlled by intermediate, unintuitive control parameters such as $\tau=5$ and $\mu=1.2$.
<center>
<img src="images/pareto_control_.png" width=33% style="margin:0px 0px"/>
</center>
</textarea></section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Intuitive control parameters
- Linear model: $f(x) = \langle\theta,x\rangle$, with probability score $\text{Pr}(y=1|x, \theta, s)$
- Manipulate the likelihood $\text{Pr}(y=1|x, \theta, s)$ to enforce fairness
- Introduce latent <span class="highlight">target labels</span> $\widehat{y}$ (we are changing the learning goal): $\text{Pr}(y=1|x, \theta, s) = \sum\limits_{\widehat{y}\in\\{1,-1\\}} \text{Pr}(y=1|\widehat{y},s)\text{Pr}(\widehat{y}|x, \theta, s)$
- For $\widehat{y}=\\{1,-1\\}$ and $s=\\{0,1\\}$, choose the four free parameters $\text{Pr}(y=1|\widehat{y}=i,s=j)$ such that $\theta$ targets a fair output (see the sketch on the next slide)
<!-- $
\text{Pr}(y=1|\bar{y}=0,s=0) \;\ \text{Pr}(y=1|\bar{y}=0,s=1)
$
$
\text{Pr}(y=1|\bar{y}=1,s=0) \;\ \text{Pr}(y=1|\bar{y}=1,s=1)
$-->
<span class="citeme">Kehrenberg, Chen, NQ, Tuning Fairness with quantities that matter, under review.</span>
</textarea></section>
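<section data-markdown data-background-color="#000"><textarea data-template>
## Latent target labels: a sketch
A toy rendering of the likelihood on the previous slide (an illustration, not the paper's code); the nested tuple `P`, where `P[j]` holds $(\text{Pr}(y=1|\widehat{y}=-1,s=j), \text{Pr}(y=1|\widehat{y}=+1,s=j))$, contains the four free parameters set from the desired fair rates.

```python
import torch

def likelihood_y1(X, s, theta, P):
    """Pr(y=1|x,theta,s) = sum over yhat of Pr(y=1|yhat,s) * Pr(yhat|x,theta).
    P[j] = (Pr(y=1|yhat=-1,s=j), Pr(y=1|yhat=+1,s=j))."""
    p_hat1 = torch.sigmoid(X @ theta)                    # Pr(yhat=+1 | x, theta)
    lik_s1 = P[1][1] * p_hat1 + P[1][0] * (1 - p_hat1)
    lik_s0 = P[0][1] * p_hat1 + P[0][0] * (1 - p_hat1)
    return torch.where(s == 1, lik_s1, lik_s0)

# training minimises the cross-entropy between likelihood_y1(...) and the observed labels y,
# so theta ends up targeting the latent labels yhat rather than the biased labels y
```
</textarea></section>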
<section data-markdown data-background-color="#000"><textarea data-template>
## Transparency in fairness
- Controlling fairness with the <span class="highlight">quantities that matter</span> (e.g. $TPR=0.9$ and $TNR = 0.6$ for $s=0$ and $s=1$)
- Existing approaches rely on intermediate, unintuitive control parameters
such as $\tau=5$ and $\mu=1.2$.
<img src="images/eo_together_acc_.png" width=30% />
<img src="images/eo_together_control_.png" width=29.4% />
<img src="images/eo_together_legend_.png" width=31% />
</textarea></section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Challenges?
- Understanding what has been changed to achieve fairness
- Looking at pre-processing methods, which learn <span class="highlight">a fair representation</span>
<center><img src="images/models/adversarialfair.png" width=35% title="Adversarially Fair" style="margin:-10px -10px"/> <p style="font-size:10px">Picture credit: Madras, et al., 2018.</p></center>
- How to understand or visualise the learned fair representation $Z$?
</textarea></section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Pre-processing: Learning fair representations
- Inspired by the work on <span class="highlight">attribution maps</span> in computer vision for the transparency of deep learning models:
<center>
<img src="images/gradcam.jpg" width=50% title="GradCam" style="margin:0px -10px"/> <p style="font-size:10px">Picture credit: Selvaraju, et al., 2017.</p>
</center>
- Ensure that the learned representation can be "<span class="highlight">overlaid</span>" back onto the input data
</textarea></section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Fair representations in the data domain
- A fair representation $Z:=T_\omega(x)$ that has the protected attribute removed but remains in the space of $x$
- Main disentangling assumption: $\phi(x) = \phi(T_{\omega}(x)) + \underbrace{\phi (\lnot T_{\omega}(x))}_{\text{Spurious}}$
- <span class="highlight">Spurious</span>: dependent on the protected attribute
- <span class="highlight">Non-Spurious</span>: independent of the protected attribute
<span class="citeme">NQ, Sharmanska, Thomas, Discovering fair representations in the data domain, CVPR 2019</span>
</textarea></section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Architecture and optimization
<img src="images/models/Architecture.png" width=70% title="Architecture"/>
<p align="left">$\underset{T_\omega}{\text{min.}}\quad \underbrace{\mathrm{Dep.}(\phi ( T_{\omega}(\mathrm{x})),S|y=1)}_{\text{non-spurious loss}} \quad\underbrace{- \mathrm{Dep.}(\phi (\lnot T_{\omega}(\mathbf{x})),S|y=1)}_{\text{spurious loss}} $
$+ \text{prediction loss} \quad+ \text{reconstruction loss}$</p>
</textarea></section>
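<section data-markdown data-background-color="#000"><textarea data-template>
## Objective: a simplified sketch
A highly simplified sketch of evaluating the objective on the previous slide for one minibatch. Everything here is illustrative: `T`, `phi` and `clf` are assumed torch modules, `dep` is a crude covariance-based proxy for the dependence measure, and the squared-error reconstruction term is an assumption, not the paper's choice.

```python
import torch
import torch.nn.functional as F

def dep(feats, s):
    """Crude dependence proxy: squared cross-covariance between features and s."""
    f = feats - feats.mean(0)
    sc = (s.float() - s.float().mean()).unsqueeze(1)
    return ((f * sc).mean(0) ** 2).sum()

def objective(x, y, s, T, phi, clf):
    fair = T(x)                                 # fair representation, still in the data domain
    feat_fair = phi(fair)
    feat_spur = phi(x) - feat_fair              # phi(not T(x)), via the disentangling assumption
    pos = (y == 1)
    nonspurious = dep(feat_fair[pos], s[pos])   # minimised: fair features independent of s
    spurious = dep(feat_spur[pos], s[pos])      # maximised: residual absorbs the dependence on s
    pred = F.binary_cross_entropy_with_logits(clf(feat_fair).squeeze(1), y.float())
    recon = ((fair - x) ** 2).mean()
    return nonspurious - spurious + pred + recon
```
</textarea></section>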
<section data-background-color="#000">
<h2>Transparency in fairness</h2>
<p align="left">Experiments on <span class="highlight"> CelebA dataset</span>: $202,599$ celebrity face images with $40$ attributes, <span class="highlight">gender</span> as the binary protected attribute, the attribute <span class="highlight">smiling</span> as the classification task</p>
<small>
<table>
<thead><tr>
<td></td>
<td>Acc.</td>
<td><span class="highlight">TPR Diff.</span></td>
<td>TPR male</td>
<td>TPR female</td>
</tr>
</thead>
<tbody>
<tr>
<td>original <span class="highlight">image</span> representation $\mathbf{x}\in\mathbb{R}^{2048}$</td>
<td width="10%">89.70</td>
<td width="10%">7.54</td>
<td width="15%">92.03</td>
<td width="15%">84.50</td>
</tr>
<!-- <tr>
<td>original <span class="highlight">attribute</span> representation $\mathbf{x}\in\mathbb{R}^{40}$</td>
<td width="10%">79.1</td>
<td width="10%">39.9</td>
<td width="10%">90.8</td>
<td width="10%">50.9</td>
</tr>-->
<tr>
<td>fair <span class="highlight">image</span> representation in the data domain $T_{\omega}(\mathbf{x})\in\mathbb{R}^{2048}$</td>
<td width="10%">91.31</td>
<td width="10%">4.76</td>
<td width="15%">91.85</td>
<td width="15%">87.09</td>
</tr>
<!-- <tr>
<td>fair <span class="highlight">attribute</span> representation in the data domain $T_{\omega}(\mathbf{x})\in\mathbb{R}^{40}$</td>
<td width="10%">75.9</td>
<td width="10%">12.4</td>
<td width="10%">87.2</td>
<td width="10%">74.8</td>
</tr>-->
</tbody>
</table>
</small>
<table align="center" style="border-collapse: collapse; border: none;">
<tbody ><tr style="border: none;"><td width="40%" colspan="4" style="border: none;"></td><td width="1%" style="border: none;"></td><td width="40%" style="border: none;"></td></tr>
<tr style="border: none;">
<td width="10%" style="border: none;">Non-spurious</td>
<td width="20%" style="border: none;"><img src="images/celeba_res/006126.jpg" width=100% title="Im1"/></td>
<td width="10%" style="border: none;">Spurious</td>
<td width="20%" style="border: none;"><img src="images/celeba_res/006126res.jpg" width=100% title="RIm1"/></td>
</tr></tbody>
</table>
</section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Challenges?
- Spurious/non-spurious (w.r.t. gender) visualisations are too coarse!
<table align="center" style="border-collapse: collapse; border: none;">
<tbody ><tr style="border: none;"><td width="40%" colspan="4" style="border: none;"></td><td width="1%" style="border: none;"></td><td width="40%" style="border: none;"></td></tr>
<tr style="border: none;">
<td width="30%" style="border: none;">Spurious (what we have)</td>
<td width="20%" style="border: none;"><img src="images/celeba_res/006126res.jpg" width=100% style="margin:-20px 0px"/></td>
<td width="30%" style="border: none;">Spurious (what we want)</td>
<td width="20%" style="border: none;"><img src="images/spurious.png" width=60% style="margin:-20px 0px"/></td>
</tr></tbody>
</table>
- Residual unfairness (transferability) problem
<center>
<img src="images/nyclu.png" width="35%" title="New York Civil Liberties Union" style="margin:-20px 0px"/><p style="font-size:10px">Picture credit: New York Civil Liberties Union</p>
</center>
</textarea></section>
<section data-background-color="#000">
<h2>Fair and transferable </h2>
<table>
<tbody>
<tr>
<td style="border: none;" width="50%"> <img src="images/diagram.png" width=100% style="margin:-20px 0px"/></td>
<td style="border: none;"> <p>Disentangling the latent space (c.f. on the reconstruction space) into two components:<ul>
<li><span class="highlight">Spurious</span>: dependent on the protected attribute</li>
<li><span class="highlight">Non-Spurious</span>: independent of the protected attribute</li>
</ul></p></td>
</tr>
</tbody>
</table>
<span class="citeme">Kehrenberg, Bartlett, Thomas, NQ, NoSiNN: Removing spurious correlations with null-sampling, under review</span>
</section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Invertible Neural Network
- We leverage flow-based models to <span class="highlight">preserve all information relevant to $y$</span> that is independent of $s$ (during the pre-training phase)
- Flow-based models permit exact likelihood estimation by warping a base density with a series of <span class="highlight">invertible transformations</span>
<!--- We have:
$\underset{\theta}{\text{min.}}\quad \mathbb{E}_x [- \log p_\theta (x|z)] + \lambda \text{Dep.}(z^{\text{invariant}}, s)$
with $\log p(x) = \log \mathcal{N}(z_0; 0, \mathbb{I}) + \sum_\{i=1\}^\{L\} \log | \det (\frac{\mathrm{d} f_i}{\mathrm{d}z_\{i-1\}})|$-->
- We conjecture that the latent representations of flow-based models are more robust to <span class="highlight">out-of-distribution</span> data
<!--
- The invertible network $f_\theta$ maps the inputs $x$ to a representation $z$: $f_\theta(x) = [z^{\text{spurious}}, z^{\text{invariant}}] := z$
- We have:
$\underset{\theta}{\text{min.}}\quad \mathbb{E}_x [- \log p_\theta (x|z)] + \lambda \text{Dep.}(z^{\text{invariant}}, s)$ with $\log p(x) = \log \mathcal{N}(z_0; 0, \mathbb{I}) + \sum_\{i=1\}^\{L\} \log | \det (\frac{\mathrm{d} f_i}{\mathrm{d}z_\{i-1\}})|$
\mathbb{E}_x [- \log p_\theta (x|z)] + \lambda \text{Dep.}(z_d, s)$, with $\log p(x) = \log \mathcal{N}(z; 0, \mathbb{I}) + \sum_{i=1}^{L} \log | \det ( \frac{\mathop{}\!\mathrm{d} f_i}{ \mathrm{d}z_{i-1}}) |-->
</textarea></section>
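<section data-markdown data-background-color="#000"><textarea data-template>
## Null-sampling: a toy sketch
A toy illustration of the splitting idea, with an orthogonal linear map standing in for a real invertible network; zeroing out the spurious partition before inverting is one reading of null-sampling, so treat the details as assumptions rather than the paper's method.

```python
import torch

d, d_spur = 8, 2                                # latent size and size of the spurious partition
Q, _ = torch.linalg.qr(torch.randn(d, d))       # orthogonal matrix: a trivially invertible "flow"

def encode(x):
    """z = f_theta(x), split into [z_spurious, z_invariant]."""
    z = x @ Q
    return z[:, :d_spur], z[:, d_spur:]

def null_sample(x):
    """Reconstruct x with the spurious partition nulled out."""
    z_spur, z_inv = encode(x)
    z = torch.cat([torch.zeros_like(z_spur), z_inv], dim=1)
    return z @ Q.T                               # exact inverse, thanks to invertibility

x = torch.randn(4, d)
x_no_spurious = null_sample(x)                   # same shape as x, spurious directions removed
```
</textarea></section>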
<section data-markdown data-background-color="#000"><textarea data-template>
## Transferability results
- Experiments on <span class="highlight">Adult Income dataset</span>
- We evaluate the performance of our method for mixing factors ($\eta$) of value $\\{0, 0.1, ..., 1\\}$ (at $\eta = 0.5$ the dataset is perfectly balanced)
<center>
<img src="images/transfer_2.png" width=40% title="Transferability" style="margin:0px 0px"/>
</center>
</textarea></section>
<section data-background-color="#000">
<h2>Transparency in fairness</h2>
<table>
<tbody>
<tr>
<td style="border: none;" width="30%"> <img src="images/train_original_x_15000.png" width=100% style="margin:-20px 0px"/> <p>Original data</p></td>
<td style="border: none;" width="30%"> <img src="images/celeba_reconstructions_with_bias_highlighted.png" width=100% style="margin:-20px 0px"/><p>Non-Spurious</p></td>
<td style="border: none;" width="30%"> <img src="images/train_reconstruction_s_15000.png" width=100% style="margin:-20px 0px"/><p>Spurious</p></td>
</tr>
</tbody>
</table>
<p align="left">• <span class="highlight">Gender</span> as the protected attribute</p>
<p align="left">• Unfortunately, the model lightens the <span class="highlight">skintone</span> when gender-neutralising male faces</p>
</section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Challenges?
- Can we provide <span class="highlight">an individual-level explanation</span> of fair systems without the difficult task of learning fair ("genderless faces") representations?
</textarea></section>
<section data-markdown data-background-color="#000"><textarea data-template>
## Pre-processing: Contrastive examples
- The <span class="highlight">ideal dataset</span> contains an imaginary data point for each person, i.e. the one inside the black box, whereby we <span class="highlight">intervene</span> and set the gender attribute to the opposite that is in real life
<img src="images/balanced_Page_1.png" width=50% title="Balanced1"/>
<img src="images/balanced_Page_2.png" width=30% title="Balanced2"/>
<span class="citeme">Sharmanska, Hendricks, Darrell, NQ, Contrastive examples for addressing the tyranny of the majority, under review</span>
</textarea></section>
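<section data-markdown data-background-color="#000"><textarea data-template>
## Augmenting with contrastive examples: a sketch
A schematic sketch of the augmentation step; `generate_contrastive` stands for an assumed generator (e.g. the GAN or nearest-neighbour models on the next slide) and nothing here is the paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def augment_with_contrastive(X, y, s, generate_contrastive):
    """Add, for every example, a counterpart with the protected attribute flipped."""
    X_c = generate_contrastive(X, 1 - s)    # same people, intervened protected attribute
    X_aug = np.vstack([X, X_c])
    y_aug = np.concatenate([y, y])           # task labels are kept under the intervention
    return X_aug, y_aug

# clf = LogisticRegression(max_iter=1000).fit(*augment_with_contrastive(X, y, s, generate_contrastive))
```
</textarea></section>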
<section data-markdown data-background-color="#000"><textarea data-template>
## Transparency in fairness
- All previous work with adversarial learning tries to <span class="highlight">remove</span> protected attributes from the data
- Instead, we use adversarial learning to <span class="highlight">generate</span> data points with pre-specified protected attributes (contrastive examples)
- Contrastive examples "can be easily interpreted" <!--because they do have the semantic meaning of the input-->
<center>
<table style="border-collapse: collapse; border: none;">
<tbody ><tr style="border: none;"><td width="40%" colspan="4" style="border: none;"></td><td width="1%" style="border: none;"></td><td width="40%" style="border: none;"></td></tr>
<tr style="border: none;">
<td width="10%" style="border: none;"></td>
<td width="30%" style="border: none;">Real</td>
<td width="30%" style="border: none;">GAN contrastive</td>
<td width="30%" style="border: none;">NN contrastive</td>
</tr>
<tr style="border: none;">
<td width="10%" style="border: none;">Male</td>
<td width="30%" style="border: none;"><img src="images/008526_m_real.jpg" width=60% title="RIm1"/></td>
<td width="30%" style="border: none;"><img src="images/008526_m_fake.jpg" width=60% title="RIm1"/></td>
<td width="30%" style="border: none;"><img src="images/008526_m_match.jpg" width=60% title="RIm1"/></td>
</tr></tbody>
</table></center>
</textarea></section>
<section data-background-color="#000">
<h2>Results on the CelebA dataset</h2>
<p align="left">• We use <span class="highlight">gender and age</span> as the two protected attributes.</p>
<p align="left">• We use <span class="highlight">smiling</span> as the classification task. </p>
<small>
<table>
<thead><tr>
<td>method</td>
<td>Acc.</td>
<td><span class="highlight">TPR Diff.</span></td>
<td><span class="highlight">FPR Diff.</span></td>
</tr>
</thead>
<tbody>
<tr>
<td>logistic regression (original)</td>
<td width="10%">89.71</td>
<td width="10%">6.69</td>
<td width="10%">6.40</td>
</tr>
<tr>
<td>logistic regression (original and GAN contrastive)</td>
<td width="10%">88.94</td>
<td width="10%">3.50</td>
<td width="10%">2.79</td>
</tr>
<tr>
<td>logistic regression (original and NN contrastive)</td>
<td width="10%">88.78</td>
<td width="10%">3.32</td>
<td width="10%">3.53</td>
</tr>
<tr>
<td>$\ddagger$logistic regression (original and GAN contrastive with output consistency)</td>
<td width="10%">94.15</td>
<td width="10%">3.51</td>
<td width="10%">2.18</td>
</tr>
</tbody>
</table>
</small><br>
<p align="left">$\ddagger$: <span class="highlight">Rejection learning</span> -- classifier only makes a prediction if there is an agreement between original and contrastive examples (occurs in $17,237$ out of $20,000$ test examples, i.e. $86.185\%$).</p>
</section>
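<section data-markdown data-background-color="#000"><textarea data-template>
## Rejection learning: a sketch
The rejection rule in the last row of the previous table can be written down directly (a sketch with placeholder names; `clf`, `X` and `X_contrastive` are assumptions).

```python
import numpy as np

def predict_with_rejection(clf, X, X_contrastive):
    """Predict only when the original and the contrastive example agree; otherwise abstain."""
    p_orig = clf.predict(X)
    p_contr = clf.predict(X_contrastive)
    agree = (p_orig == p_contr)
    preds = np.where(agree, p_orig, -1)     # -1 marks an abstention
    return preds, agree.mean()              # agree.mean() is the coverage (86.185% on the previous slide)
```
</textarea></section>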
<section data-markdown data-background-color="#fff"><textarea data-template>
## Future directions
- <b>BayesianGDPR</b>: Bayesian models and algorithms for fairness and transparency (ERC Starting Grant -- 1 April 2020 - 31 March 2025)
<center>
<img src="images/bayesiangdpr.png" width=40% title="BayesianGDPR" />
</center>
- <b>TANGO</b>: Deep mutual understanding for human-machine symbiosis (EU FET Proactive Human-Centric AI <b><span class="highlight">proposal</span></b>)
<center>
<img src="images/TANGO.jpg" width=60% title="TANGO"/>
</center>
</textarea></section>
<section data-markdown data-background-color="#fff"><textarea data-template>
## Future directions
- <b>Inclusive Green Infrastructures for Urban Well-Being</b> (British Academy -- 18 November 2019 - 17 November 2021)
<center>
<img src="images/greeninfrastructure.png" width=100% title="Green Infrastructures" style="margin:0px 0px"/>
</center>
</textarea></section>
<section data-background-color="#000" data-markdown><textarea data-template>
## Talk outline
- Importance and challenges of algorithmic fairness
- Algorithmic fairness <span class="highlight">with transparency at its heart</span>
- in-processing approach
- pre-processing approach
- Future directions
<h1><span class="highlight" >Thank you!</span></h1>
</textarea></section>
</div></div>
<script type="module" src="settings.js"></script>
</body>
</html>