<!DOCTYPE html>
<html lang="en">
<title>Algorithmic Recourse</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<script src="custom_themes/html_elements.js"></script>
<link rel="stylesheet" href="dist/reveal.css">
<link rel="stylesheet" href="custom_themes/sussex2.css" id="theme">
<div class="reveal">
<div class="slides">
<div id="background-template">
<footer>
<p>Algorithmic Recourse
</footer>
</div>
<section class="dark-cyan">
<p>
<h1>Algorithmic Recourse</h1>
<p style="color: white">Predictive Analytics Lab<br>
University of Sussex
<p><img src="images/logos/University_of_Sussex_Logo.svg2000_white.png" style="width: 5rem;">
</section>
<section>
<h2>Why explanations?</h2>
<ol>
<li>to inform and help the subject understand why a particular decision was reached</li>
<li>to provide grounds to contest an adverse decision</li>
<li>to understand what could be changed to receive a desired result in the future (based on the current decision model)</li>
</ol>
</section>
<section>
<h2>What is recourse?</h2>
<p><em>Specific subset of explanations:</em> to understand what could be changed to receive a desired result in the future (based on the current decision model)</p>
<p><em>Further Definitions/Variations:</em></p>
<ul>
<li>the ability of a person to obtain a desired outcome from a fixed model</li>
<li>an actionable set of changes a person can undertake in order to improve their outcome</li>
<li>the systematic process of reversing unfavourable decisions made by algorithms and bureaucracies across a range of counterfactual scenarios</li>
</ul>
</section>
<section>
<h2>Computing Recourse</h2>
<ol>
<li>Find the cheapest modification of features</li>
<li>That changes the outcome</li>
<li>While only allowing feasible changes</li>
</ol>
</section>
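<section>
<h2>Computing Recourse: Toy Sketch</h2>
<p>A minimal brute-force sketch of the three steps above. The classifier, costs, and feasibility sets here are illustrative toy choices, not taken from any paper:</p>

```python
import itertools

def find_recourse(x, predict, costs, feasible_deltas, target=1):
    """Search feasible feature modifications for the cheapest one
    that flips the model's prediction to the desired outcome."""
    best, best_cost = None, float("inf")
    # enumerate every combination of per-feature feasible changes
    for deltas in itertools.product(*feasible_deltas):
        candidate = [xi + d for xi, d in zip(x, deltas)]
        cost = sum(c * abs(d) for c, d in zip(costs, deltas))
        if predict(candidate) == target and cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

# toy linear classifier: approve if income/10 + savings/5 >= 10
predict = lambda z: int(z[0] / 10 + z[1] / 5 >= 10)
x = [50, 10]                               # rejected applicant (score 7)
costs = [2.0, 1.0]                         # income is costlier to change
feasible = [[0, 10, 20], [0, 5, 10, 15]]   # only increases are actionable
best, cost = find_recourse(x, predict, costs, feasible)
# best == [50, 25], cost == 15.0: raising savings alone is cheapest
```

<p>Real implementations replace this enumeration with gradient-based or integer-programming search, but the structure (objective, constraint, feasibility set) is the same.</p>
</section>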
<section>
<h2>Computing Recourse</h2>
<img src="images/recourse/formulation.png">
</section>
<section>
<h2>Counterfactual vs Contrastive</h2>
<p>No generally agreed-upon definitions, but some definitions come from philosophy</p>
<p>Contrastive is a subset of counterfactual</p>
<p>Counterfactual: Another possible world</p>
<p>Contrastive: Comparison between two worlds</p>
</section>
<section>
<h2>Counterfactual vs Contrastive</h2>
<p><b>Fact:</b> An item to be explained</p>
<p><b>Foil:</b> A counterfactual item to the fact</p>
<p><b>Surrogate:</b> A factual item that contrasts with the fact</p>
<p><b>Contrastive Counterfactual Explanation:</b> compares a fact and a foil</p>
<p><b>Contrastive Bi-factual Explanation:</b> Compares a fact and a surrogate</p>
</section>
<section>
<h2>Criticisms</h2>
<p>Who decides what changes are feasible?</p>
<p>What happens if the decision process changes over time?</p>
<p>What's the difference between recourse and cheating the system?</p>
<p>Some research has shown that contrastive counterfactual explanations don't improve users' understanding of a system, but do improve their <em>belief</em> that they understand the system.</p>
</section>
<section>
<h3>Algorithmic recourse under imperfect causal knowledge: a probabilistic approach</h3>
<p>Assume we have a GCM, but no SEM</p>
<p>Approach 1: Rely on a probabilistic regression using a Gaussian Process prior over functions $f_r$</p>
</section>
<section>
<h3>Algorithmic recourse under imperfect causal knowledge: a probabilistic approach</h3>
<p>Leads to:</p>
<p>$$\min_{a=\textrm{do}(X=\theta)} \textrm{cost}^F(a)$$</p>
<p>Subject to: $$ \mathbb{E}_{X^{\textrm{SCF}}(a)} [ h(X^{\textrm{SCF}}(a)) ] \geq \textrm{thresh}(a) $$</p>
<p>$X^{\textrm{SCF}}$ = Counterfactual Random Variable conditioned on $x$</p>
</section>
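<section>
<h3>Algorithmic recourse under imperfect causal knowledge: a probabilistic approach</h3>
<p>The expectation constraint can be checked by Monte Carlo: sample counterfactuals under the intervention and require the average favourable-outcome rate to clear the threshold. The structural equation and classifier below are invented toy stand-ins, not the paper's models:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_constraint_holds(a_theta, h, sample_scf, thresh=0.95, n=10_000):
    """Monte Carlo estimate of E[h(X^SCF(a))] for the action
    a = do(X1 = a_theta); accept the action only if the expected
    favourable-outcome rate clears the threshold."""
    samples = sample_scf(a_theta, n)   # draws from the counterfactual distribution
    return np.mean([h(s) for s in samples]) >= thresh

# toy guess at the downstream mechanism: X2 = 0.8 * X1 + noise,
# where the noise std reflects our uncertainty about the true SEM
def sample_scf(x1, n):
    x2 = 0.8 * x1 + rng.normal(0.0, 0.5, size=n)
    return np.stack([np.full(n, x1), x2], axis=1)

h = lambda s: float(s[0] + s[1] >= 8.0)   # classifier: favourable if sum >= 8
ok = expected_constraint_holds(5.0, h, sample_scf)
```

<p>Under model uncertainty the point constraint $h(x)=1$ becomes probabilistic, which is exactly why the threshold appears in the formulation above.</p>
</section>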
<section>
<h3>Algorithmic recourse under imperfect causal knowledge: a probabilistic approach</h3>
<p>Assume we have a GCM, but no SEM</p>
<p>Approach 2: Use a series of cVAEs to model the data.</p>
<p>"Model each conditional $p(x_r|x_{pa(r)})$ with a conditional variational autoencoder (CVAE)"</p>
</section>
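<section>
<h3>Algorithmic recourse under imperfect causal knowledge: a probabilistic approach</h3>
<p>The key idea is factorising the joint over the causal graph and fitting one conditional model per node given its parents. As a simplified sketch, the linear-Gaussian fit below stands in for the paper's CVAEs; the graph and data are synthetic:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_conditional(parents, child):
    """Least-squares stand-in for a CVAE modelling p(child | parents):
    returns regression coefficients and the residual noise std."""
    A = np.column_stack([parents, np.ones(len(parents))])
    coef, *_ = np.linalg.lstsq(A, child, rcond=None)
    resid = child - A @ coef
    return coef, resid.std()

# synthetic data from an assumed chain X1 -> X2 with true mechanism
# X2 = 2.0 * X1 + Normal(0, 0.1)
x1 = rng.normal(0.0, 1.0, 5000)
x2 = 2.0 * x1 + rng.normal(0.0, 0.1, 5000)
coef, sigma = fit_conditional(x1, x2)
# coef recovers roughly (2.0, 0.0); sigma recovers roughly 0.1
```

<p>A CVAE plays the same role for non-linear, non-Gaussian conditionals: one learned model per edge set in the causal graph.</p>
</section>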
<section>
<h3>Algorithmic recourse under imperfect causal knowledge: a probabilistic approach</h3>
<img src="images/recourse/probabilistic_2.png">
<p>Then find the minimum cost as before, but the expectation is taken over the corresponding interventional distribution.</p>
</section>
</div>
</div>
<script type="module" src="setup.js"></script>