---
layout: default
title: Academic work
---
<script type="text/javascript">
// Toggle an abstract's visibility, given its element id.
function ToggleContent(d) {
  var el = document.getElementById(d);
  el.style.display = (el.style.display == 'block') ? 'none' : 'block';
}
</script>
<h1 class="page-heading">Academic work</h1>
<h2>Theses</h2>
<p><span class="pt">Understanding and expressing scalable concurrency</span><br>
PhD thesis, April 2013<br>Northeastern University<br>
[ <a href="javascript:;" onClick="ToggleContent('phd-thesis');">abstract</a> | <a href="turon-thesis.pdf">pdf</a> ]
<div class="abs" id="phd-thesis" style="display:none;"><i>Abstract:</i>
The Holy Grail of parallel programming is to provide good
speedup while hiding or avoiding the pitfalls of
concurrency. But some level in the tower of abstraction must
face facts: parallel processors execute code concurrently,
and the interplay between concurrent code, synchronization,
and the memory subsystem is a major determiner of
performance. Effective parallel programming must ultimately
be supported by <i>scalable concurrent algorithms</i> -- algorithms
that tolerate (or even embrace) concurrency for the sake of
scaling with available parallelism. This dissertation makes
several contributions to the understanding and expression of
such algorithms:
<p> It shows how to understand scalable algorithms in terms of local protocols
governing each part of their hidden state. These protocols are visual
artifacts that can be used to informally explain an algorithm at a
whiteboard. But they also play a formal role in a new logic for verifying
concurrent algorithms, enabling correctness proofs that are local in
space, time, and thread execution. Correctness is stated in terms of
refinement: clients of an algorithm can reason as if they were using the
much simpler specification code it refines.
<p> It shows how to express synchronization in a declarative but scalable way,
based on a new library providing "join patterns". By declarative, we mean that
the programmer needs only to write down the constraints of a synchronization
problem, and the library will automatically derive a correct solution. By
scalable, we mean that the derived solutions deliver robust performance with
increasing processor count and problem complexity. For common synchronization
problems, join patterns perform as well as specialized algorithms from the
literature.
<p> It shows how to express scalable algorithms through "reagents", a new
monadic abstraction. With reagents, algorithms no longer need to be
constructed from "whole cloth," i.e., by using system-level primitives
directly. Instead, they are built using a mixture of shared-state and
message-passing combinators. Concurrency experts benefit, because they can
write libraries at a higher level, with more reuse, without sacrificing
scalability. Their clients benefit, because composition empowers them to
extend and tailor a library without knowing the details of its underlying
algorithms. </div>
<p><span class="pt">Metaprogramming with Traits</span><br>
Undergraduate honors thesis, 2007<br>University of Chicago<br>
[ <a href="javascript:;" onClick="ToggleContent('chicago-thesis');">abstract</a>
| <a href="thesis.pdf">pdf</a> ]
<div class="abs" id="chicago-thesis" style="display:none;"><i>Abstract:</i>
In many domains, classes have highly regular internal structure. For
example, so-called business objects often contain boilerplate code for
mapping database fields to class members. The boilerplate code must be
repeated per-field for every class, because existing mechanisms for
constructing classes do not provide a way to capture and reuse such
member-level structure. As a result, programmers often resort to ad
hoc code generation. This paper presents a lightweight mechanism for
specifying and reusing member-level structure in Java programs. The
proposal is based on a modest extension to traits that we have termed
trait-based metaprogramming. Although the semantics of the mechanism
are straightforward, its type theory is difficult to reconcile with
nominal subtyping. We achieve reconciliation by introducing a hybrid
structural/nominal type system that extends Java's type system. The
thesis includes a formal calculus defined by translation to
Featherweight Generic Java.
</div>
<h2>Papers</h2>
<div class="papers">
<p><span class="pt">GPS: Navigating weak memory with ghosts, protocols, and separation</span><br>
Aaron Turon, Viktor Vafeiadis, Derek Dreyer
<br>
<i>Manuscript</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('gps');">abstract</a>
| <a href="http://plv.mpi-sws.org/gps/">project page</a> ]
<div class="abs" id="gps" style="display:none;"><i>Abstract:</i>
Weak memory models axiomatize the unexpected behavior that one can
expect to observe in multi-threaded programs running on modern
hardware. In so doing, however, they complicate the already-difficult
task of reasoning about correctness of concurrent code. Worse, they
render impotent the sophisticated formal methods that have been
developed to tame concurrency, which almost universally assume a
strong (i.e., sequentially consistent) memory model.
<p>
This work introduces <b>GPS</b>, the first program logic to provide
a full-fledged suite of modern verification techniques—including
ghost state, rely-guarantee "protocols", <em>and</em> separation
logic—for high-level, structured reasoning about weak memory. We
demonstrate the effectiveness of GPS by applying it to challenging
examples drawn from the Linux kernel as well as lock-free data
structures. We also define the semantics of GPS and prove its
soundness directly in terms of the axiomatic C11 weak memory model.
</p>
</div>
<p><span class="pt">Freeze after writing: quasi-deterministic parallel programming with LVars</span><br>
Lindsey Kuper, Aaron Turon, Neelakantan R. Krishnaswami, Ryan R. Newton<br>
<i>to appear in POPL 2014</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('lvish');">abstract</a>
| <a href="lvish.pdf">pdf</a> | <a href="lvish-tr.pdf">technical appendix</a> ]
<div class="abs" id="lvish" style="display:none;"><i>Abstract:</i>
Deterministic-by-construction parallel programming models offer the advantages
of parallel speedup while avoiding the nondeterministic, hard-to-reproduce bugs
that plague fully concurrent code. A principled approach to
deterministic-by-construction parallel programming with shared state is offered
by <i>LVars</i>: shared memory locations whose semantics are defined in terms of
an application-specific lattice. Writes to an LVar take the least upper bound
of the old and new values with respect to the lattice, while reads from an LVar
can observe only that its contents have crossed a specified threshold in the
lattice. Although it guarantees determinism, this interface is quite limited.
<p>
We extend LVars in two ways. First, we add the ability to "freeze" and then
read the contents of an LVar directly. Second, we add the ability to attach
event handlers to an LVar, triggering a callback when the LVar's value changes.
Together, handlers and freezing enable an expressive and useful style of
parallel programming. We prove that in a language where communication takes
place through these extended LVars, programs are at worst <i>quasi-deterministic</i>: on
every run, they either produce the same answer or raise an error. We
demonstrate the viability of our approach by implementing a library for Haskell
supporting a variety of LVar-based data structures, together with a case
study that illustrates the programming model and yields promising parallel
speedup.
</div>
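The write/read discipline this abstract describes can be made concrete with a tiny sequential model. The sketch below is illustrative only: the names (<code>LVar</code>, <code>put</code>, <code>getAbove</code>, <code>freeze</code>) are hypothetical, not the paper's Haskell API, and a real implementation is concurrent. The lattice here is sets ordered by inclusion, whose least upper bound is union.

```javascript
// Toy, single-threaded model of an LVar over the sets-under-inclusion lattice.
class LVar {
  constructor() { this.contents = new Set(); this.frozen = false; }

  // Write: merge new information via least upper bound (set union).
  // Writes commute, which is the source of determinism. A write after a
  // freeze raises an error (the "quasi" in quasi-determinism).
  put(xs) {
    if (this.frozen) throw new Error('write after freeze');
    for (const x of xs) this.contents.add(x);
  }

  // Threshold read: observe only that the contents have crossed the given
  // threshold set, never the exact (timing-dependent) contents. A real
  // LVar read would block until the threshold is crossed.
  getAbove(threshold) {
    const crossed = [...threshold].every(x => this.contents.has(x));
    return crossed ? threshold : null;
  }

  // Freeze (the paper's first extension): forbid further growth, then
  // allow reading the exact contents directly.
  freeze() { this.frozen = true; return new Set(this.contents); }
}

const lv = new LVar();
lv.put(['a']); lv.put(['b']); lv.put(['a']);            // order- and repeat-insensitive
console.log(lv.getAbove(new Set(['a', 'b'])) !== null); // true
console.log([...lv.freeze()].sort());                   // [ 'a', 'b' ]
```

Because <code>put</code> is a least upper bound, any interleaving of the three writes yields the same contents, which is what makes threshold reads deterministic.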
<p><span class="pt">Modular rollback through control logging:<br>Twin pearls in continuation, and a monadic third</span><br>
Olin Shivers, Aaron Turon, Conor McBride<br>
<i>Submitted</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('rollback-triple');">abstract</a>
| <a href="rollback-triple.pdf">pdf</a> ]
<div class="abs" id="rollback-triple" style="display:none;"><i>Abstract:</i>
We present a technique, based on the use of first-class control operators,
enabling programs to maintain and invoke rollback logs for sequences of
reversible effects.
Our technique is modular, in that it provides complete separation
between some library of effectful operations, and a client, "driver"
program which invokes and rolls back sequences of these operations.
In particular, the checkpoint mechanism,
which is entirely encapsulated within the effect library,
logs not only the library's effects, but also the client's control state.
Thus, logging and rollback can be almost completely transparent to the
client code.
<p>This separation of concerns manifests itself nicely when we must
implement software with sophisticated error handling.
We illustrate with two examples that exploit the architecture
to disentangle some core parsing task from its error management.
The parser code is completely separate from the error-correction code,
although the two components are deeply intertwined at run time.
</div>
<p><span class="pt">Unifying refinement and Hoare-style reasoning
in a logic for higher-order concurrency</span><br>
Aaron Turon, Derek Dreyer, and Lars Birkedal<br>
<i>in ICFP 2013</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('caresl');">abstract</a>
| <a href="caresl/caresl.pdf">pdf</a> ]
<div class="abs" id="caresl" style="display:none;"><i>Abstract:</i>
Modular programming and modular verification go hand in hand, but most
existing logics for concurrency ignore two crucial forms of
modularity: <i>higher-order functions</i>, which are essential for building
reusable components, and <i>granularity abstraction</i>, a key technique
for hiding the intricacies of fine-grained concurrent data structures
from the clients of those data structures. In this paper, we present
CaReSL, the first logic to support the use of granularity abstraction
for modular verification of higher-order concurrent programs. After
motivating the features of CaReSL through a variety of illustrative
examples, we demonstrate its effectiveness by using it to tackle a
significant case study: the first formal proof of (partial)
correctness for Hendler et al.'s "flat combining" algorithm.
</div>
<p><span class="pt">Logical relations for fine-grained concurrency</span><br>
Aaron Turon, Jacob Thamsborg, Amal Ahmed, Lars Birkedal, and Derek Dreyer<br>
<i>in POPL 2013</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('relcon');">abstract</a>
| <a href="relcon/relcon.pdf">pdf</a> | <a href="relcon/appendix.pdf">technical appendix</a> ]
<div class="abs" id="relcon" style="display:none;"><i>Abstract:</i>
Fine-grained concurrent data structures (or FCDs) reduce the
granularity of critical sections in both time and space, thus making
it possible for clients to access different parts of a mutable data
structure in parallel. However, the tradeoff is that the
implementations of FCDs are very subtle and tricky to reason about
directly. Consequently, they are carefully designed to be contextual
refinements of their coarse-grained counterparts, meaning that their
clients can reason about them as if all access to them were
sequentialized.
<p>In this paper, we propose a new semantic model, based
on Kripke logical relations, that supports direct proofs of contextual
refinement in the setting of a type-safe high-level language. The key
idea behind our model is to provide a simple way of expressing the
"local life stories" of individual pieces of an FCD's hidden state
by means of protocols that the threads concurrently accessing that
state must follow. By endowing these protocols with a simple yet
powerful transition structure, as well as the ability to assert
invariants on both heap states and specification code, we are able to
support clean and intuitive refinement proofs for the most
sophisticated types of FCDs, such as conditional compare-and-set
(CCAS).
</div>
<p><span class="pt">Superficially substructural types</span><br>
Neelakantan R. Krishnaswami, Aaron Turon, Derek Dreyer, and Deepak Garg<br>
<i>in ICFP 2012</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('superficial');">abstract</a>
| <a href="superficial.pdf">pdf</a> | <a href="superficial-app.pdf">technical appendix</a> ]
<div class="abs" id="superficial" style="display:none;"><i>Abstract:</i>
Many substructural type systems have been proposed for controlling
access to shared state in higher-order languages. Central to these
systems is the notion of a <i>resource</i>, which may be split into
disjoint pieces that different parts of a program can manipulate
independently without worrying about interfering with one another.
Some systems support a <i>logical</i> notion of resource (such as
permissions), under which two resources may be considered disjoint even
if they govern the <i>same</i> piece of state. However, in nearly
all existing systems, the notions of resource and disjointness are
fixed at the outset, baked into the model of the language, and fairly
coarse-grained in the kinds of sharing they enable.
<p>In this paper, inspired by recent work on "fictional disjointness"
in separation logic, we propose a simple and flexible way of enabling
any module in a program to create its own custom type of splittable
resource (represented as a commutative monoid), thus providing
fine-grained control over how the module's private state is shared
with its clients. This functionality can be incorporated into an
otherwise standard substructural type system by means of a new typing
rule we call <i>the sharing rule</i>, whose soundness we prove
semantically via a novel resource-oriented Kripke logical relation.
</div>
<p><span class="pt">Reagents: expressing and composing fine-grained concurrency</span><br>
Aaron Turon<br>
<i>in PLDI 2012</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('reagents');">abstract</a>
| <a href="reagents.pdf">pdf</a> | <a href="pldi-2012-reagents.key">Keynote slides</a> | <a href="pldi-2012-reagents.pdf">pdf slides</a> ]
<div class="abs" id="reagents" style="display:none;"><i>Abstract:</i>
Efficient communication and synchronization is crucial for fine-grained
parallelism. Libraries providing such features, while indispensable, are
difficult to write, and often cannot be tailored or composed to meet the needs
of specific users. We introduce <i>reagents</i>, a set of combinators for
concisely expressing concurrency algorithms. Reagents scale as well as their
hand-coded counterparts, while providing the composability existing libraries
lack. </div>
<p><span class="pt">Scalable Join Patterns</span><br>
Aaron Turon, Claudio Russo<br>
<i>in OOPSLA 2011</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('scalable-joins');">abstract</a>
| <a href="scalable-joins.pdf">pdf</a> | <a href="OOPSLA2011.key">Keynote slides</a> | <a href="OOPSLA2011.pdf">pdf slides</a> ]
<div class="abs" id="scalable-joins" style="display:none;"><i>Abstract:</i>
Coordination can destroy scalability in parallel programming. A
comprehensive library of synchronization primitives is therefore an
essential tool for exploiting parallelism. Unfortunately, such
primitives do not easily combine to yield scalable solutions to more
complex problems. We demonstrate that a concurrency library based on
Fournet and Gonthier's join calculus can provide declarative and
scalable coordination. By declarative, we mean that the
programmer needs only to write down the constraints of a coordination
problem, and the library will automatically derive a correct solution.
By scalable, we mean that the derived solutions deliver robust
performance both as the number of processors increases, and as the
complexity of the coordination problem grows. We validate our claims
empirically on seven coordination problems, comparing our generic
solution to specialized algorithms from the literature.
</div>
<p><span class="pt">Modular rollback through control logging<br>A pair of twin functional pearls</span><br>
Olin Shivers, Aaron Turon<br>
<i>in ICFP 2011</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('control-to-correct');">abstract</a>
| <a href="rollback.pdf">pdf</a> | <a href="icfp-2011-slides.key">Keynote slides</a> | <a href="icfp-2011-slides.pdf">pdf slides</a> ]
<div class="abs" id="control-to-correct" style="display:none;"><i>Abstract:</i>
We present a technique, based on the use of first-class control
operators, permitting programs to maintain and invoke rollback
logs for sequences of reversible effects. Our technique is modular,
in that it provides for complete separation between some library of
effectful operations, and a client, "driver" program which invokes
and rolls back sequences of these operations. In particular, the
checkpoint mechanism, which is entirely encapsulated within the
effect library, logs not only the library's effects, but also the client's
control state. Thus, logging and rollback can be almost completely
transparent to the client code.
<p>
This separation of concerns manifests itself nicely when we
must implement software with sophisticated error handling. We
illustrate with two examples that both exploit the architecture to
disentangle some core parsing task from its error management. The
parser code is completely separate from the error-correction code,
although the two components are deeply intertwined at run time.
</div>
<p><span class="pt">A resource analysis of the pi-calculus</span><br>
Aaron Turon, Mitchell Wand<br>
<i>In MFPS 2011</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('pi-semantics');">abstract</a>
| <a href="pi-semantics-final.pdf">pdf</a> | <a href="mfps-2011-slides.pdf">slides</a> ]
<div class="abs" id="pi-semantics" style="display:none;"><i>Abstract:</i>
We give a new treatment of the pi-calculus based on the semantic
theory of separation logic, continuing a research program begun by
Hoare and O'Hearn. Using a novel resource model that distinguishes
between public and private ownership, we refactor the operational
semantics so that sending, receiving, and allocating are commands that
influence owned resources. These ideas lead naturally to two
denotational models: one for safety and one for liveness. Both models
are fully abstract for the corresponding observables, but more
importantly both are very simple. The close connections with the
model theory of separation logic (in particular, with Brookes's action
trace model) give rise to a logic of processes and resources.
</div>
<p><span class="pt">A separation logic for refining concurrent objects</span><br>
Aaron Turon, Mitchell Wand<br>
<i>in POPL 2011</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('sepref-paper');">abstract</a>
| <a href="sepref.pdf">pdf</a> ]
<div class="abs" id="sepref-paper" style="display:none;"><i>Abstract:</i>
Fine-grained concurrent data structures are crucial for gaining performance from multiprocessing, but their design is a subtle art. Recent literature has made large strides in verifying these data structures, using either atomicity refinement or separation logic with rely-guarantee reasoning. In this paper we show how the ownership discipline of separation logic can be used to enable atomicity refinement, and we develop a new rely-guarantee method that is localized to the definition of a data structure. We present the first semantics of separation logic that is sensitive to atomicity, and show how to control this sensitivity through ownership. The result is a logic that enables compositional reasoning about atomicity and interference, even for programs that use fine-grained synchronization and dynamic memory allocation.
</div>
<p><span class="pt">All-Termination(T)</span><br>
Panagiotis Manolios, Aaron Turon<br><i>in TACAS 2009</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('allt-paper');">abstract</a>
| <a href="tacas09.pdf">pdf</a> | <a href="tacas09-talk.pdf">slides</a> ]
<div class="abs" id="allt-paper"
style="display:none;"><i>Abstract:</i> We introduce the
All-Termination(<i>T</i>) problem: given a termination
solver <i>T</i> and a collection of functions <i>F</i>, find
every subset of the formal parameters to <i>F</i> whose
consideration is sufficient to show, using <i>T</i>,
that <i>F</i> terminates. An important and motivating
application is enhancing theorem proving systems by constructing
the set of strongest induction schemes for <i>F</i>,
modulo <i>T</i>. These schemes can be derived from the set of
termination cores, the minimal sets returned by
All-Termination(<i>T</i>), without any reference to an explicit
measure function. We study the All-Termination(<i>T</i>) problem
as applied to the size-change termination analysis (SCT), a
PSpace-complete problem that underlies many termination
solvers. Surprisingly, we show that All-Termination(SCT) is also
PSpace-complete, even though it substantially generalizes
SCT. We develop a practical algorithm for All-Termination(SCT),
and show experimentally that on the ACL2 regression suite (whose
size is over 100MB) our algorithm generates stronger induction
schemes on 90% of multiargument functions.
</div>
<p><span class="pt">Regular expression derivatives reexamined</span><br>
Scott Owens, John Reppy, Aaron Turon<br>
In <i>JFP</i>, March 2009, vol. 19, issue 2, pp. 173-190<br>
[ <a href="javascript:;" onClick="ToggleContent('re-deriv-paper');">abstract</a>
| <a href="re-deriv.pdf">pdf</a> ]
<div class="abs" id="re-deriv-paper" style="display:none;"><i>Abstract:</i>
The derivative of a set of strings <i>S</i> with respect to a
symbol <i>a</i> is the set of strings generated by stripping the
leading <i>a</i> from the strings in <i>S</i> that start
with <i>a</i>. For regular sets of strings, i.e., sets defined by
regular expressions (REs), the derivative is also a regular set. In a
1964 paper, Janusz Brzozowski presented an elegant method for directly
constructing a recognizer from a regular expression based on
regular-expression derivatives (Brzozowski, 1964). His approach is
elegant and easily supports extended regular expressions; i.e., REs
extended with Boolean operations such as complement. Unfortunately, RE
derivatives have been lost in the sands of time, and few computer
scientists are aware of them. Recently, we independently developed two
scanner generators, one for PLT Scheme and one for Standard ML, using
RE derivatives. Our experiences with this approach have been quite
positive: the implementation techniques are simple, the generated
scanners are usually optimal in size, and the extended RE language
allows for more compact scanner specifications. Of special interest
is that the implementation techniques are well-suited to functional
languages that provide good support for symbolic term manipulation
(e.g., inductive datatypes and pattern matching).
<p>The purpose of this paper is largely educational. Our positive
experience with RE derivatives leads us to believe that they deserve
the attention of the current generation of functional programmers,
especially those implementing RE recognizers. We begin with a review
of background material in Section 2, introducing the notation and
definitions of regular expressions and their recognizers. Section 3
gives a fresh presentation of Brzozowski's work, including DFA
construction with RE derivatives. In addition to reexamining
Brzozowski's work, we also report on the key implementation challenges
we faced in Section 4, including new techniques for handling large
character sets such as Unicode (Unicode Consortium, 2003). Section 5
reports our experience in general and includes an empirical comparison
of the derivative-based scanner generator for SML/NJ with the more
traditional tool it replaces. We conclude with a review of related
work and a summary.
</div>
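The derivative construction this abstract describes is small enough to sketch directly. The following minimal recognizer (the helper names are mine, not the paper's) implements Brzozowski's nullability and derivative rules over a basic RE syntax, omitting the simplification and large-character-set handling the paper discusses.

```javascript
// Regular expressions as plain objects.
const empty = { t: 'empty' };                 // matches nothing
const eps   = { t: 'eps' };                   // matches the empty string
const chr   = c => ({ t: 'chr', c });         // single symbol
const seq   = (l, r) => ({ t: 'seq', l, r }); // concatenation
const alt   = (l, r) => ({ t: 'alt', l, r }); // alternation
const star  = r => ({ t: 'star', r });        // Kleene star

// nullable(re): does re accept the empty string?
function nullable(re) {
  switch (re.t) {
    case 'eps': case 'star': return true;
    case 'seq': return nullable(re.l) && nullable(re.r);
    case 'alt': return nullable(re.l) || nullable(re.r);
    default: return false;                    // 'empty', 'chr'
  }
}

// deriv(re, a): the regex matching { w | re matches a·w }.
function deriv(re, a) {
  switch (re.t) {
    case 'chr':  return re.c === a ? eps : empty;
    case 'alt':  return alt(deriv(re.l, a), deriv(re.r, a));
    case 'star': return seq(deriv(re.r, a), re);
    case 'seq': {
      const left = seq(deriv(re.l, a), re.r);
      // If the left part is nullable, a may also begin a match of the right part.
      return nullable(re.l) ? alt(left, deriv(re.r, a)) : left;
    }
    default: return empty;                    // 'empty', 'eps'
  }
}

// Recognizer: differentiate by each symbol in turn, then test nullability.
const matches = (re, s) => nullable([...s].reduce(deriv, re));

const re = star(seq(chr('a'), chr('b')));     // (ab)*
console.log(matches(re, 'abab')); // true
console.log(matches(re, 'aba'));  // false
```

Memoizing the states reached this way (modulo RE equivalence) is what turns this recognizer into the DFA construction the paper presents.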
<p><span class="pt">Metaprogramming with Traits</span><br>
John Reppy, Aaron Turon<br>
In <span class="pl">ECOOP</span>, July 2007.<br>
[ <a href="javascript:;" onClick="ToggleContent('ecoop07-paper');">abstract</a>
| <a href="ecoop07-meta-traits.pdf">pdf</a> ]
<div class="abs" id="ecoop07-paper" style="display:none;"><i>Abstract:</i>
In many domains, classes have highly regular internal structure. For
example, so-called business objects often contain boilerplate code for
mapping database fields to class members. The boilerplate code must be
repeated per-field for every class, because existing mechanisms for
constructing classes do not provide a way to capture and reuse such
member-level structure. As a result, programmers often resort to ad
hoc code generation. This paper presents a lightweight mechanism for
specifying and reusing member-level structure in Java programs. The
proposal is based on a modest extension to traits that we have termed
trait-based metaprogramming. Although the semantics of the mechanism
are straightforward, its type theory is difficult to reconcile with
nominal subtyping. We achieve reconciliation by introducing a hybrid
structural/nominal type system that extends Java's type system. The
paper includes a formal calculus defined by translation to
Featherweight Generic Java.
</div>
<p><span class="pt">A foundation for trait-based metaprogramming</span><br>
John Reppy, Aaron Turon<br>
In <span class="pl"><a href="http://public.research.att.com/~kfisher/FOOL/FOOLWOOD06/program.html">FOOL/WOOD</a></span>, January 2006.<br>
[ <a href="javascript:;" onClick="ToggleContent('traits-foundation-paper');">abstract</a>
| <a href="traits-fool.pdf">pdf</a>
| <a href="traits-slides.pdf">slides</a>
| <a href="http://www.cs.uchicago.edu/research/publications/techreports/TR-2006-02">tech report</a> ]
<div class="abs" id="traits-foundation-paper" style="display:none;"><i>Abstract:</i>
Schärli et al. introduced traits as reusable units of behavior
independent of the inheritance hierarchy. Despite their relative
simplicity, traits offer a surprisingly rich calculus. Trait calculi
typically include operations for resolving conflicts when composing
two traits. In the existing work on traits, these operations (method
exclusion and aliasing) are shallow, i.e., they have no effect on the
body of the other methods in the trait. In this paper, we present a
new trait system, based on the Fisher-Reppy trait calculus, that adds
deep operations (method hiding and renaming) to support conflict
resolution. The proposed operations are deep in the sense that they
preserve any existing connections between the affected method and the
other methods of the trait. Our system uses Riecke-Stone dictionaries
to support these features. In addition, we define a more fine-grained
mechanism for tracking trait types than in previous systems. The
resulting calculus is more flexible and expressive, and can serve as
the foundation for trait-based metaprogramming, an idiom we introduce.
</div>
</div>
<h2>Other documents</h2>
<p><span class="pt">All-Termination(SCP)</span><br>
Panagiotis Manolios, Aaron Turon<br>
<i>Manuscript</i>.<br>
[ <a href="javascript:;" onClick="ToggleContent('allt-scp-paper');">abstract</a>
| <a href="allt-scp.pdf">pdf</a> | <a href="mitchfest.pdf">slides</a> ]
<div class="abs" id="allt-scp-paper"
style="display:none;"><i>Abstract:</i> We recently introduced the
All-Termination(<i>T</i>) problem: given a termination solver <i>T</i>
and a function <i>F</i>, find every subset of the formal parameters
to <i>F</i> whose consideration is sufficient to show, using <i>T</i>,
that <i>F</i> terminates. These subsets can be harnessed by a theorem
prover to locate and justify induction schemes, and are also useful
for guiding rewriting heuristics and ensuring their termination. In
this paper, we study the All-Termination problem for SCP (polynomial
size-change analysis), a powerful, cubic-time termination
analysis. SCP is the first nonmonotonic termination analysis studied
in the context of All-Termination, making its analysis both
challenging and informative. We develop an algorithm for solving the
All-Termination(SCP) problem, and briefly report on initial
experimental results obtained on the ACL2 regression suite.
</div>