
Commit 5202ac5

Correct book examples for hardware re-ordering
1 parent 5c6f7fa commit 5202ac5

1 file changed: +62 -54 lines changed


src/doc/unstable-book/src/compiler-barriers.md

Lines changed: 62 additions & 54 deletions
@@ -21,78 +21,86 @@ A `compiler_barrier` restricts the kinds of memory re-ordering the
 compiler is allowed to do. Specifically, depending on the given ordering
 semantics, the compiler may be disallowed from moving reads or writes
 from before or after the call to the other side of the call to
-`compiler_barrier`.
+`compiler_barrier`. Note that it does **not** prevent the *hardware*
+from doing such re-orderings -- for that, the `volatile_*` class of
+functions, or full memory fences, need to be used.
 
 ## Examples
 
-The need to prevent re-ordering of reads and writes often arises when
-working with low-level devices. Consider a piece of code that interacts
-with an ethernet card with a set of internal registers that are accessed
-through an address port register (`a: &mut usize`) and a data port
-register (`d: &usize`). To read internal register 5, the following code
-might then be used:
+`compiler_barrier` is generally only useful for preventing a thread from
+racing *with itself* -- that is, when a given thread is executing one
+piece of code, is then interrupted, and starts executing code elsewhere
+(while still in the same thread, and conceptually still on the same
+core). In traditional programs, this can only occur when a signal
+handler is registered. Consider the following code:
 
 ```rust
-fn read_fifth(a: &mut usize, d: &usize) -> usize {
-    *a = 5;
-    *d
+# use std::sync::atomic::{AtomicBool, AtomicUsize};
+# use std::sync::atomic::{ATOMIC_BOOL_INIT, ATOMIC_USIZE_INIT};
+# use std::sync::atomic::Ordering;
+static IMPORTANT_VARIABLE: AtomicUsize = ATOMIC_USIZE_INIT;
+static IS_READY: AtomicBool = ATOMIC_BOOL_INIT;
+
+fn main() {
+    IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
+    IS_READY.store(true, Ordering::Relaxed);
+}
+
+fn signal_handler() {
+    if IS_READY.load(Ordering::Relaxed) {
+        assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
+    }
 }
 ```
 
-In this case, the compiler is free to re-order these two statements if
-it thinks doing so might result in better performance, register use, or
-anything else compilers care about. However, in doing so, it would break
-the code, as `x` would be set to the value of some other device
-register!
+The way it is currently written, the `assert_eq!` is *not* guaranteed to
+succeed, despite everything happening in a single thread. To see why,
+remember that the compiler is free to swap the stores to
+`IMPORTANT_VARIABLE` and `IS_READY` since they are both
+`Ordering::Relaxed`. If it does, and the signal handler is invoked right
+after `IS_READY` is updated, then the signal handler will see
+`IS_READY=1`, but `IMPORTANT_VARIABLE=0`.
 
-By inserting a compiler barrier, we can force the compiler to not
-re-arrange these two statements, making the code function correctly
-again:
+Using a `compiler_barrier`, we can remedy this situation:
 
 ```rust
 #![feature(compiler_barriers)]
-use std::sync::atomic;
-
-fn read_fifth(a: &mut usize, d: &usize) -> usize {
-    *a = 5;
-    atomic::compiler_barrier(atomic::Ordering::SeqCst);
-    *d
+# use std::sync::atomic::{AtomicBool, AtomicUsize};
+# use std::sync::atomic::{ATOMIC_BOOL_INIT, ATOMIC_USIZE_INIT};
+# use std::sync::atomic::Ordering;
+use std::sync::atomic::compiler_barrier;
+
+static IMPORTANT_VARIABLE: AtomicUsize = ATOMIC_USIZE_INIT;
+static IS_READY: AtomicBool = ATOMIC_BOOL_INIT;
+
+fn main() {
+    IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
+    // prevent earlier writes from being moved beyond this point
+    compiler_barrier(Ordering::Release);
+    IS_READY.store(true, Ordering::Relaxed);
 }
-```
-
-Compiler barriers are also useful in code that implements low-level
-synchronization primitives. Consider a structure with two different
-atomic variables, with a dependency chain between them:
 
-```rust
-use std::sync::atomic;
-
-fn thread1(x: &atomic::AtomicUsize, y: &atomic::AtomicUsize) {
-    x.store(1, atomic::Ordering::Release);
-    let v1 = y.load(atomic::Ordering::Acquire);
-}
-fn thread2(x: &atomic::AtomicUsize, y: &atomic::AtomicUsize) {
-    y.store(1, atomic::Ordering::Release);
-    let v2 = x.load(atomic::Ordering::Acquire);
+fn signal_handler() {
+    if IS_READY.load(Ordering::Relaxed) {
+        assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
+    }
 }
 ```
 
-This code will guarantee that `thread1` sees any writes to `y` made by
-`thread2`, and that `thread2` sees any writes to `x`. Intuitively, one
-might also expect that if `thread2` sees `v2 == 0`, `thread1` must see
-`v1 == 1` (since `thread2`'s store happened before its `load`, and its
-load did not see `thread1`'s store). However, the code as written does
-*not* guarantee this, because the compiler is allowed to re-order the
-store and load within each thread. To enforce this particular behavior,
-a call to `compiler_barrier(Ordering::SeqCst)` would need to be inserted
-between the `store` and `load` in both functions.
-
-Compiler barriers with weaker re-ordering semantics (such as
-`Ordering::Acquire`) can also be useful, but are beyond the scope of
-this text. Curious readers are encouraged to read the Linux kernel's
-discussion of [memory barriers][1], as well as C++ references on
-[`std::memory_order`][2] and [`atomic_signal_fence`][3].
+In more advanced cases (for example, if `IMPORTANT_VARIABLE` were an
+`AtomicPtr` that starts as `NULL`), it may also be unsafe for the
+compiler to hoist code using `IMPORTANT_VARIABLE` above the
+`IS_READY.load`. In that case, a `compiler_barrier(Ordering::Acquire)`
+should be placed at the top of the `if` to prevent this optimization.
+
+A deeper discussion of compiler barriers with various re-ordering
+semantics (such as `Ordering::SeqCst`) is beyond the scope of this text.
+Curious readers are encouraged to read the Linux kernel's discussion of
+[memory barriers][1], the C++ references on [`std::memory_order`][2] and
+[`atomic_signal_fence`][3], and [this StackOverflow answer][4] for
+further details.
 
 [1]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
 [2]: http://en.cppreference.com/w/cpp/atomic/memory_order
 [3]: http://www.cplusplus.com/reference/atomic/atomic_signal_fence/
+[4]: http://stackoverflow.com/a/18454971/472927
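To make the hardware note near the top of the diff concrete: a compiler barrier constrains only the optimizer, while a full memory fence also constrains the CPU. The following is a minimal sketch contrasting the two. It is written against the current stable `std::sync::atomic` API (`compiler_fence`, `fence`, and `AtomicUsize::new`), which postdates the unstable `compiler_barrier` spelling used in the diff; the `publish_*` function names are illustrative, not from the book text.

```rust
use std::sync::atomic::{compiler_fence, fence, AtomicBool, AtomicUsize, Ordering};

static DATA: AtomicUsize = AtomicUsize::new(0);
static READY: AtomicBool = AtomicBool::new(false);

// Orders the two stores for the *compiler* only. No fence instruction is
// emitted, so another core on weakly-ordered hardware may still observe
// the stores out of order; this suffices only for same-thread cases such
// as the signal handler in the diff above.
fn publish_same_thread() {
    DATA.store(42, Ordering::Relaxed);
    compiler_fence(Ordering::Release);
    READY.store(true, Ordering::Relaxed);
}

// A full memory fence orders the stores for the hardware as well, which
// is what cross-thread publication requires.
fn publish_cross_thread() {
    DATA.store(42, Ordering::Relaxed);
    fence(Ordering::Release);
    READY.store(true, Ordering::Relaxed);
}

fn main() {
    publish_same_thread();
    publish_cross_thread();
    assert!(READY.load(Ordering::Relaxed));
}
```

On strongly-ordered hardware such as x86 the two functions may compile to identical machine code; the difference shows up on weakly-ordered architectures such as ARM or POWER.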

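The `AtomicPtr` scenario described near the end of the diff has no accompanying code in the book text. Below is a minimal sketch of what it could look like, under the same assumptions as above (the stable `compiler_fence` name in place of the unstable `compiler_barrier`, and a direct call standing in for asynchronous signal delivery):

```rust
use std::ptr;
use std::sync::atomic::{compiler_fence, AtomicBool, AtomicPtr, Ordering};

static IMPORTANT_VARIABLE: AtomicPtr<usize> = AtomicPtr::new(ptr::null_mut());
static IS_READY: AtomicBool = AtomicBool::new(false);

fn main() {
    // Publish a pointer (leaked here for simplicity), then set the flag.
    IMPORTANT_VARIABLE.store(Box::into_raw(Box::new(42usize)), Ordering::Relaxed);
    // Prevent the pointer store from sinking below the flag store.
    compiler_fence(Ordering::Release);
    IS_READY.store(true, Ordering::Relaxed);
    signal_handler(); // direct call standing in for a signal delivery
}

fn signal_handler() {
    if IS_READY.load(Ordering::Relaxed) {
        // Prevent the pointer load and the dereference below from being
        // hoisted above the IS_READY check.
        compiler_fence(Ordering::Acquire);
        let p = IMPORTANT_VARIABLE.load(Ordering::Relaxed);
        // Once IS_READY has been observed as true, p is non-NULL.
        assert_eq!(unsafe { *p }, 42);
    }
}
```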