1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
2 "http://www.w3.org/TR/html4/strict.dtd">
5 <title>LLVM Atomic Instructions and Concurrency Guide</title>
6 <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
7 <link rel="stylesheet" href="llvm.css" type="text/css">
12 LLVM Atomic Instructions and Concurrency Guide
16 <li><a href="#introduction">Introduction</a></li>
17 <li><a href="#loadstore">Load and store</a></li>
18 <li><a href="#otherinst">Other atomic instructions</a></li>
19 <li><a href="#ordering">Atomic orderings</a></li>
20 <li><a href="#iropt">Atomics and IR optimization</a></li>
21 <li><a href="#codegen">Atomics and Codegen</a></li>
24 <div class="doc_author">
25 <p>Written by Eli Friedman</p>
28 <!-- *********************************************************************** -->
30 <a name="introduction">Introduction</a>
32 <!-- *********************************************************************** -->
36 <p>Historically, LLVM has not had very strong support for concurrency; some
37 minimal intrinsics were provided, and <code>volatile</code> was used in some
38 cases to achieve rough semantics in the presence of concurrency. However, this
39 is changing; there are now new instructions which are well-defined in the
40 presence of threads and asynchronous signals, and the model for existing
41 instructions has been clarified in the IR.</p>
43 <p>The atomic instructions are designed specifically to provide readable IR and
44 optimized code generation for the following:</p>
<li>The new C++0x <code>&lt;atomic&gt;</code> header.
47 (<a href="http://www.open-std.org/jtc1/sc22/wg21/">C++0x draft available here</a>.)
48 (<a href="http://www.open-std.org/jtc1/sc22/wg14/">C1x draft available here</a>)</li>
49 <li>Proper semantics for Java-style memory, for both <code>volatile</code> and
50 regular shared variables.
51 (<a href="http://java.sun.com/docs/books/jls/third_edition/html/memory.html">Java Specification</a>)</li>
52 <li>gcc-compatible <code>__sync_*</code> builtins.
53 (<a href="http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html">Description</a>)</li>
54 <li>Other scenarios with atomic semantics, including <code>static</code>
55 variables with non-trivial constructors in C++.</li>
58 <p>Atomic and volatile in the IR are orthogonal; "volatile" is the C/C++
59 volatile, which ensures that every volatile load and store happens and is
performed in the stated order. A couple of examples: if a
61 SequentiallyConsistent store is immediately followed by another
62 SequentiallyConsistent store to the same address, the first store can
63 be erased. This transformation is not allowed for a pair of volatile
64 stores. On the other hand, a non-volatile non-atomic load can be moved
65 across a volatile load freely, but not an Acquire load.</p>
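<p>As a rough IR sketch of those two rules (the pointer <code>%p</code> is
hypothetical, and the syntax is the typed-pointer form used by this era of
the IR):</p>

<div class="doc_code">
<pre>
; The first of these two adjacent SequentiallyConsistent stores may be
; deleted, because the second immediately overwrites it:
store atomic i32 1, i32* %p seq_cst, align 4
store atomic i32 2, i32* %p seq_cst, align 4

; Both of these volatile stores must be kept, in this order:
store volatile i32 1, i32* %p
store volatile i32 2, i32* %p
</pre>
</div>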
<p>This document provides a guide for anyone writing a frontend for LLVM, or
working on optimization passes for LLVM, explaining how to deal with
instructions with special semantics in the presence of
70 concurrency. This is not intended to be a precise guide to the semantics;
71 the details can get extremely complicated and unreadable, and are not
72 usually necessary.</p>
76 <!-- *********************************************************************** -->
78 <a name="loadstore">Load and store</a>
80 <!-- *********************************************************************** -->
84 <p>The basic <code>'load'</code> and <code>'store'</code> allow a variety of
85 optimizations, but can have unintuitive results in a concurrent environment.
86 For a frontend writer, the rule is essentially that all memory accessed
87 with basic loads and stores by multiple threads should be protected by a
88 lock or other synchronization; otherwise, you are likely to run into
89 undefined behavior. (Do not use volatile as a substitute for atomics; it
might work on some platforms, but does not provide the necessary guarantees.)</p>
93 <p>From the optimizer's point of view, the rule is that if there
94 are not any instructions with atomic ordering involved, concurrency does
95 not matter, with one exception: if a variable might be visible to another
96 thread or signal handler, a store cannot be inserted along a path where it
97 might not execute otherwise. For example, suppose LICM wants to take all the
98 loads and stores in a loop to and from a particular address and promote them
99 to registers. LICM is not allowed to insert an unconditional store after
100 the loop with the computed value unless a store unconditionally executes
101 within the loop. Note that speculative loads are allowed; a load which
is part of a race returns <code>undef</code>, but does not have undefined
behavior.</p>
105 <p>For cases where simple loads and stores are not sufficient, LLVM provides
106 atomic loads and stores with varying levels of guarantees.</p>
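<p>As a minimal sketch of the syntax (assuming a hypothetical <code>i32*</code>
value <code>%p</code>; the orderings are described in the sections below):</p>

<div class="doc_code">
<pre>
%v0 = load i32* %p                              ; ordinary (non-atomic) load
%v1 = load atomic i32* %p unordered, align 4    ; weakest atomic load
%v2 = load atomic i32* %p seq_cst, align 4      ; strongest atomic load
store atomic i32 %v2, i32* %p release, align 4  ; atomic store
</pre>
</div>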
110 <!-- *********************************************************************** -->
112 <a name="otherinst">Other atomic instructions</a>
114 <!-- *********************************************************************** -->
118 <p><code>cmpxchg</code> and <code>atomicrmw</code> are essentially like an
119 atomic load followed by an atomic store (where the store is conditional for
120 <code>cmpxchg</code>), but no other memory operation can happen between
the load and store. Note that LLVM's <code>cmpxchg</code> does not have quite
as many options for making it weaker as the C++0x version does.</p>
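<p>A minimal sketch of both instructions, using hypothetical pointers and the
single-ordering <code>cmpxchg</code> syntax of the current IR:</p>

<div class="doc_code">
<pre>
; Atomically add 1 to *%counter; %old receives the previous value.
%old = atomicrmw add i32* %counter, i32 1 seq_cst

; If *%flag contains 0, atomically replace it with 1; %prev receives the
; value that was in *%flag before the operation.
%prev = cmpxchg i32* %flag, i32 0, i32 1 seq_cst
</pre>
</div>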
124 <p>A <code>fence</code> provides Acquire and/or Release ordering which is not
125 part of another operation; it is normally used along with Monotonic memory
126 operations. A Monotonic load followed by an Acquire fence is roughly
127 equivalent to an Acquire load.</p>
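<p>For example, the following two sequences are roughly equivalent (a sketch
only; <code>%p</code> is hypothetical):</p>

<div class="doc_code">
<pre>
; An Acquire load...
%a = load atomic i32* %p acquire, align 4

; ...is roughly a Monotonic load followed by an Acquire fence.
%b = load atomic i32* %p monotonic, align 4
fence acquire
</pre>
</div>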
129 <p>Frontends generating atomic instructions generally need to be aware of the
130 target to some degree; atomic instructions are guaranteed to be lock-free,
131 and therefore an instruction which is wider than the target natively supports
132 can be impossible to generate.</p>
136 <!-- *********************************************************************** -->
138 <a name="ordering">Atomic orderings</a>
140 <!-- *********************************************************************** -->
144 <p>In order to achieve a balance between performance and necessary guarantees,
145 there are six levels of atomicity. They are listed in order of strength;
each level includes all the guarantees of the previous level except for
Acquire/Release.</p>
149 <!-- ======================================================================= -->
151 <a name="o_unordered">Unordered</a>
156 <p>Unordered is the lowest level of atomicity. It essentially guarantees that
157 races produce somewhat sane results instead of having undefined behavior.
It also guarantees that the operation will be lock-free, so it does not depend
on the data being part of a special atomic structure or on a separate
per-process global lock. Note that code generation will fail for
unsupported atomic operations; if you need such an operation, use explicit
locking.</p>
165 <dt>Relevant standard</dt>
<dd>This is intended to match the Java memory model for shared
variables.</dd>
168 <dt>Notes for frontends</dt>
169 <dd>This cannot be used for synchronization, but is useful for Java and
170 other "safe" languages which need to guarantee that the generated
171 code never exhibits undefined behavior. Note that this guarantee
is cheap on common platforms for loads and stores of a native width, but can
be expensive or unavailable for wider accesses, like a 64-bit store
on ARM. (A frontend for Java or other "safe" languages would normally
split a 64-bit store on ARM into two 32-bit unordered stores; see the
sketch at the end of this section.)</dd>
176 <dt>Notes for optimizers</dt>
177 <dd>In terms of the optimizer, this prohibits any transformation that
178 transforms a single load into multiple loads, transforms a store
179 into multiple stores, narrows a store, or stores a value which
180 would not be stored otherwise. Some examples of unsafe optimizations
181 are narrowing an assignment into a bitfield, rematerializing
182 a load, and turning loads and stores into a memcpy call. Reordering
183 unordered operations is safe, though, and optimizers should take
184 advantage of that because unordered operations are common in
185 languages that need them.</dd>
186 <dt>Notes for code generation</dt>
187 <dd>These operations are required to be atomic in the sense that if you
188 use unordered loads and unordered stores, a load cannot see a value
189 which was never stored. A normal load or store instruction is usually
190 sufficient, but note that an unordered load or store cannot
191 be split into multiple instructions (or an instruction which
192 does multiple memory operations, like <code>LDRD</code> on ARM).</dd>
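<p>A sketch of the 64-bit split mentioned in the frontend notes above (the
names are hypothetical, and a real frontend would have to split the
corresponding loads the same way):</p>

<div class="doc_code">
<pre>
; Store a 64-bit field on 32-bit ARM as two 32-bit Unordered stores,
; since a lock-free 64-bit atomic store may not be available.
%lo.addr = bitcast i64* %p to i32*
%hi.addr = getelementptr i32* %lo.addr, i32 1
%lo      = trunc i64 %val to i32
%shifted = lshr i64 %val, 32
%hi      = trunc i64 %shifted to i32
store atomic i32 %lo, i32* %lo.addr unordered, align 4
store atomic i32 %hi, i32* %hi.addr unordered, align 4
</pre>
</div>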
197 <!-- ======================================================================= -->
199 <a name="o_monotonic">Monotonic</a>
204 <p>Monotonic is the weakest level of atomicity that can be used in
205 synchronization primitives, although it does not provide any general
206 synchronization. It essentially guarantees that if you take all the
207 operations affecting a specific address, a consistent ordering exists.
210 <dt>Relevant standard</dt>
211 <dd>This corresponds to the C++0x/C1x <code>memory_order_relaxed</code>;
212 see those standards for the exact definition.
213 <dt>Notes for frontends</dt>
214 <dd>If you are writing a frontend which uses this directly, use with caution.
215 The guarantees in terms of synchronization are very weak, so make
216 sure these are only used in a pattern which you know is correct.
217 Generally, these would either be used for atomic operations which
do not protect other memory (like an atomic counter; see the sketch at
the end of this section), or along with
219 a <code>fence</code>.</dd>
220 <dt>Notes for optimizers</dt>
221 <dd>In terms of the optimizer, this can be treated as a read+write on the
222 relevant memory location (and alias analysis will take advantage of
223 that). In addition, it is legal to reorder non-atomic and Unordered
224 loads around Monotonic loads. CSE/DSE and a few other optimizations
225 are allowed, but Monotonic operations are unlikely to be used in ways
226 which would make those optimizations useful.</dd>
227 <dt>Notes for code generation</dt>
<dd>Code generation is essentially the same as for Unordered loads
and stores. No fences are required. <code>cmpxchg</code> and
230 <code>atomicrmw</code> are required to appear as a single operation.</dd>
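<p>A sketch of the atomic-counter pattern mentioned in the frontend notes
above (the global <code>@hits</code> is hypothetical):</p>

<div class="doc_code">
<pre>
@hits = global i64 0

define void @count_hit() {
  ; The counter does not guard any other memory, so Monotonic is enough.
  %old = atomicrmw add i64* @hits, i64 1 monotonic
  ret void
}
</pre>
</div>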
235 <!-- ======================================================================= -->
237 <a name="o_acquire">Acquire</a>
242 <p>Acquire provides a barrier of the sort necessary to acquire a lock to access
243 other memory with normal loads and stores.
246 <dt>Relevant standard</dt>
247 <dd>This corresponds to the C++0x/C1x <code>memory_order_acquire</code>. It
248 should also be used for C++0x/C1x <code>memory_order_consume</code>.
249 <dt>Notes for frontends</dt>
250 <dd>If you are writing a frontend which uses this directly, use with caution.
Acquire only provides a semantic guarantee when paired with a Release
operation.</dd>
253 <dt>Notes for optimizers</dt>
254 <dd>Optimizers not aware of atomics can treat this like a nothrow call.
It is also possible to move stores from before an Acquire load
256 or read-modify-write operation to after it, and move non-Acquire
257 loads from before an Acquire operation to after it.</dd>
258 <dt>Notes for code generation</dt>
259 <dd>Architectures with weak memory ordering (essentially everything relevant
260 today except x86 and SPARC) require some sort of fence to maintain
the Acquire semantics. The precise fences required vary widely by
262 architecture, but for a simple implementation, most architectures provide
263 a barrier which is strong enough for everything (<code>dmb</code> on ARM,
264 <code>sync</code> on PowerPC, etc.). Putting such a fence after the
265 equivalent Monotonic operation is sufficient to maintain Acquire
266 semantics for a memory operation.</dd>
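<p>For example, a deliberately simple spin-lock acquisition might look like
the following sketch; the Acquire ordering on the <code>cmpxchg</code> keeps
the critical section from being hoisted above it (the function and lock
layout are hypothetical):</p>

<div class="doc_code">
<pre>
define void @lock_acquire(i32* %lock) {
entry:
  br label %spin
spin:
  ; Try to change the lock word from 0 (free) to 1 (held).
  %old = cmpxchg i32* %lock, i32 0, i32 1 acquire
  %won = icmp eq i32 %old, 0
  br i1 %won, label %locked, label %spin
locked:
  ; Loads and stores in the critical section cannot be moved
  ; above the successful cmpxchg.
  ret void
}
</pre>
</div>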
271 <!-- ======================================================================= -->
<a name="o_release">Release</a>
<p>Release is similar to Acquire, but with a barrier of the sort necessary to
release a lock.</p>
282 <dt>Relevant standard</dt>
283 <dd>This corresponds to the C++0x/C1x <code>memory_order_release</code>.</dd>
284 <dt>Notes for frontends</dt>
285 <dd>If you are writing a frontend which uses this directly, use with caution.
Release only provides a semantic guarantee when paired with an Acquire
operation.</dd>
288 <dt>Notes for optimizers</dt>
289 <dd>Optimizers not aware of atomics can treat this like a nothrow call.
290 It is also possible to move loads from after a Release store
291 or read-modify-write operation to before it, and move non-Release
stores from after a Release operation to before it.</dd>
293 <dt>Notes for code generation</dt>
294 <dd>See the section on Acquire; a fence before the relevant operation is
295 usually sufficient for Release. Note that a store-store fence is not
296 sufficient to implement Release semantics; store-store fences are
generally not exposed to IR because they are extremely difficult to
use correctly.</dd>
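<p>Continuing the spin-lock sketch from the Acquire section, releasing the
lock is a single Release store:</p>

<div class="doc_code">
<pre>
define void @lock_release(i32* %lock) {
  ; Stores from the critical section cannot sink below this store.
  store atomic i32 0, i32* %lock release, align 4
  ret void
}
</pre>
</div>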
303 <!-- ======================================================================= -->
305 <a name="o_acqrel">AcquireRelease</a>
310 <p>AcquireRelease (<code>acq_rel</code> in IR) provides both an Acquire and a
311 Release barrier (for fences and operations which both read and write memory).
314 <dt>Relevant standard</dt>
315 <dd>This corresponds to the C++0x/C1x <code>memory_order_acq_rel</code>.
316 <dt>Notes for frontends</dt>
317 <dd>If you are writing a frontend which uses this directly, use with caution.
318 Acquire only provides a semantic guarantee when paired with a Release
319 operation, and vice versa.</dd>
320 <dt>Notes for optimizers</dt>
321 <dd>In general, optimizers should treat this like a nothrow call; the
possible optimizations are usually not interesting.</dd>
323 <dt>Notes for code generation</dt>
324 <dd>This operation has Acquire and Release semantics; see the sections on
325 Acquire and Release.</dd>
330 <!-- ======================================================================= -->
332 <a name="o_seqcst">SequentiallyConsistent</a>
337 <p>SequentiallyConsistent (<code>seq_cst</code> in IR) provides
338 Acquire semantics for loads and Release semantics for
339 stores. Additionally, it guarantees that a total ordering exists
340 between all SequentiallyConsistent operations.
343 <dt>Relevant standard</dt>
344 <dd>This corresponds to the C++0x/C1x <code>memory_order_seq_cst</code>,
345 Java volatile, and the gcc-compatible <code>__sync_*</code> builtins
346 which do not specify otherwise.
347 <dt>Notes for frontends</dt>
348 <dd>If a frontend is exposing atomic operations, these are much easier to
349 reason about for the programmer than other kinds of operations, and using
350 them is generally a practical performance tradeoff.</dd>
351 <dt>Notes for optimizers</dt>
352 <dd>Optimizers not aware of atomics can treat this like a nothrow call.
353 For SequentiallyConsistent loads and stores, the same reorderings are
354 allowed as for Acquire loads and Release stores, except that
355 SequentiallyConsistent operations may not be reordered.</dd>
356 <dt>Notes for code generation</dt>
357 <dd>SequentiallyConsistent loads minimally require the same barriers
as Acquire operations and SequentiallyConsistent stores require
Release barriers. Additionally, the code generator must enforce
ordering between SequentiallyConsistent stores followed by
SequentiallyConsistent loads. This is usually done by emitting
362 either a full fence before the loads or a full fence after the
363 stores; which is preferred varies by architecture.</dd>
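<p>The classic pattern that needs this extra store-load ordering is a
Dekker-style flag protocol; a sketch with hypothetical globals:</p>

<div class="doc_code">
<pre>
@flag_me    = global i32 0
@flag_other = global i32 0

define i1 @try_enter() {
  ; Announce intent, then check the other thread's flag. With anything
  ; weaker than SequentiallyConsistent, the load could be satisfied before
  ; the store becomes visible, and both threads could enter.
  store atomic i32 1, i32* @flag_me seq_cst, align 4
  %other = load atomic i32* @flag_other seq_cst, align 4
  %free = icmp eq i32 %other, 0
  ret i1 %free
}
</pre>
</div>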
370 <!-- *********************************************************************** -->
372 <a name="iropt">Atomics and IR optimization</a>
374 <!-- *********************************************************************** -->
378 <p>Predicates for optimizer writers to query:
380 <li>isSimple(): A load or store which is not volatile or atomic. This is
what, for example, memcpyopt would check for operations it might
transform.</li>
383 <li>isUnordered(): A load or store which is not volatile and at most
Unordered. This would be checked, for example, by LICM before hoisting
an operation.</li>
<li>mayReadFromMemory()/mayWriteToMemory(): Existing predicates, but note
that they return true for any operation which is volatile or at least
Monotonic.</li>
389 <li>Alias analysis: Note that AA will return ModRef for anything Acquire or
390 Release, and for the address accessed by any Monotonic operation.
393 <p>There are essentially two components to supporting atomic operations. The
394 first is making sure to query isSimple() or isUnordered() instead
395 of isVolatile() before transforming an operation. The other piece is
396 making sure that a transform does not end up replacing, for example, an
397 Unordered operation with a non-atomic operation. Most of the other
398 necessary checks automatically fall out from existing predicates and
399 alias analysis queries.</p>
401 <p>Some examples of how optimizations interact with various kinds of atomic
404 <li>memcpyopt: An atomic operation cannot be optimized into part of a
405 memcpy/memset, including unordered loads/stores. It can pull operations
406 across some atomic operations.
407 <li>LICM: Unordered loads/stores can be moved out of a loop. It just treats
408 monotonic operations like a read+write to a memory location, and anything
409 stricter than that like a nothrow call.
410 <li>DSE: Unordered stores can be DSE'ed like normal stores. Monotonic stores
411 can be DSE'ed in some cases, but it's tricky to reason about, and not
412 especially important.
413 <li>Folding a load: Any atomic load from a constant global can be
414 constant-folded, because it cannot be observed. Similar reasoning allows
415 scalarrepl with atomic loads and stores.
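<p>For instance, in the constant-folding case above, the following atomic
load can simply be replaced by the constant 42 (a hypothetical example):</p>

<div class="doc_code">
<pre>
@gv = constant i32 42

define i32 @read_it() {
  ; No store to @gv can ever be observed, so this folds to 42 even
  ; though the load is atomic.
  %v = load atomic i32* @gv seq_cst, align 4
  ret i32 %v
}
</pre>
</div>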
420 <!-- *********************************************************************** -->
422 <a name="codegen">Atomics and Codegen</a>
424 <!-- *********************************************************************** -->
428 <p>Atomic operations are represented in the SelectionDAG with
429 <code>ATOMIC_*</code> opcodes. On architectures which use barrier
430 instructions for all atomic ordering (like ARM), appropriate fences are
431 split out as the DAG is built.</p>
433 <p>The MachineMemOperand for all atomic operations is currently marked as
434 volatile; this is not correct in the IR sense of volatile, but CodeGen
435 handles anything marked volatile very conservatively. This should get
436 fixed at some point.</p>
438 <p>Common architectures have some way of representing at least a pointer-sized
439 lock-free <code>cmpxchg</code>; such an operation can be used to implement
440 all the other atomic operations which can be represented in IR up to that
441 size. Backends are expected to implement all those operations, but not
442 operations which cannot be implemented in a lock-free manner. It is
443 expected that backends will give an error when given an operation which
444 cannot be implemented. (The LLVM code generator is not very helpful here
445 at the moment, but hopefully that will change.)</p>
447 <p>The implementation of atomics on LL/SC architectures (like ARM) is currently
448 a bit of a mess; there is a lot of copy-pasted code across targets, and
449 the representation is relatively unsuited to optimization (it would be nice
450 to be able to optimize loops involving cmpxchg etc.).</p>
452 <p>On x86, all atomic loads generate a <code>MOV</code>.
453 SequentiallyConsistent stores generate an <code>XCHG</code>, other stores
454 generate a <code>MOV</code>. SequentiallyConsistent fences generate an
455 <code>MFENCE</code>, other fences do not cause any code to be generated.
456 cmpxchg uses the <code>LOCK CMPXCHG</code> instruction.
457 <code>atomicrmw xchg</code> uses <code>XCHG</code>,
458 <code>atomicrmw add</code> and <code>atomicrmw sub</code> use
459 <code>XADD</code>, and all other <code>atomicrmw</code> operations generate
460 a loop with <code>LOCK CMPXCHG</code>. Depending on the users of the
461 result, some <code>atomicrmw</code> operations can be translated into
operations like <code>LOCK AND</code>, but that does not work in
general.</p>
465 <p>On ARM, MIPS, and many other RISC architectures, Acquire, Release, and
466 SequentiallyConsistent semantics require barrier instructions
467 for every such operation. Loads and stores generate normal instructions.
468 <code>cmpxchg</code> and <code>atomicrmw</code> can be represented using
469 a loop with LL/SC-style instructions which take some sort of exclusive
470 lock on a cache line (<code>LDREX</code> and <code>STREX</code> on
471 ARM, etc.). At the moment, the IR does not provide any way to represent a
472 weak <code>cmpxchg</code> which would not require a loop.</p>
475 <!-- *********************************************************************** -->
479 <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
480 src="http://jigsaw.w3.org/css-validator/images/vcss-blue" alt="Valid CSS"></a>
481 <a href="http://validator.w3.org/check/referer"><img
482 src="http://www.w3.org/Icons/valid-html401-blue" alt="Valid HTML 4.01"></a>
484 <a href="http://llvm.org/">LLVM Compiler Infrastructure</a><br>
485 Last modified: $Date: 2011-08-09 02:07:00 -0700 (Tue, 09 Aug 2011) $