1 ==============================================
2 LLVM Atomic Instructions and Concurrency Guide
3 ==============================================
11 Historically, LLVM has not had very strong support for concurrency; some minimal
12 intrinsics were provided, and ``volatile`` was used in some cases to achieve
13 rough semantics in the presence of concurrency. However, this is changing;
14 there are now new instructions which are well-defined in the presence of threads
and asynchronous signals, and the model for existing instructions has been
clarified in the IR.
18 The atomic instructions are designed specifically to provide readable IR and
19 optimized code generation for the following:
21 * The new C++0x ``<atomic>`` header. (`C++0x draft available here
22 <http://www.open-std.org/jtc1/sc22/wg21/>`_.) (`C1x draft available here
23 <http://www.open-std.org/jtc1/sc22/wg14/>`_.)
25 * Proper semantics for Java-style memory, for both ``volatile`` and regular
26 shared variables. (`Java Specification
27 <http://java.sun.com/docs/books/jls/third_edition/html/memory.html>`_)
29 * gcc-compatible ``__sync_*`` builtins. (`Description
30 <http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html>`_)
32 * Other scenarios with atomic semantics, including ``static`` variables with
33 non-trivial constructors in C++.
35 Atomic and volatile in the IR are orthogonal; "volatile" is the C/C++ volatile,
36 which ensures that every volatile load and store happens and is performed in the
stated order. A couple of examples: if a SequentiallyConsistent store is
38 immediately followed by another SequentiallyConsistent store to the same
39 address, the first store can be erased. This transformation is not allowed for a
40 pair of volatile stores. On the other hand, a non-volatile non-atomic load can
41 be moved across a volatile load freely, but not an Acquire load.
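For instance, the store example above looks roughly like this in IR (a sketch
using the older typed-pointer syntax; exact syntax varies between LLVM
releases, and ``@stores``/``%p`` are placeholder names):

.. code-block:: llvm

  define void @stores(i32* %p) {
    ; The first seq_cst store is immediately overwritten by the second, so an
    ; optimizer may erase it.
    store atomic i32 1, i32* %p seq_cst, align 4
    store atomic i32 2, i32* %p seq_cst, align 4

    ; Both volatile stores must be performed, and in this order.
    store volatile i32 1, i32* %p, align 4
    store volatile i32 2, i32* %p, align 4
    ret void
  }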
This document provides a guide for anyone writing a frontend for LLVM or
working on optimization passes for LLVM, explaining how to deal with
instructions with special semantics in the presence of concurrency. This
46 is not intended to be a precise guide to the semantics; the details can get
47 extremely complicated and unreadable, and are not usually necessary.
49 .. _Optimization outside atomic:
51 Optimization outside atomic
52 ===========================
54 The basic ``'load'`` and ``'store'`` allow a variety of optimizations, but can
55 lead to undefined results in a concurrent environment; see `NotAtomic`_. This
56 section specifically goes into the one optimizer restriction which applies in
concurrent environments, which gets an extended description here
58 because any optimization dealing with stores needs to be aware of it.
60 From the optimizer's point of view, the rule is that if there are not any
61 instructions with atomic ordering involved, concurrency does not matter, with
62 one exception: if a variable might be visible to another thread or signal
63 handler, a store cannot be inserted along a path where it might not execute
64 otherwise. Take the following example:
68 /* C code, for readability; run through clang -O2 -S -emit-llvm to get
72 for (int i = 0; i < 100; i++) {
78 The following is equivalent in non-concurrent situations:
85 for (int i = 0; i < 100; i++) {
92 However, LLVM is not allowed to transform the former to the latter: it could
93 indirectly introduce undefined behavior if another thread can access ``x`` at
94 the same time. (This example is particularly of interest because before the
95 concurrency model was implemented, LLVM would perform this transformation.)
97 Note that speculative loads are allowed; a load which is part of a race returns
98 ``undef``, but does not have undefined behavior.
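To make the distinction concrete, here is a minimal IR sketch (hypothetical
function and names, older typed-pointer syntax) of what may and may not be
speculated:

.. code-block:: llvm

  @x = global i32 0

  define i32 @f(i1 %cond) {
  entry:
    br i1 %cond, label %then, label %done

  then:
    ; Hoisting this load into %entry is allowed: if it races with another
    ; thread, it merely yields undef.
    %v = load i32* @x, align 4
    br label %done

  done:
    %r = phi i32 [ %v, %then ], [ 0, %entry ]
    ; By contrast, a store to @x must not be introduced on the path where
    ; %cond is false; that could create a race visible to another thread.
    ret i32 %r
  }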
103 For cases where simple loads and stores are not sufficient, LLVM provides
104 various atomic instructions. The exact guarantees provided depend on the
105 ordering; see `Atomic orderings`_.
107 ``load atomic`` and ``store atomic`` provide the same basic functionality as
108 non-atomic loads and stores, but provide additional guarantees in situations
109 where threads and signals are involved.
111 ``cmpxchg`` and ``atomicrmw`` are essentially like an atomic load followed by an
112 atomic store (where the store is conditional for ``cmpxchg``), but no other
113 memory operation can happen on any thread between the load and store. Note that
114 LLVM's cmpxchg does not provide quite as many options as the C++0x version.
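The instruction forms look roughly like this (a sketch using the older
typed-pointer syntax; details such as the ``cmpxchg`` return type and the
available orderings differ between LLVM versions):

.. code-block:: llvm

  define void @forms(i32* %ptr) {
    %v1   = load atomic i32* %ptr unordered, align 4      ; atomic load
    store atomic i32 %v1, i32* %ptr monotonic, align 4     ; atomic store
    %old  = atomicrmw add i32* %ptr, i32 1 seq_cst         ; read-modify-write
    %prev = cmpxchg i32* %ptr, i32 %old, i32 %v1 seq_cst   ; compare-and-swap
    ret void
  }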
116 A ``fence`` provides Acquire and/or Release ordering which is not part of
117 another operation; it is normally used along with Monotonic memory operations.
A Monotonic load followed by an Acquire fence is roughly equivalent to an
Acquire load.
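For example (a sketch; ``%flag`` is a placeholder):

.. code-block:: llvm

  define i32 @monotonic_then_fence(i32* %flag) {
    ; A Monotonic load followed by an Acquire fence ...
    %f = load atomic i32* %flag monotonic, align 4
    fence acquire
    ret i32 %f
  }

  define i32 @acquire_load(i32* %flag) {
    ; ... behaves roughly like a single Acquire load.
    %f = load atomic i32* %flag acquire, align 4
    ret i32 %f
  }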
121 Frontends generating atomic instructions generally need to be aware of the
122 target to some degree; atomic instructions are guaranteed to be lock-free, and
123 therefore an instruction which is wider than the target natively supports can be
124 impossible to generate.
126 .. _Atomic orderings:
131 In order to achieve a balance between performance and necessary guarantees,
132 there are six levels of atomicity. They are listed in order of strength; each
133 level includes all the guarantees of the previous level except for
134 Acquire/Release. (See also `LangRef Ordering <LangRef.html#ordering>`_.)
NotAtomic is the obvious: a load or store which is not atomic. (This isn't
142 really a level of atomicity, but is listed here for comparison.) This is
143 essentially a regular load or store. If there is a race on a given memory
144 location, loads from that location return undef.
147 This is intended to match shared variables in C/C++, and to be used in any
148 other context where memory access is necessary, and a race is impossible. (The
149 precise definition is in `LangRef Memory Model <LangRef.html#memmodel>`_.)
152 The rule is essentially that all memory accessed with basic loads and stores
153 by multiple threads should be protected by a lock or other synchronization;
154 otherwise, you are likely to run into undefined behavior. If your frontend is
155 for a "safe" language like Java, use Unordered to load and store any shared
156 variable. Note that NotAtomic volatile loads and stores are not properly
157 atomic; do not try to use them as a substitute. (Per the C/C++ standards,
158 volatile does provide some limited guarantees around asynchronous signals, but
159 atomics are generally a better solution.)
162 Introducing loads to shared variables along a codepath where they would not
163 otherwise exist is allowed; introducing stores to shared variables is not. See
164 `Optimization outside atomic`_.
166 Notes for code generation
167 The one interesting restriction here is that it is not allowed to write to
168 bytes outside of the bytes relevant to a store. This is mostly relevant to
169 unaligned stores: it is not allowed in general to convert an unaligned store
170 into two aligned stores of the same width as the unaligned store. Backends are
171 also expected to generate an i8 store as an i8 store, and not an instruction
172 which writes to surrounding bytes. (If you are writing a backend for an
173 architecture which cannot satisfy these restrictions and cares about
174 concurrency, please send an email to llvmdev.)
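For example, under this rule the following unaligned store may only write the
two bytes named by the store (a sketch; names are hypothetical):

.. code-block:: llvm

  define void @unaligned(i16* %p, i16 %v) {
    ; The backend must not lower this as a wider read-modify-write of the
    ; surrounding word, or as two aligned stores of the same width, because
    ; either would write bytes outside the two bytes at %p.
    store i16 %v, i16* %p, align 1
    ret void
  }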
179 Unordered is the lowest level of atomicity. It essentially guarantees that races
180 produce somewhat sane results instead of having undefined behavior. It also
guarantees the operation to be lock-free, so it does not depend on the data
being part of a special atomic structure or on a separate per-process global
183 lock. Note that code generation will fail for unsupported atomic operations; if
184 you need such an operation, use explicit locking.
187 This is intended to match the Java memory model for shared variables.
190 This cannot be used for synchronization, but is useful for Java and other
191 "safe" languages which need to guarantee that the generated code never
192 exhibits undefined behavior. Note that this guarantee is cheap on common
platforms for loads and stores of a native width, but can be expensive or
unavailable for wider accesses, like a 64-bit store on ARM. (A frontend for
Java or other "safe" languages would normally split a 64-bit store on ARM into
two 32-bit unordered stores.)
199 In terms of the optimizer, this prohibits any transformation that transforms a
200 single load into multiple loads, transforms a store into multiple stores,
201 narrows a store, or stores a value which would not be stored otherwise. Some
202 examples of unsafe optimizations are narrowing an assignment into a bitfield,
203 rematerializing a load, and turning loads and stores into a memcpy
204 call. Reordering unordered operations is safe, though, and optimizers should
take advantage of that because unordered operations are common in languages
that need them.
208 Notes for code generation
209 These operations are required to be atomic in the sense that if you use
210 unordered loads and unordered stores, a load cannot see a value which was
211 never stored. A normal load or store instruction is usually sufficient, but
212 note that an unordered load or store cannot be split into multiple
213 instructions (or an instruction which does multiple memory operations, like
214 ``LDRD`` on ARM without LPAE, or not naturally-aligned ``LDRD`` on LPAE ARM).
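A Java-style shared field access would therefore be emitted along these lines
(a sketch; names are hypothetical):

.. code-block:: llvm

  define i32 @java_field(i32* %field, i32 %v) {
    ; Racy accesses to a shared field: the load may see a stale value, but it
    ; never yields undef and never triggers undefined behavior.
    store atomic i32 %v, i32* %field unordered, align 4
    %r = load atomic i32* %field unordered, align 4
    ret i32 %r
  }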
219 Monotonic is the weakest level of atomicity that can be used in synchronization
220 primitives, although it does not provide any general synchronization. It
221 essentially guarantees that if you take all the operations affecting a specific
222 address, a consistent ordering exists.
225 This corresponds to the C++0x/C1x ``memory_order_relaxed``; see those
226 standards for the exact definition.
229 If you are writing a frontend which uses this directly, use with caution. The
230 guarantees in terms of synchronization are very weak, so make sure these are
231 only used in a pattern which you know is correct. Generally, these would
232 either be used for atomic operations which do not protect other memory (like
233 an atomic counter), or along with a ``fence``.
236 In terms of the optimizer, this can be treated as a read+write on the relevant
237 memory location (and alias analysis will take advantage of that). In addition,
238 it is legal to reorder non-atomic and Unordered loads around Monotonic
239 loads. CSE/DSE and a few other optimizations are allowed, but Monotonic
240 operations are unlikely to be used in ways which would make those
241 optimizations useful.
243 Notes for code generation
244 Code generation is essentially the same as that for unordered for loads and
245 stores. No fences are required. ``cmpxchg`` and ``atomicrmw`` are required
246 to appear as a single operation.
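The atomic counter mentioned above is the typical use; a sketch with
hypothetical names:

.. code-block:: llvm

  @counter = global i64 0

  define i64 @bump() {
    ; Atomically increment a statistics counter.  The counter protects no
    ; other memory, so Monotonic ordering is sufficient.
    %old = atomicrmw add i64* @counter, i64 1 monotonic
    ret i64 %old
  }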
251 Acquire provides a barrier of the sort necessary to acquire a lock to access
252 other memory with normal loads and stores.
255 This corresponds to the C++0x/C1x ``memory_order_acquire``. It should also be
256 used for C++0x/C1x ``memory_order_consume``.
259 If you are writing a frontend which uses this directly, use with caution.
Acquire only provides a semantic guarantee when paired with a Release
operation.
264 Optimizers not aware of atomics can treat this like a nothrow call. It is
265 also possible to move stores from before an Acquire load or read-modify-write
266 operation to after it, and move non-Acquire loads from before an Acquire
267 operation to after it.
269 Notes for code generation
270 Architectures with weak memory ordering (essentially everything relevant today
271 except x86 and SPARC) require some sort of fence to maintain the Acquire
semantics. The precise fences required vary widely by architecture, but for
273 a simple implementation, most architectures provide a barrier which is strong
274 enough for everything (``dmb`` on ARM, ``sync`` on PowerPC, etc.). Putting
275 such a fence after the equivalent Monotonic operation is sufficient to
276 maintain Acquire semantics for a memory operation.
Release is similar to Acquire, but with a barrier of the sort necessary to
release a lock.
285 This corresponds to the C++0x/C1x ``memory_order_release``.
288 If you are writing a frontend which uses this directly, use with caution.
Release only provides a semantic guarantee when paired with an Acquire
operation.
293 Optimizers not aware of atomics can treat this like a nothrow call. It is
294 also possible to move loads from after a Release store or read-modify-write
operation to before it, and move non-Release stores from after a Release
296 operation to before it.
298 Notes for code generation
299 See the section on Acquire; a fence before the relevant operation is usually
300 sufficient for Release. Note that a store-store fence is not sufficient to
301 implement Release semantics; store-store fences are generally not exposed to
302 IR because they are extremely difficult to use correctly.
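The canonical Acquire/Release pairing is a message-passing (or lock hand-off)
pattern, sketched below with hypothetical globals:

.. code-block:: llvm

  @data = global i32 0
  @flag = global i32 0

  define void @producer() {
    ; Publish the data, then set the flag with Release.
    store i32 42, i32* @data, align 4
    store atomic i32 1, i32* @flag release, align 4
    ret void
  }

  define i32 @consumer() {
    ; Read the flag with Acquire; if it observes 1, the plain load of @data
    ; below is guaranteed to see 42.
    %f = load atomic i32* @flag acquire, align 4
    %d = load i32* @data, align 4
    ret i32 %d
  }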
307 AcquireRelease (``acq_rel`` in IR) provides both an Acquire and a Release
308 barrier (for fences and operations which both read and write memory).
311 This corresponds to the C++0x/C1x ``memory_order_acq_rel``.
314 If you are writing a frontend which uses this directly, use with caution.
315 Acquire only provides a semantic guarantee when paired with a Release
316 operation, and vice versa.
319 In general, optimizers should treat this like a nothrow call; the possible
320 optimizations are usually not interesting.
322 Notes for code generation
This operation has Acquire and Release semantics; see the sections on Acquire
and Release.
326 SequentiallyConsistent
327 ----------------------
329 SequentiallyConsistent (``seq_cst`` in IR) provides Acquire semantics for loads
330 and Release semantics for stores. Additionally, it guarantees that a total
331 ordering exists between all SequentiallyConsistent operations.
334 This corresponds to the C++0x/C1x ``memory_order_seq_cst``, Java volatile, and
335 the gcc-compatible ``__sync_*`` builtins which do not specify otherwise.
338 If a frontend is exposing atomic operations, these are much easier to reason
339 about for the programmer than other kinds of operations, and using them is
340 generally a practical performance tradeoff.
343 Optimizers not aware of atomics can treat this like a nothrow call. For
344 SequentiallyConsistent loads and stores, the same reorderings are allowed as
345 for Acquire loads and Release stores, except that SequentiallyConsistent
346 operations may not be reordered.
348 Notes for code generation
349 SequentiallyConsistent loads minimally require the same barriers as Acquire
350 operations and SequentiallyConsistent stores require Release
351 barriers. Additionally, the code generator must enforce ordering between
352 SequentiallyConsistent stores followed by SequentiallyConsistent loads. This
353 is usually done by emitting either a full fence before the loads or a full
354 fence after the stores; which is preferred varies by architecture.
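The store-load ordering is what distinguishes SequentiallyConsistent from an
Acquire/Release pairing; it matters for Dekker-style patterns where each thread
stores to its own flag and then loads the other's (a sketch with hypothetical
globals; the second thread is symmetric):

.. code-block:: llvm

  @flag1 = global i32 0
  @flag2 = global i32 0

  define i32 @thread1() {
    ; Announce intent, then check whether the other thread has done the same.
    store atomic i32 1, i32* @flag1 seq_cst, align 4
    %other = load atomic i32* @flag2 seq_cst, align 4
    ; Because both operations are seq_cst, the store is ordered before the
    ; load, so at least one of the two threads must observe the other's store.
    ret i32 %other
  }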
356 Atomics and IR optimization
357 ===========================
Predicates for optimizer writers to query (illustrated in the sketch after
this list):
361 * ``isSimple()``: A load or store which is not volatile or atomic. This is
362 what, for example, memcpyopt would check for operations it might transform.
364 * ``isUnordered()``: A load or store which is not volatile and at most
Unordered. This would be checked, for example, by LICM before hoisting an
operation.
* ``mayReadFromMemory()``/``mayWriteToMemory()``: Existing predicates, but note
  that they return true for any operation which is volatile or at least
  Monotonic.
372 * Alias analysis: Note that AA will return ModRef for anything Acquire or
373 Release, and for the address accessed by any Monotonic operation.
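The first two predicates can be read off the IR directly (a sketch; comments
show which predicates hold for each load):

.. code-block:: llvm

  define void @kinds(i32* %p) {
    %a = load i32* %p, align 4                   ; isSimple() and isUnordered()
    %b = load volatile i32* %p, align 4          ; neither, because it is volatile
    %c = load atomic i32* %p unordered, align 4  ; isUnordered(), but not isSimple()
    %d = load atomic i32* %p acquire, align 4    ; neither; AA reports ModRef for it
    ret void
  }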
375 To support optimizing around atomic operations, make sure you are using the
376 right predicates; everything should work if that is done. If your pass should
377 optimize some atomic operations (Unordered operations in particular), make sure
378 it doesn't replace an atomic load or store with a non-atomic operation.
Some examples of how optimizations interact with various kinds of atomic
operations:
383 * ``memcpyopt``: An atomic operation cannot be optimized into part of a
384 memcpy/memset, including unordered loads/stores. It can pull operations
385 across some atomic operations.
387 * LICM: Unordered loads/stores can be moved out of a loop. It just treats
388 monotonic operations like a read+write to a memory location, and anything
389 stricter than that like a nothrow call.
391 * DSE: Unordered stores can be DSE'ed like normal stores. Monotonic stores can
be DSE'ed in some cases, but it's tricky to reason about, and not especially
important.
* Folding a load: Any atomic load from a constant global can be constant-folded,
  because it cannot be observed; the sketch after this list illustrates this.
  Similar reasoning allows scalarrepl with atomic loads and stores.
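For instance, per the last point above, an optimizer may fold even a
SequentiallyConsistent load from a constant global (a sketch; names are
hypothetical):

.. code-block:: llvm

  @c = constant i32 42

  define i32 @get() {
    ; @c is never written, so this load can only ever observe 42 and may be
    ; folded to that constant despite its seq_cst ordering.
    %v = load atomic i32* @c seq_cst, align 4
    ret i32 %v
  }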
402 Atomic operations are represented in the SelectionDAG with ``ATOMIC_*`` opcodes.
403 On architectures which use barrier instructions for all atomic ordering (like
404 ARM), appropriate fences are split out as the DAG is built.
406 The MachineMemOperand for all atomic operations is currently marked as volatile;
407 this is not correct in the IR sense of volatile, but CodeGen handles anything
408 marked volatile very conservatively. This should get fixed at some point.
410 Common architectures have some way of representing at least a pointer-sized
411 lock-free ``cmpxchg``; such an operation can be used to implement all the other
412 atomic operations which can be represented in IR up to that size. Backends are
413 expected to implement all those operations, but not operations which cannot be
414 implemented in a lock-free manner. It is expected that backends will give an
415 error when given an operation which cannot be implemented. (The LLVM code
generator is not very helpful here at the moment, but hopefully that will
change.)
419 The implementation of atomics on LL/SC architectures (like ARM) is currently a
420 bit of a mess; there is a lot of copy-pasted code across targets, and the
421 representation is relatively unsuited to optimization (it would be nice to be
422 able to optimize loops involving cmpxchg etc.).
424 On x86, all atomic loads generate a ``MOV``. SequentiallyConsistent stores
425 generate an ``XCHG``, other stores generate a ``MOV``. SequentiallyConsistent
426 fences generate an ``MFENCE``, other fences do not cause any code to be
427 generated. cmpxchg uses the ``LOCK CMPXCHG`` instruction. ``atomicrmw xchg``
428 uses ``XCHG``, ``atomicrmw add`` and ``atomicrmw sub`` use ``XADD``, and all
429 other ``atomicrmw`` operations generate a loop with ``LOCK CMPXCHG``. Depending
430 on the users of the result, some ``atomicrmw`` operations can be translated into
431 operations like ``LOCK AND``, but that does not work in general.
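Putting that together, a sketch of the typical x86 lowering, shown as comments
on the IR (actual instruction selection depends on the subtarget and LLVM
version):

.. code-block:: llvm

  define void @x86_lowering(i32* %p) {
    %v = load atomic i32* %p seq_cst, align 4      ; MOV
    store atomic i32 %v, i32* %p release, align 4  ; MOV (non-seq_cst store)
    store atomic i32 %v, i32* %p seq_cst, align 4  ; XCHG
    fence seq_cst                                  ; MFENCE
    %o = atomicrmw add i32* %p, i32 1 seq_cst      ; XADD (with LOCK prefix)
    %c = cmpxchg i32* %p, i32 %o, i32 %v seq_cst   ; LOCK CMPXCHG
    ret void
  }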
433 On ARM, MIPS, and many other RISC architectures, Acquire, Release, and
434 SequentiallyConsistent semantics require barrier instructions for every such
435 operation. Loads and stores generate normal instructions. ``cmpxchg`` and
436 ``atomicrmw`` can be represented using a loop with LL/SC-style instructions
437 which take some sort of exclusive lock on a cache line (``LDREX`` and ``STREX``
438 on ARM, etc.). At the moment, the IR does not provide any way to represent a
439 weak ``cmpxchg`` which would not require a loop.