1 ==========================
2 Auto-Vectorization in LLVM
3 ==========================
LLVM has two vectorizers: the :ref:`Loop Vectorizer <loop-vectorizer>`,
which operates on loops, and the :ref:`SLP Vectorizer <slp-vectorizer>`.
These vectorizers focus on different optimization opportunities and use
different techniques. The SLP Vectorizer merges multiple scalars that are
found in the code into vectors, while the Loop Vectorizer widens instructions
in loops to operate on multiple consecutive iterations.
16 Both the Loop Vectorizer and the SLP Vectorizer are enabled by default.
.. _loop-vectorizer:

The Loop Vectorizer
===================

Usage
-----

The Loop Vectorizer is enabled by default, but it can be disabled
through clang using the command line flag:
29 .. code-block:: console
31 $ clang ... -fno-vectorize file.c
The Loop Vectorizer uses a cost model to decide on the optimal vectorization
factor and unroll factor. However, users can force the vectorizer to use
specific values. Both 'clang' and 'opt' support the flags below.
40 Users can control the vectorization SIMD width using the command line flag "-force-vector-width".
42 .. code-block:: console
44 $ clang -mllvm -force-vector-width=8 ...
45 $ opt -loop-vectorize -force-vector-width=8 ...
Users can control the unroll factor using the command line flag "-force-vector-unroll".
49 .. code-block:: console
51 $ clang -mllvm -force-vector-unroll=2 ...
52 $ opt -loop-vectorize -force-vector-unroll=2 ...
Features
--------

The LLVM Loop Vectorizer has a number of features that allow it to vectorize
a wide range of loops. The sections below describe these features and show
examples.
60 Loops with unknown trip count
61 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
63 The Loop Vectorizer supports loops with an unknown trip count.
In the loop below, the iteration ``start`` and ``end`` points are unknown,
and the Loop Vectorizer has a mechanism to vectorize loops that do not start
at zero. In this example, the trip count may not be a multiple of the vector
width, and the vectorizer has to execute the last few iterations as scalar
code. Keeping a scalar copy of the loop increases the code size.
.. code-block:: c++

  void bar(float *A, float* B, float K, int start, int end) {
    for (int i = start; i < end; ++i)
      A[i] *= B[i] + K;
  }
77 Runtime Checks of Pointers
78 ^^^^^^^^^^^^^^^^^^^^^^^^^^
80 In the example below, if the pointers A and B point to consecutive addresses,
81 then it is illegal to vectorize the code because some elements of A will be
82 written before they are read from array B.
Some programmers use the 'restrict' keyword to notify the compiler that the
pointers are disjoint, but in our example, the Loop Vectorizer has no way of
knowing that the pointers A and B are unique. The Loop Vectorizer handles this
loop by placing code that checks, at runtime, whether the arrays A and B point
to disjoint memory locations. If arrays A and B overlap, then the scalar
version of the loop is executed.
.. code-block:: c++

  void bar(float *A, float* B, float K, int n) {
    for (int i = 0; i < n; ++i)
      A[i] *= B[i] + K;
  }
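When the programmer can guarantee that the pointers never overlap, marking
them as restrict-qualified lets the vectorizer skip the runtime overlap check.
The sketch below is illustrative, using the Clang/GCC ``__restrict__``
spelling in C++; it is not part of the original example set:

.. code-block:: c++

  // Illustrative sketch: __restrict__ promises the compiler that A and B
  // never alias, so no runtime overlap check needs to be generated.
  void bar_restrict(float * __restrict__ A, float * __restrict__ B,
                    float K, int n) {
    for (int i = 0; i < n; ++i)
      A[i] *= B[i] + K;
  }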
Reductions
^^^^^^^^^^

In this example the ``sum`` variable is used by consecutive iterations of
the loop. Normally, this would prevent vectorization, but the vectorizer can
detect that ``sum`` is a reduction variable. The variable ``sum`` becomes a
vector of integers, and at the end of the loop the elements of the vector are
added together to create the correct result. We support a number of different
reduction operations, such as addition, multiplication, XOR, AND and OR.
.. code-block:: c++

  int foo(int *A, int *B, int n) {
    unsigned sum = 0;
    for (int i = 0; i < n; ++i)
      sum += A[i] + 5;
    return sum;
  }
118 We support floating point reduction operations when `-ffast-math` is used.
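For instance, a floating-point sum such as the sketch below is only vectorized
when the compiler is allowed to reassociate the additions; this example is
illustrative and not taken from the benchmark suite:

.. code-block:: c++

  // Reassociating floating-point adds changes rounding, so this reduction
  // is only vectorized under -ffast-math (or equivalent fast-math options).
  float sum_array(float *A, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
      sum += A[i];
    return sum;
  }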
Inductions
^^^^^^^^^^

In this example the value of the induction variable ``i`` is saved into an
array. The Loop Vectorizer knows how to vectorize induction variables.
.. code-block:: c++

  void bar(float *A, float* B, float K, int n) {
    for (int i = 0; i < n; ++i)
      A[i] = i;
  }
If Conversion
^^^^^^^^^^^^^

The Loop Vectorizer is able to "flatten" the IF statement in the code and
generate a single stream of instructions. The Loop Vectorizer supports any
control flow in the innermost loop. The innermost loop may contain complex
nesting of IFs, ELSEs and even GOTOs.
.. code-block:: c++

  int foo(int *A, int *B, int n) {
    unsigned sum = 0;
    for (int i = 0; i < n; ++i)
      if (A[i] > B[i])
        sum += A[i] + 5;
    return sum;
  }
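Conceptually, if-conversion replaces the branch with unconditional computation
plus a select, which is straightforward to widen into vector code. The scalar
sketch below only illustrates the idea; it is not the actual code the
vectorizer emits:

.. code-block:: c++

  // Illustrative scalar equivalent of if-conversion: both the "taken" and
  // "not taken" values are computed, and a select picks the right one.
  int foo_if_converted(int *A, int *B, int n) {
    unsigned sum = 0;
    for (int i = 0; i < n; ++i) {
      unsigned add = A[i] + 5;
      sum += (A[i] > B[i]) ? add : 0;
    }
    return sum;
  }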
151 Pointer Induction Variables
152 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
This example uses the ``std::accumulate`` function of the standard C++
library. This loop uses C++ iterators, which are pointers, and not integer
indices. The Loop Vectorizer detects pointer induction variables and can
vectorize this loop. This feature is important because many C++ programs use
iterators.
.. code-block:: c++

  int baz(int *A, int n) {
    return std::accumulate(A, A + n, 0);
  }
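Written out explicitly, the same computation is a loop whose induction
variable is a pointer rather than an integer index. The sketch below is
illustrative and not part of the original example set:

.. code-block:: c++

  // The induction variable 'p' is a pointer advanced by one element per
  // iteration; the vectorizer recognizes this as a pointer induction.
  int baz2(int *A, int n) {
    int sum = 0;
    for (int *p = A; p != A + n; ++p)
      sum += *p;
    return sum;
  }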
Reverse Iterators
^^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize loops that count backwards.
.. code-block:: c++

  void foo(int *A, int *B, int n) {
    for (int i = n; i > 0; --i)
      A[i] += 1;
  }
Scatter / Gather
^^^^^^^^^^^^^^^^

The Loop Vectorizer can vectorize code that becomes a sequence of scalar
instructions that scatter/gather memory.
.. code-block:: c++

  void foo(int *A, int *B, int n, int k) {
    for (int i = 0; i < n; ++i)
      A[i*7] += B[i*k];
  }
190 Vectorization of Mixed Types
191 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
193 The Loop Vectorizer can vectorize programs with mixed types. The Vectorizer
194 cost model can estimate the cost of the type conversion and decide if
195 vectorization is profitable.
.. code-block:: c++

  void foo(int *A, char *B, int n, int k) {
    for (int i = 0; i < n; ++i)
      A[i] += 4 * B[i];
  }
204 Global Structures Alias Analysis
205 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
207 Access to global structures can also be vectorized, with alias analysis being
208 used to make sure accesses don't alias. Run-time checks can also be added on
209 pointer access to structure members.
Many variations are supported, but some that rely on undefined behaviour being
ignored (as other compilers do) are left unvectorized.
.. code-block:: c++

  struct { int A[100], K, B[100]; } Foo;

  void foo() {
    for (int i = 0; i < 100; ++i)
      Foo.A[i] = Foo.B[i] + 100;
  }
223 Vectorization of function calls
224 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Loop Vectorizer can vectorize intrinsic math functions.
227 See the table below for a list of these functions.
+-----+-----+---------+
| pow | exp |  exp2   |
+-----+-----+---------+
| sin | cos |  sqrt   |
+-----+-----+---------+
| log |log2 |  log10  |
+-----+-----+---------+
|fabs |floor|  ceil   |
+-----+-----+---------+
|fma  |trunc|nearbyint|
+-----+-----+---------+
| fmuladd   |         |
+-----+-----+---------+
243 The loop vectorizer knows about special instructions on the target and will
244 vectorize a loop containing a function call that maps to the instructions. For
245 example, the loop below will be vectorized on Intel x86 if the SSE4.1 roundps
246 instruction is available.
.. code-block:: c++

  void foo(float *f) {
    for (int i = 0; i != 1024; ++i)
      f[i] = floorf(f[i]);
  }
255 Partial unrolling during vectorization
256 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
258 Modern processors feature multiple execution units, and only programs that contain a
259 high degree of parallelism can fully utilize the entire width of the machine.
260 The Loop Vectorizer increases the instruction level parallelism (ILP) by
261 performing partial-unrolling of loops.
263 In the example below the entire array is accumulated into the variable 'sum'.
264 This is inefficient because only a single execution port can be used by the processor.
265 By unrolling the code the Loop Vectorizer allows two or more execution ports
266 to be used simultaneously.
.. code-block:: c++

  int foo(int *A, int *B, int n) {
    unsigned sum = 0;
    for (int i = 0; i < n; ++i)
      sum += A[i];
    return sum;
  }
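The effect of partial unrolling (interleaving) is roughly that of keeping
several independent partial sums alive at once. The scalar sketch below is
only illustrative; the vectorizer performs the equivalent transformation on
vector registers:

.. code-block:: c++

  // Two independent accumulators let two execution ports work in parallel;
  // the partial sums are combined after the loop. (Illustrative sketch.)
  int foo_unrolled(int *A, int n) {
    unsigned sum0 = 0, sum1 = 0;
    int i = 0;
    for (; i + 1 < n; i += 2) {
      sum0 += A[i];
      sum1 += A[i + 1];
    }
    for (; i < n; ++i)  // scalar remainder
      sum0 += A[i];
    return sum0 + sum1;
  }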
277 The Loop Vectorizer uses a cost model to decide when it is profitable to unroll loops.
278 The decision to unroll the loop depends on the register pressure and the generated code size.
Performance
-----------

This section shows the execution time of Clang on a simple benchmark:
`gcc-loops <http://llvm.org/viewvc/llvm-project/test-suite/trunk/SingleSource/UnitTests/Vectorizer/>`_.
This benchmark is a collection of loops from the GCC autovectorization
`page <http://gcc.gnu.org/projects/tree-ssa/vectorization.html>`_ by Dorit Nuzman.
288 The chart below compares GCC-4.7, ICC-13, and Clang-SVN with and without loop vectorization at -O3, tuned for "corei7-avx", running on a Sandybridge iMac.
289 The Y-axis shows the time in msec. Lower is better. The last column shows the geomean of all the kernels.
291 .. image:: gcc-loops.png
The chart below shows the results of Linpack-pc with the same configuration.
Results are in MFlops; higher is better.
295 .. image:: linpack-pc.png
.. _slp-vectorizer:

The SLP Vectorizer
==================

The goal of SLP vectorization (a.k.a. superword-level parallelism) is
to combine similar independent instructions into vector instructions.
Memory accesses, arithmetic operations, comparison operations and PHI-nodes
can all be vectorized using this technique.
310 For example, the following function performs very similar operations on its
inputs (a1, b1) and (a2, b2). The SLP vectorizer may combine these
into vector operations.
.. code-block:: c++

  void foo(int a1, int a2, int b1, int b2, int *A) {
    A[0] = a1*(a1 + b1)/b1 + 50*b1/a1;
    A[1] = a2*(a2 + b2)/b2 + 50*b2/a2;
  }
321 The SLP-vectorizer processes the code bottom-up, across basic blocks, in search of scalars to combine.
Usage
-----

The SLP Vectorizer is enabled by default, but it can be disabled
through clang using the command line flag:
329 .. code-block:: console
331 $ clang -fno-slp-vectorize file.c
LLVM has a second basic block vectorization phase
which is more compile-time intensive (the BB vectorizer). This optimization
335 can be enabled through clang using the command line flag:
337 .. code-block:: console
339 $ clang -fslp-vectorize-aggressive file.c