1 Target Independent Opportunities:
3 //===---------------------------------------------------------------------===//
5 We should recognize various "overflow detection" idioms and translate them into
6 llvm.uadd.with.overflow and similar intrinsics. Here is a multiply idiom:
8 unsigned int mul(unsigned int a,unsigned int b) {
9 if ((unsigned long long)a*b>0xffffffff)
14 The legalization code for mul-with-overflow needs to be made more robust before
15 this can be implemented though.
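A sketch of the idiom next to an equivalent form that maps directly onto the
intrinsic (__builtin_umul_overflow is used here only as a stand-in for
llvm.umul.with.overflow; the abort() is a placeholder for the overflow path):

#include <stdlib.h>

unsigned mul_idiom(unsigned a, unsigned b) {
  if ((unsigned long long)a * b > 0xffffffffULL)  /* the source-level idiom */
    abort();
  return a * b;
}

unsigned mul_intrinsic(unsigned a, unsigned b) {
  unsigned r;
  if (__builtin_umul_overflow(a, b, &r))          /* lowers to llvm.umul.with.overflow */
    abort();
  return r;
}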
17 //===---------------------------------------------------------------------===//
19 Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
20 precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
21 safe in general, even on darwin. See the libm implementation of hypot for
22 examples (which special case when x/y are exactly zero to get signed zeros, etc.).
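A sketch of the intended fast-math expansion (only valid when errno and the
extra range/special-case handling of the libm routine don't matter):

#include <math.h>

double hypot_fast(double x, double y) {
  /* hypot(x, y) -> sqrt(x*x + y*y); may overflow/underflow where hypot would not */
  return sqrt(x * x + y * y);
}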
25 //===---------------------------------------------------------------------===//
27 On targets with expensive 64-bit multiply, we could LSR this:
34 for (i = ...; ++i, tmp+=tmp)
37 This would be a win on ppc32, but not x86 or ppc64.
39 //===---------------------------------------------------------------------===//
41 Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
43 //===---------------------------------------------------------------------===//
45 Reassociate should turn things like:
47 int factorial(int X) {
48 return X*X*X*X*X*X*X*X;
51 into llvm.powi calls, allowing the code generator to produce balanced multiplication trees.
54 First, the intrinsic needs to be extended to support integers, and second the
55 code generator needs to be enhanced to lower these to multiplication trees.
57 //===---------------------------------------------------------------------===//
59 Interesting? testcase for add/shift/mul reassoc:
61 int bar(int x, int y) {
62 return x*x*x+y+x*x*x*x*x*y*y*y*y;
64 int foo(int z, int n) {
65 return bar(z, n) + bar(2*z, 2*n);
68 This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The issue
69 is that we end up getting t = 2*X, s = t*t and don't turn this into 4*X*X,
70 which is the same number of multiplies and is canonical, because the 2*X has
71 multiple uses. Here's a simple example:
73 define i32 @test15(i32 %X1) {
74 %B = mul i32 %X1, 47 ; X1*47
80 //===---------------------------------------------------------------------===//
82 Reassociate should handle the example in GCC PR16157:
84 extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
85 void f () { /* this can be optimized to four additions... */
86 b4 = a4 + a3 + a2 + a1 + a0;
87 b3 = a3 + a2 + a1 + a0;
92 This requires reassociating to forms of expressions that are already available,
93 something that reassoc doesn't think about yet.
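A sketch of the reuse being asked for, with hypothetical temporaries (assuming
the elided assignments to b2, b1, b0 follow the same pattern); each bN reuses
the sum already computed for the previous one, giving four additions total:

void f_reassoc(void) {
  int t1 = a1 + a0;   /* = b1 */
  int t2 = a2 + t1;   /* = b2 */
  int t3 = a3 + t2;   /* = b3 */
  b4 = a4 + t3;
  b3 = t3;
  b2 = t2;
  b1 = t1;
  b0 = a0;
}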
96 //===---------------------------------------------------------------------===//
98 These two functions should generate the same code on big-endian systems:
100 int g(int *j,int *l) { return memcmp(j,l,4); }
101 int h(int *j, int *l) { return *j - *l; }
103 this could be done in SelectionDAGISel.cpp, along with other special cases,
106 //===---------------------------------------------------------------------===//
108 It would be nice to revert this patch:
109 http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html
111 And teach the dag combiner enough to simplify the code expanded before
112 legalize. It seems plausible that this knowledge would let it simplify other
115 //===---------------------------------------------------------------------===//
117 For vector types, DataLayout.cpp::getTypeInfo() returns alignment that is equal
118 to the type size. It works but can be overly conservative as the alignment of
119 specific vector types is target dependent.
121 //===---------------------------------------------------------------------===//
123 We should produce an unaligned load from code like this:
125 v4sf example(float *P) {
126 return (v4sf){P[0], P[1], P[2], P[3] };
129 //===---------------------------------------------------------------------===//
131 Add support for conditional increments, and other related patterns. Instead
136 je LBB16_2 #cond_next
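The kind of source pattern involved and the branchless form a
conditional-increment combine would produce (a sketch; 'counter' is a
hypothetical global, not the symbol from the assembly above):

int counter;

void bump(int x) {
  if (x)                    /* today: compare + branch around the increment */
    counter++;
  /* desired: counter += (x != 0);  -- a setcc/adc-style conditional increment */
}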
147 //===---------------------------------------------------------------------===//
149 Combine: a = sin(x), b = cos(x) into a,b = sincos(x).
151 Expand these to calls of sin/cos and stores:
152 double sincos(double x, double *sin, double *cos);
153 float sincosf(float x, float *sin, float *cos);
154 long double sincosl(long double x, long double *sin, long double *cos);
156 Doing so could allow SROA of the destination pointers. See also:
157 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
159 This is now easily doable with MRVs. We could even make an intrinsic for this
160 if anyone cared enough about sincos.
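A sketch of the combine at the source level (the single call is left in a
comment since sincos is a POSIX/GNU extension):

#include <math.h>

void sin_and_cos(double x, double *s, double *c) {
  *s = sin(x);              /* today: two calls */
  *c = cos(x);
  /* desired: sincos(x, s, c);  -- one call, results stored through s and c */
}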
162 //===---------------------------------------------------------------------===//
164 quantum_sigma_x in 462.libquantum contains the following loop:
166 for(i=0; i<reg->size; i++)
168 /* Flip the target bit of each basis state */
169 reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
172 Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
173 so cool to turn it into something like:
175 long long Res = ((MAX_UNSIGNED) 1 << target);
177 for(i=0; i<reg->size; i++)
178 reg->node[i].state ^= Res & 0xFFFFFFFFULL;
180 for(i=0; i<reg->size; i++)
181 reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
184 ... which would only do one 32-bit XOR per loop iteration instead of two.
186 It would also be nice to recognize that reg->size doesn't alias reg->node[i], but
189 //===---------------------------------------------------------------------===//
191 This isn't recognized as bswap by instcombine (yes, it really is bswap):
193 unsigned long reverse(unsigned v) {
195 t = v ^ ((v << 16) | (v >> 16));
197 v = (v << 24) | (v >> 8);
201 //===---------------------------------------------------------------------===//
205 We don't delete this output-free loop, because trip count analysis doesn't
206 realize that it is finite (if it were infinite, it would be undefined). Not
207 having this blocks Loop Idiom from matching strlen and friends.
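A sketch of the kind of loop in question (not the original testcase): a
side-effect-free strlen-style scan whose result is unused, which must be
finite because an infinite loop with no side effects would be undefined:

void scan(const char *s) {
  unsigned n = 0;
  while (s[n])              /* no stores, no calls: the whole loop is dead */
    ++n;
}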
215 //===---------------------------------------------------------------------===//
219 These idioms should be recognized as popcount (see PR1488):
221 unsigned countbits_slow(unsigned v) {
223 for (c = 0; v; v >>= 1)
228 unsigned int popcount(unsigned int input) {
229 unsigned int count = 0;
230 for (unsigned int i = 0; i < 4 * 8; i++)
231 count += (input >> i) & 1;
235 This should be recognized as CLZ: rdar://8459039
237 unsigned clz_a(unsigned a) {
245 This sort of thing should be added to the loop idiom pass.
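For reference, what successful recognition would produce (a sketch; the
GCC/Clang builtin lowers to llvm.ctpop):

unsigned popcount_ref(unsigned v) {
  return __builtin_popcount(v);   /* single ctpop instead of the loop above */
}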
247 //===---------------------------------------------------------------------===//
249 These should turn into single 16-bit (unaligned?) loads on little/big-endian machines.
252 unsigned short read_16_le(const unsigned char *adr) {
253 return adr[0] | (adr[1] << 8);
255 unsigned short read_16_be(const unsigned char *adr) {
256 return (adr[0] << 8) | adr[1];
259 //===---------------------------------------------------------------------===//
261 -instcombine should handle this transform:
262 icmp pred (sdiv X / C1 ), C2
263 when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.
265 Currently InstCombine avoids this transform but will do it when the signs of
266 the operands and the sign of the divide match. See the FIXME in
267 InstructionCombining.cpp in the visitSetCondInst method after the switch case
268 for Instruction::UDiv (around line 4447) for more details.
270 The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of this construct.
273 //===---------------------------------------------------------------------===//
277 SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
278 opportunities in its double_array_divs_variable function: it needs loop
279 interchange, memory promotion (which LICM already does), vectorization and
280 variable trip count loop unrolling (since it has a constant trip count). ICC
281 apparently produces this very nice code with -ffast-math:
283 ..B1.70: # Preds ..B1.70 ..B1.69
284 mulpd %xmm0, %xmm1 #108.2
285 mulpd %xmm0, %xmm1 #108.2
286 mulpd %xmm0, %xmm1 #108.2
287 mulpd %xmm0, %xmm1 #108.2
289 cmpl $131072, %edx #108.2
290 jb ..B1.70 # Prob 99% #108.2
292 It would be better to count down to zero, but this is a lot better than what we currently generate.
295 //===---------------------------------------------------------------------===//
299 typedef unsigned U32;
300 typedef unsigned long long U64;
301 int test (U32 *inst, U64 *regs) {
304 int r1 = (temp >> 20) & 0xf;
305 int b2 = (temp >> 16) & 0xf;
306 effective_addr2 = temp & 0xfff;
307 if (b2) effective_addr2 += regs[b2];
308 b2 = (temp >> 12) & 0xf;
309 if (b2) effective_addr2 += regs[b2];
310 effective_addr2 &= regs[4];
311 if ((effective_addr2 & 3) == 0)
316 Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
317 we don't eliminate the computation of the top half of effective_addr2 because
318 we don't have whole-function selection dags. On x86, this means we use one
319 extra register for the function when effective_addr2 is declared as U64 than
320 when it is declared U32.
322 PHI Slicing could be extended to do this.
324 //===---------------------------------------------------------------------===//
326 Tail call elim should be more aggressive, checking to see if the call is
327 followed by an uncond branch to an exit block.
329 ; This testcase is due to tail-duplication not wanting to copy the return
330 ; instruction into the terminating blocks because there was other code
331 ; optimized out of the function after the taildup happened.
332 ; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call
334 define i32 @t4(i32 %a) {
336 %tmp.1 = and i32 %a, 1 ; <i32> [#uses=1]
337 %tmp.2 = icmp ne i32 %tmp.1, 0 ; <i1> [#uses=1]
338 br i1 %tmp.2, label %then.0, label %else.0
340 then.0: ; preds = %entry
341 %tmp.5 = add i32 %a, -1 ; <i32> [#uses=1]
342 %tmp.3 = call i32 @t4( i32 %tmp.5 ) ; <i32> [#uses=1]
345 else.0: ; preds = %entry
346 %tmp.7 = icmp ne i32 %a, 0 ; <i1> [#uses=1]
347 br i1 %tmp.7, label %then.1, label %return
349 then.1: ; preds = %else.0
350 %tmp.11 = add i32 %a, -2 ; <i32> [#uses=1]
351 %tmp.9 = call i32 @t4( i32 %tmp.11 ) ; <i32> [#uses=1]
354 return: ; preds = %then.1, %else.0, %then.0
355 %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
360 //===---------------------------------------------------------------------===//
362 Tail recursion elimination should handle:
367 return 2 * pow2m1 (n - 1) + 1;
370 Also, multiplies can be turned into SHL's, so they should be handled as if
371 they were associative. "return foo() << 1" can be tail recursion eliminated.
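A sketch of the accumulator form tail-recursion elimination would need to
produce for pow2m1, treating the multiply-by-2 as a shift as suggested (the
base case is assumed to be pow2m1(0) == 0, which is elided above):

unsigned pow2m1_loop(unsigned n) {
  unsigned acc = 0;               /* assumed base case */
  for (; n > 0; --n)
    acc = (acc << 1) + 1;         /* same recurrence as 2 * pow2m1(n - 1) + 1 */
  return acc;                     /* computes 2^n - 1 */
}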
373 //===---------------------------------------------------------------------===//
375 Argument promotion should promote arguments for recursive functions, like
378 ; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val
380 define internal i32 @foo(i32* %x) {
382 %tmp = load i32* %x ; <i32> [#uses=0]
383 %tmp.foo = call i32 @foo( i32* %x ) ; <i32> [#uses=1]
387 define i32 @bar(i32* %x) {
389 %tmp3 = call i32 @foo( i32* %x ) ; <i32> [#uses=1]
393 //===---------------------------------------------------------------------===//
395 We should investigate an instruction sinking pass. Consider this silly
411 je LBB1_2 # cond_true
419 The PIC base computation (call+popl) is only used on one path through the
420 code, but is currently always computed in the entry block. It would be
421 better to sink the picbase computation down into the block for the
422 assertion, as it is the only one that uses it. This happens for a lot of
423 code with early outs.
425 Another example is loads of arguments, which are usually emitted into the
426 entry block on targets like x86. If not used in all paths through a
427 function, they should be sunk into the ones that do.
429 In this case, whole-function-isel would also handle this.
431 //===---------------------------------------------------------------------===//
433 Investigate lowering of sparse switch statements into perfect hash tables:
434 http://burtleburtle.net/bob/hash/perfect.html
436 //===---------------------------------------------------------------------===//
438 We should turn things like "load+fabs+store" and "load+fneg+store" into the
439 corresponding integer operations. On a yonah, this loop:
444 for (b = 0; b < 10000000; b++)
445 for (i = 0; i < 256; i++)
449 is twice as slow as this loop:
454 for (b = 0; b < 10000000; b++)
455 for (i = 0; i < 256; i++)
456 a[i] ^= (1ULL << 63);
459 and I suspect other processors are similar. On X86 in particular this is a
460 big win because doing this with integers allows the use of read/modify/write instructions.
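A sketch of the integer form for the fneg case, mirroring the a[i] ^= (1ULL << 63)
loop above (assumes IEEE-754 doubles with the sign in the top bit):

#include <stdint.h>
#include <string.h>

double fneg_via_int(double x) {
  uint64_t bits;
  memcpy(&bits, &x, sizeof bits);   /* reinterpret without violating aliasing */
  bits ^= 1ULL << 63;               /* integer xor of the sign bit instead of an FP negate */
  memcpy(&x, &bits, sizeof bits);
  return x;
}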
463 //===---------------------------------------------------------------------===//
465 DAG Combiner should try to combine small loads into larger loads when
466 profitable. For example, we compile this C++ example:
468 struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
469 extern THotKey m_HotKey;
470 THotKey GetHotKey () { return m_HotKey; }
472 into (-m64 -O3 -fno-exceptions -static -fomit-frame-pointer):
474 __Z9GetHotKeyv: ## @_Z9GetHotKeyv
475 movq _m_HotKey@GOTPCREL(%rip), %rax
488 //===---------------------------------------------------------------------===//
490 We should add an FRINT node to the DAG to model targets that have legal
491 implementations of ceil/floor/rint.
493 //===---------------------------------------------------------------------===//
498 long long input[8] = {1,0,1,0,1,0,1,0};
502 Clang compiles this into:
504 call void @llvm.memset.p0i8.i64(i8* %tmp, i8 0, i64 64, i32 16, i1 false)
505 %0 = getelementptr [8 x i64]* %input, i64 0, i64 0
506 store i64 1, i64* %0, align 16
507 %1 = getelementptr [8 x i64]* %input, i64 0, i64 2
508 store i64 1, i64* %1, align 16
509 %2 = getelementptr [8 x i64]* %input, i64 0, i64 4
510 store i64 1, i64* %2, align 16
511 %3 = getelementptr [8 x i64]* %input, i64 0, i64 6
512 store i64 1, i64* %3, align 16
514 Which gets codegen'd into:
517 movaps %xmm0, -16(%rbp)
518 movaps %xmm0, -32(%rbp)
519 movaps %xmm0, -48(%rbp)
520 movaps %xmm0, -64(%rbp)
526 It would be better to have 4 movq's of 0 instead of the movaps's.
528 //===---------------------------------------------------------------------===//
530 http://llvm.org/PR717:
532 The following code should compile into "ret int undef". Instead, LLVM
533 produces "ret int 0":
542 //===---------------------------------------------------------------------===//
544 The loop unroller should partially unroll loops (instead of peeling them)
545 when code growth isn't too bad and when an unroll count allows simplification
546 of some code within the loop. One trivial example is:
552 for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
561 Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
562 reduction in code size. The resultant code would then also be suitable for
563 exit value computation.
565 //===---------------------------------------------------------------------===//
567 We miss a bunch of rotate opportunities on various targets, including ppc, x86,
568 etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
569 matching code in dag combine doesn't look through truncates aggressively
570 enough. Here are some testcases reduced from GCC PR17886:
572 unsigned long long f5(unsigned long long x, unsigned long long y) {
573 return (x << 8) | ((y >> 48) & 0xffull);
575 unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
578 return (x << 8) | ((y >> 48) & 0xffull);
580 return (x << 16) | ((y >> 40) & 0xffffull);
582 return (x << 24) | ((y >> 32) & 0xffffffull);
584 return (x << 32) | ((y >> 24) & 0xffffffffull);
586 return (x << 40) | ((y >> 16) & 0xffffffffffull);
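For reference, the canonical UB-free rotate-by-variable idiom the matcher is
expected to recognize (a sketch, not one of the PR17886 cases above):

unsigned long long rotl64(unsigned long long x, unsigned n) {
  return (x << (n & 63)) | (x >> ((64 - n) & 63));
}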
590 //===---------------------------------------------------------------------===//
592 This (and similar related idioms):
594 unsigned int foo(unsigned char i) {
595 return i | (i<<8) | (i<<16) | (i<<24);
600 define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
602 %conv = zext i8 %i to i32
603 %shl = shl i32 %conv, 8
604 %shl5 = shl i32 %conv, 16
605 %shl9 = shl i32 %conv, 24
606 %or = or i32 %shl9, %conv
607 %or6 = or i32 %or, %shl5
608 %or10 = or i32 %or6, %shl
612 it would be better as:
614 unsigned int bar(unsigned char i) {
615 unsigned int j=i | (i << 8);
621 define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
623 %conv = zext i8 %i to i32
624 %shl = shl i32 %conv, 8
625 %or = or i32 %shl, %conv
626 %shl5 = shl i32 %or, 16
627 %or6 = or i32 %shl5, %or
631 or even i*0x01010101, depending on the speed of the multiplier. The best way to
632 handle this is to canonicalize it to a multiply in IR and have codegen handle
633 lowering multiplies to shifts on cpus where shifts are faster.
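A sketch of the suggested canonical form (zero-extend once, multiply by the
splat constant):

unsigned int splat(unsigned char i) {
  return (unsigned int)i * 0x01010101u;   /* codegen can re-expand to shifts if cheaper */
}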
635 //===---------------------------------------------------------------------===//
637 We do a number of simplifications in simplify libcalls to strength reduce
638 standard library functions, but we don't currently merge them together. For
639 example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
640 be done safely if "b" isn't modified between the strlen and memcpy of course.
642 //===---------------------------------------------------------------------===//
644 We compile this program: (from GCC PR11680)
645 http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487
647 Into code that runs the same speed in fast/slow modes, but both modes run 2x
648 slower than when compiled with GCC (either 4.0 or 4.2):
650 $ llvm-g++ perf.cpp -O3 -fno-exceptions
652 1.821u 0.003s 0:01.82 100.0% 0+0k 0+0io 0pf+0w
654 $ g++ perf.cpp -O3 -fno-exceptions
656 0.821u 0.001s 0:00.82 100.0% 0+0k 0+0io 0pf+0w
658 It looks like we are making the same inlining decisions, so this may be raw
659 codegen badness or something else (haven't investigated).
661 //===---------------------------------------------------------------------===//
663 Divisibility by constant can be simplified (according to GCC PR12849) from
664 being a mulhi to being a mul lo (cheaper). Testcase:
666 void bar(unsigned n) {
671 This is equivalent to the following, where 2863311531 is the multiplicative
672 inverse of 3, and 1431655766 is ((2^32)-1)/3+1:
673 void bar(unsigned n) {
674 if (n * 2863311531U < 1431655766U)
678 The same transformation can work with an even modulo with the addition of a
679 rotate: rotate the result of the multiply to the right by the number of bits
680 which need to be zero for the condition to be true, and shrink the compare RHS
681 by the same amount. Unless the target supports rotates, though, that
682 transformation probably isn't worthwhile.
684 The transformation can also easily be made to work with non-zero equality
685 comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
687 //===---------------------------------------------------------------------===//
689 Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
690 bunch of other stuff from this example (see PR1604):
700 std::scanf("%d", &t.val);
701 std::printf("%d\n", t.val);
704 //===---------------------------------------------------------------------===//
706 These functions perform the same computation, but produce different assembly.
708 define i8 @select(i8 %x) readnone nounwind {
709 %A = icmp ult i8 %x, 250
710 %B = select i1 %A, i8 0, i8 1
714 define i8 @addshr(i8 %x) readnone nounwind {
715 %A = zext i8 %x to i9
716 %B = add i9 %A, 6 ;; 256 - 250 == 6
718 %D = trunc i9 %C to i8
722 //===---------------------------------------------------------------------===//
726 f (unsigned long a, unsigned long b, unsigned long c)
728 return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
731 f (unsigned long a, unsigned long b, unsigned long c)
733 return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
735 Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
736 "clang -emit-llvm-bc | opt -O3".
738 //===---------------------------------------------------------------------===//
741 #define PMD_MASK (~((1UL << 23) - 1))
742 void clear_pmd_range(unsigned long start, unsigned long end)
744 if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
747 The expression should optimize to something like
748 "!((start|end)&~PMD_MASK). Currently not optimized with "clang
749 -emit-llvm-bc | opt -O3".
751 //===---------------------------------------------------------------------===//
753 unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
755 unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
756 These should combine to the same thing. Currently, the first function
757 produces better code on X86.
759 //===---------------------------------------------------------------------===//
762 #define abs(x) x>0?x:-x
765 return (abs(x)) >= 0;
767 This should optimize to x != INT_MIN. (With -fwrapv.) Currently not
768 optimized with "clang -emit-llvm-bc | opt -O3".
770 //===---------------------------------------------------------------------===//
774 rotate_cst (unsigned int a)
776 a = (a << 10) | (a >> 22);
781 minus_cst (unsigned int a)
790 mask_gt (unsigned int a)
792 /* This is equivalent to a > 15. */
797 rshift_gt (unsigned int a)
799 /* This is equivalent to a > 23. */
804 All should simplify to a single comparison. All of these are
805 currently not optimized with "clang -emit-llvm-bc | opt -O3".
808 //===---------------------------------------------------------------------===//
811 int c(int* x) {return (char*)x+2 == (char*)x;}
812 Should combine to 0. Currently not optimized with "clang
813 -emit-llvm-bc | opt -O3" (although llc can optimize it).
815 //===---------------------------------------------------------------------===//
817 int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
818 Should be combined to "((b >> 1) | b) & 1". Currently not optimized
819 with "clang -emit-llvm-bc | opt -O3".
821 //===---------------------------------------------------------------------===//
823 unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
824 Should combine to "x | (y & 3)". Currently not optimized with "clang
825 -emit-llvm-bc | opt -O3".
827 //===---------------------------------------------------------------------===//
829 int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
830 Should fold to "(~a & c) | (a & b)". Currently not optimized with
831 "clang -emit-llvm-bc | opt -O3".
833 //===---------------------------------------------------------------------===//
835 int a(int a,int b) {return (~(a|b))|a;}
836 Should fold to "a|~b". Currently not optimized with "clang
837 -emit-llvm-bc | opt -O3".
839 //===---------------------------------------------------------------------===//
841 int a(int a, int b) {return (a&&b) || (a&&!b);}
842 Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
845 //===---------------------------------------------------------------------===//
847 int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
848 Should fold to "a ? b : c", or at least something sane. Currently not
849 optimized with "clang -emit-llvm-bc | opt -O3".
851 //===---------------------------------------------------------------------===//
853 int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
854 Should fold to a && (b || c). Currently not optimized with "clang
855 -emit-llvm-bc | opt -O3".
857 //===---------------------------------------------------------------------===//
859 int a(int x) {return x | ((x & 8) ^ 8);}
860 Should combine to x | 8. Currently not optimized with "clang
861 -emit-llvm-bc | opt -O3".
863 //===---------------------------------------------------------------------===//
865 int a(int x) {return x ^ ((x & 8) ^ 8);}
866 Should also combine to x | 8. Currently not optimized with "clang
867 -emit-llvm-bc | opt -O3".
869 //===---------------------------------------------------------------------===//
871 int a(int x) {return ((x | -9) ^ 8) & x;}
872 Should combine to x & -9. Currently not optimized with "clang
873 -emit-llvm-bc | opt -O3".
875 //===---------------------------------------------------------------------===//
877 unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
878 Should combine to "a * 0x88888888 >> 31". Currently not optimized
879 with "clang -emit-llvm-bc | opt -O3".
881 //===---------------------------------------------------------------------===//
883 unsigned a(char* x) {if ((*x & 32) == 0) return b();}
884 There's an unnecessary zext in the generated code with "clang
885 -emit-llvm-bc | opt -O3".
887 //===---------------------------------------------------------------------===//
889 unsigned a(unsigned long long x) {return 40 * (x >> 1);}
890 Should combine to "20 * (((unsigned)x) & -2)". Currently not
891 optimized with "clang -emit-llvm-bc | opt -O3".
893 //===---------------------------------------------------------------------===//
895 int g(int x) { return (x - 10) < 0; }
896 Should combine to "x <= 9" (the sub has nsw). Currently not
897 optimized with "clang -emit-llvm-bc | opt -O3".
899 //===---------------------------------------------------------------------===//
901 int g(int x) { return (x + 10) < 0; }
902 Should combine to "x < -10" (the add has nsw). Currently not
903 optimized with "clang -emit-llvm-bc | opt -O3".
905 //===---------------------------------------------------------------------===//
907 int f(int i, int j) { return i < j + 1; }
908 int g(int i, int j) { return j > i - 1; }
909 Should combine to "i <= j" (the add/sub has nsw). Currently not
910 optimized with "clang -emit-llvm-bc | opt -O3".
912 //===---------------------------------------------------------------------===//
914 unsigned f(unsigned x) { return ((x & 7) + 1) & 15; }
915 The & 15 part should be optimized away, it doesn't change the result. Currently
916 not optimized with "clang -emit-llvm-bc | opt -O3".
918 //===---------------------------------------------------------------------===//
920 This was noticed in the entryblock for grokdeclarator in 403.gcc:
922 %tmp = icmp eq i32 %decl_context, 4
923 %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
924 %tmp1 = icmp eq i32 %decl_context_addr.0, 1
925 %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0
927 tmp1 should be simplified to something like:
928 (!tmp && decl_context == 1)
930 This allows recursive simplifications, tmp1 is used all over the place in
931 the function, e.g. by:
933 %tmp23 = icmp eq i32 %decl_context_addr.1, 0 ; <i1> [#uses=1]
934 %tmp24 = xor i1 %tmp1, true ; <i1> [#uses=1]
935 %or.cond8 = and i1 %tmp23, %tmp24 ; <i1> [#uses=1]
939 //===---------------------------------------------------------------------===//
943 Store sinking: This code:
945 void f (int n, int *cond, int *res) {
948 for (i = 0; i < n; i++)
950 *res ^= 234; /* (*) */
953 On this function GVN hoists the fully redundant value of *res, but nothing
954 moves the store out. This gives us this code:
956 bb: ; preds = %bb2, %entry
957 %.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
958 %i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
959 %1 = load i32* %cond, align 4
960 %2 = icmp eq i32 %1, 0
961 br i1 %2, label %bb2, label %bb1
964 %3 = xor i32 %.rle, 234
965 store i32 %3, i32* %res, align 4
968 bb2: ; preds = %bb, %bb1
969 %.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
970 %indvar.next = add i32 %i.05, 1
971 %exitcond = icmp eq i32 %indvar.next, %n
972 br i1 %exitcond, label %return, label %bb
974 DSE should sink partially dead stores to get the store out of the loop.
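A sketch of the sunk form the note is asking for (hypothetical rewrite: keep
the running value in a register and store it once after the loop):

void f_sunk(int n, int *cond, int *res) {
  int r = *res;                   /* value promoted to a register */
  for (int i = 0; i < n; i++)
    if (*cond)
      r ^= 234;
  *res = r;                       /* single store, outside the loop */
}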
976 Here's another partial dead case:
977 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395
979 //===---------------------------------------------------------------------===//
981 Scalar PRE hoists the mul in the common block up to the else:
983 int test (int a, int b, int c, int g) {
993 It would be better to do the mul once to reduce codesize above the if.
997 //===---------------------------------------------------------------------===//
998 This simple function from 179.art:
1001 struct { double y; int reset; } *Y;
1006 for (i=0;i<numf2s;i++)
1007 if (Y[i].y > Y[winner].y)
1011 Compiles into (with clang TBAA):
1013 for.body: ; preds = %for.inc, %bb.nph
1014 %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %for.inc ]
1015 %i.01718 = phi i32 [ 0, %bb.nph ], [ %i.01719, %for.inc ]
1016 %tmp4 = getelementptr inbounds %struct.anon* %tmp3, i64 %indvar, i32 0
1017 %tmp5 = load double* %tmp4, align 8, !tbaa !4
1018 %idxprom7 = sext i32 %i.01718 to i64
1019 %tmp10 = getelementptr inbounds %struct.anon* %tmp3, i64 %idxprom7, i32 0
1020 %tmp11 = load double* %tmp10, align 8, !tbaa !4
1021 %cmp12 = fcmp ogt double %tmp5, %tmp11
1022 br i1 %cmp12, label %if.then, label %for.inc
1024 if.then: ; preds = %for.body
1025 %i.017 = trunc i64 %indvar to i32
1028 for.inc: ; preds = %for.body, %if.then
1029 %i.01719 = phi i32 [ %i.01718, %for.body ], [ %i.017, %if.then ]
1030 %indvar.next = add i64 %indvar, 1
1031 %exitcond = icmp eq i64 %indvar.next, %tmp22
1032 br i1 %exitcond, label %for.cond.for.end_crit_edge, label %for.body
1035 It is good that we hoisted the reloads of numf2s and Y out of the loop and
1036 sunk the store to winner out.
1038 However, this is awful on several levels: the conditional truncate in the loop
1039 (-indvars at fault? why can't we completely promote the IV to i64?).
1041 Beyond that, we have a partially redundant load in the loop: if "winner" (aka
1042 %i.01718) isn't updated, we reload Y[winner].y the next time through the loop.
1043 Similarly, the addressing that feeds it (including the sext) is redundant. In
1044 the end we get this generated assembly:
1046 LBB0_2: ## %for.body
1047 ## =>This Inner Loop Header: Depth=1
1051 ucomisd (%rcx,%r8), %xmm0
1060 All things considered this isn't too bad, but we shouldn't need the movslq or
1061 the shlq instruction, or the load folded into ucomisd every time through the loop.
1064 On an x86-specific topic, if the loop can't be restructured, the movl should be a cmov.
1067 //===---------------------------------------------------------------------===//
1071 GCC PR37810 is an interesting case where we should sink load/store reload
1072 into the if block and outside the loop, so we don't reload/store it on the
1093 We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
1094 we don't sink the store. We need partially dead store sinking.
1096 //===---------------------------------------------------------------------===//
1098 [LOAD PRE CRIT EDGE SPLITTING]
1100 GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
1101 leading to excess stack traffic. This could be handled by GVN with some crazy
1102 symbolic phi translation. The code we get looks like (g is on the stack):
1106 %9 = getelementptr %struct.f* %g, i32 0, i32 0
1107 store i32 %8, i32* %9, align 4
br label %bb3
1109 bb3: ; preds = %bb1, %bb2, %bb
1110 %c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
1111 %b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
1112 %10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
1113 %11 = load i32* %10, align 4
1115 %11 is partially redundant, and in BB2 it should have the value %8.
1117 GCC PR33344 and PR35287 are similar cases.
1120 //===---------------------------------------------------------------------===//
1124 There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
1125 GCC testsuite, ones we don't get yet are (checked through loadpre25):
1127 [CRIT EDGE BREAKING]
1130 [PRE OF READONLY CALL]
1133 [TURN SELECT INTO BRANCH]
1134 loadpre14.c loadpre15.c
1136 actually a conditional increment: loadpre18.c loadpre19.c
1138 //===---------------------------------------------------------------------===//
1140 [LOAD PRE / STORE SINKING / SPEC HACK]
1142 This is a chunk of code from 456.hmmer:
1144 int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
1145 int *tpdm, int xmb, int *bp, int *ms) {
1147 for (k = 1; k <= M; k++) {
1148 mc[k] = mpp[k-1] + tpmm[k-1];
1149 if ((sc = ip[k-1] + tpim[k-1]) > mc[k]) mc[k] = sc;
1150 if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k]) mc[k] = sc;
1151 if ((sc = xmb + bp[k]) > mc[k]) mc[k] = sc;
1156 It is very profitable for this benchmark to turn the conditional stores to mc[k]
1157 into a conditional move (select instr in IR) and allow the final store to do the
1158 store. See GCC PR27313 for more details. Note that this is valid to xform even
1159 with the new C++ memory model, since mc[k] is previously loaded and later stored.
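A sketch of the select-based form described (hypothetical rewrite: compute the
maximum in a local so the conditional stores become selects and mc[k] is
written once, unconditionally):

for (k = 1; k <= M; k++) {
  int v = mpp[k-1] + tpmm[k-1];
  if ((sc = ip[k-1] + tpim[k-1]) > v) v = sc;   /* becomes a select */
  if ((sc = dpp[k-1] + tpdm[k-1]) > v) v = sc;  /* becomes a select */
  if ((sc = xmb + bp[k]) > v) v = sc;           /* becomes a select */
  mc[k] = v;                                    /* single unconditional store */
}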
1162 //===---------------------------------------------------------------------===//
1165 There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the GCC testsuite.
1168 //===---------------------------------------------------------------------===//
1170 There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
1171 GCC testsuite. For example, we get the first example in predcom-1.c, but
1172 miss the second one:
1177 __attribute__ ((noinline))
1178 void count_averages(int n) {
1180 for (i = 1; i < n; i++)
1181 avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
1184 which compiles into two loads instead of one in the loop.
1186 predcom-2.c is the same as predcom-1.c
1188 predcom-3.c is very similar but needs loads feeding each other instead of
1192 //===---------------------------------------------------------------------===//
1196 Type based alias analysis:
1197 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705
1199 We should do better analysis of posix_memalign. At the least it should
1200 no-capture its pointer argument; at best, we should know that the out-value
1201 result doesn't point to anything (like malloc). One example of this is in
1202 SingleSource/Benchmarks/Misc/dt.c
1204 //===---------------------------------------------------------------------===//
1206 Interesting missed case because of control flow flattening (should be 2 loads):
1207 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
1208 With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
1209 opt -mem2reg -gvn -instcombine | llvm-dis
1210 we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
1211 VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS
1213 //===---------------------------------------------------------------------===//
1215 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
1216 We could eliminate the branch condition here, loading from null is undefined:
1218 struct S { int w, x, y, z; };
1219 struct T { int r; struct S s; };
1220 void bar (struct S, int);
1221 void foo (int a, struct T b)
1229 //===---------------------------------------------------------------------===//
1231 simplifylibcalls should do several optimizations for strspn/strcspn:
1233 strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):
1235 size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
1237 register size_t __result = 0;
1238 while (__s[__result] != '\0' && __s[__result] != __reject1 &&
1239 __s[__result] != __reject2 && __s[__result] != __reject3)
1244 This should turn into a switch on the character. See PR3253 for some notes on
1247 456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.
1249 //===---------------------------------------------------------------------===//
1251 simplifylibcalls should turn these snprintf idioms into memcpy (GCC PR47917)
1253 char buf1[6], buf2[6], buf3[4], buf4[4];
1257 int ret = snprintf (buf1, sizeof buf1, "abcde");
1258 ret += snprintf (buf2, sizeof buf2, "abcdef") * 16;
1259 ret += snprintf (buf3, sizeof buf3, "%s", i++ < 6 ? "abc" : "def") * 256;
1260 ret += snprintf (buf4, sizeof buf4, "%s", i++ > 10 ? "abcde" : "defgh")*4096;
1264 //===---------------------------------------------------------------------===//
1266 "gas" uses this idiom:
1267 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
1269 else if (strchr ("<>", *intel_parser.op_string))
1271 Those should be turned into a switch. SimplifyLibCalls only gets the second one.
1274 //===---------------------------------------------------------------------===//
1276 252.eon contains this interesting code:
1278 %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
1279 %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
1280 %strlen = call i32 @strlen(i8* %3072) ; uses = 1
1281 %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
1282 call void @llvm.memcpy.i32(i8* %endptr,
1283 i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
1284 %3074 = call i32 @strlen(i8* %endptr) nounwind readonly
1286 This is interesting for a couple reasons. First, in this:
1288 The strlen following the memcpy can be replaced with:
1290 %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly
1292 Because the destination was just copied into the specified memory buffer. This,
1293 in turn, can be constant folded to "4".
1295 In other code, it contains:
1297 %endptr6978 = bitcast i8* %endptr69 to i32*
1298 store i32 7107374, i32* %endptr6978, align 1
1299 %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly
1301 Which could also be constant folded. Whatever is producing this should probably
1302 be fixed to leave this as a memcpy from a string.
1304 Further, eon also has an interesting partially redundant strlen call:
1306 bb8: ; preds = %_ZN18eonImageCalculatorC1Ev.exit
1307 %682 = getelementptr i8** %argv, i32 6 ; <i8**> [#uses=2]
1308 %683 = load i8** %682, align 4 ; <i8*> [#uses=4]
1309 %684 = load i8* %683, align 1 ; <i8> [#uses=1]
1310 %685 = icmp eq i8 %684, 0 ; <i1> [#uses=1]
1311 br i1 %685, label %bb10, label %bb9
1314 %686 = call i32 @strlen(i8* %683) nounwind readonly
1315 %687 = icmp ugt i32 %686, 254 ; <i1> [#uses=1]
1316 br i1 %687, label %bb10, label %bb11
1318 bb10: ; preds = %bb9, %bb8
1319 %688 = call i32 @strlen(i8* %683) nounwind readonly
1321 This could be eliminated by doing the strlen once in bb8, saving code size and
1322 improving perf on the bb8->9->10 path.
1324 //===---------------------------------------------------------------------===//
1326 I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
1328 %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0
1331 bb62: ; preds = %bb55, %bb53
1332 %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
1333 %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
1334 %172 = add i32 %171, -1 ; <i32> [#uses=1]
1335 %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
1338 br i1 %or.cond, label %bb65, label %bb72
1340 bb65: ; preds = %bb62
1341 store i8 0, i8* %173, align 1
1344 bb72: ; preds = %bb65, %bb62
1345 %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
1346 %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
1348 Note that on the bb62->bb72 path, the %177 strlen call is partially
1349 redundant with the %171 call. At worst, we could shove the %177 strlen call
1350 up into the bb65 block moving it out of the bb62->bb72 path. However, note
1351 that bb65 stores to the string, zeroing out the last byte. This means that on
1352 that path the value of %177 is actually just %171-1. A sub is cheaper than a strlen.
1355 This pattern repeats several times, basically doing:
1360 where it is "obvious" that B = A-1.
1362 //===---------------------------------------------------------------------===//
1364 186.crafty has this interesting pattern with the "out.4543" variable:
1366 call void @llvm.memcpy.i32(
1367 i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
1368 i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
1369 %101 = call @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind
1371 It is basically doing:
1373 memcpy(globalarray, "string");
1374 printf(..., globalarray);
1376 Anyway, by knowing that printf just reads the memory and forward substituting
1377 the string directly into the printf, this eliminates reads from globalarray.
1378 Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
1379 other similar functions) there are many stores to "out". Once all the printfs
1380 stop using "out", all that is left is the memcpy's into it. This should allow
1381 globalopt to remove the "stored only" global.
1383 //===---------------------------------------------------------------------===//
1387 define inreg i32 @foo(i8* inreg %p) nounwind {
1389 %tmp1 = ashr i8 %tmp0, 5
1390 %tmp2 = sext i8 %tmp1 to i32
1394 could be dagcombine'd to a sign-extending load with a shift.
1395 For example, on x86 this currently gets this:
1401 while it could get this:
1406 //===---------------------------------------------------------------------===//
1410 int test(int x) { return 1-x == x; } // --> return false
1411 int test2(int x) { return 2-x == x; } // --> return x == 1 ?
1413 Always foldable for odd constants, what is the rule for even?
1415 //===---------------------------------------------------------------------===//
1417 PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
1418 for next field in struct (which is at same address).
1420 For example: store of float into { {{}}, float } could be turned into a store to the float directly.
1423 //===---------------------------------------------------------------------===//
1425 The arg promotion pass should make use of nocapture to make its alias analysis
1426 stuff much more precise.
1428 //===---------------------------------------------------------------------===//
1430 The following functions should be optimized to use a select instead of a
1431 branch (from gcc PR40072):
1433 char char_int(int m) {if(m>7) return 0; return m;}
1434 int int_char(char m) {if(m>7) return 0; return m;}
1436 //===---------------------------------------------------------------------===//
1438 int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }
1442 define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
1444 %0 = and i32 %a, 128 ; <i32> [#uses=1]
1445 %1 = icmp eq i32 %0, 0 ; <i1> [#uses=1]
1446 %2 = or i32 %b, 128 ; <i32> [#uses=1]
1447 %3 = and i32 %b, -129 ; <i32> [#uses=1]
1448 %b_addr.0 = select i1 %1, i32 %3, i32 %2 ; <i32> [#uses=1]
1452 However, it's functionally equivalent to:
1454 b = (b & ~0x80) | (a & 0x80);
1456 Which generates this:
1458 define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
1460 %0 = and i32 %b, -129 ; <i32> [#uses=1]
1461 %1 = and i32 %a, 128 ; <i32> [#uses=1]
1462 %2 = or i32 %0, %1 ; <i32> [#uses=1]
1466 This can be generalized for other forms:
1468 b = (b & ~0x80) | (a & 0x40) << 1;
1470 //===---------------------------------------------------------------------===//
1472 These two functions produce different code. They shouldn't:
1476 uint8_t p1(uint8_t b, uint8_t a) {
1477 b = (b & ~0xc0) | (a & 0xc0);
1481 uint8_t p2(uint8_t b, uint8_t a) {
1482 b = (b & ~0x40) | (a & 0x40);
1483 b = (b & ~0x80) | (a & 0x80);
1487 define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
1489 %0 = and i8 %b, 63 ; <i8> [#uses=1]
1490 %1 = and i8 %a, -64 ; <i8> [#uses=1]
1491 %2 = or i8 %1, %0 ; <i8> [#uses=1]
1495 define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
1497 %0 = and i8 %b, 63 ; <i8> [#uses=1]
1498 %.masked = and i8 %a, 64 ; <i8> [#uses=1]
1499 %1 = and i8 %a, -128 ; <i8> [#uses=1]
1500 %2 = or i8 %1, %0 ; <i8> [#uses=1]
1501 %3 = or i8 %2, %.masked ; <i8> [#uses=1]
1505 //===---------------------------------------------------------------------===//
1507 IPSCCP does not currently propagate argument dependent constants through
1508 functions where it does not know all of the callers. This includes functions
1509 with normal external linkage as well as templates, C99 inline functions etc.
1510 Specifically, it does nothing to:
1512 define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
1514 %0 = add nsw i32 %y, %z
1517 %3 = add nsw i32 %1, %2
1521 define i32 @test2() nounwind {
1523 %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
1527 It would be interesting to extend IPSCCP to be able to handle simple cases like
1528 this, where all of the arguments to a call are constant. Because IPSCCP runs
1529 before inlining, trivial templates and inline functions are not yet inlined.
1530 The results for a function + set of constant arguments should be memoized in a map.
1533 //===---------------------------------------------------------------------===//
1535 The libcall constant folding stuff should be moved out of SimplifyLibcalls into
1536 libanalysis' constantfolding logic. This would allow IPSCCP to be able to
1537 handle simple things like this:
1539 static int foo(const char *X) { return strlen(X); }
1540 int bar() { return foo("abcd"); }
1542 //===---------------------------------------------------------------------===//
1544 functionattrs doesn't know much about memcpy/memset. This function should be
1545 marked readnone rather than readonly, since it only twiddles local memory, but
1546 functionattrs doesn't handle memset/memcpy/memmove aggressively:
1548 struct X { int *p; int *q; };
1555 p = __builtin_memcpy (&x, &y, sizeof (int *));
1559 This can be seen at:
1560 $ clang t.c -S -o - -mkernel -O0 -emit-llvm | opt -functionattrs -S
1563 //===---------------------------------------------------------------------===//
1565 Missed instcombine transformation:
1566 define i1 @a(i32 %x) nounwind readnone {
1568 %cmp = icmp eq i32 %x, 30
1569 %sub = add i32 %x, -30
1570 %cmp2 = icmp ugt i32 %sub, 9
1571 %or = or i1 %cmp, %cmp2
1574 This should be optimized to a single compare. Testcase derived from gcc.
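Worked out by hand, the or of the two tests is true exactly when x is outside
[31, 39], so the single compare is one unsigned range check (a sketch, written
with an unsigned parameter):

int a_combined(unsigned x) {
  return (x - 31u) > 8u;   /* true iff x is not in [31, 39] */
}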
1576 //===---------------------------------------------------------------------===//
1578 Missed instcombine or reassociate transformation:
1579 int a(int a, int b) { return (a==12)&(b>47)&(b<58); }
1581 The sgt and slt should be combined into a single comparison. Testcase derived from gcc.
1584 //===---------------------------------------------------------------------===//
1586 Missed instcombine transformation:
1588 %382 = srem i32 %tmp14.i, 64 ; [#uses=1]
1589 %383 = zext i32 %382 to i64 ; [#uses=1]
1590 %384 = shl i64 %381, %383 ; [#uses=1]
1591 %385 = icmp slt i32 %tmp14.i, 64 ; [#uses=1]
1593 The srem can be transformed to an and because if %tmp14.i is negative, the
1594 shift is undefined. Testcase derived from 403.gcc.
1596 //===---------------------------------------------------------------------===//
1598 This is a range comparison on a divided result (from 403.gcc):
1600 %1337 = sdiv i32 %1336, 8 ; [#uses=1]
1601 %.off.i208 = add i32 %1336, 7 ; [#uses=1]
1602 %1338 = icmp ult i32 %.off.i208, 15 ; [#uses=1]
1604 We already catch this (removing the sdiv) if there isn't an add, we should
1605 handle the 'add' as well. This is a common idiom with its builtin_alloca code.
1608 int a(int x) { return (unsigned)(x/16+7) < 15; }
1610 Another similar case involves truncations on 64-bit targets:
1612 %361 = sdiv i64 %.046, 8 ; [#uses=1]
1613 %362 = trunc i64 %361 to i32 ; [#uses=2]
1615 %367 = icmp eq i32 %362, 0 ; [#uses=1]
1617 //===---------------------------------------------------------------------===//
1619 Missed instcombine/dagcombine transformation:
1620 define void @lshift_lt(i8 zeroext %a) nounwind {
1622 %conv = zext i8 %a to i32
1623 %shl = shl i32 %conv, 3
1624 %cmp = icmp ult i32 %shl, 33
1625 br i1 %cmp, label %if.then, label %if.end
1628 tail call void @bar() nounwind
1634 declare void @bar() nounwind
1636 The shift should be eliminated. Testcase derived from gcc.
1638 //===---------------------------------------------------------------------===//
1640 These compile into different code, one gets recognized as a switch and the
1641 other doesn't due to phase ordering issues (PR6212):
1643 int test1(int mainType, int subType) {
1646 else if (mainType == 9)
1648 else if (mainType == 11)
1653 int test2(int mainType, int subType) {
1663 //===---------------------------------------------------------------------===//
1665 The following test case (from PR6576):
1667 define i32 @mul(i32 %a, i32 %b) nounwind readnone {
1669 %cond1 = icmp eq i32 %b, 0 ; <i1> [#uses=1]
1670 br i1 %cond1, label %exit, label %bb.nph
1671 bb.nph: ; preds = %entry
1672 %tmp = mul i32 %b, %a ; <i32> [#uses=1]
1674 exit: ; preds = %entry
1678 could be reduced to:
1680 define i32 @mul(i32 %a, i32 %b) nounwind readnone {
1682 %tmp = mul i32 %b, %a
1686 //===---------------------------------------------------------------------===//
1688 We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.
1691 Another interesting case is that something related could be used for variables
1692 that go const after their ctor has finished. In these cases, globalopt (which
1693 can statically run the constructor) could mark the global const (so it gets put
1694 in the readonly section). A testcase would be:
1697 using namespace std;
1698 const complex<char> should_be_in_rodata (42,-42);
1699 complex<char> should_be_in_data (42,-42);
1700 complex<char> should_be_in_bss;
1702 Where we currently evaluate the ctors but the globals don't become const because
1703 the optimizer doesn't know they "become const" after the ctor is done. See
1704 GCC PR4131 for more examples.
1706 //===---------------------------------------------------------------------===//
1711 return x > 1 ? x : 1;
1714 LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
1715 and cheaper on most targets.
1717 LLVM prefers comparisons with zero over non-zero in general, but in this
1718 case it chooses instead to keep the max operation obvious.
1720 //===---------------------------------------------------------------------===//
1722 define void @a(i32 %x) nounwind {
1724 switch i32 %x, label %if.end [
1725 i32 0, label %if.then
1726 i32 1, label %if.then
1727 i32 2, label %if.then
1728 i32 3, label %if.then
1729 i32 5, label %if.then
1732 tail call void @foo() nounwind
1739 Generated code on x86-64 (other platforms give similar results):
1750 If we wanted to be really clever, we could simplify the whole thing to
1751 something like the following, which eliminates a branch:
1759 //===---------------------------------------------------------------------===//
1763 int foo(int a) { return (a & (~15)) / 16; }
1767 define i32 @foo(i32 %a) nounwind readnone ssp {
1769 %and = and i32 %a, -16
1770 %div = sdiv i32 %and, 16
1774 but this code (X & -A)/A is X >> log2(A) when A is a power of 2, so this case
1775 should be instcombined into just "a >> 4".
1777 We do get this at the codegen level, so something knows about it, but
1778 instcombine should catch it earlier:
1786 //===---------------------------------------------------------------------===//
1788 This code (from GCC PR28685):
1790 int test(int a, int b) {
1800 define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
1802 %cmp = icmp slt i32 %a, %b
1803 br i1 %cmp, label %return, label %if.end
1805 if.end: ; preds = %entry
1806 %cmp5 = icmp eq i32 %a, %b
1807 %conv6 = zext i1 %cmp5 to i32
1810 return: ; preds = %entry
1816 define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
1818 %0 = icmp sle i32 %a, %b
1819 %retval = zext i1 %0 to i32
1823 //===---------------------------------------------------------------------===//
1825 This code can be seen in viterbi:
1827 %64 = call noalias i8* @malloc(i64 %62) nounwind
1829 %67 = call i64 @llvm.objectsize.i64(i8* %64, i1 false) nounwind
1830 %68 = call i8* @__memset_chk(i8* %64, i32 0, i64 %62, i64 %67) nounwind
1832 llvm.objectsize.i64 should be taught about malloc/calloc, allowing it to
1833 fold to %62. This is a security win (overflows of malloc will get caught)
1834 and also a performance win by exposing more memsets to the optimizer.
1836 This occurs several times in viterbi.
1838 Note that this would change the semantics of @llvm.objectsize which by its
1839 current definition always folds to a constant. We also should make sure that
1840 we remove checking in code like
1842 char *p = malloc(strlen(s)+1);
1843 __strcpy_chk(p, s, __builtin_objectsize(p, 0));
1845 //===---------------------------------------------------------------------===//
1847 clang -O3 currently compiles this code
1849 int g(unsigned int a) {
1850 unsigned int c[100];
1853 unsigned int b = c[10] + c[11];
1861 define i32 @g(i32 a) nounwind readnone {
1862 %add = shl i32 %a, 1
1863 %mul = shl i32 %a, 1
1864 %cmp = icmp ugt i32 %add, %mul
1865 %a.addr.0 = select i1 %cmp, i32 11, i32 15
1869 The icmp should fold to false. This CSE opportunity is only available
1870 after GVN and InstCombine have run.
1872 //===---------------------------------------------------------------------===//
1874 memcpyopt should turn this:
1876 define i8* @test10(i32 %x) {
1877 %alloc = call noalias i8* @malloc(i32 %x) nounwind
1878 call void @llvm.memset.p0i8.i32(i8* %alloc, i8 0, i32 %x, i32 1, i1 false)
1882 into a call to calloc. We should make sure that we analyze calloc as
1883 aggressively as malloc though.
1885 //===---------------------------------------------------------------------===//
1887 clang -O3 doesn't optimize this:
1889 void f1(int* begin, int* end) {
1890 std::fill(begin, end, 0);
1893 into a memset. This is PR8942.
1895 //===---------------------------------------------------------------------===//
1897 clang -O3 -fno-exceptions currently compiles this code:
1900 std::vector<int> v(N);
1902 extern void sink(void*); sink(&v);
1907 define void @_Z1fi(i32 %N) nounwind {
1909 %v2 = alloca [3 x i32*], align 8
1910 %v2.sub = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 0
1911 %tmpcast = bitcast [3 x i32*]* %v2 to %"class.std::vector"*
1912 %conv = sext i32 %N to i64
1913 store i32* null, i32** %v2.sub, align 8, !tbaa !0
1914 %tmp3.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 1
1915 store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
1916 %tmp4.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 2
1917 store i32* null, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
1918 %cmp.i.i.i.i = icmp eq i32 %N, 0
1919 br i1 %cmp.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i, label %cond.true.i.i.i.i
1921 _ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i: ; preds = %entry
1922 store i32* null, i32** %v2.sub, align 8, !tbaa !0
1923 store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
1924 %add.ptr.i5.i.i = getelementptr inbounds i32* null, i64 %conv
1925 store i32* %add.ptr.i5.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
1926 br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
1928 cond.true.i.i.i.i: ; preds = %entry
1929 %cmp.i.i.i.i.i = icmp slt i32 %N, 0
1930 br i1 %cmp.i.i.i.i.i, label %if.then.i.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i
1932 if.then.i.i.i.i.i: ; preds = %cond.true.i.i.i.i
1933 call void @_ZSt17__throw_bad_allocv() noreturn nounwind
1936 _ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i: ; preds = %cond.true.i.i.i.i
1937 %mul.i.i.i.i.i = shl i64 %conv, 2
1938 %call3.i.i.i.i.i = call noalias i8* @_Znwm(i64 %mul.i.i.i.i.i) nounwind
1939 %0 = bitcast i8* %call3.i.i.i.i.i to i32*
1940 store i32* %0, i32** %v2.sub, align 8, !tbaa !0
1941 store i32* %0, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
1942 %add.ptr.i.i.i = getelementptr inbounds i32* %0, i64 %conv
1943 store i32* %add.ptr.i.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
1944 call void @llvm.memset.p0i8.i64(i8* %call3.i.i.i.i.i, i8 0, i64 %mul.i.i.i.i.i, i32 4, i1 false)
1945 br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
1947 This is just the handling of the construction of the vector. Most surprising here
1948 is the fact that all three null stores in %entry are dead (because we do no cross-block DSE).
1951 Also surprising is that %conv isn't simplified to 0 in %....exit.thread.i.i.
1952 This is because the client of LazyValueInfo doesn't simplify all instruction
1953 operands, just selected ones.
1955 //===---------------------------------------------------------------------===//
1957 clang -O3 -fno-exceptions currently compiles this code:
1959 void f(char* a, int n) {
1960 __builtin_memset(a, 0, n);
1961 for (int i = 0; i < n; ++i)
1967 define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
1969 %conv = sext i32 %n to i64
1970 tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
1971 %cmp8 = icmp sgt i32 %n, 0
1972 br i1 %cmp8, label %for.body.lr.ph, label %for.end
1974 for.body.lr.ph: ; preds = %entry
1975 %tmp10 = add i32 %n, -1
1976 %tmp11 = zext i32 %tmp10 to i64
1977 %tmp12 = add i64 %tmp11, 1
1978 call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %tmp12, i32 1, i1 false)
1981 for.end: ; preds = %entry
1985 This shouldn't need the ((zext (%n - 1)) + 1) game, and it should ideally fold
1986 the two memset's together.

The issue with the addition only occurs in 64-bit mode, and appears to be at
least partially caused by Scalar Evolution not keeping its cache updated: it
returns the "wrong" result immediately after indvars runs, but figures out the
expected result if it is run from scratch on IR resulting from running indvars.

//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

struct S {
  unsigned short m1, m2;
  unsigned char m3, m4;
};

void f(int N) {
  std::vector<S> v(N);
  extern void sink(void*); sink(&v);
}

into poor code for zero-initializing 'v' when N is >0. The problem is that
S is only 6 bytes, but each element is 8 byte-aligned. We generate a loop and
4 stores on each iteration. If the struct were 8 bytes, this gets turned into
a memset.

In order to handle this we have to:
  A) Teach clang to generate metadata for memsets of structs that have holes in
     them.
  B) Teach clang to use such a memset for zero init of this struct (since it has
     a hole), instead of doing elementwise zeroing (see the sketch below).
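
Concretely, with (A) and (B) the zero-init could come out as one call instead
of a four-store loop; a sketch, with invented value names:

  %size = mul i64 %count, 8        ; 8 == padded element stride claimed above
  call void @llvm.memset.p0i8.i64(i8* %buf, i8 0, i64 %size, i32 8, i1 false)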

//===---------------------------------------------------------------------===//

clang -O3 currently compiles this code:

extern const int magic;
double f() { return 0.0 * magic; }

into:

@magic = external constant i32

define double @_Z1fv() nounwind readnone {
entry:
  %tmp = load i32* @magic, align 4, !tbaa !0
  %conv = sitofp i32 %tmp to double
  %mul = fmul double %conv, 0.000000e+00
  ret double %mul
}

We should be able to fold away this fmul to 0.0. More generally, fmul(x,0.0)
can be folded to 0.0 if we can prove that the LHS is not -0.0, not a NaN, and
not an INF. The CannotBeNegativeZero predicate in value tracking should be
extended to support general "fpclassify" operations that can return
yes/no/unknown for each of these predicates.

With such a predicate, we know that uitofp is trivially never NaN or -0.0, and
we know that it isn't +/-Inf if the floating point type has enough exponent bits
to represent the largest integer value as < inf.
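
A sketch of the kind of tri-state interface this suggests; the names and shape
here are invented for illustration and are not the existing ValueTracking API:

// Hypothetical tri-state answers for "is this value ever -0.0 / NaN / Inf?".
enum FPAnswer { FP_No, FP_Yes, FP_Unknown };

struct FPClassification {
  FPAnswer IsNegZero;
  FPAnswer IsNaN;
  FPAnswer IsInf;
};

// Result of a uitofp from a SrcBits-wide integer to a float type with
// ExponentBits of exponent: never NaN, never -0.0, and never +/-Inf as long
// as every SrcBits-bit value is finite in that type (conservative check).
FPClassification classifyUIToFP(unsigned SrcBits, unsigned ExponentBits) {
  FPClassification R;
  R.IsNaN = FP_No;
  R.IsNegZero = FP_No;
  unsigned MaxExponent = (1u << (ExponentBits - 1)) - 1;   // IEEE emax
  R.IsInf = (SrcBits <= MaxExponent) ? FP_No : FP_Unknown;
  return R;
}

// e.g. classifyUIToFP(32, 8): any u32 is finite as a float, so IsInf == FP_No.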

//===---------------------------------------------------------------------===//

When optimizing a transformation that can change the sign of 0.0 (such as the
0.0*val -> 0.0 transformation above), it might be provable that the sign of the
expression doesn't matter. For example, by the above rules, we can't transform
fmul(sitofp(x), 0.0) into 0.0, because x might be -1 and the result of the
expression is defined to be -0.0.

If we look at the uses of the fmul, however, we might be able to prove that
none of them cares about the sign of zero. For example, if we have:

  fadd(fmul(sitofp(x), 0.0), 2.0)

Since we know that x+2.0 doesn't care about the sign of any zeros in x, we can
transform the fmul to 0.0, and then the fadd to 2.0.
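
In IR terms, the idea is roughly:

  %conv = sitofp i32 %x to double
  %mul = fmul double %conv, 0.000000e+00   ; may be -0.0 when %x is negative
  %add = fadd double %mul, 2.000000e+00    ; but the fadd hides the sign of a zero,
                                           ; so %mul can become 0.0 and %add becomes 2.0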

//===---------------------------------------------------------------------===//

We should enhance memcpy/memmove/memset to allow a metadata node on them
indicating that some bytes of the transfer are undefined. This is useful for
frontends like clang when lowering struct copies, when some elements of the
struct are undefined. Consider something like this:

struct x {
  char a;
  int b[4];
};
void foo(struct x*P);
struct x testfunc() {
  struct x V1, V2;
  foo(&V1);
  V2 = V1;
  return V2;
}

We currently compile this to:

$ clang t.c -S -o - -O0 -emit-llvm | opt -scalarrepl -S

%struct.x = type { i8, [4 x i32] }

define void @testfunc(%struct.x* sret %agg.result) nounwind ssp {
entry:
  %V1 = alloca %struct.x, align 4
  call void @foo(%struct.x* %V1)
  %tmp1 = bitcast %struct.x* %V1 to i8*
  %0 = bitcast %struct.x* %V1 to i160*
  %srcval1 = load i160* %0, align 4
  %tmp2 = bitcast %struct.x* %agg.result to i8*
  %1 = bitcast %struct.x* %agg.result to i160*
  store i160 %srcval1, i160* %1, align 4
  ret void
}

This happens because SRoA sees that the temp alloca is being memcpy'd into and
out of, and since it has holes it has to be conservative. If we knew about the
holes, then this could be much, much better.

Having information about these holes would also improve memcpy (etc) lowering at
llc time when it gets inlined, because we can use smaller transfers. This also
avoids partial register stalls in some important cases.
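
A sketch of what such an annotation might look like on the struct copy above;
the metadata kind and encoding here are invented purely for illustration,
nothing like this exists today:

  ; Bytes [1,4) of the transfer are struct padding and need not be preserved.
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp2, i8* %tmp1, i64 20, i32 4, i1 false),
       !undef.bytes !42

!42 = metadata !{i64 1, i64 3}     ; offset 1, length 3: the hole after the i8 field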

//===---------------------------------------------------------------------===//

We don't fold (icmp (add) (add)) unless the two adds only have a single use.
There are a lot of cases that we're refusing to fold in (e.g.) 256.bzip2, for
example:

  %indvar.next90 = add i64 %indvar89, 1     ;; Has 2 uses
  %tmp96 = add i64 %tmp95, 1                ;; Has 1 use
  %exitcond97 = icmp eq i64 %indvar.next90, %tmp96
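
For reference, the fold in question would turn that comparison into (ignoring
the use-count restriction):

  %exitcond97 = icmp eq i64 %indvar89, %tmp95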

We don't fold this because we don't want to introduce an overlapped live range
of the ivar. However, we could make this more aggressive without causing
performance issues in two ways:

1. If *either* the LHS or RHS has a single use, we can definitely do the
   transformation. In the overlapping liverange case we're trading one register
   use for one fewer operation, which is a reasonable trade. Before doing this
   we should verify that the llc output actually shrinks for some benchmarks.
2. If both ops have multiple uses, we can still fold it if the operations are
   both sinkable to *after* the icmp (e.g. in a subsequent block) which doesn't
   increase register pressure.

There are a ton of icmps we aren't simplifying because of the reg pressure
concern. Care is warranted here though because many of these are induction
variables and other cases that matter a lot to performance, like the above.
Here's a blob of code that you can drop into the bottom of visitICmp to see some
of the cases we currently miss:

  { Value *A, *B, *C, *D;
    if (match(Op0, m_Add(m_Value(A), m_Value(B))) &&
        match(Op1, m_Add(m_Value(C), m_Value(D))) &&
        (A == C || A == D || B == C || B == D)) {
      errs() << "OP0 = " << *Op0 << " U=" << Op0->getNumUses() << "\n";
      errs() << "OP1 = " << *Op1 << " U=" << Op1->getNumUses() << "\n";
      errs() << "CMP = " << I << "\n\n";
    }
  }

//===---------------------------------------------------------------------===//

define i1 @test1(i32 %x) nounwind {
  %and = and i32 %x, 3
  %cmp = icmp ult i32 %and, 2
  ret i1 %cmp
}

Can be folded to (x & 2) == 0.

define i1 @test2(i32 %x) nounwind {
  %and = and i32 %x, 3
  %cmp = icmp ugt i32 %and, 1
  ret i1 %cmp
}

Can be folded to (x & 2) != 0.

SimplifyDemandedBits shrinks the "and" constant to 2 but instcombine misses the
icmp transforms.
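
For test1, the expected result is something like:

define i1 @test1(i32 %x) nounwind {
  %and = and i32 %x, 2
  %cmp = icmp eq i32 %and, 0
  ret i1 %cmp
}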

//===---------------------------------------------------------------------===//
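
A bitfield copy along these lines produces it (the field names and widths here
are guesses, chosen to be consistent with the IR that follows):

struct t1 {
  int f1 : 1;
  int f2 : 1;
};
struct t1 s1, s2;

void func1(void) {
  s1.f1 = s2.f1;
  s1.f2 = s2.f2;
}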

Compiles into this IR (on x86-64 at least):

%struct.t1 = type { i8, [3 x i8] }
@s2 = global %struct.t1 zeroinitializer, align 4
@s1 = global %struct.t1 zeroinitializer, align 4

define void @func1() nounwind ssp noredzone {
entry:
  %0 = load i32* bitcast (%struct.t1* @s2 to i32*), align 4
  %bf.val.sext5 = and i32 %0, 1
  %1 = load i32* bitcast (%struct.t1* @s1 to i32*), align 4
  %2 = and i32 %1, -4
  %3 = or i32 %2, %bf.val.sext5
  %bf.val.sext26 = and i32 %0, 2
  %4 = or i32 %3, %bf.val.sext26
  store i32 %4, i32* bitcast (%struct.t1* @s1 to i32*), align 4
  ret void
}

The two or/and's should be merged into one each.
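
That is, the two masks of %0 and the two ors could become (a sketch):

  %bf.val.both = and i32 %0, 3
  %4 = or i32 %2, %bf.val.both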

//===---------------------------------------------------------------------===//

Machine level code hoisting can be useful in some cases. For example, PR9408
is about code roughly like this (see the bug for the exact testcase):

typedef union {
  void (*f1)(int);
  void (*f2)(long);
} funcs;

void foo(funcs f, int which) {
  if (which)
    f.f1(which);
  else
    f.f2(which);
}

We compile this to machine code in which the two branches, bb1 and bb2, are
identical. This doesn't happen at the IR level because one call is passing an
i32 and the other is passing an i64.

//===---------------------------------------------------------------------===//

I see this sort of pattern in 176.gcc in a few places (e.g. the start of
store_bit_field). The rem should be replaced with a multiply and subtract:

  %3 = sdiv i32 %A, %B
  %4 = srem i32 %A, %B
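
That is, reusing the sdiv already present (a sketch):

  %3 = sdiv i32 %A, %B
  %mul = mul i32 %3, %B
  %4 = sub i32 %A, %mul        ; %A - (%A/%B)*%B == %A srem %B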

Similarly for udiv/urem. Note that this shouldn't be done on X86 or ARM,
which can do this in a single operation (instruction or libcall). It is
probably best to do this in the code generator.

//===---------------------------------------------------------------------===//

unsigned foo(unsigned x, unsigned y) { return (x & y) == 0 || x == 0; }
should fold to (x & y) == 0.

//===---------------------------------------------------------------------===//

unsigned foo(unsigned x, unsigned y) { return x > y && x != 0; }
should fold to x > y.

//===---------------------------------------------------------------------===//