Target Independent Opportunities:

//===---------------------------------------------------------------------===//

Dead argument elimination should be enhanced to handle cases when an argument
is dead to an externally visible function. Though the argument can't be
removed from the externally visible function, the caller doesn't need to pass
it in. For example, in this testcase:

  void foo(int X) __attribute__((noinline));
  void foo(int X) { sideeffect(); }
  void bar(int A) { foo(A+1); }

we compile bar to:

define void @bar(i32 %A) nounwind ssp {
  %0 = add nsw i32 %A, 1		; <i32> [#uses=1]
  tail call void @foo(i32 %0) nounwind noinline ssp
  ret void
}

The add is dead; we could pass in 'i32 undef' instead. This occurs for C++
templates etc., which usually have linkonce_odr/weak_odr linkage, not
internal linkage.
//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call'
instructions so that the .td files don't list all the call-clobbered
registers as implicit defs. Instead, these should be added by the code
generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call
   instructions for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different
   clobber sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber
   sets of calls.
//===---------------------------------------------------------------------===//

We should recognize various "overflow detection" idioms and translate them
into llvm.uadd.with.overflow and similar intrinsics. Here is a multiply
idiom:

unsigned int mul(unsigned int a, unsigned int b) {
 if ((unsigned long long)a*b > 0xffffffff)
   exit(0);
 return a*b;
}

The legalization code for mul-with-overflow needs to be made more robust
before this can be implemented, though.
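For reference, Clang and GCC already expose this pattern directly through a
checked-arithmetic builtin; a sketch of what the recognized form amounts to
in C (__builtin_umul_overflow is the real builtin, the rest is illustrative):

#include <stdlib.h>

unsigned int mul_checked(unsigned int a, unsigned int b) {
  unsigned int r;
  if (__builtin_umul_overflow(a, b, &r))  /* maps to llvm.umul.with.overflow */
    exit(0);
  return r;
}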
//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This
isn't safe in general, even on darwin. See the libm implementation of hypot
for examples (which special case when x/y are exactly zero to get signed
zeros etc. right).
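A minimal sketch of the expansion, assuming -ffast-math semantics (the naive
form can overflow or underflow in x*x+y*y where a careful hypot would not):

#include <math.h>

double fast_hypot(double x, double y) {
  return sqrt(x*x + y*y);   /* fine under -ffast-math; not IEEE-careful */
}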
//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.
//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

That is, when only the sign of a loaded value is tested, it suffices to load
and test just the byte holding the sign bit.
//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
 return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second the
code generator needs to be enhanced to lower these to multiplication trees.
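As a sketch of the payoff, a balanced lowering of powi(X, 8) needs three
multiplies instead of the seven written in the source:

int pow8(int X) {
  int X2 = X * X;     /* X^2 */
  int X4 = X2 * X2;   /* X^4 */
  return X4 * X4;     /* X^8: 3 multiplies instead of 7 */
}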
//===---------------------------------------------------------------------===//

Interesting(?) testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The
issue is that we end up getting t = 2*X, s = t*t, and don't turn this into
4*X*X, which is the same number of multiplies and is canonical, because the
2*X has multiple uses. Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47   ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}
//===---------------------------------------------------------------------===//

Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
        b4 = a4 + a3 + a2 + a1 + a0;
        b3 = a3 + a2 + a1 + a0;
        b2 = a2 + a1 + a0;
        b1 = a1 + a0;
}

This requires reassociating to forms of expressions that are already
available, something that reassoc doesn't think about yet.
//===---------------------------------------------------------------------===//

This function: (derived from GCC PR19988)

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles into code with two loads of the constant and two multiplies:

	mulsd	LCPI1_1(%rip), %xmm1
	mulsd	LCPI1_0(%rip), %xmm2

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

which allows the multiply by constant to be CSE'd, producing:

	mulsd	LCPI1_0(%rip), %xmm1

This doesn't need -ffast-math support at all. This is particularly bad
because the llvm-gcc frontend is canonicalizing the latter into the former,
but clang doesn't have this problem.
//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.
//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
cases too.
//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is
equal to the type size. It works but can be overly conservative as the
alignment of specific vector types is target dependent.
//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

typedef float v4sf __attribute__((vector_size(16)));
v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}
//===---------------------------------------------------------------------===//

Add support for conditional increments and other related patterns. Instead
of a compare and branch around the increment:

	je	LBB16_2	#cond_next

the increment should be done with a branchless setcc/cmov-style sequence.
//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.
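A sketch of the combine in C (sincos is the GNU libc extension; the function
names here are illustrative):

#include <math.h>

extern void sincos(double x, double *sin, double *cos);

/* before: two libcalls ... */
void polar_a(double t, double *s, double *c) { *s = sin(t); *c = cos(t); }

/* ... after: one */
void polar_b(double t, double *s, double *c) { sincos(t, s, c); }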
//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be
just so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32)
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   else
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias
reg->node[i], but that's a separate alias-analysis problem.
//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}
//===---------------------------------------------------------------------===//

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

typedef unsigned long long BITBOARD;
int PopCnt(register BITBOARD a) {
  register int c = 0;
  while (a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This sort of thing should be added to the loop idiom pass.
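For reference, what the loop idiom pass would replace each of these with,
expressed via the builtin that lowers to llvm.ctpop (the function name is
illustrative):

unsigned countbits(unsigned v) {
  return __builtin_popcount(v);   /* lowers to llvm.ctpop.i32 */
}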
//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X / C1 ), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples
of this construct.
//===---------------------------------------------------------------------===//

SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization and
loop unrolling (it has a constant trip count). ICC apparently produces this
very nice code with -ffast-math:

..B1.70:                        # Preds ..B1.70 ..B1.69
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       cmpl      $131072, %edx                                 #108.2
       jb        ..B1.70       # Prob 99%                      #108.2

It would be better to count down to zero, but this is a lot better than what
we do.
//===---------------------------------------------------------------------===//

Consider this testcase:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit
systems, we don't eliminate the computation of the top half of
effective_addr2 because we don't have whole-function selection dags. On x86,
this means we use one extra register for the function when effective_addr2
is declared as U64 than when it is declared U32.

PHI Slicing could be extended to do this.
//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has from TargetData. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two near identical IV's (after promotion) on PPC/ARM:

	add r2, r2, #1  <- [0,+,1]
	sub r0, r0, #1  <- [0,-,1]

LSR should reuse the "+" IV for the exit test.
//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                            [ %tmp.9, %then.1 ]
	ret i32 %result.0
}
//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
 if (n == 0)
   return 0;
 return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.
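A hand-derived sketch of the accumulator form the transformation would
produce for pow2m1 (names illustrative): the pending "2*result + 1" context
is threaded through the loop as it would be for any associative operator.

int pow2m1_iter(int n) {
  int mul_acc = 1, add_acc = 0;   /* pending "mul_acc * result + add_acc" */
  while (n != 0) {
    add_acc += mul_acc;           /* fold in the "+ 1" */
    mul_acc *= 2;                 /* fold in the "2 *" (a shift) */
    n = n - 1;
  }
  return add_acc;                 /* base case contributes 0 */
}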
//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}
//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass. Consider this silly
example in pic mode, where a function asserts on its argument, so the entry
block tests the argument and conditionally branches:

	je	LBB1_2	# cond_true

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.
//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html
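A sketch of the idea, with made-up case keys {4, 9, 18, 1027} chosen so that
(x & 3) happens to be a collision-free ("perfect") hash over them; a real
implementation would derive the hash function from the case set:

static int action0(void) { return 0; }
static int action1(void) { return 1; }
static int action2(void) { return 2; }
static int action3(void) { return 3; }

int lower_switch(unsigned x) {
  static const unsigned keys[4] = {4, 9, 18, 1027};
  static int (*const targets[4])(void) = {action0, action1, action2, action3};
  unsigned h = x & 3;          /* perfect hash over the case keys */
  if (keys[h] == x)            /* verify: x may match no case at all */
    return targets[h]();
  return -1;                   /* default case */
}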
//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop:

double a[256];
for (b = 0; b < 10000000; b++)
  for (i = 0; i < 256; i++)
    a[i] = -a[i];

is twice as slow as this loop:

long long a[256];
for (b = 0; b < 10000000; b++)
  for (i = 0; i < 256; i++)
    a[i] ^= (1ULL << 63);

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.
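A sketch of the integer form of load+fabs+store, assuming IEEE-754 doubles
(memcpy is the portable way to move the bits; the name is illustrative):

#include <stdint.h>
#include <string.h>

void fabs_inplace(double *p) {
  uint64_t bits;
  memcpy(&bits, p, sizeof bits);   /* integer load  */
  bits &= ~(1ULL << 63);           /* fabs = clear the sign bit */
  memcpy(p, &bits, sizeof bits);   /* integer store */
}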
//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-m64 -O3 -fno-exceptions -static -fomit-frame-pointer):

__Z9GetHotKeyv:                         ## @_Z9GetHotKeyv
	movq	_m_HotKey@GOTPCREL(%rip), %rax
	...
//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.
//===---------------------------------------------------------------------===//

Consider:

long long input[8] = {1,0,1,0,1,0,1,0};

Clang compiles this into:

  call void @llvm.memset.p0i8.i64(i8* %tmp, i8 0, i64 64, i32 16, i1 false)
  %0 = getelementptr [8 x i64]* %input, i64 0, i64 0
  store i64 1, i64* %0, align 16
  %1 = getelementptr [8 x i64]* %input, i64 0, i64 2
  store i64 1, i64* %1, align 16
  %2 = getelementptr [8 x i64]* %input, i64 0, i64 4
  store i64 1, i64* %2, align 16
  %3 = getelementptr [8 x i64]* %input, i64 0, i64 6
  store i64 1, i64* %3, align 16

Which gets codegen'd into:

	movaps	%xmm0, -16(%rbp)
	movaps	%xmm0, -32(%rbp)
	movaps	%xmm0, -48(%rbp)
	movaps	%xmm0, -64(%rbp)

followed by the four stores of 1. It would be better to have 4 movq's of 0
instead of the movaps's.
//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The code in the PR should compile into "ret int undef"; instead, LLVM
produces "ret int 0".
//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows
simplification of some code within the loop. One trivial example is a loop
of the form:

for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
  ...   /* body branches on nLoop & 1 */
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.
//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc,
x86, etc. On X86, we miss a bunch of 'rotate by variable' cases because the
rotate matching code in dag combine doesn't look through truncates
aggressively enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}
//===---------------------------------------------------------------------===//

This (and similar related idioms):

unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:

define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %shl5 = shl i32 %conv, 16
  %shl9 = shl i32 %conv, 24
  %or = or i32 %shl9, %conv
  %or6 = or i32 %or, %shl5
  %or10 = or i32 %or6, %shl
  ret i32 %or10
}

it would be better as:

unsigned int bar(unsigned char i) {
  unsigned int j=i | (i << 8);
  return j | (j<<16);
}

aka:

define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %or = or i32 %shl, %conv
  %shl5 = shl i32 %or, 16
  %or6 = or i32 %shl5, %or
  ret i32 %or6
}

or even i*0x01010101, depending on the speed of the multiplier. The best way
to handle this is to canonicalize it to a multiply in IR and have codegen
handle lowering multiplies to shifts on cpus where shifts are faster.
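The multiply form, for reference: multiplying an 8-bit value by 0x01010101
replicates the byte into all four lanes, which is exactly what foo and bar
compute.

unsigned int baz(unsigned char i) {
  return i * 0x01010101u;   /* byte splat: same value as foo()/bar() above */
}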
//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)+1) -> strcpy(a,b) (the
+1 covers the nul terminator that strcpy also copies). This can only be done
safely if "b" isn't modified between the strlen and memcpy of course.
//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).
//===---------------------------------------------------------------------===//

We miss some instcombines for stuff like this:

void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2).  */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.
//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul-lo (cheaper). Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:

void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare
RHS by the same amount. Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
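A quick sanity check of the odd case in C, using the constants from the
text: 3 * 2863311531 = 8589934593 = 2*2^32 + 1, so the multiply exactly
inverts division by 3 modulo 2^32.

int divisible_by_3(unsigned n) {
  /* n % 3 == 0  <=>  n * 2863311531U < 1431655766U  (mod 2^32 arithmetic) */
  return n * 2863311531U < 1431655766U;
}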
//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and
a bunch of other stuff from this example (see PR1604):

    std::scanf("%d", &t.val);
    std::printf("%d\n", t.val);
//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different
assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}
//===---------------------------------------------------------------------===//

This:

int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}

and this:

int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

#define PMD_MASK    (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
        if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
                f();
}

The expression should optimize to something like
"!((start|end) & ~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing. Currently, the first function
produces better code on X86.
//===---------------------------------------------------------------------===//

#define abs(x) x>0?x:-x
int f(int x) {
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN (with -fwrapv; INT_MIN is the only
value whose negation stays negative). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

void
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

void
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem == 5)
    bar ();
}

void
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  if ((a & ~7) > 8)
    bar ();
}

void
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".
//===---------------------------------------------------------------------===//

int c(int* x) {return (char*)x+2 == (char*)x;}

Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}

Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}

Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}

Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}

Should fold to "a|~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}

Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}

Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}

Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}

Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}

Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}

Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}

Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}

There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}

Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
        (!tmp && decl_context == 1)

which, since tmp implies decl_context != 1, is just (decl_context == 1).
This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0		; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true		; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24		; <i1> [#uses=1]
//===---------------------------------------------------------------------===//

Store sinking: This code:

void f (int n, int *cond, int *res) {
 int i;
 *res = 0;
 for (i = 0; i < n; i++)
   if (*cond)
     *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out. This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395
//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.
//===---------------------------------------------------------------------===//

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll),
but we don't sink the store. We need partially dead store sinking.
//===---------------------------------------------------------------------===//

[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some
crazy symbolic phi translation. The code we get looks like (g is on the
stack):

bb2:
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is partially redundant, and in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.
//===---------------------------------------------------------------------===//

[LOAD PRE]

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in
the GCC testsuite; ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]
loadpre5.c

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c
//===---------------------------------------------------------------------===//

[LOAD PRE / STORE SINKING / SPEC HACK]

This is a chunk of code from 456.hmmer:

int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
      int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc;
  for (k = 1; k <= M; k++) {
    mc[k] = mpp[k-1] + tpmm[k-1];
    if ((sc = ip[k-1] + tpim[k-1]) > mc[k]) mc[k] = sc;
    if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k]) mc[k] = sc;
    if ((sc = xmb + bp[k]) > mc[k]) mc[k] = sc;
    mc[k] += ms[k];
  }
}

It is very profitable for this benchmark to turn the conditional stores to
mc[k] into a conditional move (select instr in IR) and allow the final store
to do the store. See GCC PR27313 for more details. Note that this is valid
to xform even with the new C++ memory model, since mc[k] is previously
loaded and later stored.
//===---------------------------------------------------------------------===//

There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//
There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in
the GCC testsuite. For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//
Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign. At the least it should
no-capture its pointer argument; at best, we should know that the out-value
result doesn't point to anything (like malloc). One example of this is in
SingleSource/Benchmarks/Misc/dt.c

//===---------------------------------------------------------------------===//
A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store. This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

struct X { int i; };
...

//===---------------------------------------------------------------------===//
Interesting missed case because of control flow flattening (should be 2
loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis

we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, loading from null is
undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  ...
}

//===---------------------------------------------------------------------===//
simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character. See PR3253 for some notes
on this.

456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//
"gas" uses this idiom:
  else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
    ...
  else if (strchr ("<>", *intel_parser.op_string))
    ...

Those should be turned into a switch.

//===---------------------------------------------------------------------===//
252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple reasons. First, the strlen of the memcpy
destination can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the destination was just copied into the specified memory buffer.
This, in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

which could also be constant folded. Whatever is producing this should
probably be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:		; preds = %_ZN18eonImageCalculatorC1Ev.exit
	%682 = getelementptr i8** %argv, i32 6		; <i8**> [#uses=2]
	%683 = load i8** %682, align 4		; <i8*> [#uses=4]
	%684 = load i8* %683, align 1		; <i8> [#uses=1]
	%685 = icmp eq i8 %684, 0		; <i1> [#uses=1]
	br i1 %685, label %bb10, label %bb9

bb9:		; preds = %bb8
	%686 = call i32 @strlen(i8* %683) nounwind readonly
	%687 = icmp ugt i32 %686, 254		; <i1> [#uses=1]
	br i1 %687, label %bb10, label %bb11

bb10:		; preds = %bb9, %bb8
	%688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size
and improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//
I see an interesting fully redundant call to strlen left in
186.crafty:InputMove:

%movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:		; preds = %bb55, %bb53
	%promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
	%171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
	%172 = add i32 %171, -1		; <i32> [#uses=1]
	%173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
	...
	br i1 %or.cond, label %bb65, label %bb72

bb65:		; preds = %bb62
	store i8 0, i8* %173, align 1
	...

bb72:		; preds = %bb65, %bb62
	%trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
	%177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call. At worst, we could shove the %177 strlen call
up into the bb65 block, moving it out of the bb62->bb72 path. However, note
that bb65 stores to the string, zeroing out the last byte. This means that
on that path the value of %177 is actually just %171-1. A sub is cheaper
than a strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//
186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
       i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call i32 @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out". Once all the
printfs stop using "out", all that is left is the memcpy's into it. This
should allow globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//
This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift, which would
improve on the x86 code we currently generate.

//===---------------------------------------------------------------------===//
int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants (1-x == x means 2x == 1, which no integer
satisfies); what is the rule for even?

//===---------------------------------------------------------------------===//
PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a
store to the float directly.

//===---------------------------------------------------------------------===//
The arg promotion pass should make use of nocapture to make its alias
analysis stuff much more precise.

//===---------------------------------------------------------------------===//
The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//
int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128		; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0		; <i1> [#uses=1]
  %2 = or i32 %b, 128		; <i32> [#uses=1]
  %3 = and i32 %b, -129		; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2		; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

         b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129		; <i32> [#uses=1]
  %1 = and i32 %a, 128		; <i32> [#uses=1]
  %2 = or i32 %0, %1		; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

     b = (b & ~0x80) | (a & 0x40) << 1;

//===---------------------------------------------------------------------===//
These two functions produce different code. They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63		; <i8> [#uses=1]
  %1 = and i8 %a, -64		; <i8> [#uses=1]
  %2 = or i8 %1, %0		; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63		; <i8> [#uses=1]
  %.masked = and i8 %a, 64		; <i8> [#uses=1]
  %1 = and i8 %a, -128		; <i8> [#uses=1]
  %2 = or i8 %1, %0		; <i8> [#uses=1]
  %3 = or i8 %2, %.masked		; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//
IPSCCP does not currently propagate argument dependent constants through
functions where it cannot see all of the callers. This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  %1 = mul i32 %0, %x
  %2 = mul i32 %y, %z
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases
like this, where all of the arguments to a call are constant. Because IPSCCP
runs before inlining, trivial templates and inline functions are not yet
inlined. The results for a function + set of constant arguments should be
memoized in a map.

//===---------------------------------------------------------------------===//
The libcall constant folding stuff should be moved out of SimplifyLibcalls
into libanalysis' constantfolding logic. This would allow IPSCCP to be able
to handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//
InstCombine should use SimplifyDemandedBits to remove the or instruction:

define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, 1
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}

Currently instcombine calls SimplifyDemandedBits with either all bits or just
the sign bit, if the comparison is obviously a sign test. In this case, we
only need all but the bottom two bits from %A, and if we gave that mask to
SDB it would delete the or instruction for us.

//===---------------------------------------------------------------------===//
functionattrs doesn't know much about memcpy/memset. This function should be
marked readnone rather than readonly, since it only twiddles local memory,
but functionattrs doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
int foo() {
  int i = 0, j = 1;
  struct X x, y;
  int **p;
  y.p = &i;
  x.q = &j;
  p = __builtin_memcpy (&x, &y, sizeof (int *));
  return **p;
}

This can be seen at:
$ clang t.c -S -o - -mkernel -O0 -emit-llvm | opt -functionattrs -S

//===---------------------------------------------------------------------===//
Missed instcombine transformation:

define i1 @a(i32 %x) nounwind readnone {
entry:
  %cmp = icmp eq i32 %x, 30
  %sub = add i32 %x, -30
  %cmp2 = icmp ugt i32 %sub, 9
  %or = or i1 %cmp, %cmp2
  ret i1 %or
}

This should be optimized to a single compare. Testcase derived from gcc.

//===---------------------------------------------------------------------===//
Missed instcombine or reassociate transformation:
int a(int a, int b) { return (a==12)&(b>47)&(b<58); }

The sgt and slt should be combined into a single comparison. Testcase
derived from gcc.

//===---------------------------------------------------------------------===//
Missed instcombine transformation:

  %382 = srem i32 %tmp14.i, 64		; [#uses=1]
  %383 = zext i32 %382 to i64		; [#uses=1]
  %384 = shl i64 %381, %383		; [#uses=1]
  %385 = icmp slt i32 %tmp14.i, 64		; [#uses=1]

The srem can be transformed to an and because if %tmp14.i is negative, the
shift is undefined. Testcase derived from 403.gcc.

//===---------------------------------------------------------------------===//
This is a range comparison on a divided result (from 403.gcc):

  %1337 = sdiv i32 %1336, 8		; [#uses=1]
  %.off.i208 = add i32 %1336, 7		; [#uses=1]
  %1338 = icmp ult i32 %.off.i208, 15		; [#uses=1]

We already catch this (removing the sdiv) if there isn't an add; we should
handle the 'add' as well. This is a common idiom with its builtin_alloca
code:

int a(int x) { return (unsigned)(x/16+7) < 15; }

Another similar case involves truncations on 64-bit targets:

  %361 = sdiv i64 %.046, 8		; [#uses=1]
  %362 = trunc i64 %361 to i32		; [#uses=2]
  ...
  %367 = icmp eq i32 %362, 0		; [#uses=1]

//===---------------------------------------------------------------------===//
Missed instcombine/dagcombine transformation:

define void @lshift_lt(i8 zeroext %a) nounwind {
entry:
  %conv = zext i8 %a to i32
  %shl = shl i32 %conv, 3
  %cmp = icmp ult i32 %shl, 33
  br i1 %cmp, label %if.then, label %if.end

if.then:
  tail call void @bar() nounwind
  ret void

if.end:
  ret void
}
declare void @bar() nounwind

The shift should be eliminated: the zext guarantees the shift cannot wrap,
so (%conv << 3) ult 33 is just %conv ult 5. Testcase derived from gcc.

//===---------------------------------------------------------------------===//
These compile into different code: one gets recognized as a switch and the
other doesn't, due to phase ordering issues (PR6212):

int test1(int mainType, int subType) {
  if (mainType == 7)
    ...
  else if (mainType == 9)
    ...
  else if (mainType == 11)
    ...
  return subType;
}

int test2(int mainType, int subType) {
  if (mainType == 7)
    ...
  if (mainType == 9)
    ...
  if (mainType == 11)
    ...
  return subType;
}

//===---------------------------------------------------------------------===//
The following test case (from PR6576):

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
 %cond1 = icmp eq i32 %b, 0		; <i1> [#uses=1]
 br i1 %cond1, label %exit, label %bb.nph
bb.nph:		; preds = %entry
 %tmp = mul i32 %b, %a		; <i32> [#uses=1]
 ret i32 %tmp
exit:		; preds = %entry
 ret i32 0
}

could be reduced to:

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
 %tmp = mul i32 %b, %a
 ret i32 %tmp
}

//===---------------------------------------------------------------------===//
We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.

Another interesting case is that something related could be used for
variables that go const after their ctor has finished. In these cases,
globalopt (which can statically run the constructor) could mark the global
const (so it gets put in the readonly section). A testcase would be:

#include <complex>
using namespace std;
const complex<char> should_be_in_rodata (42,-42);
complex<char> should_be_in_data (42,-42);
complex<char> should_be_in_bss;

Where we currently evaluate the ctors but the globals don't become const
because the optimizer doesn't know they "become const" after the ctor is
done. See GCC PR4131 for more examples.

//===---------------------------------------------------------------------===//
For this:

int test(int x) {
  return x > 1 ? x : 1;
}

LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
and cheaper on most targets.

LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.

//===---------------------------------------------------------------------===//
Take the following testcase on x86-64 (similar testcases exist for all
targets with addc/adde):

define void @a(i64* nocapture %s, i64* nocapture %t, i64 %a, i64 %b,
               i64 %c) nounwind {
entry:
 %0 = zext i64 %a to i128		; <i128> [#uses=1]
 %1 = zext i64 %b to i128		; <i128> [#uses=1]
 %2 = add i128 %1, %0		; <i128> [#uses=2]
 %3 = zext i64 %c to i128		; <i128> [#uses=1]
 %4 = shl i128 %3, 64		; <i128> [#uses=1]
 %5 = add i128 %4, %2		; <i128> [#uses=1]
 %6 = lshr i128 %5, 64		; <i128> [#uses=1]
 %7 = trunc i128 %6 to i64		; <i64> [#uses=1]
 store i64 %7, i64* %s, align 8
 %8 = trunc i128 %2 to i64		; <i64> [#uses=1]
 store i64 %8, i64* %t, align 8
 ret void
}

The generated SelectionDAG has an ADD of an ADDE, where both operands of the
ADDE are zero. Replacing one of the operands of the ADDE with the other
operand of the ADD, and replacing the ADD with the ADDE, should give the
desired result.

(That said, we are doing a lot better than gcc on this testcase. :) )

//===---------------------------------------------------------------------===//
Switch lowering generates less than ideal code for the following switch:

define void @a(i32 %x) nounwind {
entry:
  switch i32 %x, label %if.end [
    i32 0, label %if.then
    i32 1, label %if.then
    i32 2, label %if.then
    i32 3, label %if.then
    i32 5, label %if.then
  ]
if.then:
  tail call void @foo() nounwind
  ret void
if.end:
  ret void
}
declare void @foo()

Generated code on x86-64 (other platforms give similar results) does a range
check followed by a movl+movl+btq+jb bit-test sequence; the
movl+movl+btq+jb could be simplified to a cmpl+jne.

Or, if we wanted to be really clever, we could simplify the whole thing into
a form that eliminates a branch.
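A C-level sketch of the bit-test idea (hand-derived, not LLVM's actual
output): the cases {0,1,2,3,5} form the bitmask 0b101111 = 47, so the whole
switch is one range check plus one bit test.

extern void foo(void);

void a_lowered(int x) {
  if ((unsigned)x < 6 && ((47u >> x) & 1))   /* bits 0-3 and 5 set in 47 */
    foo();
}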
//===---------------------------------------------------------------------===//

Given a branch where the two target blocks are identical ("ret i32 %b" in
both), simplifycfg will simplify them away. But not so for a switch
statement:

define i32 @f(i32 %a, i32 %b) nounwind readnone {
entry:
	switch i32 %a, label %bb3 [
		...
	]

bb:		; preds = %entry, %entry
	ret i32 %b

bb3:		; preds = %entry
	ret i32 %b
}

//===---------------------------------------------------------------------===//
clang -O3 fails to devirtualize this virtual inheritance case (GCC PR45875);
looks related to PR3100:

struct c1 {};
struct c10 : c1 {
  virtual void foo ();
};
struct c11 : c10, c1 {
  ...
};
struct c28 : virtual c11 {
  ...
};

//===---------------------------------------------------------------------===//
We compile this:

int foo(int a) { return (a & (~15)) / 16; }

into:

define i32 @foo(i32 %a) nounwind readnone ssp {
entry:
  %and = and i32 %a, -16
  %div = sdiv i32 %and, 16
  ret i32 %div
}

but this code (X & -A)/A is X >> log2(A) when A is a power of 2, so this
case should be instcombined into just "a >> 4".

We do get this at the codegen level, so something knows about it, but
instcombine should catch it earlier.

//===---------------------------------------------------------------------===//
This code (from GCC PR28685):

int test(int a, int b) {
  if (a < b)
    return 1;
  return a == b;
}

currently compiles into:

define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %cmp = icmp slt i32 %a, %b
  br i1 %cmp, label %return, label %if.end

if.end:		; preds = %entry
  %cmp5 = icmp eq i32 %a, %b
  %conv6 = zext i1 %cmp5 to i32
  ret i32 %conv6

return:		; preds = %entry
  ret i32 1
}

but it should instead be:

define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = icmp sle i32 %a, %b
  %retval = zext i1 %0 to i32
  ret i32 %retval
}

//===---------------------------------------------------------------------===//
This compare could fold to false:

define i1 @g(i32 %a) nounwind readnone {
  %add = shl i32 %a, 1
  %mul = shl i32 %a, 1
  %cmp = icmp ugt i32 %add, %mul
  ret i1 %cmp
}

//===---------------------------------------------------------------------===//
This code can be seen in viterbi:

  %64 = call noalias i8* @malloc(i64 %62) nounwind
  ...
  %67 = call i64 @llvm.objectsize.i64(i8* %64, i1 false) nounwind
  %68 = call i8* @__memset_chk(i8* %64, i32 0, i64 %62, i64 %67) nounwind

llvm.objectsize.i64 should be taught about malloc/calloc, allowing it to
fold to %62. This is a security win (overflows of malloc will get caught)
and also a performance win by exposing more memsets to the optimizer.

This occurs several times in viterbi.

//===---------------------------------------------------------------------===//