Target Independent Opportunities:

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs. Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of each call.

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (it special-cases the situations where x or y are exactly zero, to get
signed zeros etc. right).

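A sketch of the expansion (illustrative only; assumes fast-math semantics, so
errno and the special cases for zero/infinity are deliberately ignored, and
x*x+y*y may overflow where a real hypot would not):

#include <math.h>

double hypot_fast(double x, double y) {
  return sqrt(x*x + y*y);   /* the sqrt would lower to llvm.sqrt */
}
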
//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
	movl Y, %eax
	shll $3, %eax
	orl X, %eax
	movl %eax, X
	ret

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

  for (i = ...; ++i) {
    x = 1ULL << i;

into:

  long long tmp = 1;
  for (i = ...; ++i, tmp+=tmp)
    x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0); only the byte
containing the sign bit actually needs to be loaded.

//===---------------------------------------------------------------------===//

Reassociate should turn X*X*X*X -> t=(X*X); (t*t) to eliminate a multiply.

//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}

int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

and teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
sequences as well.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative, as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns. Instead
of emitting a compare, a conditional branch (je LBB16_2 #cond_next), and an
increment on one path, we could emit a branchless compare+sbb/adc style
sequence. The source pattern in question is sketched below.

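The kind of code meant (a sketch; 'counter' is a hypothetical global):

int counter;
void f(int x) {
  if (x)             /* compare + branch + increment today */
    counter++;       /* could become: counter += (x != 0); */
}
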
//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.

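A before/after sketch of the merge (illustrative only; the real transform
would run on the IR, not the source):

double sincos(double x, double *sin, double *cos);  /* as declared above */

double f(double x) {
  double s, c;
  sincos(x, &s, &c);   /* one call instead of separate sin(x) and cos(x) */
  return s + c;        /* s and c stand in for the old a and b */
}
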
//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

define void @test(i32* %P) {
  %tmp = load i32* %P
  %tmp14 = or i32 %tmp, 3305111552
  %tmp15 = and i32 %tmp14, 3321888767
  store i32 %tmp15, i32* %P
  ret void
}

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x. This comes from code like:

  int t = __builtin_clz(x);
  return t >> 5;

since clz returns 32 (= 1 << 5) exactly when x is zero.

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

  for(i=0; i<reg->size; i++) {
    /* Flip the target bit of each basis state */
    reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
  }

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

  long long Res = ((MAX_UNSIGNED) 1 << target);
  if (target < 32) {
    for(i=0; i<reg->size; i++)
      reg->node[i].state ^= Res & 0xFFFFFFFFULL;
  } else {
    for(i=0; i<reg->size; i++)
      reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
  }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but that requires stronger alias analysis.

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
  unsigned t;
  t = v ^ ((v << 16) | (v >> 16));
  t &= ~0xff0000;
  v = (v << 24) | (v >> 8);
  return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
machines:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//

InstCombine should handle this transform:
   icmp pred (sdiv X / C1), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands. For
example (unsigned case), icmp ult (udiv X, 10), 5 can become icmp ult X, 50.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass; the shape of loop it would need to recognize is sketched below.

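A sketch (the names are illustrative, and the arrays must be known not to
overlap for the memcpy to be legal):

void copy_history(int *dst, const int *src, unsigned n) {
  for (unsigned i = 0; i < n; i++)
    dst[i] = src[i];   /* => memcpy(dst, src, n * sizeof(int)); */
}
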
//===---------------------------------------------------------------------===//

This code:

typedef unsigned U32;
typedef unsigned long long U64;

int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two near identical IV's (after promotion) on PPC/ARM:

	add r2, r2, #1   <- [0,+,1]
	sub r0, r0, #1   <- [0,-,1]

LSR should reuse the "+" IV for the exit test.

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
	                    [ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination is not transforming this function, because it is
returning n, which fails the isDynamicConstant check in the accumulator
recursion elimination transformation:

long long fib(const long long n) {
  switch (n) {
  case 0:
  case 1:
    return n;
  default:
    return fib(n-1) + fib(n-2);
  }
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

445 "basicaa" should know how to look through "or" instructions that act like add
446 instructions. For example in this code, the x*4+1 is turned into x*4 | 1, and
447 basicaa can't analyze the array subscript, leading to duplicated loads in the
450 void test(int X, int Y, int a[]) {
452 for (i=2; i<1000; i+=4) {
453 a[i+0] = a[i-1+0]*a[i-2+0];
454 a[i+1] = a[i-1+1]*a[i-2+1];
455 a[i+2] = a[i-1+2]*a[i-2+2];
456 a[i+3] = a[i-1+3]*a[i-2+3];
460 BasicAA also doesn't do this for add. It needs to know that &A[i+1] != &A[i].
462 //===---------------------------------------------------------------------===//
We should investigate an instruction sinking pass. Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

The generated code computes the PIC base (call+popl) in the entry block, then
tests x and conditionally branches (je LBB1_2 # cond_true) to the
assertion-failure block.

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

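A sketch of the idea (hypothetical case values {10, 100, 1000}, for which
v % 7 happens to be a perfect hash):

int dispatch(unsigned v) {
  static const unsigned key[7] = { 0, 0, 100, 10, 0, 0, 1000 };
  static const int      val[7] = { 0, 0,   2,  1, 0, 0,    3 };
  unsigned h = v % 7;               /* maps 10->3, 100->2, 1000->6 */
  return key[h] == v ? val[h] : 0;  /* 0 plays the role of the default case */
}
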
//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop:

{
  double a[256];
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

{
  long long a[256];
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

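For example, the integer form of a load+fabs+store on a double is just
clearing the sign bit (a sketch, assuming IEEE-754 layout):

#include <stdint.h>
#include <string.h>

void fabs_inplace(double *p) {
  uint64_t bits;
  memcpy(&bits, p, sizeof bits);
  bits &= ~(1ULL << 63);        /* fabs clears the sign bit; fneg would XOR it */
  memcpy(p, &bits, sizeof bits);
}
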
//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer) code that moves the
struct mostly one byte at a time:

	movb	_m_HotKey+3, %cl
	movb	_m_HotKey+4, %dl
	movb	_m_HotKey+2, %ch
	...
	movzwl	_m_HotKey+4, %edx
	...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

	%struct.THotKey = type { i16, i8, i8, i8 }
	define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
	entry:
		%tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
		%tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
		%tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
		%tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2
		...

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

Consider a function containing a local initialized like this:

  long long input[8] = {1,1,1,1,1,1,1,1};

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able. This is good, but the memcpy
gets lowered to load/stores in the code generator. This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global. This gives us atrocious code like this:

	movl _C.0.1444-"L1$pb"+32(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+20(%eax), %ecx
	...
	movl _C.0.1444-"L1$pb"+36(%eax), %ecx
	...

(loads from the constant pool followed by stores) instead of just storing the
constant values directly.

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The code in that PR should compile into "ret int undef". Instead, LLVM
produces "ret int 0".

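A minimal example of the pattern (a sketch, not necessarily the exact PR
testcase): y is never assigned on the path that executes, so the returned
value is undef.

int f(void) {
  int x = 4;
  int y;
  if (x == 3)   /* statically false */
    y = 0;
  return y;     /* should become "ret i32 undef", not "ret i32 0" */
}
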
//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is a loop whose body
branches on the parity of the induction variable:

  for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
    /* body tests (nLoop & 1) */
  }

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 9;   /* constant rotate amount */
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 10;  /* constant rotate amount */
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
generate truly horrible code, instead of using shld and friends. On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.

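Concretely, the merged form would be (a sketch; the +1 covering the nul
terminator is what makes the two calls equivalent):

#include <string.h>

void f(char *a, const char *b) {
  memcpy(a, b, strlen(b) + 1);   /* => strcpy(a, b); */
}
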
//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

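For X**8 the balanced tree is three multiplies instead of seven:

int factorial8(int X) {   /* what llvm.powi(X, 8) should lower to */
  int t1 = X * X;         /* X^2 */
  int t2 = t1 * t1;       /* X^4 */
  return t2 * t2;         /* X^8 */
}
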
//===---------------------------------------------------------------------===//

We generate a horrible libcall for llvm.powi. For example, we compile:

double f(double a) { return std::pow(a, 4); }

into a sequence that spills the argument and calls the runtime:

	movsd	16(%esp), %xmm0
	...
	call	L___powidf2$stub

when the constant power could simply be lowered to two multiplies
(t = a*a; return t*t).

//===---------------------------------------------------------------------===//

We compile this program (from GCC PR11680):
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

We miss some instcombines for stuff like this:

void bar (void);
void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2). */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mullo (cheaper). Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

I think this basically amounts to a dag combine to simplify comparisons against
multiply hi's into a comparison against the mullo.

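A sketch of the mullo form for the n % 3 == 0 test above (0xAAAAAAABu is the
multiplicative inverse of 3 mod 2^32, and 0x55555555u is (2^32-1)/3):

int divisible_by_3(unsigned n) {
  return n * 0xAAAAAAABu <= 0x55555555u;   /* same as n % 3 == 0 */
}
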
//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

  std::scanf("%d", &t.val);
  std::printf("%d\n", t.val);

//===---------------------------------------------------------------------===//

Instcombine will merge comparisons like (x >= 10) && (x < 20) by producing (x -
10) u< 10, but only when the comparisons have matching sign.

This could be converted with a similar technique (PR1941):

define i1 @test(i8 %x) {
  %A = icmp uge i8 %x, 5
  %B = icmp slt i8 %x, 20
  %C = and i1 %A, %B
  ret i1 %C
}

//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly:

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//

These two functions differ only in || vs |:

int f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}

int f2 (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

#define PMD_MASK (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
  if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
    f();
}

The expression should optimize to something like
"!((start|end) & ~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

void foo (unsigned int a, unsigned int b)
{
  if (a <= 7 && b <= 7)
    f();
}

Should combine to "(a|b) <= 7". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int f(int n)
{
  return (n >= 0 ? 1 : -1);
}

Should combine to (n >> 31) | 1. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//

int test(int a, int b)

Should combine to "a <= b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//

if (variable == 4 || variable == 6)

This should optimize to "if ((variable | 2) == 6)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing. Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//

#define abs(x) x>0?x:-x
int f(int x)
{
  return (abs(x)) >= 0;
}

This should optimize to x == INT_MIN. (With -fwrapv.) Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

unsigned minus_cst (unsigned int a)
{
  unsigned int i = 10 - a;
  if (i == 9)
    bar ();
}

unsigned mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15. */
  if ((a & ~7) > 8)
    bar ();
}

unsigned rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23. */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned char* b) {return *b > 99;}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return ((a | 1) & 3) | (a & -4);}
Should combine to "a | 1". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -1 : -9;}
Should combine to (x | -9) ^ 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -9 : -1;}
Should combine to x | -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

	%tmp = icmp eq i32 %decl_context, 4
	%decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
	%tmp1 = icmp eq i32 %decl_context_addr.0, 1
	%decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
	(!tmp && decl_context == 1), i.e. simply (decl_context == 1)

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

	%tmp23 = icmp eq i32 %decl_context_addr.1, 0		; <i1> [#uses=1]
	%tmp24 = xor i1 %tmp1, true		; <i1> [#uses=1]
	%or.cond8 = and i1 %tmp23, %tmp24		; <i1> [#uses=1]

//===---------------------------------------------------------------------===//

Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out. This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

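At the source level the sunk form would look like this (a sketch, assuming
*cond and *res don't alias and that *res may be touched even when the loop
body never fires):

void f_sunk(int n, int *cond, int *res) {
  int i, t = *res;
  for (i = 0; i < n; i++)
    if (*cond)
      t ^= 234;
  *res = t;        /* the store is now outside the loop */
}
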
Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

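The preferred form computes the product once, above the if (based on the
testcase above):

int test_pre(int a, int b, int c, int g) {
  int d, e, t = b * c;   /* single mul */
  if (a)
    d = t;
  else
    d = b - c;
  e = t + g;
  return d + e;
}
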
//===---------------------------------------------------------------------===//

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store. We need partially dead store sinking.

//===---------------------------------------------------------------------===//

[PHI TRANSLATE GEPs]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation. The code we get looks like (g is on the stack):

	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is fully redundant, and in BB2 it should have the value %8.

GCC PR33344 is a similar case.

//===---------------------------------------------------------------------===//

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, as well as many PRE testcases in ssa-pre-*.c.

//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite. For example, predcom-1.c is:

  for (i = 2; i < 1000; i++)
    fib[i] = (fib[i-1] + fib[i - 2]) & 0xffff;

which compiles into:

bb1:		; preds = %bb1, %bb1.thread
	%indvar = phi i32 [ 0, %bb1.thread ], [ %0, %bb1 ]
	%i.0.reg2mem.0 = add i32 %indvar, 2
	%0 = add i32 %indvar, 1		; <i32> [#uses=3]
	%1 = getelementptr [1000 x i32]* @fib, i32 0, i32 %0
	%2 = load i32* %1, align 4		; <i32> [#uses=1]
	%3 = getelementptr [1000 x i32]* @fib, i32 0, i32 %indvar
	%4 = load i32* %3, align 4		; <i32> [#uses=1]
	%5 = add i32 %4, %2		; <i32> [#uses=1]
	%6 = and i32 %5, 65535		; <i32> [#uses=1]
	%7 = getelementptr [1000 x i32]* @fib, i32 0, i32 %i.0.reg2mem.0
	store i32 %6, i32* %7, align 4
	%exitcond = icmp eq i32 %0, 998		; <i1> [#uses=1]
	br i1 %exitcond, label %return, label %bb1

Instead of handling this as a loop or other xform, all we'd need to do is teach
load PRE to phi translate the %0 add (i+1) into the predecessor as (i'+1+1) =
(i'+2) (where i' is the previous iteration of i). This would find the store
that feeds it.

predcom-2.c is apparently the same as predcom-1.c.
predcom-3.c is very similar but needs loads feeding each other instead of
store->load.
predcom-4.c seems the same as the rest.

//===---------------------------------------------------------------------===//

Other simple load PRE cases:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35287 [LPRE crit edge splitting]

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34677 (licm does this, LPRE crit edge)
  llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | opt -mem2reg -simplifycfg -gvn | llvm-dis

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16799 [BITCAST PHI TRANS]

//===---------------------------------------------------------------------===//

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

//===---------------------------------------------------------------------===//

A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store. This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

struct X { int i; };
int foo (int x) {
  struct X A;
  struct X B;
  struct X *p;
  A.i = 1;
  B.i = 2;
  if (x)
    p = &A;
  else
    p = &B;
  return p->i;
}

//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis

we miss it because we need 1) GEP PHI TRAN, 2) CRIT EDGE 3) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633

We could eliminate the branch condition here, since loading from null is
undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//

simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "") -> strlen(x)
strspn(x, "") -> 0
strcspn(x, "a") -> strchr(x, 'a')-x (when 'a' is known to occur in x)

strcspn with a small constant reject set -> an inlined loop, similarly for
strspn. glibc's helper for up to 3 reject letters is:

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character; a sketch follows. See PR3253
for some notes on codegen.

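A sketch of the switch form for a constant reject set, e.g. strcspn(s, "ab")
(hypothetical helper name):

#include <stddef.h>

size_t cspn_ab(const char *s) {
  size_t r;
  for (r = 0; ; ++r)
    switch (s[r]) {
    case '\0': case 'a': case 'b':
      return r;       /* stop at nul or either reject character */
    }
}
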
456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//

"gas" uses this idiom:
  else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
  else if (strchr ("<>", *intel_parser.op_string)

Those should be turned into a switch.

//===---------------------------------------------------------------------===//

252.eon contains this interesting code:

	%3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
	%3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
	%strlen = call i32 @strlen(i8* %3072)		; uses = 1
	%endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
	call void @llvm.memcpy.i32(i8* %endptr,
	  i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
	%3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple reasons. First, in this:

	%3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
	%strlen = call i32 @strlen(i8* %3072)

The strlen could be replaced with: %strlen = sub %3072, %3073, because the
strcpy call returns a pointer to the end of the string. Based on that, the
endptr GEP just becomes equal to %3073, which eliminates a strlen call and GEP.

Second, the strlen of the just-memcpy'd buffer can be replaced with:

	%3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the destination was just copied into the specified memory buffer. This,
in turn, can be constant folded to "4".

In other code, it contains:

	%endptr6978 = bitcast i8* %endptr69 to i32*
	store i32 7107374, i32* %endptr6978, align 1
	%3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

which could also be constant folded. Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:		; preds = %_ZN18eonImageCalculatorC1Ev.exit
	%682 = getelementptr i8** %argv, i32 6		; <i8**> [#uses=2]
	%683 = load i8** %682, align 4		; <i8*> [#uses=4]
	%684 = load i8* %683, align 1		; <i8> [#uses=1]
	%685 = icmp eq i8 %684, 0		; <i1> [#uses=1]
	br i1 %685, label %bb10, label %bb9

bb9:		; preds = %bb8
	%686 = call i32 @strlen(i8* %683) nounwind readonly
	%687 = icmp ugt i32 %686, 254		; <i1> [#uses=1]
	br i1 %687, label %bb10, label %bb11

bb10:		; preds = %bb9, %bb8
	%688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//

I see an interesting fully redundant call to strlen left in 186.crafty:InputMove:

	%movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:		; preds = %bb55, %bb53
	%promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
	%171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
	%172 = add i32 %171, -1		; <i32> [#uses=1]
	%173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
	br i1 %or.cond, label %bb65, label %bb72

bb65:		; preds = %bb62
	store i8 0, i8* %173, align 1

bb72:		; preds = %bb65, %bb62
	%trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
	%177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call. At worst, we could shove the %177 strlen call
up into the bb65 block, moving it out of the bb62->bb72 path. However, note
that bb65 stores to the string, zeroing out the last byte. This means that on
that path the value of %177 is actually just %171-1. A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//

186.crafty contains this interesting pattern:

%77 = call i8* @strstr(i8* getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0),
                       i8* %30)
%phitmp648 = icmp eq i8* %77, getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0)
br i1 %phitmp648, label %bb70, label %bb76

bb70:		; preds = %OptionMatch.exit91, %bb69
  %78 = call i32 @strlen(i8* %30) nounwind readonly align 1	; <i32> [#uses=1]

It is basically doing:

  if (strstr(cststr, P) == cststr) {
    x = strlen(P);
    ...

The strstr call would be significantly cheaper written as:

  if (memcmp(P, str, strlen(P)))
    ...

This is memcmp+strlen instead of strstr. This also makes the strlen fully
redundant.

//===---------------------------------------------------------------------===//

186.crafty also contains this code:

%1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
%1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
%1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
%1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
%1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909

The last strlen is computable as %1908-@pgn_event, which means %1910=%1908.

//===---------------------------------------------------------------------===//

186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
        i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call i32 @printf(i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out". Once all the printfs
stop using "out", all that is left is the memcpy's into it. This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//

This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift. For example, on
x86 we currently get a byte load, a byte arithmetic shift, and a separate
sign extension, when a single movsbl (sign-extending load) followed by
sarl $5 would do.

//===---------------------------------------------------------------------===//

These should fold:

int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants; what is the rule for even? (C-x == x means
2x == C, so for even C it folds to x == C/2, and for odd C it is always
false.)

//===---------------------------------------------------------------------===//

PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a store
to the float field directly.

//===---------------------------------------------------------------------===//

double foo(double a) { return sin(a); }

This compiles into a full call sequence on x86-64 Linux (stack frame setup,
call sin, teardown) when it could simply be a tail call: jmp sin.

//===---------------------------------------------------------------------===//

The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//

The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//

Instcombine should replace the load with a constant in:

static const char x[4] = {'a', 'b', 'c', 'd'};

unsigned int y(void) {
  return *(unsigned int *)x;
}

It currently only does this transformation when the size of the constant
is the same as the size of the integer (so, try x[5]) and the last byte
is a null (making it a C string). There's no need for these restrictions.

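What the transformation should produce for the code above (a little-endian
sketch; 'a' is 0x61):

unsigned int y(void) {
  return 0x64636261;   /* 'd','c','b','a' assembled little-endian */
}
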
//===---------------------------------------------------------------------===//

InstCombine's "turn load from constant into constant" optimization should be
more aggressive in the presence of bitcasts. For example, because of unions,
this code:

union vec2d {
  double e[2];
  double v __attribute__((vector_size(16)));
};
typedef union vec2d vec2d;

static vec2d a={{1,2}}, b={{3,4}};
vec2d foo () {
  return (vec2d){ .v = a.v + b.v * (vec2d){{5,5}}.v };
}

compiles into:

@a = internal constant %0 { [2 x double]
  [double 1.000000e+00, double 2.000000e+00] }, align 16
@b = internal constant %0 { [2 x double]
  [double 3.000000e+00, double 4.000000e+00] }, align 16

define void @foo(%struct.vec2d* noalias nocapture sret %agg.result) nounwind {
entry:
	%0 = load <2 x double>* getelementptr (%struct.vec2d*
	       bitcast (%0* @a to %struct.vec2d*), i32 0, i32 0), align 16
	%1 = load <2 x double>* getelementptr (%struct.vec2d*
	       bitcast (%0* @b to %struct.vec2d*), i32 0, i32 0), align 16
	...

Instcombine should be able to optimize away the loads (and thus the globals).

//===---------------------------------------------------------------------===//