Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should recognize idioms for add-with-carry and turn them into the
appropriate intrinsics. This example:

unsigned add32carry(unsigned sum, unsigned x) {
 unsigned z = sum + x;
 if (sum + x < x)
     z++;
 return z;
}

Compiles to: clang t.c -S -o - -O3 -fomit-frame-pointer -m64 -mkernel

_add32carry:                            ## @add32carry
	leal	(%rsi,%rdi), %eax
	...

//===---------------------------------------------------------------------===//

Dead argument elimination should be enhanced to handle cases when an argument is
dead to an externally visible function. Though the argument can't be removed
from the externally visible function, the caller doesn't need to pass it in.
For example, in this testcase:

  void foo(int X) __attribute__((noinline));
  void foo(int X) { sideeffect(); }
  void bar(int A) { foo(A+1); }

we compile bar to:

define void @bar(i32 %A) nounwind ssp {
  %0 = add nsw i32 %A, 1                          ; <i32> [#uses=1]
  tail call void @foo(i32 %0) nounwind noinline ssp
  ret void
}

The add is dead; we could pass in 'i32 undef' instead. This occurs for C++
templates etc, which usually have linkonce_odr/weak_odr linkage, not internal
linkage.

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs. Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of these call instructions.

//===---------------------------------------------------------------------===//

We should recognize various "overflow detection" idioms and translate them into
llvm.uadd.with.overflow and similar intrinsics. For example, we compile this:

size_t add(size_t a, size_t b) {
 if (a+b < a)
   exit(0);
 return a+b;
}

when it would be better to generate the llvm.uadd.with.overflow form directly.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc
right).

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
	movl Y, %eax
	shll $3, %eax
	orl X, %eax
	movl %eax, X
	ret

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
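
A minimal C illustration of the idiom (function name is ours): a sign test
of a loaded 32-bit value depends only on the byte containing the sign bit,
so the i32 load could shrink to an i8 load of that byte.

int is_negative(int *P) {
  return *P < 0;   /* only the most significant byte of *P matters */
}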

//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
   return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second the
code generator needs to be enhanced to lower these to multiplication trees.
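
As a sketch, the balanced tree codegen could produce for the X**8 example
above needs only three multiplies instead of seven:

int pow8(int X) {
  int X2 = X * X;     /* X**2 */
  int X4 = X2 * X2;   /* X**4 */
  return X4 * X4;     /* X**8 */
}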

//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The issue
is that we end up getting t = 2*X; s = t*t and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses. Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47               ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}

//===---------------------------------------------------------------------===//

Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
        b4 = a4 + a3 + a2 + a1 + a0;
        b3 = a3 + a2 + a1 + a0;
        b2 = a2 + a1 + a0;
        b1 = a1 + a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.

//===---------------------------------------------------------------------===//

This function: (derived from GCC PR19988)

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to:

	mulsd	LCPI1_1(%rip), %xmm1
	mulsd	LCPI1_0(%rip), %xmm2
	...

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

Which allows the multiply by constant to be CSE'd, producing:

	mulsd	LCPI1_0(%rip), %xmm1
	...

This doesn't need -ffast-math support at all. This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
doesn't have this problem.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j,int *l) { return memcmp(j,l,4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns. Instead
of:

	...
	je LBB16_2	#cond_next
	...

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.
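
A small sketch of the pairing at the source level, using the sincos
declaration above (a GNU libm extension, assumed available):

#include <math.h>
void polar(double x, double *s, double *c) {
  *s = sin(x);
  *c = cos(x);   /* the two calls could merge into: sincos(x, s, c); */
}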

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but this requires TBAA.

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

Neither is this (very standard idiom):

int f(int n)
{
  return (((n) << 24) | (((n) & 0xff00) << 8)
       | (((n) >> 8) & 0xff00) | ((n) >> 24));
}

//===---------------------------------------------------------------------===//

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}
unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This is a form of idiom recognition for loops, the same thing that could be
useful for recognizing memset/memcpy.

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
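
For comparison, a sketch of a form that already lowers to a single
(possibly unaligned) 16-bit load, via a 2-byte memcpy:

#include <string.h>
unsigned short read_16_native(const unsigned char *adr) {
  unsigned short r;
  memcpy(&r, adr, 2);   /* typically becomes one 16-bit load */
  return r;
}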

//===---------------------------------------------------------------------===//

instcombine should handle this transform:
   icmp pred (sdiv X / C1), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.

//===---------------------------------------------------------------------===//

SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization and
variable trip count loop unrolling (since it has a constant trip count). ICC
apparently produces this very nice code with -ffast-math:

..B1.70:                        # Preds ..B1.70 ..B1.69
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       addl      $8, %edx                                      #
       cmpl      $131072, %edx                                 #108.2
       jb        ..B1.70       # Prob 99%                      #108.2

It would be better to count down to zero, but this is a lot better than what we
do.

//===---------------------------------------------------------------------===//

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has from TargetData. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two near identical IV's (after promotion) on PPC/ARM:

	add r2, r2, #1    <- [0,+,1]
	sub r0, r0, #1    <- [0,-,1]

LSR should reuse the "+" IV for the exit test.

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1                          ; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0                  ; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1                         ; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )             ; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0                      ; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2                        ; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )            ; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                            [ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
 if (n == 0)
   return 0;
 return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.
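
A sketch of the accumulator form this could produce for pow2m1, assuming the
base case returns 0 as above; the multiply-by-2 becomes a shift:

int pow2m1_iter(int n) {
  int acc = 0;
  for (; n != 0; --n)
    acc = (acc << 1) + 1;   /* 2*acc + 1, applied n times: 2**n - 1 */
  return acc;
}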

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x                             ; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )             ; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )                ; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass. Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:

	...
	je LBB1_2	# cond_true
	...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html
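
A toy sketch of the idea with hypothetical keys: for the set
{10, 100, 1000, 10000}, k % 13 happens to be collision-free, so the sparse
switch can become a bounded table probe instead of a compare chain:

static const unsigned keys[13] = { [3] = 10000, [9] = 100, [10] = 10, [12] = 1000 };
static const int      vals[13] = { [3] = 3,     [9] = 1,   [10] = 0,  [12] = 2 };

int classify(unsigned k) {
  unsigned idx = k % 13;              /* perfect hash for this key set */
  return (k && keys[idx] == k) ? vals[idx] : -1;
}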

//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop:

  for (b = 0; b < 10000000; b++)
  for (i = 0; i < 256; i++)
    a[i] = -a[i];

is twice as slow as this loop:

  for (b = 0; b < 10000000; b++)
  for (i = 0; i < 256; i++)
    a[i] ^= (1ULL << 63);

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer):

	movb	_m_HotKey+3, %cl
	movb	_m_HotKey+4, %dl
	movb	_m_HotKey+2, %ch
	...
	movzwl	_m_HotKey+4, %edx
	...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

	%struct.THotKey = type { i16, i8, i8, i8 }
	define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
		%tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
		%tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
		%tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
		%tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

long long input[8] = {1,1,1,1,1,1,1,1};

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able. This is good, but the memcpy
gets lowered to load/stores in the code generator. This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global. This gives us atrocious code like this:

	movl	_C.0.1444-"L1$pb"+32(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+20(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+36(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+44(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+40(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+12(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+4(%eax), %ecx
	...

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The following code should compile into "ret int undef". Instead, LLVM
produces "ret int 0":

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 9;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 10;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
generate truly horrible code, instead of using shld and friends. On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.

//===---------------------------------------------------------------------===//

This (and similar related idioms):

unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:

define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %shl5 = shl i32 %conv, 16
  %shl9 = shl i32 %conv, 24
  %or = or i32 %shl9, %conv
  %or6 = or i32 %or, %shl5
  %or10 = or i32 %or6, %shl
  ret i32 %or10
}

it would be better as:

unsigned int bar(unsigned char i) {
  unsigned int j=i | (i << 8);
  return j | (j<<16);
}

aka:

define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %or = or i32 %shl, %conv
  %shl5 = shl i32 %or, 16
  %or6 = or i32 %shl5, %or
  ret i32 %or6
}

or even i*0x01010101, depending on the speed of the multiplier. The best way to
handle this is to canonicalize it to a multiply in IR and have codegen handle
lowering multiplies to shifts on cpus where shifts are faster.
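
For reference, the multiply form mentioned above is exact byte-splat; no
carries occur since i < 256:

unsigned int baz(unsigned char i) {
  return i * 0x01010101u;   /* == i | i<<8 | i<<16 | i<<24 */
}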

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.
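
A sketch of the pairing meant here; the +1 makes the memcpy cover the NUL
terminator so the rewrite to strcpy is exact:

#include <string.h>
void copy_string(char *a, const char *b) {
  memcpy(a, b, strlen(b) + 1);   /* mergeable into: strcpy(a, b); */
}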

//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

We miss some instcombines for stuff like this:

void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2).  */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mullo (cheaper). Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:

void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount. Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
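
A quick way to convince yourself the multiply trick is exact in 32-bit
unsigned arithmetic:

#include <assert.h>
int main(void) {
  for (unsigned n = 0; n < 1000000; n++)
    assert((n % 3 == 0) == (n * 2863311531U < 1431655766U));
  return 0;
}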

//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

  std::scanf("%d", &t.val);
  std::printf("%d\n", t.val);

//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//

int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}

int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

#define PMD_MASK (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
  if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
    /* ... */;
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing. Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//

#define abs(x) x>0?x:-x
int f(int x) {
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN. (With -fwrapv.) Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

int
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem == 5)
    bar ();
}

void
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  if ((a & ~7) > 8)
    bar ();
}

void
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
   (!tmp && decl_context == 1)

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

   %tmp23 = icmp eq i32 %decl_context_addr.1, 0		; <i1> [#uses=1]
   %tmp24 = xor i1 %tmp1, true		; <i1> [#uses=1]
   %or.cond8 = and i1 %tmp23, %tmp24		; <i1> [#uses=1]

//===---------------------------------------------------------------------===//

Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out. This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.
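
A sketch of the store-sunk form DSE could aim for, assuming *cond and *res
don't alias (names follow the C code above):

void f_sunk(int n, int *cond, int *res) {
  int i, r = 0, any = 0;
  for (i = 0; i < n; i++)
    if (*cond) {
      r = (any ? r : *res) ^ 234;
      any = 1;
    }
  if (any)
    *res = r;   /* single store, outside the loop */
}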

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store. We need partially dead store sinking.

//===---------------------------------------------------------------------===//

[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation. The code we get looks like (g is on the stack):

	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is partially redundant; in %bb2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.

//===---------------------------------------------------------------------===//

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]
loadpre5.c

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c

//===---------------------------------------------------------------------===//

[LOAD PRE / STORE SINKING / SPEC HACK]

This is a chunk of code from 456.hmmer:

int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
      int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc;
  for (k = 1; k <= M; k++) {
      mc[k] = mpp[k-1]   + tpmm[k-1];
      if ((sc = ip[k-1]  + tpim[k-1]) > mc[k])  mc[k] = sc;
      if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k])  mc[k] = sc;
      if ((sc = xmb  + bp[k])         > mc[k])  mc[k] = sc;
      mc[k] += ms[k];
  }
}

It is very profitable for this benchmark to turn the conditional stores to mc[k]
into a conditional move (select instr in IR) and allow the final store to do the
store. See GCC PR27313 for more details. Note that this is valid to xform even
with the new C++ memory model, since mc[k] is previously loaded and later
stored.

//===---------------------------------------------------------------------===//

There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite. For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c.

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign. At the least it should
no-capture its pointer argument; at best, we should know that the out-value
result doesn't point to anything (like malloc). One example of this is in
SingleSource/Benchmarks/Misc/dt.c

//===---------------------------------------------------------------------===//

A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store. This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

struct X { int i; };
...

//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis

we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633

We could eliminate the branch condition here; loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
...

//===---------------------------------------------------------------------===//

simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character. See PR3253 for some notes on
codegen.

456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//
1425 "gas" uses this idiom:
1426 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
1428 else if (strchr ("<>", *intel_parser.op_string)
1430 Those should be turned into a switch.
1432 //===---------------------------------------------------------------------===//

252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple reasons. First, in this:

        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)

The strlen could be replaced with: %strlen = sub %3072, %3073, because the
strcpy call returns a pointer to the end of the string. Based on that, the
endptr GEP just becomes equal to %3073, which eliminates a strlen call and GEP.

Second, the strlen after the memcpy can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

Because the destination was just copied into the specified memory buffer. This,
in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded. Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:		; preds = %_ZN18eonImageCalculatorC1Ev.exit
	%682 = getelementptr i8** %argv, i32 6		; <i8**> [#uses=2]
	%683 = load i8** %682, align 4		; <i8*> [#uses=4]
	%684 = load i8* %683, align 1		; <i8> [#uses=1]
	%685 = icmp eq i8 %684, 0		; <i1> [#uses=1]
	br i1 %685, label %bb10, label %bb9

bb9:		; preds = %bb8
	%686 = call i32 @strlen(i8* %683) nounwind readonly
	%687 = icmp ugt i32 %686, 254		; <i1> [#uses=1]
	br i1 %687, label %bb10, label %bb11

bb10:		; preds = %bb9, %bb8
	%688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//

I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
which looks like:

        %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:		; preds = %bb55, %bb53
	%promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
	%171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
	%172 = add i32 %171, -1		; <i32> [#uses=1]
	%173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
	...
	br i1 %or.cond, label %bb65, label %bb72

bb65:		; preds = %bb62
	store i8 0, i8* %173, align 1
	...

bb72:		; preds = %bb65, %bb62
	%trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
	%177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call. At worst, we could shove the %177 strlen call
up into the bb65 block moving it out of the bb62->bb72 path. However, note
that bb65 stores to the string, zeroing out the last byte. This means that on
that path the value of %177 is actually just %171-1. A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P+1);
  P[A-1] = 0;
  B = strlen(P);
  ...

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//

186.crafty also contains this code:

%1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
%1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
%1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
%1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
%1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909

The last strlen is computable as %1908-@pgn_event, which means %1910=%1908.

//===---------------------------------------------------------------------===//

186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
       i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out". Once all the printfs
stop using "out", all that is left is the memcpy's into it. This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//

This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift.
For example, on x86 this currently gets this:

	movb	(%eax), %al
	sarb	$5, %al
	movsbl	%al, %eax

while it could get this:

	movsbl	(%eax), %eax
	sarl	$5, %eax

//===---------------------------------------------------------------------===//

int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants; what is the rule for even? (For a constant
C, "C-x == x" means 2*x == C (mod 2**32). An odd C has no solution, so the
compare folds to false; an even C has two solutions, C/2 and C/2 + 2**31, so
it can't fold to a single equality.)

//===---------------------------------------------------------------------===//

PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a store to
the float directly.

//===---------------------------------------------------------------------===//

The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//

The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int  int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//

int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

Generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

         b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

     b = (b & ~0x80) | (a & 0x40) << 1;

//===---------------------------------------------------------------------===//

These two functions produce different code. They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//

IPSCCP does not currently propagate argument dependent constants through
functions where it does not see all of the callers. This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  %1 = mul i32 %0, %x
  %2 = mul i32 %y, %z
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant. Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.

//===---------------------------------------------------------------------===//

The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic. This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//

InstCombine should use SimplifyDemandedBits to remove the or instruction:

define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, 1
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}

Currently instcombine calls SimplifyDemandedBits with either all bits or just
the sign bit, if the comparison is obviously a sign test. In this case, we only
need all but the bottom two bits from %A, and if we gave that mask to SDB it
would delete the or instruction for us.

//===---------------------------------------------------------------------===//

functionattrs doesn't know much about memcpy/memset. This function should be
marked readnone rather than readonly, since it only twiddles local memory, but
functionattrs doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
int foo() {
  int i = 0, j = 1;
  struct X x, y;
  int **p;
  y.p = &i;
  x.q = &j;
  p = __builtin_memcpy (&x, &y, sizeof (int *));
  return **p;
}

//===---------------------------------------------------------------------===//

Missed instcombine transformation:

define i1 @a(i32 %x) nounwind readnone {
entry:
  %cmp = icmp eq i32 %x, 30
  %sub = add i32 %x, -30
  %cmp2 = icmp ugt i32 %sub, 9
  %or = or i1 %cmp, %cmp2
  ret i1 %or
}

This should be optimized to a single compare. Testcase derived from gcc.

//===---------------------------------------------------------------------===//

Missed instcombine or reassociate transformation:
int a(int a, int b) { return (a==12)&(b>47)&(b<58); }

The sgt and slt should be combined into a single comparison. Testcase derived
from gcc.

//===---------------------------------------------------------------------===//

Missed instcombine transformation:

	%382 = srem i32 %tmp14.i, 64		; [#uses=1]
	%383 = zext i32 %382 to i64		; [#uses=1]
	%384 = shl i64 %381, %383		; [#uses=1]
	%385 = icmp slt i32 %tmp14.i, 64	; [#uses=1]

The srem can be transformed to an and because if %tmp14.i is negative, the
shift is undefined. Testcase derived from 403.gcc.
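
A C-level sketch of the reasoning (names are ours): for non-negative n,
n % 64 == (n & 63), and a negative n would feed an undefined shift anyway,
so the compiler may assume n is non-negative here:

unsigned long long shift_by_rem(unsigned long long v, int n) {
  return v << (n % 64);   /* may be lowered as: v << (n & 63) */
}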

//===---------------------------------------------------------------------===//

This is a range comparison on a divided result (from 403.gcc):

	%1337 = sdiv i32 %1336, 8		; [#uses=1]
	%.off.i208 = add i32 %1336, 7		; [#uses=1]
	%1338 = icmp ult i32 %.off.i208, 15	; [#uses=1]

We already catch this (removing the sdiv) if there isn't an add; we should
handle the 'add' as well. This is a common idiom with its builtin_alloca code.
For example:

int a(int x) { return (unsigned)(x/16+7) < 15; }

Another similar case involves truncations on 64-bit targets:

	%361 = sdiv i64 %.046, 8		; [#uses=1]
	%362 = trunc i64 %361 to i32		; [#uses=2]
	...
	%367 = icmp eq i32 %362, 0		; [#uses=1]

//===---------------------------------------------------------------------===//

Missed instcombine/dagcombine transformation:

define void @lshift_lt(i8 zeroext %a) nounwind {
entry:
  %conv = zext i8 %a to i32
  %shl = shl i32 %conv, 3
  %cmp = icmp ult i32 %shl, 33
  br i1 %cmp, label %if.then, label %if.end

if.then:
  tail call void @bar() nounwind
  br label %if.end

if.end:
  ret void
}

declare void @bar() nounwind

The shift should be eliminated. Testcase derived from gcc.

//===---------------------------------------------------------------------===//

These compile into different code, one gets recognized as a switch and the
other doesn't due to phase ordering issues (PR6212):

int test1(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  else if (mainType == 9)
    subType = 6;
  else if (mainType == 11)
    subType = 9;
  return subType;
}

int test2(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  if (mainType == 9)
    subType = 6;
  if (mainType == 11)
    subType = 9;
  return subType;
}

//===---------------------------------------------------------------------===//

The following test case (from PR6576):

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %cond1 = icmp eq i32 %b, 0                      ; <i1> [#uses=1]
  br i1 %cond1, label %exit, label %bb.nph

bb.nph:                                           ; preds = %entry
  %tmp = mul i32 %b, %a                           ; <i32> [#uses=1]
  br label %exit

exit:                                             ; preds = %entry, %bb.nph
  %tmp2 = phi i32 [ %tmp, %bb.nph ], [ 0, %entry ]
  ret i32 %tmp2
}

could be reduced to:

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %tmp = mul i32 %b, %a
  ret i32 %tmp
}

//===---------------------------------------------------------------------===//

We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.
See GCC PR34949.

Another interesting case is that something related could be used for variables
that go const after their ctor has finished. In these cases, globalopt (which
can statically run the constructor) could mark the global const (so it gets put
in the readonly section). A testcase would be:

#include <complex>
using namespace std;
const complex<char> should_be_in_rodata (42,-42);
complex<char> should_be_in_data (42,-42);
complex<char> should_be_in_bss;

Where we currently evaluate the ctors but the globals don't become const because
the optimizer doesn't know they "become const" after the ctor is done. See
GCC PR4131 for more examples.

//===---------------------------------------------------------------------===//

In this code:

int test(int x) {
  return x > 1 ? x : 1;
}

LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
and cheaper on most targets.

LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.

//===---------------------------------------------------------------------===//

Take the following testcase on x86-64 (similar testcases exist for all targets
with addc/adde):

define void @a(i64* nocapture %s, i64* nocapture %t, i64 %a, i64 %b,
i64 %c) nounwind {
entry:
  %0 = zext i64 %a to i128                        ; <i128> [#uses=1]
  %1 = zext i64 %b to i128                        ; <i128> [#uses=1]
  %2 = add i128 %1, %0                            ; <i128> [#uses=2]
  %3 = zext i64 %c to i128                        ; <i128> [#uses=1]
  %4 = shl i128 %3, 64                            ; <i128> [#uses=1]
  %5 = add i128 %4, %2                            ; <i128> [#uses=1]
  %6 = lshr i128 %5, 64                           ; <i128> [#uses=1]
  %7 = trunc i128 %6 to i64                       ; <i64> [#uses=1]
  store i64 %7, i64* %s, align 8
  %8 = trunc i128 %2 to i64                       ; <i64> [#uses=1]
  store i64 %8, i64* %t, align 8
  ret void
}

The generated SelectionDAG has an ADD of an ADDE, where both operands of the
ADDE are zero. Replacing one of the operands of the ADDE with the other operand
of the ADD, and replacing the ADD with the ADDE, should give the desired result.

(That said, we are doing a lot better than gcc on this testcase. :) )

//===---------------------------------------------------------------------===//

Switch lowering generates less than ideal code for the following switch:

define void @a(i32 %x) nounwind {
entry:
  switch i32 %x, label %if.end [
    i32 0, label %if.then
    i32 1, label %if.then
    i32 2, label %if.then
    i32 3, label %if.then
    i32 5, label %if.then
  ]

if.then:
  tail call void @foo() nounwind
  ret void

if.end:
  ret void
}

declare void @foo()

Generated code on x86-64 (other platforms give similar results):

	...

The movl+movl+btq+jb could be simplified to a cmpl+jne.

Or, if we wanted to be really clever, we could simplify the whole thing to
something like the following, which eliminates a branch:

	...

//===---------------------------------------------------------------------===//

Given a branch where the two target blocks are identical ("ret i32 %b" in
both), simplifycfg will simplify them away. But not so for a switch statement:

define i32 @f(i32 %a, i32 %b) nounwind readnone {
entry:
  switch i32 %a, label %bb3 [
    i32 4, label %bb
    i32 6, label %bb
  ]

bb:		; preds = %entry, %entry
  ret i32 %b

bb3:		; preds = %entry
  ret i32 %b
}

//===---------------------------------------------------------------------===//

clang -O3 fails to devirtualize this virtual inheritance case: (GCC PR45875)
Looks related to PR3100

struct c1 {};
struct c10 : c1 {
  virtual void foo ();
};
struct c11 : c10, c1{
  virtual void f6 ();
};
struct c28 : virtual c11{
  void f6 ();
};

//===---------------------------------------------------------------------===//

We compile this:

int foo(int a) { return (a & (~15)) / 16; }

into:

define i32 @foo(i32 %a) nounwind readnone ssp {
entry:
  %and = and i32 %a, -16
  %div = sdiv i32 %and, 16
  ret i32 %div
}

but this code (X & -A)/A is X >> log2(A) when A is a power of 2, so this case
should be instcombined into just "a >> 4".

We do get this at the codegen level, so something knows about it, but
instcombine should catch it earlier.

//===---------------------------------------------------------------------===//

This code (from GCC PR28685):

int test(int a, int b) {
  int lt = a < b;
  int eq = a == b;
  if (lt)
    return 1;
  return eq;
}

currently compiles into:

define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %cmp = icmp slt i32 %a, %b
  br i1 %cmp, label %return, label %if.end

if.end:                                           ; preds = %entry
  %cmp5 = icmp eq i32 %a, %b
  %conv6 = zext i1 %cmp5 to i32
  ret i32 %conv6

return:                                           ; preds = %entry
  ret i32 1
}

but should instead compile to:

define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = icmp sle i32 %a, %b
  %retval = zext i1 %0 to i32
  ret i32 %retval
}

//===---------------------------------------------------------------------===//