Target Independent Opportunities:

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs. Instead, these should be added by the code generator (e.g. on the dag).
This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//
We should recognize various "overflow detection" idioms and translate them into
llvm.uadd.with.overflow and similar intrinsics. Here is a multiply idiom:

unsigned int mul(unsigned int a, unsigned int b) {
 if ((unsigned long long)a*b > 0xffffffff)
   exit(0);
 return a*b;
}
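For comparison, this is what the checked form looks like with the Clang/GCC
overflow builtins, which lower directly to these intrinsics (function name
ours):

#include <stdlib.h>

unsigned int mul_checked(unsigned int a, unsigned int b) {
  unsigned int r;
  if (__builtin_umul_overflow(a, b, &r))  /* lowers to llvm.umul.with.overflow */
    exit(0);
  return r;
}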
The legalization code for mul-with-overflow needs to be made more robust before
this can be implemented, though.

//===---------------------------------------------------------------------===//
Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (ffastmath). Misc/mandel will like this. :) This isn't
safe in general, even on darwin. See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc
right).
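A sketch of the desired -ffast-math expansion (precisely the shortcut the note
above says is unsafe in general):

#include <math.h>

double fast_hypot(double x, double y) {
  return sqrt(x*x + y*y);   /* i.e. llvm.sqrt(x*x + y*y) */
}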
//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//
Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//
Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second the
code generator needs to be enhanced to lower these to multiplication trees.
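The payoff: lowering llvm.powi(X, 8) as a balanced tree takes three multiplies
instead of seven:

int factorial_tree(int X) {
  int X2 = X*X;       /* X^2 */
  int X4 = X2*X2;     /* X^4 */
  return X4*X4;       /* X^8: 3 multiplies instead of 7 */
}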
//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The issue
is that we end up getting t = 2*X  s = t*t  and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses. Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47               ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}

//===---------------------------------------------------------------------===//
Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
        b4 = a4 + a3 + a2 + a1 + a0;
        b3 = a3 + a2 + a1 + a0;
        b2 = a2 + a1 + a0;
        b1 = a1 + a0;
        b0 = a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.

//===---------------------------------------------------------------------===//
This function: (derived from GCC PR19988)

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to two multiplies by the constant, visible as two constant pool loads:

	mulsd	LCPI1_1(%rip), %xmm1
	mulsd	LCPI1_0(%rip), %xmm2
	...

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

Which allows the multiply by constant to be CSE'd, producing a single:

	mulsd	LCPI1_0(%rip), %xmm1
	...

This doesn't need -ffast-math support at all. This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
doesn't have this problem.

//===---------------------------------------------------------------------===//
These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases.

//===---------------------------------------------------------------------===//
It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
cases too.

//===---------------------------------------------------------------------===//
For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//
We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){ P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//
Add support for conditional increments, and other related patterns. Instead
of:

	movl 136(%esp), %eax
	cmpl $0, %eax
	je LBB16_2	#cond_next
LBB16_1:	#cond_true
	incl _foo
LBB16_2:	#cond_next

emit:
	movl	_foo, %eax
	cmpl	$1, %edi
	sbbl	$-1, %eax
	movl	%eax, _foo

//===---------------------------------------------------------------------===//
Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs. We could even make an intrinsic for this
if anyone cared enough about sincos.
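A small illustration of the first combine (function name ours): sin and cos of
the same argument should become one sincos call:

#include <math.h>

void polar_to_xy(double r, double theta, double *x, double *y) {
  *x = r * cos(theta);   /* cos(theta) ...                              */
  *y = r * sin(theta);   /* ... and sin(theta): one sincos(theta,&s,&c) */
}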
//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32)
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   else
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize the reg->size doesn't alias reg->node[i],
but this requires TBAA.

//===---------------------------------------------------------------------===//
This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//
These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}
This sort of thing should be added to the loop idiom pass.
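All four are equivalent to a single ctpop; a sketch with the GCC/Clang builtin
(function name ours):

unsigned countbits_builtin(unsigned v) {
  return __builtin_popcount(v);   /* becomes llvm.ctpop / a popcnt insn */
}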
This loop isn't converted to a memset (note that the stores walk backwards
through the buffer):

void f(char *dest, int n) {
    for (int i = 0; i < n; ++i) {
        dest[n-i-1] = 0;
    }
}

//===---------------------------------------------------------------------===//
These should turn into single 16-bit (unaligned?) loads on little/big endian
processors:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//
-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//
[LOOP OPTIMIZATION]

SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization and
variable trip count loop unrolling (since it has a constant trip count). ICC
apparently produces this very nice code with -ffast-math:

..B1.70:                        # Preds ..B1.70 ..B1.69
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       mulpd     %xmm0, %xmm1                                  #108.2
       addl      $8, %edx                                      #
       cmpl      $131072, %edx                                 #108.2
       jb        ..B1.70       # Prob 99%                      #108.2

It would be better to count down to zero, but this is a lot better than what we
do.

//===---------------------------------------------------------------------===//
typedef unsigned U32;
typedef unsigned long long U64;

int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//
LSR should know what GPR types a target has from TargetData. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two near identical IV's (after promotion) on PPC/ARM:

	add r2, r2, #1   <- [0,+,1]
	sub r0, r0, #1   <- [0,-,1]

LSR should reuse the "+" IV for the exit test.

//===---------------------------------------------------------------------===//
Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
	                    [ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//
Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative. "return foo() << 1" can be tail recursion eliminated.
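A sketch (our naming) of the accumulator form TRE could produce for pow2m1,
treating the multiply as a shift and hence associative:

int pow2m1_iter(int n) {
  int acc = 0, scale = 1;
  for (; n != 0; --n) {
    acc += scale;   /* this frame's "+ 1", pre-scaled by the outer frames */
    scale *= 2;     /* the "2 *" applied by each enclosing frame */
  }
  return acc;       /* pow2m1(n) == 2^n - 1 */
}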
//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//
We should investigate an instruction sinking pass. Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:
_foo:
	subl	$28, %esp
	call	"L1$pb"
"L1$pb":
	popl	%eax
	cmpl	$0, 32(%esp)
	je	LBB1_2	# cond_true
LBB1_1:	# return
	# ...
	addl	$28, %esp
	ret
LBB1_2:	# cond_true
...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//
Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//
We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.
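The fabs case is the same trick with AND instead of XOR, clearing the sign bit
rather than flipping it (sketch):

void fabs_loop(unsigned long long *a, int n) {
  for (int i = 0; i < n; ++i)
    a[i] &= ~(1ULL << 63);   /* load+fabs+store as an integer AND */
}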
//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-m64 -O3 -fno-exceptions -static -fomit-frame-pointer) a long sequence
of narrow loads, shifts, and ors starting with:

__Z9GetHotKeyv:                         ## @_Z9GetHotKeyv
	movq	_m_HotKey@GOTPCREL(%rip), %rax
	...

//===---------------------------------------------------------------------===//
We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//
For a local array initialized as:

  long long input[8] = {1,0,1,0,1,0,1,0};

Clang compiles this into:

  call void @llvm.memset.p0i8.i64(i8* %tmp, i8 0, i64 64, i32 16, i1 false)
  %0 = getelementptr [8 x i64]* %input, i64 0, i64 0
  store i64 1, i64* %0, align 16
  %1 = getelementptr [8 x i64]* %input, i64 0, i64 2
  store i64 1, i64* %1, align 16
  %2 = getelementptr [8 x i64]* %input, i64 0, i64 4
  store i64 1, i64* %2, align 16
  %3 = getelementptr [8 x i64]* %input, i64 0, i64 6
  store i64 1, i64* %3, align 16

Which gets codegen'd into:

	pxor	%xmm0, %xmm0
	movaps	%xmm0, -16(%rbp)
	movaps	%xmm0, -32(%rbp)
	movaps	%xmm0, -48(%rbp)
	movaps	%xmm0, -64(%rbp)
	movq	$1, -64(%rbp)
	movq	$1, -48(%rbp)
	movq	$1, -32(%rbp)
	movq	$1, -16(%rbp)

It would be better to have 4 movq's of 0 instead of the movaps's.

//===---------------------------------------------------------------------===//
http://llvm.org/PR717:

The following code should compile into "ret int undef". Instead, LLVM
produces "ret int 0":

int f() {
  int x = 4;
  int y;
  if (x == 3) y = 0;
  return y;
}

//===---------------------------------------------------------------------===//
The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//
We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

//===---------------------------------------------------------------------===//
This (and similar related idioms):

unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:

define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %shl5 = shl i32 %conv, 16
  %shl9 = shl i32 %conv, 24
  %or = or i32 %shl9, %conv
  %or6 = or i32 %or, %shl5
  %or10 = or i32 %or6, %shl
  ret i32 %or10
}

it would be better as:

unsigned int bar(unsigned char i) {
  unsigned int j=i | (i << 8);
  return j | (j<<16);
}

aka:

define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %or = or i32 %shl, %conv
  %shl5 = shl i32 %or, 16
  %or6 = or i32 %shl5, %or
  ret i32 %or6
}

or even i*0x01010101, depending on the speed of the multiplier. The best way to
handle this is to canonicalize it to a multiply in IR and have codegen handle
lowering multiplies to shifts on cpus where shifts are faster.

//===---------------------------------------------------------------------===//
We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.
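Spelled out in C (sketch; note strcpy also copies the terminating NUL, so the
merge really applies to a memcpy of strlen(b)+1 bytes):

#include <string.h>

void before(char *a, const char *b) {
  memcpy(a, b, strlen(b) + 1);   /* two libcalls, length recomputed */
}
void after(char *a, const char *b) {
  strcpy(a, b);                  /* one libcall, same effect */
}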
723 //===---------------------------------------------------------------------===//
725 We compile this program: (from GCC PR11680)
726 http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487
728 Into code that runs the same speed in fast/slow modes, but both modes run 2x
729 slower than when compile with GCC (either 4.0 or 4.2):
731 $ llvm-g++ perf.cpp -O3 -fno-exceptions
733 1.821u 0.003s 0:01.82 100.0% 0+0k 0+0io 0pf+0w
735 $ g++ perf.cpp -O3 -fno-exceptions
737 0.821u 0.001s 0:00.82 100.0% 0+0k 0+0io 0pf+0w
739 It looks like we are making the same inlining decisions, so this may be raw
740 codegen badness or something else (haven't investigated).
742 //===---------------------------------------------------------------------===//
We miss some instcombines for stuff like this:

void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2).  */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//
Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper). Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:

void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount. Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".

//===---------------------------------------------------------------------===//
Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604). The kernel of the
testcase is:

    ...
    std::scanf("%d", &t.val);
    std::printf("%d\n", t.val);

//===---------------------------------------------------------------------===//
These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
#define PMD_MASK    (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
  if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
    f();
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return
i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing. Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//
#define abs(x) x>0?x:-x
int f(int x) {
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN. (With -fwrapv; abs(INT_MIN) wraps back
to INT_MIN, the only negative result.) Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

unsigned int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

unsigned int
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem > 20)
    bar ();
}

unsigned int
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  if ((a & ~7) > 8)
    bar ();
}

unsigned int
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)". Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b". Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane. Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c). Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9. Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31". Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)". Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
  (!tmp || decl_context == 1)

This allows recursive simplifications, tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0	; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true	; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24	; <i1> [#uses=1]

//===---------------------------------------------------------------------===//
[STORE SINKING]

Store sinking: This code:

void f (int n, int *cond, int *res) {
    int i;
    for (i = 0; i < n; i++)
        if (*cond)
            *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out. This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//
Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//
This simple function from 179.art:

int winner, numf2s;
struct { double y; int reset; } *Y;

void find_match() {
   int i;
   winner = 0;
   for (i=0;i<numf2s;i++)
       if (Y[i].y > Y[winner].y)
           winner = i;
}

Compiles into (with clang TBAA):

for.body:                                         ; preds = %for.inc, %bb.nph
  %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %for.inc ]
  %i.01718 = phi i32 [ 0, %bb.nph ], [ %i.01719, %for.inc ]
  %tmp4 = getelementptr inbounds %struct.anon* %tmp3, i64 %indvar, i32 0
  %tmp5 = load double* %tmp4, align 8, !tbaa !4
  %idxprom7 = sext i32 %i.01718 to i64
  %tmp10 = getelementptr inbounds %struct.anon* %tmp3, i64 %idxprom7, i32 0
  %tmp11 = load double* %tmp10, align 8, !tbaa !4
  %cmp12 = fcmp ogt double %tmp5, %tmp11
  br i1 %cmp12, label %if.then, label %for.inc

if.then:                                          ; preds = %for.body
  %i.017 = trunc i64 %indvar to i32
  br label %for.inc

for.inc:                                          ; preds = %for.body, %if.then
  %i.01719 = phi i32 [ %i.01718, %for.body ], [ %i.017, %if.then ]
  %indvar.next = add i64 %indvar, 1
  %exitcond = icmp eq i64 %indvar.next, %tmp22
  br i1 %exitcond, label %for.cond.for.end_crit_edge, label %for.body

It is good that we hoisted the reloads of numf2's, and Y out of the loop and
sunk the store to winner out.

However, this is awful on several levels: the conditional truncate in the loop
(-indvars at fault? why can't we completely promote the IV to i64?).

Beyond that, we have a partially redundant load in the loop: if "winner" (aka
%i.01718) isn't updated, we reload Y[winner].y the next time through the loop.
Similarly, the addressing that feeds it (including the sext) is redundant. In
the end we get this generated assembly:

LBB0_2:                                 ## %for.body
                                        ## =>This Inner Loop Header: Depth=1
	movsd	(%rdi), %xmm0
	movslq	%edx, %r8
	shlq	$4, %r8
	ucomisd	(%rcx,%r8), %xmm0
	...

All things considered this isn't too bad, but we shouldn't need the movslq or
the shlq instruction, or the load folded into ucomisd every time through the
loop.

On an x86-specific topic, if the loop can't be restructured, the movl should be
a cmov.

//===---------------------------------------------------------------------===//
[STORE SINKING]

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store. We need partially dead store sinking.

//===---------------------------------------------------------------------===//
[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation. The code we get looks like (g is on the stack):

bb2:		; preds = %bb1
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is partially redundant, and in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.

//===---------------------------------------------------------------------===//
There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]
loadpre5.c

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c

//===---------------------------------------------------------------------===//
[LOAD PRE / STORE SINKING / SPEC HACK]

This is a chunk of code from 456.hmmer:

int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
      int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc;
  for (k = 1; k <= M; k++) {
    mc[k] = mpp[k-1] + tpmm[k-1];
    if ((sc = ip[k-1] + tpim[k-1]) > mc[k]) mc[k] = sc;
    if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k]) mc[k] = sc;
    if ((sc = xmb + bp[k]) > mc[k]) mc[k] = sc;
    mc[k] += ms[k];
  }
}

It is very profitable for this benchmark to turn the conditional stores to mc[k]
into a conditional move (select instr in IR) and allow the final store to do the
store. See GCC PR27313 for more details. Note that this is valid to xform even
with the new C++ memory model, since mc[k] is previously loaded and later
stored.
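A sketch of the rewritten loop (the local variable m is ours): the compares
become selects on a local, and a single unconditional store writes mc[k].

void f_opt(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
           int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc;
  for (k = 1; k <= M; k++) {
    int m = mpp[k-1] + tpmm[k-1];
    if ((sc = ip[k-1] + tpim[k-1]) > m) m = sc;   /* becomes a select */
    if ((sc = dpp[k-1] + tpdm[k-1]) > m) m = sc;  /* becomes a select */
    if ((sc = xmb + bp[k]) > m) m = sc;           /* becomes a select */
    mc[k] = m + ms[k];                            /* one final store  */
  }
}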
//===---------------------------------------------------------------------===//

[SCALAR PRE]
There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.

//===---------------------------------------------------------------------===//
There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite. For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.

//===---------------------------------------------------------------------===//
[ALIAS ANALYSIS]

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign. At the least it should
no-capture its pointer argument, at best, we should know that the out-value
result doesn't point to anything (like malloc). One example of this is in
SingleSource/Benchmarks/Misc/dt.c

//===---------------------------------------------------------------------===//
Interesting missed case because of control flow flattening (should be 2 loads):

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis

we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//
simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character. See PR3253 for some notes on
this.

456.hmmer apparently uses strcspn and strspn a lot. 471.omnetpp uses strspn.
//===---------------------------------------------------------------------===//

"gas" uses this idiom:
  else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
..
  else if (strchr ("<>", *intel_parser.op_string))

Those should be turned into a switch.
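A sketch of the desired form for the second test (note that the switch must
not treat '\0' as a match, whereas strchr would find the terminator):

int is_angle_op(char c) {
  switch (c) {
  case '<': case '>':
    return 1;
  default:
    return 0;
  }
}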
//===---------------------------------------------------------------------===//

252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple of reasons. First, the memcpy+strlen strlen
can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

Because the destination was just copied into the specified memory buffer. This,
in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded. Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:		; preds = %_ZN18eonImageCalculatorC1Ev.exit
	%682 = getelementptr i8** %argv, i32 6		; <i8**> [#uses=2]
	%683 = load i8** %682, align 4		; <i8*> [#uses=4]
	%684 = load i8* %683, align 1		; <i8> [#uses=1]
	%685 = icmp eq i8 %684, 0		; <i1> [#uses=1]
	br i1 %685, label %bb10, label %bb9

bb9:		; preds = %bb8
	%686 = call i32 @strlen(i8* %683) nounwind readonly
	%687 = icmp ugt i32 %686, 254		; <i1> [#uses=1]
	br i1 %687, label %bb10, label %bb11

bb10:		; preds = %bb9, %bb8
	%688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//
I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
which looks like:

        %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0

bb62:		; preds = %bb55, %bb53
	%promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
	%171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
	%172 = add i32 %171, -1		; <i32> [#uses=1]
	%173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
	...
	br i1 %or.cond, label %bb65, label %bb72

bb65:		; preds = %bb62
	store i8 0, i8* %173, align 1
	...

bb72:		; preds = %bb65, %bb62
	%trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
	%177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call. At worst, we could shove the %177 strlen call
up into the bb65 block moving it out of the bb62->bb72 path. However, note
that bb65 stores to the string, zeroing out the last byte. This means that on
that path the value of %177 is actually just %171-1. A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//
186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
       i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call i32 @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(...,  globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out". Once all the printfs
stop using "out", all that is left is the memcpy's into it. This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//
This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift.
For example, on x86 this currently gets this:

	movb	(%eax), %al
	sarb	$5, %al
	movsbl	%al, %eax

while it could get this:

	movsbl	(%eax), %eax
	sarl	$5, %eax

//===---------------------------------------------------------------------===//
int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants, what is the rule for even? (c - x == x is
2*x == c; for odd c that has no solution in wrapping arithmetic, so the compare
folds to false, while for even c there are two wrapping solutions, x == c/2
and x == c/2 + 2^31 for i32, which is why test2 isn't simply x == 1.)

//===---------------------------------------------------------------------===//
PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into  { {{}}, float } could be turned into a store
to the float directly.

//===---------------------------------------------------------------------===//
The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//
The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//
int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

Generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

         b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

     b = (b & ~0x80) | (a & 0x40) << 1;
//===---------------------------------------------------------------------===//

These two functions produce different code. They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//
IPSCCP does not currently propagate argument dependent constants through
functions where it does not know all of the callers. This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  ...
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant. Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.
//===---------------------------------------------------------------------===//

The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic. This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//
functionattrs doesn't know much about memcpy/memset. This function should be
marked readnone rather than readonly, since it only twiddles local memory, but
functionattrs doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
int foo() {
  int i = 0, j = 1;
  struct X x, y;
  int **p;
  y.p = &i;
  x.q = &j;
  p = __builtin_memcpy (&x, &y, sizeof (int *));
  return **p;
}

This can be seen at:
$ clang t.c -S -o - -mkernel -O0 -emit-llvm | opt -functionattrs -S

//===---------------------------------------------------------------------===//
Missed instcombine transformation:
define i1 @a(i32 %x) nounwind readnone {
entry:
  %cmp = icmp eq i32 %x, 30
  %sub = add i32 %x, -30
  %cmp2 = icmp ugt i32 %sub, 9
  %or = or i1 %cmp, %cmp2
  ret i1 %or
}
This should be optimized to a single compare. Testcase derived from gcc.

//===---------------------------------------------------------------------===//
Missed instcombine or reassociate transformation:
int a(int a, int b) { return (a==12)&(b>47)&(b<58); }

The sgt and slt should be combined into a single comparison. Testcase derived
from gcc.

//===---------------------------------------------------------------------===//
Missed instcombine transformation:

  %382 = srem i32 %tmp14.i, 64                    ; [#uses=1]
  %383 = zext i32 %382 to i64                     ; [#uses=1]
  %384 = shl i64 %381, %383                       ; [#uses=1]
  %385 = icmp slt i32 %tmp14.i, 64                ; [#uses=1]

The srem can be transformed to an and because if %tmp14.i is negative, the
shift is undefined. Testcase derived from 403.gcc.

//===---------------------------------------------------------------------===//
This is a range comparison on a divided result (from 403.gcc):

  %1337 = sdiv i32 %1336, 8                       ; [#uses=1]
  %.off.i208 = add i32 %1336, 7                   ; [#uses=1]
  %1338 = icmp ult i32 %.off.i208, 15             ; [#uses=1]

We already catch this (removing the sdiv) if there isn't an add, we should
handle the 'add' as well. This is a common idiom with its builtin_alloca code.

int a(int x) { return (unsigned)(x/16+7) < 15; }

Another similar case involves truncations on 64-bit targets:

  %361 = sdiv i64 %.046, 8                        ; [#uses=1]
  %362 = trunc i64 %361 to i32                    ; [#uses=2]
...
  %367 = icmp eq i32 %362, 0                      ; [#uses=1]

//===---------------------------------------------------------------------===//
Missed instcombine/dagcombine transformation:
define void @lshift_lt(i8 zeroext %a) nounwind {
entry:
  %conv = zext i8 %a to i32
  %shl = shl i32 %conv, 3
  %cmp = icmp ult i32 %shl, 33
  br i1 %cmp, label %if.then, label %if.end

if.then:
  tail call void @bar() nounwind
  ret void

if.end:
  ret void
}
declare void @bar() nounwind

The shift should be eliminated. Testcase derived from gcc.

//===---------------------------------------------------------------------===//
These compile into different code, one gets recognized as a switch and the
other doesn't due to phase ordering issues (PR6212):

int test1(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  else if (mainType == 9)
    subType = 6;
  else if (mainType == 11)
    subType = 9;
  return subType;
}

int test2(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  if (mainType == 9)
    subType = 6;
  if (mainType == 11)
    subType = 9;
  return subType;
}

//===---------------------------------------------------------------------===//
The following test case (from PR6576):

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %cond1 = icmp eq i32 %b, 0                      ; <i1> [#uses=1]
  br i1 %cond1, label %exit, label %bb.nph
bb.nph:                                           ; preds = %entry
  %tmp = mul i32 %b, %a                           ; <i32> [#uses=1]
  br label %exit
exit:                                             ; preds = %entry, %bb.nph
  %conv = phi i32 [ %tmp, %bb.nph ], [ 0, %entry ]
  ret i32 %conv

could be reduced to:

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %tmp = mul i32 %b, %a
  ret i32 %tmp
}

//===---------------------------------------------------------------------===//
We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.

Another interesting case is that something related could be used for variables
that go const after their ctor has finished. In these cases, globalopt (which
can statically run the constructor) could mark the global const (so it gets put
in the readonly section). A testcase would be:

#include <complex>
using namespace std;
const complex<char> should_be_in_rodata (42,-42);
complex<char> should_be_in_data (42,-42);
complex<char> should_be_in_bss;

Where we currently evaluate the ctors but the globals don't become const because
the optimizer doesn't know they "become const" after the ctor is done. See
GCC PR4131 for more examples.
//===---------------------------------------------------------------------===//

In this code:

int test(int x) {
  return x > 1 ? x : 1;
}

LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
and cheaper on most targets.

LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.

//===---------------------------------------------------------------------===//
Take the following testcase on x86-64 (similar testcases exist for all targets
with addc/adde):

define void @a(i64* nocapture %s, i64* nocapture %t, i64 %a, i64 %b,
               i64 %c) nounwind {
entry:
  %0 = zext i64 %a to i128                        ; <i128> [#uses=1]
  %1 = zext i64 %b to i128                        ; <i128> [#uses=1]
  %2 = add i128 %1, %0                            ; <i128> [#uses=2]
  %3 = zext i64 %c to i128                        ; <i128> [#uses=1]
  %4 = shl i128 %3, 64                            ; <i128> [#uses=1]
  %5 = add i128 %4, %2                            ; <i128> [#uses=1]
  %6 = lshr i128 %5, 64                           ; <i128> [#uses=1]
  %7 = trunc i128 %6 to i64                       ; <i64> [#uses=1]
  store i64 %7, i64* %s, align 8
  %8 = trunc i128 %2 to i64                       ; <i64> [#uses=1]
  store i64 %8, i64* %t, align 8
  ret void
}

The generated SelectionDAG has an ADD of an ADDE, where both operands of the
ADDE are zero. Replacing one of the operands of the ADDE with the other operand
of the ADD, and replacing the ADD with the ADDE, should give the desired result.

(That said, we are doing a lot better than gcc on this testcase. :) )

//===---------------------------------------------------------------------===//
Switch lowering generates less than ideal code for the following switch:
define void @a(i32 %x) nounwind {
entry:
  switch i32 %x, label %if.end [
    i32 0, label %if.then
    i32 1, label %if.then
    i32 2, label %if.then
    i32 3, label %if.then
    i32 5, label %if.then
  ]
if.then:
  tail call void @foo() nounwind
  ret void
if.end:
  ret void
}
declare void @foo()

Generated code on x86-64 (other platforms give similar results):
a:
	cmpl	$5, %edi
	ja	.LBB0_2
	movl	%edi, %eax
	movl	$47, %ecx
	btq	%rax, %rcx
	jb	.LBB0_3
.LBB0_2:
	ret
.LBB0_3:
	jmp	foo  # TAILCALL

The movl+movl+btq+jb could be simplified to a cmpl+jne.

Or, if we wanted to be really clever, we could simplify the whole thing to
a form that eliminates one of the branches entirely.

//===---------------------------------------------------------------------===//
Given a branch where the two target blocks are identical ("ret i32 %b" in
both), simplifycfg will simplify them away. But not so for a switch statement:

define i32 @f(i32 %a, i32 %b) nounwind readnone {
entry:
  switch i32 %a, label %bb3 [
    i32 4, label %bb
    i32 6, label %bb
  ]

bb:		; preds = %entry, %entry
  ret i32 %b

bb3:		; preds = %entry
  ret i32 %b
}

//===---------------------------------------------------------------------===//
clang -O3 fails to devirtualize this virtual inheritance case: (GCC PR45875)
Looks related to PR3100

struct c1 {};
struct c10 : c1{
  virtual void foo ();
};
struct c11 : c10, c1{
  virtual void f6 ();
};
struct c28 : virtual c11{
  void f6 ();
};
void check_c28 () {
  c28 obj;
  c11 *ptr = &obj;
  ptr->f6 ();
}

//===---------------------------------------------------------------------===//
int foo(int a) { return (a & (~15)) / 16; }

becomes:

define i32 @foo(i32 %a) nounwind readnone ssp {
entry:
  %and = and i32 %a, -16
  %div = sdiv i32 %and, 16
  ret i32 %div
}

but this code (X & -A)/A is X >> log2(A) when A is a power of 2, so this case
should be instcombined into just "a >> 4".

We do get this at the codegen level, so something knows about it, but
instcombine should catch it earlier:

_foo:
	movl	%edi, %eax
	sarl	$4, %eax
	ret

//===---------------------------------------------------------------------===//
This code (from GCC PR28685):

int test(int a, int b) {
  if (a < b)
    return 1;
  return a == b;
}

currently compiles into:

define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %cmp = icmp slt i32 %a, %b
  br i1 %cmp, label %return, label %if.end

if.end:                                           ; preds = %entry
  %cmp5 = icmp eq i32 %a, %b
  %conv6 = zext i1 %cmp5 to i32
  br label %return

return:                                           ; preds = %entry, %if.end
  %retval.0 = phi i32 [ %conv6, %if.end ], [ 1, %entry ]
  ret i32 %retval.0
}

when it could be a single comparison:

define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = icmp sle i32 %a, %b
  %retval = zext i1 %0 to i32
  ret i32 %retval
}

//===---------------------------------------------------------------------===//
This code can be seen in viterbi:

  %64 = call noalias i8* @malloc(i64 %62) nounwind

  %67 = call i64 @llvm.objectsize.i64(i8* %64, i1 false) nounwind
  %68 = call i8* @__memset_chk(i8* %64, i32 0, i64 %62, i64 %67) nounwind

llvm.objectsize.i64 should be taught about malloc/calloc, allowing it to
fold to %62. This is a security win (overflows of malloc will get caught)
and also a performance win by exposing more memsets to the optimizer.

This occurs several times in viterbi.

Note that this would change the semantics of @llvm.objectsize which by its
current definition always folds to a constant. We also should make sure that
we remove checking in code like

  char *p = malloc(strlen(s)+1);
  __strcpy_chk(p, s, __builtin_objectsize(p, 0));

//===---------------------------------------------------------------------===//
This code (from Benchmarks/Dhrystone/dry.c):

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %sext = shl i32 %0, 24
  %conv = ashr i32 %sext, 24
  %sext6 = shl i32 %1, 24
  %conv4 = ashr i32 %sext6, 24
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

Should be simplified into something like:

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %sext = shl i32 %0, 24
  %conv = and i32 %sext, 0xFF000000
  %sext6 = shl i32 %1, 24
  %conv4 = and i32 %sext6, 0xFF000000
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

and then to:

define i32 @Func1(i32, i32) nounwind readnone optsize ssp {
entry:
  %conv = and i32 %0, 0xFF
  %conv4 = and i32 %1, 0xFF
  %cmp = icmp eq i32 %conv, %conv4
  %. = select i1 %cmp, i32 10000, i32 0
  ret i32 %.
}

//===---------------------------------------------------------------------===//
clang -O3 currently compiles this code

int g(unsigned int a) {
  unsigned int c[100];
  c[10] = a;
  c[11] = a;
  unsigned int b = c[10] + c[11];
  if (b > a*2)
    a = 4;
  else
    a = 8;
  return a + 7;
}

into

define i32 @g(i32 %a) nounwind readnone {
  %add = shl i32 %a, 1
  %mul = shl i32 %a, 1
  %cmp = icmp ugt i32 %add, %mul
  %a.addr.0 = select i1 %cmp, i32 11, i32 15
  ret i32 %a.addr.0
}

The icmp should fold to false. This CSE opportunity is only available
after GVN and InstCombine have run.

//===---------------------------------------------------------------------===//
memcpyopt should turn this:

define i8* @test10(i32 %x) {
  %alloc = call noalias i8* @malloc(i32 %x) nounwind
  call void @llvm.memset.p0i8.i32(i8* %alloc, i8 0, i32 %x, i32 1, i1 false)
  ret i8* %alloc
}

into a call to calloc. We should make sure that we analyze calloc as
aggressively as malloc though.
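In C terms the rewrite is (sketch):

#include <stdlib.h>

void *test10_opt(size_t x) {
  return calloc(x, 1);   /* malloc + full-length memset(0) == calloc */
}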
2060 //===---------------------------------------------------------------------===//
2062 clang -O3 doesn't optimize this:
2064 void f1(int* begin, int* end) {
2065 std::fill(begin, end, 0);
2068 into a memset. This is PR8942.
//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

void f(int N) {
  std::vector<int> v(N);

  extern void sink(void*); sink(&v);
}

into:

define void @_Z1fi(i32 %N) nounwind {
entry:
  %v2 = alloca [3 x i32*], align 8
  %v2.sub = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 0
  %tmpcast = bitcast [3 x i32*]* %v2 to %"class.std::vector"*
  %conv = sext i32 %N to i64
  store i32* null, i32** %v2.sub, align 8, !tbaa !0
  %tmp3.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 1
  store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
  %tmp4.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 2
  store i32* null, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
  %cmp.i.i.i.i = icmp eq i32 %N, 0
  br i1 %cmp.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i, label %cond.true.i.i.i.i

_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i: ; preds = %entry
  store i32* null, i32** %v2.sub, align 8, !tbaa !0
  store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
  %add.ptr.i5.i.i = getelementptr inbounds i32* null, i64 %conv
  store i32* %add.ptr.i5.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
  br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit

cond.true.i.i.i.i:                                ; preds = %entry
  %cmp.i.i.i.i.i = icmp slt i32 %N, 0
  br i1 %cmp.i.i.i.i.i, label %if.then.i.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i

if.then.i.i.i.i.i:                                ; preds = %cond.true.i.i.i.i
  call void @_ZSt17__throw_bad_allocv() noreturn nounwind
  unreachable

_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i:    ; preds = %cond.true.i.i.i.i
  %mul.i.i.i.i.i = shl i64 %conv, 2
  %call3.i.i.i.i.i = call noalias i8* @_Znwm(i64 %mul.i.i.i.i.i) nounwind
  %0 = bitcast i8* %call3.i.i.i.i.i to i32*
  store i32* %0, i32** %v2.sub, align 8, !tbaa !0
  store i32* %0, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
  %add.ptr.i.i.i = getelementptr inbounds i32* %0, i64 %conv
  store i32* %add.ptr.i.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
  call void @llvm.memset.p0i8.i64(i8* %call3.i.i.i.i.i, i8 0, i64 %mul.i.i.i.i.i, i32 4, i1 false)
  br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit

This is just the handling of the construction of the vector. Most surprising
here is the fact that all three null stores in %entry are dead (because we do
no cross-block DSE).

Also surprising is that %conv isn't simplified to 0 in %....exit.thread.i.i.
This is because the client of LazyValueInfo doesn't simplify all instruction
operands, just selected ones.

//===---------------------------------------------------------------------===//
clang -O3 -fno-exceptions currently compiles this code:

void f(char* a, int n) {
  __builtin_memset(a, 0, n);
  for (int i = 0; i < n; ++i)
    a[i] = 0;
}

into:

define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
entry:
  %conv = sext i32 %n to i64
  tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
  %cmp8 = icmp sgt i32 %n, 0
  br i1 %cmp8, label %for.body.lr.ph, label %for.end

for.body.lr.ph:                                   ; preds = %entry
  %tmp10 = add i32 %n, -1
  %tmp11 = zext i32 %tmp10 to i64
  %tmp12 = add i64 %tmp11, 1
  call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %tmp12, i32 1, i1 false)
  br label %for.end

for.end:                                          ; preds = %for.body.lr.ph, %entry
  ret void
}

This shouldn't need the ((zext (%n - 1)) + 1) game, and it should ideally fold
the two memsets together. The issue with %n seems to stem from poor handling
of the original loop.

To simplify this, we need SCEV to know that "n != 0" because of the dominating
conditional. That would turn the second memset into a simple memset of 'n'.
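
With that fact, the second memset covers exactly %conv bytes, matching the
first, and the two can be merged; the whole function would reduce to something
like this (a sketch):

define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
entry:
  %conv = sext i32 %n to i64
  tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
  ret void
}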
//===---------------------------------------------------------------------===//

clang -O3 -fno-exceptions currently compiles this code:

struct S {
  unsigned short m1, m2;
  unsigned char m3, m4;
};

void f(int N) {
  std::vector<S> v(N);
  extern void sink(void*); sink(&v);
}

into poor code for zero-initializing 'v' when N is >0. The problem is that
S is only 6 bytes, but each element is 8 byte-aligned. We generate a loop and
4 stores on each iteration. If the struct were 8 bytes, this gets turned into
a memset.

In order to handle this we have to:

A) Teach clang to generate metadata for memsets of structs that have holes in
   them.
B) Teach clang to use such a memset for zero init of this struct (since it has
   a hole), instead of doing elementwise zeroing (see the sketch below).
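
Since the padding bytes may be freely zeroed, the per-element stores could then
become one memset over the whole buffer, something like this (a sketch; %buf
and %conv are illustrative names for the allocation and the sign-extended N):

  %size = shl i64 %conv, 3    ; 8 bytes per element, padding included
  call void @llvm.memset.p0i8.i64(i8* %buf, i8 0, i64 %size, i32 8, i1 false)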
//===---------------------------------------------------------------------===//

clang -O3 currently compiles this code:

extern const int magic;
double f() { return 0.0 * magic; }

into:

@magic = external constant i32

define double @_Z1fv() nounwind readnone {
entry:
  %tmp = load i32* @magic, align 4, !tbaa !0
  %conv = sitofp i32 %tmp to double
  %mul = fmul double %conv, 0.000000e+00
  ret double %mul
}

We should be able to fold away this fmul to 0.0. More generally, fmul(x, 0.0)
can be folded to 0.0 if we can prove that the LHS is not -0.0, not a NaN, and
not an INF. The CannotBeNegativeZero predicate in value tracking should be
extended to support general "fpclassify" operations that can return
yes/no/unknown for each of these predicates.

In this predicate, we know that uitofp is trivially never NaN or -0.0, and
we know that it isn't +/-Inf if the floating point type has enough exponent bits
to represent the largest integer value as < inf.
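
For example, the extended predicate could prove this fold safe (a sketch; any
i32 value converts to a finite, non-negative double, so +0.0 is the exact
result):

  %conv = uitofp i32 %x to double         ; never NaN, -0.0, or +/-Inf
  %mul = fmul double %conv, 0.000000e+00  ; could fold to +0.0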
//===---------------------------------------------------------------------===//

When optimizing a transformation that can change the sign of 0.0 (such as the
0.0*val -> 0.0 transformation above), it might be provable that the sign of the
expression doesn't matter. For example, by the above rules, we can't transform
fmul(sitofp(x), 0.0) into 0.0, because x might be -1 and the result of the
expression is defined to be -0.0.

If we look at the uses of the fmul, for example, we might be able to prove that
all uses don't care about the sign of zero. For example, if we have:

  fadd(fmul(sitofp(x), 0.0), 2.0)

Since we know that adding 2.0 doesn't care about the sign of any zeros in its
operand, we can transform the fmul to 0.0, and then the fadd to 2.0.
//===---------------------------------------------------------------------===//

We should enhance memcpy/memmove/memset to allow a metadata node on them
indicating that some bytes of the transfer are undefined. This is useful for
frontends like clang when lowering struct copies, when some elements of the
struct are undefined. Consider something like this:

struct x {
  char a;
  int b[4];
};
void foo(struct x*P);
struct x testfunc() {
  struct x V1, V2;
  foo(&V1);
  V2 = V1;

  return V2;
}

We currently compile this to:

$ clang t.c -S -o - -O0 -emit-llvm | opt -scalarrepl -S

%struct.x = type { i8, [4 x i32] }

define void @testfunc(%struct.x* sret %agg.result) nounwind ssp {
entry:
  %V1 = alloca %struct.x, align 4
  call void @foo(%struct.x* %V1)
  %tmp1 = bitcast %struct.x* %V1 to i8*
  %0 = bitcast %struct.x* %V1 to i160*
  %srcval1 = load i160* %0, align 4
  %tmp2 = bitcast %struct.x* %agg.result to i8*
  %1 = bitcast %struct.x* %agg.result to i160*
  store i160 %srcval1, i160* %1, align 4
  ret void
}

This happens because SRoA sees that the temp alloca is being memcpy'd into
and out of, and since it has holes it has to be conservative. If we knew about
the holes, then this could be much better.

Having information about these holes would also improve memcpy (etc) lowering at
llc time when it gets inlined, because we can use smaller transfers. This also
avoids partial register stalls in some important cases.
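
A hypothetical encoding (the metadata name and operand layout here are invented
purely for illustration) might mark the 3 padding bytes after the i8 field as
undefined:

  ; hypothetical: bytes [1, 4) of the 20-byte transfer are undefined padding
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp2, i8* %tmp1, i64 20, i32 4,
                                       i1 false), !undef.bytes !1
  !1 = metadata !{i64 1, i64 3}  ; hypothetical: start offset, length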
//===---------------------------------------------------------------------===//

We miss an optzn when lowering divide by some constants. For example:

  int test(int x) { return x/10; }

is compiled to:

_test:                                  ## @test
	movslq	%edi, %rax
	imulq	$1717986919, %rax, %rax ## imm = 0x66666667
	movq	%rax, %rcx
	shrq	$63, %rcx
**	shrq	$32, %rax
**	sarl	$2, %eax
	addl	%ecx, %eax
	ret

The two starred instructions could be replaced with a single "sarq $34, %rax".
This occurs in 186.crafty very frequently.
//===---------------------------------------------------------------------===//