Target Independent Opportunities:

//===---------------------------------------------------------------------===//
We should recognize idioms for add-with-carry and turn it into the appropriate
intrinsics.  This example:

unsigned add32carry(unsigned sum, unsigned x) {
  unsigned z = sum + x;
  if (sum + x < x)
    z++;
  return z;
}

Compiles to: clang t.c -S -o - -O3 -fomit-frame-pointer -m64 -mkernel

_add32carry:                            ## @add32carry
	leal	(%rsi,%rdi), %eax
	...
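
Two common source-level spellings of the carry computation that such idiom
recognition would ideally catch (a sketch; the function names are ours):

/* Both compute the sum plus the carry-out of sum + x. */
unsigned addcarry_cmp(unsigned sum, unsigned x) {
  unsigned z = sum + x;
  return z + (z < x);                       /* carry via unsigned compare */
}

unsigned addcarry_wide(unsigned sum, unsigned x) {
  unsigned long long w = (unsigned long long)sum + x;
  return (unsigned)w + (unsigned)(w >> 32); /* carry via widening */
}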
//===---------------------------------------------------------------------===//

Dead argument elimination should be enhanced to handle cases when an argument is
dead to an externally visible function.  Though the argument can't be removed
from the externally visible function, the caller doesn't need to pass it in.
For example in this testcase:

  void foo(int X) __attribute__((noinline));
  void foo(int X) { sideeffect(); }
  void bar(int A) { foo(A+1); }

We currently compile bar to:

define void @bar(i32 %A) nounwind ssp {
  %0 = add nsw i32 %A, 1                          ; <i32> [#uses=1]
  tail call void @foo(i32 %0) nounwind noinline ssp
  ret void
}

The add is dead, we could pass in 'i32 undef' instead.  This occurs for C++
templates etc, which usually have linkonce_odr/weak_odr linkage, not internal
linkage.
//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs.  Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.
//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.
//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)  This isn't
safe in general, even on darwin.  See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc
right).
//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.
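
In C terms, the shape of the problem looks like this (an illustrative
sketch, not the original testcase):

/* Ideally the load of *x folds into a read-modify-write (e.g.
   "addl %ecx, (%rdi)"), but the store's chain is a TokenFactor of
   both loads, so the fold is blocked. */
void f(int *x, int *y) {
  *x = *x + *y;
}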
There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.
//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:

long long tmp = 1;
for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.
//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
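
In C terms (a sketch; `is_negative` is our name): a sign test of a loaded
i32 only needs the byte containing the sign bit, so the 4-byte load can
shrink to a 1-byte load of the highest byte (byte 3 on a little-endian
target).

/* The sign test only needs the most significant byte of *P. */
int is_negative(int *P) {
  return *P < 0;
}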
//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
   return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

First, the intrinsic needs to be extended to support integers, and second the
code generator needs to be enhanced to lower these to multiplication trees.
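
For instance, once llvm.powi handles integers, the code generator could
lower powi(X, 8) as a balanced tree (a sketch; the function name is ours):

/* Balanced lowering of X^8: 3 multiplies instead of the 7 in X*X*...*X. */
int pow8(int X) {
  int X2 = X * X;    /* X^2 */
  int X4 = X2 * X2;  /* X^4 */
  return X4 * X4;    /* X^8 */
}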
//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

This is blocked on not handling X*X*X -> powi(X, 3) (see note above).  The issue
is that we end up getting t = 2*X  s = t*t  and don't turn this into 4*X*X,
which is the same number of multiplies and is canonical, because the 2*X has
multiple uses.  Here's a simple example:

define i32 @test15(i32 %X1) {
  %B = mul i32 %X1, 47               ; X1*47
  %C = mul i32 %B, %B
  ret i32 %C
}
//===---------------------------------------------------------------------===//

Reassociate should handle the example in GCC PR16157:

extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
void f () {  /* this can be optimized to four additions... */
        b4 = a4 + a3 + a2 + a1 + a0;
        b3 = a3 + a2 + a1 + a0;
        b2 = a2 + a1 + a0;
        b1 = a1 + a0;
        b0 = a0;
}

This requires reassociating to forms of expressions that are already available,
something that reassoc doesn't think about yet.
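
Concretely, a reassociation that reuses the already-available sums needs only
four additions (a sketch of the optimized form of f; `f_opt` is our name):

void f_opt () {
  b0 = a0;
  b1 = a1 + b0;  /* each sum builds on the previous one */
  b2 = a2 + b1;
  b3 = a3 + b2;
  b4 = a4 + b3;
}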
//===---------------------------------------------------------------------===//

This function: (derived from GCC PR19988)
double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x + -0.1234 * y));
}

compiles to code with two multiplies by the constant:

	mulsd	LCPI1_1(%rip), %xmm1
	mulsd	LCPI1_0(%rip), %xmm2

Reassociate should be able to turn it into:

double foo(double x, double y) {
  return ((x + 0.1234 * y) * (x - 0.1234 * y));
}

Which allows the multiply by constant to be CSE'd, producing:

	mulsd	LCPI1_0(%rip), %xmm1
	...

This doesn't need -ffast-math support at all.  This is particularly bad because
the llvm-gcc frontend is canonicalizing the latter into the former, but clang
doesn't have this problem.
//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j,int *l) {  return memcmp(j,l,4);  }
int h(int *j, int *l) {  return *j - *l; }

this could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.
//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
cases too.
//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works but can be overly conservative as the alignment of
specific vector types is target dependent.
//===---------------------------------------------------------------------===//

We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}
//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of:

	...
	je LBB16_2	#cond_next
	...
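
The source pattern in question looks like this (an illustrative sketch;
names are ours):

int count;

/* A conditional increment: ideally codegen'd without a branch,
   e.g. as a compare plus a conditional move or add-with-carry. */
void inc_if_zero(int x) {
  if (x == 0)
    count++;        /* equivalent to: count += (x == 0); */
}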
//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs.  We could even make an intrinsic for this
if anyone cared enough about sincos.
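
The pattern to combine, in C (a sketch; `polar` is our name):

#include <math.h>

/* Two libcalls with the same argument... */
void polar(double x, double *s, double *c) {
  *s = sin(x);
  *c = cos(x);    /* ...could become one: sincos(x, s, c); */
}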
//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize the reg->size doesn't alias reg->node[i],
but this requires TBAA.
//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

Neither is this (very standard idiom):

unsigned reverse2(unsigned n) {
  return (((n) << 24) | (((n) & 0xff00) << 8)
       | (((n) >> 8) & 0xff00) | ((n) >> 24));
}
//===---------------------------------------------------------------------===//

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This is a form of idiom recognition for loops, the same thing that could be
useful for recognizing memset/memcpy.
//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
machines:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X / C1 ), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.  See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.
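
In C terms, the transform folds the divide into the compare (a sketch; the
function is ours):

/* (X / 10) > 2 can become X > 29: the divide disappears from the compare. */
int ge_thirty(unsigned X) {
  return (X / 10) > 2;
}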
//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level.  We need a "loops to memcpy"
pass.
//===---------------------------------------------------------------------===//

SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
opportunities in its double_array_divs_variable function: it needs loop
interchange, memory promotion (which LICM already does), vectorization and
variable trip count loop unrolling (since it has a constant trip count). ICC
apparently produces this very nice code with -ffast-math:

..B1.70:                        # Preds ..B1.70 ..B1.69
        mulpd     %xmm0, %xmm1                                  #108.2
        mulpd     %xmm0, %xmm1                                  #108.2
        mulpd     %xmm0, %xmm1                                  #108.2
        mulpd     %xmm0, %xmm1                                  #108.2
        ...
        cmpl      $131072, %edx                                 #108.2
        jb        ..B1.70       # Prob 99%                      #108.2

It would be better to count down to zero, but this is a lot better than what we
do.
//===---------------------------------------------------------------------===//

This code:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.
//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has from TargetData.  This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i;  Y = i*4; }
}

produces two near identical IV's (after promotion) on PPC/ARM:

	...
	add r2, r2, #1  <- [0,+,1]
	sub r0, r0, #1  <- [0,-,1]
	...

LSR should reuse the "+" IV for the exit test.
//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                            [ %tmp.9, %then.1 ]
	ret i32 %result.0
}
//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative.  "return foo() << 1" can be tail recursion eliminated.
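
What the eliminated version would look like (a sketch; `pow2m1_iter` is our
name), treating the `2 *` as a pending shift that scales the `+ 1`:

int pow2m1_iter(int n) {
  int acc = 0, scale = 1;
  for (; n != 0; --n) {
    acc += scale;    /* the "+ 1", scaled by the pending multiplies */
    scale <<= 1;     /* the "* 2" turned into a shift */
  }
  return acc;
}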
//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}
//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass.  Consider this silly
example in pic mode:

	...
	je LBB1_2	# cond_true
	...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block.  It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it.  This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.
//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html
//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations.  On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar.  On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.
//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable.  For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer):

	movb	_m_HotKey+3, %cl
	movb	_m_HotKey+4, %dl
	movb	_m_HotKey+2, %ch
	...
	movzwl	_m_HotKey+4, %edx
	...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

	%struct.THotKey = type { i16, i8, i8, i8 }
	define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
		%tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
		%tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
		%tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
		%tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.
//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.
//===---------------------------------------------------------------------===//

Consider this initializer:

  long long input[8] = {1,1,1,1,1,1,1,1};

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able.  This is good, but the memcpy
gets lowered to load/stores in the code generator.  This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global.  This gives us atrocious code like this:

	movl	_C.0.1444-"L1$pb"+32(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+20(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+36(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+44(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+40(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+12(%eax), %ecx
	movl	_C.0.1444-"L1$pb"+4(%eax), %ecx
	...
//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The code in that report should compile into "ret int undef".  Instead, LLVM
produces "ret int 0".
//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop.  One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size.  The resultant code would then also be suitable for
exit value computation.
//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc.  On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough.  Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 9;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 10;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right.  On x86-32, a few of these
generate truly horrible code, instead of using shld and friends.  On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness.  PPC64 misses f, f5 and f6.  CellSPU aborts in isel.
//===---------------------------------------------------------------------===//

This (and similar related idioms):

unsigned int foo(unsigned char i) {
  return i | (i<<8) | (i<<16) | (i<<24);
}

compiles into:

define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %shl5 = shl i32 %conv, 16
  %shl9 = shl i32 %conv, 24
  %or = or i32 %shl9, %conv
  %or6 = or i32 %or, %shl5
  %or10 = or i32 %or6, %shl
  ret i32 %or10
}

it would be better as:

unsigned int bar(unsigned char i) {
  unsigned int j=i | (i << 8);
  return j | (j<<16);
}

aka:

define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
entry:
  %conv = zext i8 %i to i32
  %shl = shl i32 %conv, 8
  %or = or i32 %shl, %conv
  %shl5 = shl i32 %or, 16
  %or6 = or i32 %shl5, %or
  ret i32 %or6
}

or even i*0x01010101, depending on the speed of the multiplier.  The best way to
handle this is to canonicalize it to a multiply in IR and have codegen handle
lowering multiplies to shifts on cpus where shifts are faster.
//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together.  For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy.  This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.
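
The merge, in C (a sketch; note the +1 for the nul terminator, and that `b`
must not change between the two calls):

#include <string.h>

char a[64], b[64];

/* memcpy of strlen(b)+1 bytes is exactly what strcpy does. */
void merge_example(void) {
  memcpy(a, b, strlen(b) + 1);   /* => strcpy(a, b); */
}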
//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).
//===---------------------------------------------------------------------===//

We miss some instcombines for stuff like this:

void bar (void);
void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2).  */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.
//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper).  Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

This is equivalent to the following, where 2863311531 is the multiplicative
inverse of 3, and 1431655766 is ((2^32)-1)/3+1:

void bar(unsigned n) {
  if (n * 2863311531U < 1431655766U)
    true();
}

The same transformation can work with an even modulo with the addition of a
rotate: rotate the result of the multiply to the right by the number of bits
which need to be zero for the condition to be true, and shrink the compare RHS
by the same amount.  Unless the target supports rotates, though, that
transformation probably isn't worthwhile.

The transformation can also easily be made to work with non-zero equality
comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

  std::scanf("%d", &t.val);
  std::printf("%d\n", t.val);
//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6	;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}
//===---------------------------------------------------------------------===//

These two functions:

int f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int f2 (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0.  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

void f(void);
#define PMD_MASK    (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
  if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
    f();
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing.  Currently, the first function
produces better code on X86.
//===---------------------------------------------------------------------===//

#define abs(x) x>0?x:-x
int f(int x) {
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN. (With -fwrapv.)  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

void
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

void
minus_cst (unsigned int a)
{
  unsigned int tem;
  tem = 100 - a;
  if (tem == 5)
    bar ();
}

void
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  if ((a & ~7) > 8)
    bar ();
}

void
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison.  All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".
//===---------------------------------------------------------------------===//

int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)".  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a".  Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane.  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c).  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
  (!tmp && decl_context == 1)

This allows recursive simplifications, tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0    ; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true                     ; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24               ; <i1> [#uses=1]
//===---------------------------------------------------------------------===//

Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out.  This gives us this code:

bb:             ; preds = %bb2, %entry
  %.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
  %i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
  %1 = load i32* %cond, align 4
  %2 = icmp eq i32 %1, 0
  br i1 %2, label %bb2, label %bb1

bb1:            ; preds = %bb
  %3 = xor i32 %.rle, 234
  store i32 %3, i32* %res, align 4
  br label %bb2

bb2:            ; preds = %bb, %bb1
  %.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
  %indvar.next = add i32 %i.05, 1
  %exitcond = icmp eq i32 %indvar.next, %n
  br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395
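
What the sunk version would look like (a sketch; it assumes *res and *cond
don't alias, and a real transform must also keep the store conditional on
the loop body having executed at least once):

/* One store after the loop instead of one per set iteration. */
void f_sunk(int n, int *cond, int *res) {
  int r = *res, i;
  for (i = 0; i < n; i++)
    if (*cond)
      r ^= 234;
  *res = r;
}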
//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.
//===---------------------------------------------------------------------===//

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store.  We need partially dead store sinking.
//===---------------------------------------------------------------------===//

[LOAD PRE CRIT EDGE SPLITTING]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
symbolic phi translation.  The code we get looks like (g is on the stack):

bb2:
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is partially redundant, and in BB2 it should have the value %8.

GCC PR33344 and PR35287 are similar cases.
//===---------------------------------------------------------------------===//

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, ones we don't get yet are (checked through loadpre25):

[CRIT EDGE BREAKING]
loadpre3.c predcom-4.c

[PRE OF READONLY CALL]
loadpre5.c

[TURN SELECT INTO BRANCH]
loadpre14.c loadpre15.c

actually a conditional increment: loadpre18.c loadpre19.c
//===---------------------------------------------------------------------===//

[LOAD PRE / STORE SINKING / SPEC HACK]

This is a chunk of code from 456.hmmer:

int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
      int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc;
  for (k = 1; k <= M; k++) {
      mc[k] = mpp[k-1]   + tpmm[k-1];
      if ((sc = ip[k-1]  + tpim[k-1]) > mc[k])  mc[k] = sc;
      if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k])  mc[k] = sc;
      if ((sc = xmb  + bp[k])         > mc[k])  mc[k] = sc;
      mc[k] += ms[k];
  }
}

It is very profitable for this benchmark to turn the conditional stores to mc[k]
into a conditional move (select instr in IR) and allow the final store to do the
store.  See GCC PR27313 for more details.  Note that this is valid to xform even
with the new C++ memory model, since mc[k] is previously loaded and later
stored.
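
The transformed loop body would look roughly like this (a sketch; each
compare+select is a `?:` below, and only the final store to mc[k] remains):

static void body(int k, int *mc, int *mpp, int *tpmm, int *ip, int *tpim,
                 int *dpp, int *tpdm, int xmb, int *bp, int *ms) {
  int m = mpp[k-1] + tpmm[k-1];
  int sc;
  sc = ip[k-1] + tpim[k-1];
  m = sc > m ? sc : m;       /* select, not a branch + store */
  sc = dpp[k-1] + tpdm[k-1];
  m = sc > m ? sc : m;
  sc = xmb + bp[k];
  m = sc > m ? sc : m;
  mc[k] = m + ms[k];         /* the one remaining store */
}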
//===---------------------------------------------------------------------===//

There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
GCC testsuite.
//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite.  For example, we get the first example in predcom-1.c, but
miss the second one:

unsigned fib[1000];
unsigned avg[1000];

__attribute__ ((noinline))
void count_averages(int n) {
  int i;
  for (i = 1; i < n; i++)
    avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}

which compiles into two loads instead of one in the loop.

predcom-2.c is the same as predcom-1.c

predcom-3.c is very similar but needs loads feeding each other instead of
store->load.
//===---------------------------------------------------------------------===//

Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

We should do better analysis of posix_memalign.  At the least it should
no-capture its pointer argument, at best, we should know that the out-value
result doesn't point to anything (like malloc).  One example of this is in
SingleSource/Benchmarks/Misc/dt.c
//===---------------------------------------------------------------------===//

A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store.  This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

struct X { int i; };
int foo (int x) {
  struct X a;
  struct X b;
  struct X *p;
  a.i = 1;
  b.i = 2;
  if (x)
    p = &a;
  else
    p = &b;
  return p->i;
}
//===---------------------------------------------------------------------===//

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
             opt -mem2reg -gvn -instcombine | llvm-dis
we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS
//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}
//===---------------------------------------------------------------------===//

simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character.  See PR3253 for some notes.

456.hmmer apparently uses strcspn and strspn a lot.  471.omnetpp uses strspn.
//===---------------------------------------------------------------------===//

"gas" uses this idiom:
  else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
    ...
  else if (strchr ("<>", *intel_parser.op_string))
    ...

Those should be turned into a switch.
//===---------------------------------------------------------------------===//

252.eon contains this interesting code:

        %3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)    ; uses = 1
        %endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
        call void @llvm.memcpy.i32(i8* %endptr,
          i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
        %3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple reasons.  First, in this:

        %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
        %strlen = call i32 @strlen(i8* %3072)

The strlen could be replaced with: %strlen = sub %3073, %3072, because the
strcpy call returns a pointer to the end of the string.  Based on that, the
endptr GEP just becomes equal to 3073, which eliminates a strlen call and GEP.

Second, the memcpy+strlen strlen can be replaced with:

        %3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

Because the destination was just copied into the specified memory buffer.  This,
in turn, can be constant folded to "4".

In other code, it contains:

        %endptr6978 = bitcast i8* %endptr69 to i32*
        store i32 7107374, i32* %endptr6978, align 1
        %3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

Which could also be constant folded.  Whatever is producing this should probably
be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:            ; preds = %_ZN18eonImageCalculatorC1Ev.exit
        %682 = getelementptr i8** %argv, i32 6          ; <i8**> [#uses=2]
        %683 = load i8** %682, align 4          ; <i8*> [#uses=4]
        %684 = load i8* %683, align 1           ; <i8> [#uses=1]
        %685 = icmp eq i8 %684, 0               ; <i1> [#uses=1]
        br i1 %685, label %bb10, label %bb9

bb9:            ; preds = %bb8
        %686 = call i32 @strlen(i8* %683) nounwind readonly
        %687 = icmp ugt i32 %686, 254           ; <i1> [#uses=1]
        br i1 %687, label %bb10, label %bb11

bb10:           ; preds = %bb9, %bb8
        %688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.
//===---------------------------------------------------------------------===//

I see an interesting fully redundant call to strlen left in 186.crafty:InputMove
which looks like:

  %movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0
  ...
bb62:           ; preds = %bb55, %bb53
  %promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
  %171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
  %172 = add i32 %171, -1         ; <i32> [#uses=1]
  %173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
  ...
  br i1 %or.cond, label %bb65, label %bb72

bb65:           ; preds = %bb62
  store i8 0, i8* %173, align 1
  ...

bb72:           ; preds = %bb65, %bb62
  %trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
  %177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially
redundant with the %171 call.  At worst, we could shove the %177 strlen call
up into the bb65 block moving it out of the bb62->bb72 path.  However, note
that bb65 stores to the string, zeroing out the last byte.  This means that on
that path the value of %177 is actually just %171-1.  A sub is cheaper than a
strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.
//===---------------------------------------------------------------------===//

186.crafty also contains this code:

%1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
%1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
%1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
%1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
%1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909

The last strlen is computable as 1908-@pgn_event, which means 1910=1908.
//===---------------------------------------------------------------------===//

186.crafty has this interesting pattern with the "out.4543" variable:

call void @llvm.memcpy.i32(
        i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
        i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
%101 = call i32 (i8*, ...)* @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out".  Once all the printfs
stop using "out", all that is left is the memcpy's into it.  This should allow
globalopt to remove the "stored only" global.
//===---------------------------------------------------------------------===//

This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift.
For example, on x86 this currently gets this:

	movb	(%eax), %al
	sarb	$5, %al
	movsbl	%al, %eax

while it could get this:

	movsbl	(%eax), %eax
	sarl	$5, %eax
//===---------------------------------------------------------------------===//

int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants, what is the rule for even?
//===---------------------------------------------------------------------===//

PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a store
to the float directly.
//===---------------------------------------------------------------------===//

The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.
//===---------------------------------------------------------------------===//

The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}
//===---------------------------------------------------------------------===//

int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

Generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128                            ; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0                          ; <i1> [#uses=1]
  %2 = or i32 %b, 128                             ; <i32> [#uses=1]
  %3 = and i32 %b, -129                           ; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2        ; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

         b = (b & ~0x80) | (a & 0x80);

Which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129                           ; <i32> [#uses=1]
  %1 = and i32 %a, 128                            ; <i32> [#uses=1]
  %2 = or i32 %0, %1                              ; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

     b = (b & ~0x80) | (a & 0x40) << 1;
//===---------------------------------------------------------------------===//

These two functions produce different code. They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %1 = and i8 %a, -64                             ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63                              ; <i8> [#uses=1]
  %.masked = and i8 %a, 64                        ; <i8> [#uses=1]
  %1 = and i8 %a, -128                            ; <i8> [#uses=1]
  %2 = or i8 %1, %0                               ; <i8> [#uses=1]
  %3 = or i8 %2, %.masked                         ; <i8> [#uses=1]
  ret i8 %3
}
//===---------------------------------------------------------------------===//

IPSCCP does not currently propagate argument dependent constants through
functions where it does not know all of the callers.  This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  %1 = mul i32 %0, %x
  %2 = mul i32 %y, %z
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant.  Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.
//===---------------------------------------------------------------------===//

The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic.  This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }
//===---------------------------------------------------------------------===//

InstCombine should use SimplifyDemandedBits to remove the or instruction:

define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, 1
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}

Currently instcombine calls SimplifyDemandedBits with either all bits or just
the sign bit, if the comparison is obviously a sign test. In this case, we only
need all but the bottom two bits from %A, and if we gave that mask to SDB it
would delete the or instruction for us.
//===---------------------------------------------------------------------===//

functionattrs doesn't know much about memcpy/memset.  This function should be
marked readnone rather than readonly, since it only twiddles local memory, but
functionattrs doesn't handle memset/memcpy/memmove aggressively:

struct X { int *p; int *q; };
int foo() {
  int i = 0, j = 1;
  struct X x, y;
  int **p;
  y.p = &i;
  x.q = &j;
  p = __builtin_memcpy (&x, &y, sizeof (int *));
  return **p;
}
//===---------------------------------------------------------------------===//

Missed instcombine transformation:
define i1 @a(i32 %x) nounwind readnone {
entry:
  %cmp = icmp eq i32 %x, 30
  %sub = add i32 %x, -30
  %cmp2 = icmp ugt i32 %sub, 9
  %or = or i1 %cmp, %cmp2
  ret i1 %or
}

This should be optimized to a single compare.  Testcase derived from gcc.
//===---------------------------------------------------------------------===//

Missed instcombine or reassociate transformation:
int a(int a, int b) { return (a==12)&(b>47)&(b<58); }

The sgt and slt should be combined into a single comparison.  Testcase derived
from gcc.
//===---------------------------------------------------------------------===//

Missed instcombine transformation:

  %382 = srem i32 %tmp14.i, 64                    ; [#uses=1]
  %383 = zext i32 %382 to i64                     ; [#uses=1]
  %384 = shl i64 %381, %383                       ; [#uses=1]
  %385 = icmp slt i32 %tmp14.i, 64                ; [#uses=1]

The srem can be transformed to an and because if %tmp14.i is negative, the
shift is undefined.  Testcase derived from 403.gcc.
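
In C terms (a sketch; the function is ours): for the shift to be defined the
remainder operand must be non-negative, so `% 64` can become `& 63`:

/* n % 64 == (n & 63) when n >= 0; a negative n would make the shift
   undefined anyway, so the optimizer may assume it away. */
unsigned long long shift_mod(unsigned long long v, int n) {
  return v << (n % 64);   /* => v << (n & 63) */
}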
//===---------------------------------------------------------------------===//

This is a range comparison on a divided result (from 403.gcc):

  %1337 = sdiv i32 %1336, 8                       ; [#uses=1]
  %.off.i208 = add i32 %1336, 7                   ; [#uses=1]
  %1338 = icmp ult i32 %.off.i208, 15             ; [#uses=1]

We already catch this (removing the sdiv) if there isn't an add, we should
handle the 'add' as well.  This is a common idiom with its builtin_alloca code.
C example:

int a(int x) { return (unsigned)(x/16+7) < 15; }

Another similar case involves truncations on 64-bit targets:

  %361 = sdiv i64 %.046, 8                        ; [#uses=1]
  %362 = trunc i64 %361 to i32                    ; [#uses=2]
  ...
  %367 = icmp eq i32 %362, 0                      ; [#uses=1]
//===---------------------------------------------------------------------===//

Missed instcombine/dagcombine transformation:
define void @lshift_lt(i8 zeroext %a) nounwind {
entry:
  %conv = zext i8 %a to i32
  %shl = shl i32 %conv, 3
  %cmp = icmp ult i32 %shl, 33
  br i1 %cmp, label %if.then, label %if.end

if.then:
  tail call void @bar() nounwind
  ret void

if.end:
  ret void
}
declare void @bar() nounwind

The shift should be eliminated.  Testcase derived from gcc.
//===---------------------------------------------------------------------===//

These compile into different code, one gets recognized as a switch and the
other doesn't due to phase ordering issues (PR6212):

int test1(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  else if (mainType == 9)
    subType = 6;
  else if (mainType == 11)
    subType = 9;
  return subType;
}

int test2(int mainType, int subType) {
  if (mainType == 7)
    subType = 4;
  if (mainType == 9)
    subType = 6;
  if (mainType == 11)
    subType = 9;
  return subType;
}
//===---------------------------------------------------------------------===//

The following test case (from PR6576):

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %cond1 = icmp eq i32 %b, 0                      ; <i1> [#uses=1]
  br i1 %cond1, label %exit, label %bb.nph
bb.nph:                                           ; preds = %entry
  %tmp = mul i32 %b, %a                           ; <i32> [#uses=1]
  br label %exit
exit:                                             ; preds = %entry, %bb.nph
  %cond2 = phi i32 [ %tmp, %bb.nph ], [ 0, %entry ]
  ret i32 %cond2
}

could be reduced to:

define i32 @mul(i32 %a, i32 %b) nounwind readnone {
entry:
  %tmp = mul i32 %b, %a
  ret i32 %tmp
}
//===---------------------------------------------------------------------===//

We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.

Another interesting case is that something related could be used for variables
that go const after their ctor has finished.  In these cases, globalopt (which
can statically run the constructor) could mark the global const (so it gets put
in the readonly section).  A testcase would be:

#include <complex>
using namespace std;
const complex<char> should_be_in_rodata (42,-42);
complex<char> should_be_in_data (42,-42);
complex<char> should_be_in_bss;

Where we currently evaluate the ctors but the globals don't become const because
the optimizer doesn't know they "become const" after the ctor is done.  See
GCC PR4131 for more examples.
//===---------------------------------------------------------------------===//

In this code:

int foo(int x) {
  return x > 1 ? x : 1;
}

LLVM emits a comparison with 1 instead of 0.  0 would be equivalent
and cheaper on most targets.

LLVM prefers comparisons with zero over non-zero in general, but in this
case it chooses instead to keep the max operation obvious.
//===---------------------------------------------------------------------===//

Take the following testcase on x86-64 (similar testcases exist for all targets
with addc/adde):

define void @a(i64* nocapture %s, i64* nocapture %t, i64 %a, i64 %b,
               i64 %c) nounwind {
entry:
  %0 = zext i64 %a to i128                        ; <i128> [#uses=1]
  %1 = zext i64 %b to i128                        ; <i128> [#uses=1]
  %2 = add i128 %1, %0                            ; <i128> [#uses=2]
  %3 = zext i64 %c to i128                        ; <i128> [#uses=1]
  %4 = shl i128 %3, 64                            ; <i128> [#uses=1]
  %5 = add i128 %4, %2                            ; <i128> [#uses=1]
  %6 = lshr i128 %5, 64                           ; <i128> [#uses=1]
  %7 = trunc i128 %6 to i64                       ; <i64> [#uses=1]
  store i64 %7, i64* %s, align 8
  %8 = trunc i128 %2 to i64                       ; <i64> [#uses=1]
  store i64 %8, i64* %t, align 8
  ret void
}

The generated SelectionDAG has an ADD of an ADDE, where both operands of the
ADDE are zero. Replacing one of the operands of the ADDE with the other operand
of the ADD, and replacing the ADD with the ADDE, should give the desired result.

(That said, we are doing a lot better than gcc on this testcase. :) )
//===---------------------------------------------------------------------===//

Switch lowering generates less than ideal code for the following switch:
define void @a(i32 %x) nounwind {
entry:
  switch i32 %x, label %if.end [
    i32 0, label %if.then
    i32 1, label %if.then
    i32 2, label %if.then
    i32 3, label %if.then
    i32 5, label %if.then
  ]
if.then:
  tail call void @foo() nounwind
  ret void
if.end:
  ret void
}
declare void @foo()

Generated code on x86-64 (other platforms give similar results):

	...

The movl+movl+btq+jb could be simplified to a cmpl+jne.

Or, if we wanted to be really clever, we could simplify the whole thing to
something like the following, which eliminates a branch:

	...
//===---------------------------------------------------------------------===//

Given a branch where the two target blocks are identical ("ret i32 %b" in
both), simplifycfg will simplify them away. But not so for a switch statement:

define i32 @f(i32 %a, i32 %b) nounwind readnone {
entry:
  switch i32 %a, label %bb3 [
    i32 4, label %bb
    i32 6, label %bb
  ]

bb:             ; preds = %entry, %entry
  ret i32 %b

bb3:            ; preds = %entry
  ret i32 %b
}
//===---------------------------------------------------------------------===//

clang -O3 fails to devirtualize this virtual inheritance case: (GCC PR45875)
Looks related to PR3100

struct c1 {};
struct c10 : c1{
  virtual void foo ();
};
struct c11 : c10, c1{
  virtual void f6 ();
};
struct c28 : virtual c11{
  void f6 ();
};
void check_c28 () {
  c28 obj;
  c11 *ptr = &obj;
  ptr->f6 ();
}
//===---------------------------------------------------------------------===//

This code:

int foo(int a) { return (a & (~15)) / 16; }

compiles into:

define i32 @foo(i32 %a) nounwind readnone ssp {
entry:
  %and = and i32 %a, -16
  %div = sdiv i32 %and, 16
  ret i32 %div
}

but this code (X & -A)/A is X >> log2(A) when A is a power of 2, so this case
should be instcombined into just "a >> 4".

We do get this at the codegen level, so something knows about it, but
instcombine should catch it earlier.
//===---------------------------------------------------------------------===//

This code (from GCC PR28685):

int test(int a, int b) {
  if (a < b)
    return 1;
  return a == b;
}

currently compiles into:

define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %cmp = icmp slt i32 %a, %b
  br i1 %cmp, label %return, label %if.end

if.end:                                           ; preds = %entry
  %cmp5 = icmp eq i32 %a, %b
  %conv6 = zext i1 %cmp5 to i32
  ret i32 %conv6

return:                                           ; preds = %entry
  ret i32 1
}

it could be:

define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = icmp sle i32 %a, %b
  %retval = zext i1 %0 to i32
  ret i32 %retval
}

//===---------------------------------------------------------------------===//