Target Independent Opportunities:

//===---------------------------------------------------------------------===//
With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs.  Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.
//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.
//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (ffastmath).  Misc/mandel will like this. :)  This isn't
safe in general, even on darwin.  See the libm implementation of hypot for
examples (which special case when x/y are exactly zero to get signed zeros etc
right).
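A minimal sketch of the intended expansion, hand-written here for illustration
(the sqrt call stands in for llvm.sqrt, and this is valid only under fast-math
assumptions -- it ignores errno and the extra overflow/precision care a real
hypot does):

#include <math.h>

double hypot_fast(double x, double y) {
  return sqrt(x * x + y * y);
}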
//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency: the store's chain operand is not the
load X but rather a TokenFactor of the load X and load Y, which prevents the
folding of the load into the store.
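A C shape of the problem (our reconstruction for illustration, not the elided
original testcase):

int X, Y;

/* The store to X wants to fold the load of X into a read-modify-write, but
   the store's chain is a TokenFactor of the loads of both X and Y. */
void fn1(void) {
  X = X | (Y << 3);
}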
There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack.  But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//
On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:
 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.
//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//

Reassociate should turn: X*X*X*X -> t=(X*X) (t*t) to eliminate a multiply.

//===---------------------------------------------------------------------===//
An interesting testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.
//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.

//===---------------------------------------------------------------------===//
It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
sequences.

//===---------------------------------------------------------------------===//
For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//
We should produce an unaligned load from code like this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//
Add support for conditional increments and other related patterns.  Instead
of testing the condition and conditionally branching around the increment
("je LBB16_2  #cond_next" in the generated code), we could emit a branchless
sequence that materializes the condition bit (e.g. with setcc/sbb) and adds
it directly.

//===---------------------------------------------------------------------===//
Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

This is now easily doable with MRVs.  We could even make an intrinsic for this
if anyone cared enough about sincos.
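A hand-written sketch of the combined form (sincos is a GNU libm extension,
so this assumes a libc that provides it):

#define _GNU_SOURCE
#include <math.h>

void polar_to_xy(double r, double theta, double *x, double *y) {
  double s, c;
  sincos(theta, &s, &c);  /* one call instead of sin(theta) + cos(theta) */
  *x = r * c;
  *y = r * s;
}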
//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

define void @test(i32* %P) {
  %tmp = load i32* %P
  %tmp14 = or i32 %tmp, 3305111552
  %tmp15 = and i32 %tmp14, 3321888767
  store i32 %tmp15, i32* %P
  ret void
}

//===---------------------------------------------------------------------===//
dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

For example, code that computes "int t = __builtin_clz(x);" and then tests
"t >> 5" is really testing x == 0: at the dag level ctlz of a 32-bit zero is
defined to be 32, and is less than 32 for every other input.

//===---------------------------------------------------------------------===//
quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
        {
          /* Flip the target bit of each basis state */
          reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
        }

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but this requires TBAA.

//===---------------------------------------------------------------------===//
This should be optimized to one 'and' and one 'or', from PR4216:

define i32 @test_bitfield(i32 %bf.prev.low) nounwind ssp {
entry:
  %bf.prev.lo.cleared10 = or i32 %bf.prev.low, 32962  ; <i32> [#uses=1]
  %0 = and i32 %bf.prev.low, -65536                   ; <i32> [#uses=1]
  %1 = and i32 %bf.prev.lo.cleared10, 40186           ; <i32> [#uses=1]
  %2 = or i32 %1, %0                                  ; <i32> [#uses=1]
  ret i32 %2
}

//===---------------------------------------------------------------------===//
This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//
These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}

unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

BITBOARD = unsigned long long
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}

unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

This is a form of idiom recognition for loops, the same thing that could be
useful for recognizing memset/memcpy.
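For example (an illustrative loop of our own, not from the sources above),
the same machinery should let us rewrite:

void zero_bytes(char *p, unsigned n) {
  for (unsigned i = 0; i < n; i++)
    p[i] = 0;               /* recognizable as memset(p, 0, n) */
}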
//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
machines:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//
-instcombine should handle this transform:
   icmp pred (sdiv X / C1), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.  See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//
viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level.  We need a "loops to memcpy"
pass, as sketched below.
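A sketch of what such a pass would recognize (hypothetical code, not taken
from viterbi itself; the rewrite assumes dst and src do not overlap):

#include <string.h>

void copy_history(int *dst, const int *src, int n) {
  for (int i = 0; i < n; i++)   /* should become:                     */
    dst[i] = src[i];            /* memcpy(dst, src, n * sizeof(int)); */
}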
//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

PHI Slicing could be extended to do this.

//===---------------------------------------------------------------------===//
LSR should know what GPR types a target has from TargetData.  This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two near identical IVs (after promotion) on PPC/ARM.  The loop body
ends up maintaining both:

        add r2, r2, #1   <- [0,+,1]
        sub r0, r0, #1   <- [0,-,1]

and tests the "-" one for the exit.  LSR should reuse the "+" IV for the exit
test.

//===---------------------------------------------------------------------===//
Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
  %tmp.1 = and i32 %a, 1                ; <i32> [#uses=1]
  %tmp.2 = icmp ne i32 %tmp.1, 0        ; <i1> [#uses=1]
  br i1 %tmp.2, label %then.0, label %else.0

then.0:         ; preds = %entry
  %tmp.5 = add i32 %a, -1               ; <i32> [#uses=1]
  %tmp.3 = call i32 @t4( i32 %tmp.5 )   ; <i32> [#uses=1]
  br label %return

else.0:         ; preds = %entry
  %tmp.7 = icmp ne i32 %a, 0            ; <i1> [#uses=1]
  br i1 %tmp.7, label %then.1, label %return

then.1:         ; preds = %else.0
  %tmp.11 = add i32 %a, -2              ; <i32> [#uses=1]
  %tmp.9 = call i32 @t4( i32 %tmp.11 )  ; <i32> [#uses=1]
  br label %return

return:         ; preds = %then.1, %else.0, %then.0
  %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                      [ %tmp.9, %then.1 ]
  ret i32 %result.0
}

//===---------------------------------------------------------------------===//
Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative.  "return foo() << 1" can be tail recursion eliminated.
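A sketch of the accumulator form tail recursion elimination would need to
build (our illustration; the function name is ours): treating the multiply
and add as associative turns the recursion into a loop.

int pow2m1_iter(int n) {
  int acc = 0;
  while (n-- > 0)
    acc = 2 * acc + 1;   /* same recurrence as 2 * pow2m1(n - 1) + 1 */
  return acc;
}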
//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
  %tmp = load i32* %x                   ; <i32> [#uses=0]
  %tmp.foo = call i32 @foo( i32* %x )   ; <i32> [#uses=1]
  ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
  %tmp3 = call i32 @foo( i32* %x )      ; <i32> [#uses=1]
  ret i32 %tmp3
}

//===---------------------------------------------------------------------===//
446 "basicaa" should know how to look through "or" instructions that act like add
447 instructions. For example in this code, the x*4+1 is turned into x*4 | 1, and
448 basicaa can't analyze the array subscript, leading to duplicated loads in the
451 void test(int X, int Y, int a[]) {
453 for (i=2; i<1000; i+=4) {
454 a[i+0] = a[i-1+0]*a[i-2+0];
455 a[i+1] = a[i-1+1]*a[i-2+1];
456 a[i+2] = a[i-1+2]*a[i-2+2];
457 a[i+3] = a[i-1+3]*a[i-2+3];
461 BasicAA also doesn't do this for add. It needs to know that &A[i+1] != &A[i].
463 //===---------------------------------------------------------------------===//
We should investigate an instruction sinking pass.  Consider this silly
example in pic mode: a function that starts with an assert.  The compiled
code materializes the PIC base in the entry block and then conditionally
branches ("je LBB1_2  # cond_true") to the assertion-failure block, which is
the only user of the picbase.

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block.  It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it.  This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//
Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

//===---------------------------------------------------------------------===//
We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations.  On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar.  On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//
DAG Combiner should try to combine small loads into larger loads when
profitable.  For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer) a long sequence of
single-byte loads and stores, e.g.:

	movb _m_HotKey+3, %cl
	movb _m_HotKey+4, %dl
	movb _m_HotKey+2, %ch
	...

where wider loads such as "movzwl _m_HotKey+4, %edx" would do the same job in
far fewer instructions.

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

	%struct.THotKey = type { i16, i8, i8, i8 }
	define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
	  ...
	  %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
	  %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
	  %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
	  %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2
	  ...

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//
We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//
Consider:

int test() {
  long long input[8] = {1,1,1,1,1,1,1,1};
  foo(input);
}

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able.  This is good, but the memcpy
gets lowered to load/stores in the code generator.  This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global.  This gives us atrocious code like this:

	movl _C.0.1444-"L1$pb"+32(%eax), %ecx
	movl %ecx, ...
	movl _C.0.1444-"L1$pb"+20(%eax), %ecx
	movl %ecx, ...
	movl _C.0.1444-"L1$pb"+36(%eax), %ecx
	...

(each constant loaded from the global and then stored), instead of just
storing the known-constant values directly.

//===---------------------------------------------------------------------===//
http://llvm.org/PR717:

The code in that bug should compile into "ret int undef".  Instead, LLVM
produces "ret int 0".

//===---------------------------------------------------------------------===//
The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop.  One trivial example is a loop of this shape:

    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        /* body that tests and updates a value based on its low bit (&1) */
        ...
    }

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size.  The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//
We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc.  On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough.  Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 12345;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 12345;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right.  On x86-32, a few of these
generate truly horrible code, instead of using shld and friends.  On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness.  PPC64 misses f, f5 and f6.  CellSPU aborts in isel.
//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together.  For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy.  This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.
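Written out in C (a hand-written illustration of the merge; the +1 copies the
nul terminator, which is what makes the two forms exactly equivalent):

#include <string.h>

void copy_with_memcpy(char *a, const char *b) { memcpy(a, b, strlen(b) + 1); }
void copy_with_strcpy(char *a, const char *b) { strcpy(a, b); }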
//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.
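The payoff, written by hand: three multiplies instead of seven for X**8,
which is the balanced tree llvm.powi would let the code generator emit.

int pow8(int x) {
  int x2 = x * x;     /* X^2 */
  int x4 = x2 * x2;   /* X^4 */
  return x4 * x4;     /* X^8 */
}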
//===---------------------------------------------------------------------===//

We generate a horrible libcall for llvm.powi.  For example, we compile:

double f(double a) { return std::pow(a, 4); }

into a sequence that spills the argument and calls __powidf2:

	movsd	16(%esp), %xmm0
	...
	call	L___powidf2$stub

when the whole thing could simply be squaring twice:

	movsd	16(%esp), %xmm0
	mulsd	%xmm0, %xmm0
	mulsd	%xmm0, %xmm0

//===---------------------------------------------------------------------===//
We compile this program (from GCC PR11680):
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//
We miss some instcombines for stuff like this:

void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2). */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.
//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mullo (cheaper).  Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    ...
}

I think this basically amounts to a dag combine to simplify comparisons against
multiply hi's into a comparison against the mullo.
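A sketch of the mullo form, using the standard multiplicative-inverse trick
(constants chosen by us for n % 3 on 32 bits; 0xAAAAAAAB is the inverse of 3
mod 2^32, and 0x55555555 is the largest valid quotient):

unsigned divisible_by_3(unsigned n) {
  return n * 0xAAAAAAABu <= 0x55555555u;   /* same as (n % 3 == 0) */
}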
//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604); the kernel of the
testcase is:

  std::scanf("%d", &t.val);
  std::printf("%d\n", t.val);

//===---------------------------------------------------------------------===//
Instcombine will merge comparisons like (x >= 10) && (x < 20) by producing
(x - 10) u< 10, but only when the comparisons have matching sign.

This could be converted with a similar technique (PR1941):

define i1 @test(i8 %x) {
  %A = icmp uge i8 %x, 5
  %B = icmp slt i8 %x, 20
  %C = and i1 %A, %B
  ret i1 %C
}

//===---------------------------------------------------------------------===//
These functions perform the same computation, but produce different assembly:

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6       ;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//
These two functions are equivalent:

unsigned
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}

unsigned
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}

Both should combine to ((a|b) & (c-1)) != 0.  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
#define PMD_MASK    (~((1UL << 23) - 1))

void clear_pmd_range(unsigned long start, unsigned long end)
{
        if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
                ...
}

The expression should optimize to something like
"!((start|end)&~PMD_MASK)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

foo (unsigned int a, unsigned int b)
{
  if (a <= 7 && b <= 7)
    ...
}

Should combine to "(a|b) <= 7".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

int f (int n)
{
  return (n >= 0 ? 1 : -1);
}

Should combine to (n >> 31) | 1.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".
//===---------------------------------------------------------------------===//

int test(int a, int b)
{
  return (a < b) || (a == b);
}

Should combine to "a <= b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".
//===---------------------------------------------------------------------===//

Code of this form:

  if (variable == 4 || variable == 6)
    ...

should optimize to "if ((variable | 2) == 6)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts | llc".
//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

These should combine to the same thing.  Currently, the first function
produces better code on X86.
//===---------------------------------------------------------------------===//

#define abs(x) x>0?x:-x
int f(int x) {
  return (abs(x)) >= 0;
}

This should optimize to x != INT_MIN: with -fwrapv, abs(INT_MIN) wraps to
INT_MIN, the only input for which the result is negative.  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
//===---------------------------------------------------------------------===//

unsigned int
rotate_cst (unsigned int a)
{
  a = (a << 10) | (a >> 22);
  if (a == 123)
    bar ();
}

unsigned int
minus_cst (unsigned int a)
{
  unsigned int tem;

  tem = 20 - a;
  if (tem <= 5)
    bar ();
}

unsigned int
mask_gt (unsigned int a)
{
  /* This is equivalent to a > 15.  */
  if ((a & ~7) > 8)
    bar ();
}

unsigned int
rshift_gt (unsigned int a)
{
  /* This is equivalent to a > 23.  */
  if ((a >> 2) > 5)
    bar ();
}

All should simplify to a single comparison.  All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//
int c(int* x) {return (char*)x+2 == (char*)x;}

Should combine to 0.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//
int a(unsigned char* b) {return *b > 99;}

There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}

Should be combined to "((b >> 1) | b) & 1".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}

Should combine to "x | (y & 3)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
unsigned a(unsigned a) {return ((a | 1) & 3) | (a & -4);}

Should combine to "a | 1".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}

Should fold to "(~a & c) | (a & b)".  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int a,int b) {return (~(a|b))|a;}

Should fold to "a|~b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int a, int b) {return (a&&b) || (a&&!b);}

Should fold to "a".  Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int a, int b, int c) {return (a&&b) || (!a&&c);}

Should fold to "a ? b : c", or at least something sane.  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}

Should fold to a && (b || c).  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int x) {return x | ((x & 8) ^ 8);}

Should combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int x) {return x ^ ((x & 8) ^ 8);}

Should also combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int x) {return (x & 8) == 0 ? -1 : -9;}

Should combine to (x | -9) ^ 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int x) {return (x & 8) == 0 ? -9 : -1;}

Should combine to x | -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
int a(int x) {return ((x | -9) ^ 8) & x;}

Should combine to x & -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}

Should combine to "a * 0x88888888 >> 31".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
unsigned a(char* x) {if ((*x & 32) == 0) return b();}

There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
unsigned a(unsigned long long x) {return 40 * (x >> 1);}

Should combine to "20 * (((unsigned)x) & -2)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//
This was noticed in the entryblock for grokdeclarator in 403.gcc:

	%tmp = icmp eq i32 %decl_context, 4
	%decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
	%tmp1 = icmp eq i32 %decl_context_addr.0, 1
	%decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
  (!tmp && decl_context == 1)

which in turn is just (decl_context == 1), since decl_context == 1 already
implies decl_context != 4.  This allows recursive simplifications; tmp1 is
used all over the place in the function, e.g. by:

	%tmp23 = icmp eq i32 %decl_context_addr.1, 0	; <i1> [#uses=1]
	%tmp24 = xor i1 %tmp1, true	; <i1> [#uses=1]
	%or.cond8 = and i1 %tmp23, %tmp24	; <i1> [#uses=1]

//===---------------------------------------------------------------------===//
Store sinking: this code:

void f (int n, int *cond, int *res) {
  int i;
  *res = 0;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out.  This gives us this code:

bb:		; preds = %bb2, %entry
	%.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
	%i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
	%1 = load i32* %cond, align 4
	%2 = icmp eq i32 %1, 0
	br i1 %2, label %bb2, label %bb1

bb1:		; preds = %bb
	%3 = xor i32 %.rle, 234
	store i32 %3, i32* %res, align 4
	br label %bb2

bb2:		; preds = %bb, %bb1
	%.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
	%indvar.next = add i32 %i.05, 1
	%exitcond = icmp eq i32 %indvar.next, %n
	br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.
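The sunk form we want, sketched in C (function name ours; this ignores the
aliasing and legality checks the pass would have to prove, e.g. that cond
does not alias res):

void f_sunk(int n, int *cond, int *res) {
  int tmp = 0;
  int i;
  for (i = 0; i < n; i++)
    if (*cond)
      tmp ^= 234;
  *res = tmp;         /* the store has been sunk out of the loop */
}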
Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//
Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.
//===---------------------------------------------------------------------===//

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

We now hoist the reload after the call (Transforms/GVN/lpre-call-wrap.ll), but
we don't sink the store.  We need partially dead store sinking.

//===---------------------------------------------------------------------===//
[PHI TRANSLATE GEPs]

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic.  This could be handled by GVN with some crazy
symbolic phi translation.  The code we get looks like (g is on the stack):

bb2:
	%9 = getelementptr %struct.f* %g, i32 0, i32 0
	store i32 %8, i32* %9, align 4
	br label %bb3

bb3:		; preds = %bb1, %bb2, %bb
	%c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
	%b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
	%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
	%11 = load i32* %10, align 4

%11 is fully redundant; in BB2 it should have the value %8.

GCC PR33344 is a similar case.

//===---------------------------------------------------------------------===//
[PHI TRANSLATE INDEXED GEPs]  PR5313

Load redundancy elimination for a simple loop.  This loop:

void append_text(const char* text, unsigned char * const io) {
  while (*text)
    *io = *text++;
}

compiles to have a fully redundant load in the loop (%2):

define void @append_text(i8* nocapture %text, i8* nocapture %io) nounwind {
entry:
  %0 = load i8* %text, align 1		; <i8> [#uses=1]
  %1 = icmp eq i8 %0, 0		; <i1> [#uses=1]
  br i1 %1, label %return, label %bb

bb:		; preds = %bb, %entry
  %indvar = phi i32 [ 0, %entry ], [ %tmp, %bb ]	; <i32> [#uses=2]
  %text_addr.04 = getelementptr i8* %text, i32 %indvar	; <i8*> [#uses=1]
  %2 = load i8* %text_addr.04, align 1	; <i8> [#uses=1]
  store i8 %2, i8* %io, align 1
  %tmp = add i32 %indvar, 1		; <i32> [#uses=2]
  %scevgep = getelementptr i8* %text, i32 %tmp	; <i8*> [#uses=1]
  %3 = load i8* %scevgep, align 1	; <i8> [#uses=1]
  %4 = icmp eq i8 %3, 0			; <i1> [#uses=1]
  br i1 %4, label %return, label %bb

return:		; preds = %bb, %entry
  ret void
}

//===---------------------------------------------------------------------===//
There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, as well as many more pre testcases in ssa-pre-*.c.
//===---------------------------------------------------------------------===//

There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
GCC testsuite.  For example, predcom-1.c is:

 for (i = 2; i < 1000; i++)
    fib[i] = (fib[i-1] + fib[i - 2]) & 0xffff;

which compiles into:

bb1:		; preds = %bb1, %bb1.thread
	%indvar = phi i32 [ 0, %bb1.thread ], [ %0, %bb1 ]
	%i.0.reg2mem.0 = add i32 %indvar, 2
	%0 = add i32 %indvar, 1		; <i32> [#uses=3]
	%1 = getelementptr [1000 x i32]* @fib, i32 0, i32 %0
	%2 = load i32* %1, align 4	; <i32> [#uses=1]
	%3 = getelementptr [1000 x i32]* @fib, i32 0, i32 %indvar
	%4 = load i32* %3, align 4	; <i32> [#uses=1]
	%5 = add i32 %4, %2		; <i32> [#uses=1]
	%6 = and i32 %5, 65535		; <i32> [#uses=1]
	%7 = getelementptr [1000 x i32]* @fib, i32 0, i32 %i.0.reg2mem.0
	store i32 %6, i32* %7, align 4
	%exitcond = icmp eq i32 %0, 998	; <i1> [#uses=1]
	br i1 %exitcond, label %return, label %bb1

Instead of handling this as a loop or other xform, all we'd need to do is teach
load PRE to phi translate the %0 add (i+1) into the predecessor as (i'+1+1) =
(i'+2) (where i' is the previous iteration of i).  This would find the store
that feeds the load.

predcom-2.c is apparently the same as predcom-1.c
predcom-3.c is very similar but needs loads feeding each other instead of
store->load.
predcom-4.c seems the same as the rest.

//===---------------------------------------------------------------------===//
Other simple load PRE cases:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35287 [LPRE crit edge splitting]

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34677 (licm does this, LPRE crit edge)
  llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | opt -mem2reg -simplifycfg -gvn | llvm-dis

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16799 [BITCAST PHI TRANS]

//===---------------------------------------------------------------------===//
Type based alias analysis:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

//===---------------------------------------------------------------------===//
A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store.  This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

struct X { int i; };
int foo (int x) {
  struct X a;
  struct X b;
  struct X *p;
  a.i = 1;
  b.i = 2;
  if (x)
    p = &a;
  else
    p = &b;
  return p->i;
}

//===---------------------------------------------------------------------===//
Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
      opt -mem2reg -gvn -instcombine | llvm-dis

we miss it because we need 1) GEP PHI TRAN, 2) CRIT EDGE 3) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS

//===---------------------------------------------------------------------===//
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633

We could eliminate the branch condition here, since loading from null is
undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//
simplifylibcalls should do several optimizations for strspn/strcspn:

strcspn(x, "") -> strlen(x)
strspn(x, "") -> strlen(x)
strspn(x, "a") -> strchr(x, 'a')-x

strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):

size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
                     int __reject3) {
  register size_t __result = 0;
  while (__s[__result] != '\0' && __s[__result] != __reject1 &&
         __s[__result] != __reject2 && __s[__result] != __reject3)
    ++__result;
  return __result;
}

This should turn into a switch on the character.  See PR3253 for some notes on
codegen.
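Concretely, the "switch on the character" form would look like this sketch
(hand-written, for strcspn(s, "abc")):

#include <stddef.h>

size_t strcspn_abc(const char *s) {
  size_t n;
  for (n = 0; ; ++n) {
    switch (s[n]) {
    case '\0': case 'a': case 'b': case 'c':
      return n;              /* terminator or reject character found */
    default:
      break;                 /* keep scanning */
    }
  }
}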
456.hmmer apparently uses strcspn and strspn a lot.  471.omnetpp uses strspn.

//===---------------------------------------------------------------------===//
1386 "gas" uses this idiom:
1387 else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
1389 else if (strchr ("<>", *intel_parser.op_string)
1391 Those should be turned into a switch.
1393 //===---------------------------------------------------------------------===//
252.eon contains this interesting code:

	%3072 = getelementptr [100 x i8]* %tempString, i32 0, i32 0
	%3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
	%strlen = call i32 @strlen(i8* %3072)		; uses = 1
	%endptr = getelementptr [100 x i8]* %tempString, i32 0, i32 %strlen
	call void @llvm.memcpy.i32(i8* %endptr,
	  i8* getelementptr ([5 x i8]* @"\01LC42", i32 0, i32 0), i32 5, i32 1)
	%3074 = call i32 @strlen(i8* %endptr) nounwind readonly

This is interesting for a couple reasons.  First, in this:

	%3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
	%strlen = call i32 @strlen(i8* %3072)

The strlen could be replaced with: %strlen = sub %3072, %3073, because the
strcpy call returns a pointer to the end of the string.  Based on that, the
endptr GEP just becomes equal to %3073, which eliminates a strlen call and GEP.

Second, the strlen after the memcpy can be replaced with:

	%3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly

because the destination was just copied into the specified memory buffer.
This, in turn, can be constant folded to "4".

In other code, it contains:

	%endptr6978 = bitcast i8* %endptr69 to i32*
	store i32 7107374, i32* %endptr6978, align 1
	%3167 = call i32 @strlen(i8* %endptr69) nounwind readonly

which could also be constant folded.  Whatever is producing this should
probably be fixed to leave this as a memcpy from a string.

Further, eon also has an interesting partially redundant strlen call:

bb8:		; preds = %_ZN18eonImageCalculatorC1Ev.exit
	%682 = getelementptr i8** %argv, i32 6		; <i8**> [#uses=2]
	%683 = load i8** %682, align 4		; <i8*> [#uses=4]
	%684 = load i8* %683, align 1		; <i8> [#uses=1]
	%685 = icmp eq i8 %684, 0		; <i1> [#uses=1]
	br i1 %685, label %bb10, label %bb9

bb9:		; preds = %bb8
	%686 = call i32 @strlen(i8* %683) nounwind readonly
	%687 = icmp ugt i32 %686, 254		; <i1> [#uses=1]
	br i1 %687, label %bb10, label %bb11

bb10:		; preds = %bb9, %bb8
	%688 = call i32 @strlen(i8* %683) nounwind readonly

This could be eliminated by doing the strlen once in bb8, saving code size and
improving perf on the bb8->9->10 path.

//===---------------------------------------------------------------------===//
I see an interesting fully redundant call to strlen left in 186.crafty:InputMove:

	%movetext11 = getelementptr [128 x i8]* %movetext, i32 0, i32 0
	...
bb62:		; preds = %bb55, %bb53
	%promote.0 = phi i32 [ %169, %bb55 ], [ 0, %bb53 ]
	%171 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1
	%172 = add i32 %171, -1		; <i32> [#uses=1]
	%173 = getelementptr [128 x i8]* %movetext, i32 0, i32 %172
	...
	br i1 %or.cond, label %bb65, label %bb72

bb65:		; preds = %bb62
	store i8 0, i8* %173, align 1
	...

bb72:		; preds = %bb65, %bb62
	%trank.1 = phi i32 [ %176, %bb65 ], [ -1, %bb62 ]
	%177 = call i32 @strlen(i8* %movetext11) nounwind readonly align 1

Note that on the bb62->bb72 path, the %177 strlen call is partially redundant
with the %171 call.  At worst, we could shove the %177 strlen call up into the
bb65 block, moving it out of the bb62->bb72 path.  However, note that bb65
stores to the string, zeroing out the last byte.  This means that on that path
the value of %177 is actually just %171-1.  A sub is cheaper than a strlen!

This pattern repeats several times, basically doing:

  A = strlen(P);
  P[A-1] = 0;
  B = strlen(P);

where it is "obvious" that B = A-1.

//===---------------------------------------------------------------------===//
186.crafty contains this interesting pattern:

	%77 = call i8* @strstr(i8* getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0),
	                       i8* %30)
	%phitmp648 = icmp eq i8* %77, getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0)
	br i1 %phitmp648, label %bb70, label %bb76

bb70:		; preds = %OptionMatch.exit91, %bb69
	%78 = call i32 @strlen(i8* %30) nounwind readonly align 1	; <i32> [#uses=1]

This is basically doing:

  if (strstr(cststr, P) == cststr) {
    x = strlen(P);
    ...

The strstr call would be significantly cheaper written as:

  if (memcmp(P, cststr, strlen(P)) == 0) {
    x = strlen(P);
    ...

This is memcmp+strlen instead of strstr.  This also makes the strlen fully
redundant.

//===---------------------------------------------------------------------===//
186.crafty also contains this code:

	%1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0, i32 0))
	%1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
	%1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
	%1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0, i32 0))
	%1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909

The last strlen is computable as %1908-@pgn_event, which means %1910=%1908.

//===---------------------------------------------------------------------===//
186.crafty has this interesting pattern with the "out.4543" variable:

	call void @llvm.memcpy.i32(
	  i8* getelementptr ([10 x i8]* @out.4543, i32 0, i32 0),
	  i8* getelementptr ([7 x i8]* @"\01LC28700", i32 0, i32 0), i32 7, i32 1)
	%101 = call i32 @printf(i8* ... @out.4543, i32 0, i32 0)) nounwind

It is basically doing:

  memcpy(globalarray, "string");
  printf(..., globalarray);

Anyway, by knowing that printf just reads the memory and forward substituting
the string directly into the printf, this eliminates reads from globalarray.
Since this pattern occurs frequently in crafty (due to the "DisplayTime" and
other similar functions) there are many stores to "out".  Once all the printfs
stop using "out", all that is left is the memcpy's into it.  This should allow
globalopt to remove the "stored only" global.

//===---------------------------------------------------------------------===//
This code:

define inreg i32 @foo(i8* inreg %p) nounwind {
  %tmp0 = load i8* %p
  %tmp1 = ashr i8 %tmp0, 5
  %tmp2 = sext i8 %tmp1 to i32
  ret i32 %tmp2
}

could be dagcombine'd to a sign-extending load with a shift: on x86 we
currently emit a byte load, a byte sar, and a movsbl, when a movsbl of the
memory operand followed by a single 32-bit sar would do the same thing in two
instructions.

//===---------------------------------------------------------------------===//
This:

int test(int x) { return 1-x == x; }     // --> return false
int test2(int x) { return 2-x == x; }    // --> return x == 1 ?

Always foldable for odd constants; what is the rule for even?

//===---------------------------------------------------------------------===//
PR 3381: GEP to field of size 0 inside a struct could be turned into GEP
for next field in struct (which is at same address).

For example: store of float into { {{}}, float } could be turned into a store
to the float directly.

//===---------------------------------------------------------------------===//
This:

double foo(double a) { return sin(a); }

compiles on x86-64 Linux into a stack adjustment around a "call sin", when it
could simply be a tail call: "jmp sin".

//===---------------------------------------------------------------------===//
The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.

//===---------------------------------------------------------------------===//
The following functions should be optimized to use a select instead of a
branch (from gcc PR40072):

char char_int(int m) {if(m>7) return 0; return m;}
int int_char(char m) {if(m>7) return 0; return m;}

//===---------------------------------------------------------------------===//
int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }

generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %a, 128		; <i32> [#uses=1]
  %1 = icmp eq i32 %0, 0	; <i1> [#uses=1]
  %2 = or i32 %b, 128		; <i32> [#uses=1]
  %3 = and i32 %b, -129		; <i32> [#uses=1]
  %b_addr.0 = select i1 %1, i32 %3, i32 %2	; <i32> [#uses=1]
  ret i32 %b_addr.0
}

However, it's functionally equivalent to:

  b = (b & ~0x80) | (a & 0x80);

which generates this:

define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
entry:
  %0 = and i32 %b, -129		; <i32> [#uses=1]
  %1 = and i32 %a, 128		; <i32> [#uses=1]
  %2 = or i32 %0, %1		; <i32> [#uses=1]
  ret i32 %2
}

This can be generalized for other forms:

  b = (b & ~0x80) | (a & 0x40) << 1;

//===---------------------------------------------------------------------===//
These two functions produce different code.  They shouldn't:

#include <stdint.h>

uint8_t p1(uint8_t b, uint8_t a) {
  b = (b & ~0xc0) | (a & 0xc0);
  return b;
}

uint8_t p2(uint8_t b, uint8_t a) {
  b = (b & ~0x40) | (a & 0x40);
  b = (b & ~0x80) | (a & 0x80);
  return b;
}

define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63		; <i8> [#uses=1]
  %1 = and i8 %a, -64		; <i8> [#uses=1]
  %2 = or i8 %1, %0		; <i8> [#uses=1]
  ret i8 %2
}

define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
entry:
  %0 = and i8 %b, 63		; <i8> [#uses=1]
  %.masked = and i8 %a, 64	; <i8> [#uses=1]
  %1 = and i8 %a, -128		; <i8> [#uses=1]
  %2 = or i8 %1, %0		; <i8> [#uses=1]
  %3 = or i8 %2, %.masked	; <i8> [#uses=1]
  ret i8 %3
}

//===---------------------------------------------------------------------===//
IPSCCP does not currently propagate argument dependent constants through
functions where it does not know all of the callers.  This includes functions
with normal external linkage as well as templates, C99 inline functions etc.
Specifically, it does nothing to:

define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
entry:
  %0 = add nsw i32 %y, %z
  %1 = mul i32 %0, %x
  %2 = mul i32 %y, %z
  %3 = add nsw i32 %1, %2
  ret i32 %3
}

define i32 @test2() nounwind {
entry:
  %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
  ret i32 %0
}

It would be interesting to extend IPSCCP to be able to handle simple cases like
this, where all of the arguments to a call are constant.  Because IPSCCP runs
before inlining, trivial templates and inline functions are not yet inlined.
The results for a function + set of constant arguments should be memoized in a
map.

//===---------------------------------------------------------------------===//
The libcall constant folding stuff should be moved out of SimplifyLibcalls into
libanalysis' constantfolding logic.  This would allow IPSCCP to be able to
handle simple things like this:

static int foo(const char *X) { return strlen(X); }
int bar() { return foo("abcd"); }

//===---------------------------------------------------------------------===//
InstCombine should use SimplifyDemandedBits to remove the or instruction:

define i1 @test(i8 %x, i8 %y) {
  %A = or i8 %x, 1
  %B = icmp ugt i8 %A, 3
  ret i1 %B
}

Currently instcombine calls SimplifyDemandedBits with either all bits or just
the sign bit, if the comparison is obviously a sign test.  In this case, we only
need all but the bottom two bits from %A, and if we gave that mask to SDB it
would delete the or instruction for us.

//===---------------------------------------------------------------------===//