//===---------------------------------------------------------------------===//
-With the recent changes to make the implicit def/use set explicit in
-machineinstrs, we should change the target descriptions for 'call' instructions
-so that the .td files don't list all the call-clobbered registers as implicit
-defs. Instead, these should be added by the code generator (e.g. on the dag).
+We should recognize various "overflow detection" idioms and translate them into
+llvm.uadd.with.overflow and similar intrinsics. Here is a multiply idiom:
-This has a number of uses:
-
-1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
- for their different impdef sets.
-2. Targets with multiple calling convs (e.g. x86) which have different clobber
- sets don't need copies of call instructions.
-3. 'Interprocedural register allocation' can be done to reduce the clobber sets
- of calls.
-
-//===---------------------------------------------------------------------===//
+unsigned int mul(unsigned int a,unsigned int b) {
+ if ((unsigned long long)a*b>0xffffffff)
+ exit(0);
+ return a*b;
+}
-Make the PPC branch selector target independent
+The legalization code for mul-with-overflow needs to be made more robust before
+this can be implemented though.
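
A related idiom we should also catch (an assumed example, not from the original
note) is the classic unsigned-add wrap check, which should map onto
llvm.uadd.with.overflow in the same way:

#include <stdlib.h>

unsigned add(unsigned a, unsigned b) {
  unsigned s = a + b;
  if (s < a)        /* wrapped around: this is the overflow test */
    exit(0);
  return s;
}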
//===---------------------------------------------------------------------===//
//===---------------------------------------------------------------------===//
-Solve this DAG isel folding deficiency:
-
-int X, Y;
-
-void fn1(void)
-{
- X = X | (Y << 3);
-}
-
-compiles to
-
-fn1:
- movl Y, %eax
- shll $3, %eax
- orl X, %eax
- movl %eax, X
- ret
-
-The problem is the store's chain operand is not the load X but rather
-a TokenFactor of the load X and load Y, which prevents the folding.
-
-There are two ways to fix this:
-
-1. The dag combiner can start using alias analysis to realize that y/x
- don't alias, making the store to X not dependent on the load from Y.
-2. The generated isel could be made smarter in the case it can't
- disambiguate the pointers.
-
-Number 1 is the preferred solution.
-
-This has been "fixed" by a TableGen hack. But that is a short term workaround
-which will be removed once the proper fix is made.
-
-//===---------------------------------------------------------------------===//
-
On targets with expensive 64-bit multiply, we could LSR this:
for (i = ...; ++i) {
//===---------------------------------------------------------------------===//
-Reassociate should turn: X*X*X*X -> t=(X*X) (t*t) to eliminate a multiply.
+Reassociate should turn things like:
+
+int factorial(int X) {
+ return X*X*X*X*X*X*X*X;
+}
+
+into llvm.powi calls, allowing the code generator to produce balanced
+multiplication trees.
+
+First, the intrinsic needs to be extended to support integers, and second the
+code generator needs to be enhanced to lower these to multiplication trees.
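
For reference, a hand-written sketch of the balanced tree we would like to end
up with for the X^8 case above (the pass itself would go through llvm.powi;
factorial_fast is an invented name):

int factorial_fast(int X) {
  int t1 = X * X;     /* X^2 */
  int t2 = t1 * t1;   /* X^4 */
  return t2 * t2;     /* X^8: 3 multiplies instead of 7 */
}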
//===---------------------------------------------------------------------===//
return bar(z, n) + bar(2*z, 2*n);
}
-Reassociate should handle the example in GCC PR16157.
+This is blocked on not handling X*X*X -> powi(X, 3) (see note above). The issue
+is that we end up getting t = 2*X, s = t*t, and don't turn this into s = 4*X*X,
+which is the same number of multiplies and is canonical, because the 2*X has
+multiple uses. Here's a simple example:
+
+define i32 @test15(i32 %X1) {
+ %B = mul i32 %X1, 47 ; X1*47
+ %C = mul i32 %B, %B
+ ret i32 %C
+}
+
+
+//===---------------------------------------------------------------------===//
+
+Reassociate should handle the example in GCC PR16157:
+
+extern int a0, a1, a2, a3, a4; extern int b0, b1, b2, b3, b4;
+void f () { /* this can be optimized to four additions... */
+ b4 = a4 + a3 + a2 + a1 + a0;
+ b3 = a3 + a2 + a1 + a0;
+ b2 = a2 + a1 + a0;
+ b1 = a1 + a0;
+}
+
+This requires reassociating to forms of expressions that are already available,
+something that reassoc doesn't think about yet.
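
One possible four-addition form, sketched by hand against the externs declared
above (the temporaries t1..t3 are invented names):

void f_reassoc(void) {
  int t1 = a1 + a0;   /* == b1 */
  int t2 = a2 + t1;   /* == b2 */
  int t3 = a3 + t2;   /* == b3 */
  b1 = t1;
  b2 = t2;
  b3 = t3;
  b4 = a4 + t3;
}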
+
//===---------------------------------------------------------------------===//
//===---------------------------------------------------------------------===//
It would be nice to revert this patch:
-http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html
+http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html
And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
//===---------------------------------------------------------------------===//
-For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
+For vector types, DataLayout.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative as the alignment of
specific vector types are target dependent.
//===---------------------------------------------------------------------===//
-We should add 'unaligned load/store' nodes, and produce them from code like
-this:
+We should produce an unaligned load from code like this:
v4sf example(float *P) {
return (v4sf){P[0], P[1], P[2], P[3] };
//===---------------------------------------------------------------------===//
-Turn this into a single byte store with no load (the other 3 bytes are
-unmodified):
-
-define void @test(i32* %P) {
- %tmp = load i32* %P
- %tmp14 = or i32 %tmp, 3305111552
- %tmp15 = and i32 %tmp14, 3321888767
- store i32 %tmp15, i32* %P
- ret void
-}
-
-//===---------------------------------------------------------------------===//
-
-dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.
-
-Compile:
-
-int bar(int x)
-{
- int t = __builtin_clz(x);
- return -(t>>5);
-}
-
-to:
-
-_bar: addic r3,r3,-1
- subfe r3,r3,r3
- blr
-
-//===---------------------------------------------------------------------===//
-
-Legalize should lower ctlz like this:
- ctlz(x) = popcnt((x-1) & ~x)
-
-on targets that have popcnt but not ctlz. itanium, what else?
-
-//===---------------------------------------------------------------------===//
-
quantum_sigma_x in 462.libquantum contains the following loop:
for(i=0; i<reg->size; i++)
... which would only do one 32-bit XOR per loop iteration instead of two.
It would also be nice to recognize that reg->size doesn't alias reg->node[i], but
-alas...
+this requires TBAA.
//===---------------------------------------------------------------------===//
//===---------------------------------------------------------------------===//
+[LOOP DELETION]
+
+We don't delete this output-free loop, because trip count analysis doesn't
+realize that it is finite (if it were infinite, it would be undefined). Not
+having this blocks Loop Idiom from matching strlen and friends.
+
+void foo(char *C) {
+ int x = 0;
+ while (*C)
+ ++x,++C;
+}
+
+//===---------------------------------------------------------------------===//
+
+[LOOP RECOGNITION]
+
These idioms should be recognized as popcount (see PR1488):
unsigned countbits_slow(unsigned v) {
c += v & 1;
return c;
}
-unsigned countbits_fast(unsigned v){
- unsigned c;
- for (c = 0; v; c++)
- v &= v - 1; // clear the least significant bit set
- return c;
-}
-BITBOARD = unsigned long long
-int PopCnt(register BITBOARD a) {
- register int c=0;
- while(a) {
- c++;
- a &= a - 1;
- }
- return c;
-}
unsigned int popcount(unsigned int input) {
unsigned int count = 0;
for (unsigned int i = 0; i < 4 * 8; i++)
return count;
}
+This should be recognized as CLZ: rdar://8459039
+
+unsigned clz_a(unsigned a) {
+ int i;
+ for (i=0;i<32;i++)
+ if (a & (1<<(31-i)))
+ return i;
+ return 32;
+}
+
+This sort of thing should be added to the loop idiom pass.
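
For reference, a sketch of what the loop idiom pass would ideally reduce clz_a
to, using the GCC/Clang builtin (which is undefined for a zero input, hence the
guard):

unsigned clz_fast(unsigned a) {
  return a ? __builtin_clz(a) : 32;
}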
+
//===---------------------------------------------------------------------===//
These should turn into single 16-bit (unaligned?) loads on little/big endian
//===---------------------------------------------------------------------===//
-viterbi speeds up *significantly* if the various "history" related copy loops
-are turned into memcpy calls at the source level. We need a "loops to memcpy"
-pass.
+[LOOP OPTIMIZATION]
+
+SingleSource/Benchmarks/Misc/dt.c shows several interesting optimization
+opportunities in its double_array_divs_variable function: it needs loop
+interchange, memory promotion (which LICM already does), vectorization and
+variable trip count loop unrolling (since it has a constant trip count). ICC
+apparently produces this very nice code with -ffast-math:
+
+..B1.70: # Preds ..B1.70 ..B1.69
+ mulpd %xmm0, %xmm1 #108.2
+ mulpd %xmm0, %xmm1 #108.2
+ mulpd %xmm0, %xmm1 #108.2
+ mulpd %xmm0, %xmm1 #108.2
+ addl $8, %edx #
+ cmpl $131072, %edx #108.2
+ jb ..B1.70 # Prob 99% #108.2
+
+It would be better to count down to zero, but this is a lot better than what we
+do.
//===---------------------------------------------------------------------===//
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.
-//===---------------------------------------------------------------------===//
-
-Promote for i32 bswap can use i64 bswap + shr. Useful on targets with 64-bit
-regs and bswap, like itanium.
-
-//===---------------------------------------------------------------------===//
-
-LSR should know what GPR types a target has. This code:
-
-volatile short X, Y; // globals
-
-void foo(int N) {
- int i;
- for (i = 0; i < N; i++) { X = i; Y = i*4; }
-}
-
-produces two identical IV's (after promotion) on PPC/ARM:
-
-LBB1_1: @bb.preheader
- mov r3, #0
- mov r2, r3
- mov r1, r3
-LBB1_2: @bb
- ldr r12, LCPI1_0
- ldr r12, [r12]
- strh r2, [r12]
- ldr r12, LCPI1_1
- ldr r12, [r12]
- strh r3, [r12]
- add r1, r1, #1 <- [0,+,1]
- add r3, r3, #4
- add r2, r2, #1 <- [0,+,1]
- cmp r1, r0
- bne LBB1_2 @bb
-
+PHI Slicing could be extended to do this.
//===---------------------------------------------------------------------===//
//===---------------------------------------------------------------------===//
-Tail recursion elimination is not transforming this function, because it is
-returning n, which fails the isDynamicConstant check in the accumulator
-recursion checks.
-
-long long fib(const long long n) {
- switch(n) {
- case 0:
- case 1:
- return n;
- default:
- return fib(n-1) + fib(n-2);
- }
-}
-
-//===---------------------------------------------------------------------===//
-
Tail recursion elimination should handle:
int pow2m1(int n) {
//===---------------------------------------------------------------------===//
-"basicaa" should know how to look through "or" instructions that act like add
-instructions. For example in this code, the x*4+1 is turned into x*4 | 1, and
-basicaa can't analyze the array subscript, leading to duplicated loads in the
-generated code:
-
-void test(int X, int Y, int a[]) {
-int i;
- for (i=2; i<1000; i+=4) {
- a[i+0] = a[i-1+0]*a[i-2+0];
- a[i+1] = a[i-1+1]*a[i-2+1];
- a[i+2] = a[i-1+2]*a[i-2+2];
- a[i+3] = a[i-1+3]*a[i-2+3];
- }
-}
-
-BasicAA also doesn't do this for add. It needs to know that &A[i+1] != &A[i].
-
-//===---------------------------------------------------------------------===//
-
We should investigate an instruction sinking pass. Consider this silly
example in pic mode:
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }
-into (-O3 -fno-exceptions -static -fomit-frame-pointer):
-
-__Z9GetHotKeyv:
- pushl %esi
- movl 8(%esp), %eax
- movb _m_HotKey+3, %cl
- movb _m_HotKey+4, %dl
- movb _m_HotKey+2, %ch
- movw _m_HotKey, %si
- movw %si, (%eax)
- movb %ch, 2(%eax)
- movb %cl, 3(%eax)
- movb %dl, 4(%eax)
- popl %esi
- ret $4
-
-GCC produces:
-
-__Z9GetHotKeyv:
- movl _m_HotKey, %edx
- movl 4(%esp), %eax
- movl %edx, (%eax)
- movzwl _m_HotKey+4, %edx
- movw %dx, 4(%eax)
- ret $4
-
-The LLVM IR contains the needed alignment info, so we should be able to
-merge the loads and stores into 4-byte loads:
-
- %struct.THotKey = type { i16, i8, i8, i8 }
-define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
-...
- %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
- %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
- %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
- %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2
-
-Alternatively, we should use a small amount of base-offset alias analysis
-to make it so the scheduler doesn't need to hold all the loads in regs at
-once.
+into (-m64 -O3 -fno-exceptions -static -fomit-frame-pointer):
+
+__Z9GetHotKeyv: ## @_Z9GetHotKeyv
+ movq _m_HotKey@GOTPCREL(%rip), %rax
+ movzwl (%rax), %ecx
+ movzbl 2(%rax), %edx
+ shlq $16, %rdx
+ orq %rcx, %rdx
+ movzbl 3(%rax), %ecx
+ shlq $24, %rcx
+ orq %rdx, %rcx
+ movzbl 4(%rax), %eax
+ shlq $32, %rax
+ orq %rcx, %rax
+ ret
//===---------------------------------------------------------------------===//
//===---------------------------------------------------------------------===//
-This GCC bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34043
-contains a testcase that compiles down to:
-
- %struct.XMM128 = type { <4 x float> }
-..
- %src = alloca %struct.XMM128
-..
- %tmp6263 = bitcast %struct.XMM128* %src to <2 x i64>*
- %tmp65 = getelementptr %struct.XMM128* %src, i32 0, i32 0
- store <2 x i64> %tmp5899, <2 x i64>* %tmp6263, align 16
- %tmp66 = load <4 x float>* %tmp65, align 16
- %tmp71 = add <4 x float> %tmp66, %tmp66
-
-If the mid-level optimizer turned the bitcast of pointer + store of tmp5899
-into a bitcast of the vector value and a store to the pointer, then the
-store->load could be easily removed.
-
-//===---------------------------------------------------------------------===//
-
Consider:
int test() {
- long long input[8] = {1,1,1,1,1,1,1,1};
+ long long input[8] = {1,0,1,0,1,0,1,0};
foo(input);
}
-We currently compile this into a memcpy from a global array since the
-initializer is fairly large and not memset'able. This is good, but the memcpy
-gets lowered to load/stores in the code generator. This is also ok, except
-that the codegen lowering for memcpy doesn't handle the case when the source
-is a constant global. This gives us atrocious code like this:
+Clang compiles this into:
- call "L1$pb"
-"L1$pb":
- popl %eax
- movl _C.0.1444-"L1$pb"+32(%eax), %ecx
- movl %ecx, 40(%esp)
- movl _C.0.1444-"L1$pb"+20(%eax), %ecx
- movl %ecx, 28(%esp)
- movl _C.0.1444-"L1$pb"+36(%eax), %ecx
- movl %ecx, 44(%esp)
- movl _C.0.1444-"L1$pb"+44(%eax), %ecx
- movl %ecx, 52(%esp)
- movl _C.0.1444-"L1$pb"+40(%eax), %ecx
- movl %ecx, 48(%esp)
- movl _C.0.1444-"L1$pb"+12(%eax), %ecx
- movl %ecx, 20(%esp)
- movl _C.0.1444-"L1$pb"+4(%eax), %ecx
-...
+ call void @llvm.memset.p0i8.i64(i8* %tmp, i8 0, i64 64, i32 16, i1 false)
+ %0 = getelementptr [8 x i64]* %input, i64 0, i64 0
+ store i64 1, i64* %0, align 16
+ %1 = getelementptr [8 x i64]* %input, i64 0, i64 2
+ store i64 1, i64* %1, align 16
+ %2 = getelementptr [8 x i64]* %input, i64 0, i64 4
+ store i64 1, i64* %2, align 16
+ %3 = getelementptr [8 x i64]* %input, i64 0, i64 6
+ store i64 1, i64* %3, align 16
+
+Which gets codegen'd into:
+
+ pxor %xmm0, %xmm0
+ movaps %xmm0, -16(%rbp)
+ movaps %xmm0, -32(%rbp)
+ movaps %xmm0, -48(%rbp)
+ movaps %xmm0, -64(%rbp)
+ movq $1, -64(%rbp)
+ movq $1, -48(%rbp)
+ movq $1, -32(%rbp)
+ movq $1, -16(%rbp)
-instead of:
- movl $1, 16(%esp)
- movl $0, 20(%esp)
- movl $1, 24(%esp)
- movl $0, 28(%esp)
- movl $1, 32(%esp)
- movl $0, 36(%esp)
- ...
+It would be better to have 4 movq's of 0 instead of the movaps's.
//===---------------------------------------------------------------------===//
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:
-unsigned long long f(unsigned long long x, int y) {
- return (x << y) | (x >> 64-y);
-}
-unsigned f2(unsigned x, int y){
- return (x << y) | (x >> 32-y);
-}
-unsigned long long f3(unsigned long long x){
- int y = 9;
- return (x << y) | (x >> 64-y);
-}
-unsigned f4(unsigned x){
- int y = 10;
- return (x << y) | (x >> 32-y);
-}
unsigned long long f5(unsigned long long x, unsigned long long y) {
return (x << 8) | ((y >> 48) & 0xffull);
}
}
}
-On X86-64, we only handle f2/f3/f4 right. On x86-32, a few of these
-generate truly horrible code, instead of using shld and friends. On
-ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
-badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.
-
//===---------------------------------------------------------------------===//
-We do a number of simplifications in simplify libcalls to strength reduce
-standard library functions, but we don't currently merge them together. For
-example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
-be done safely if "b" isn't modified between the strlen and memcpy of course.
+This (and similar related idioms):
-//===---------------------------------------------------------------------===//
+unsigned int foo(unsigned char i) {
+ return i | (i<<8) | (i<<16) | (i<<24);
+}
-We should be able to evaluate this loop:
+compiles into:
-int test(int x_offs) {
- while (x_offs > 4)
- x_offs -= 4;
- return x_offs;
+define i32 @foo(i8 zeroext %i) nounwind readnone ssp noredzone {
+entry:
+ %conv = zext i8 %i to i32
+ %shl = shl i32 %conv, 8
+ %shl5 = shl i32 %conv, 16
+ %shl9 = shl i32 %conv, 24
+ %or = or i32 %shl9, %conv
+ %or6 = or i32 %or, %shl5
+ %or10 = or i32 %or6, %shl
+ ret i32 %or10
}
-//===---------------------------------------------------------------------===//
-
-Reassociate should turn things like:
+it would be better as:
-int factorial(int X) {
- return X*X*X*X*X*X*X*X;
+unsigned int bar(unsigned char i) {
+ unsigned int j=i | (i << 8);
+ return j | (j<<16);
}
-into llvm.powi calls, allowing the code generator to produce balanced
-multiplication trees.
-
-//===---------------------------------------------------------------------===//
-
-We generate a horrible libcall for llvm.powi. For example, we compile:
+aka:
-#include <cmath>
-double f(double a) { return std::pow(a, 4); }
-
-into:
+define i32 @bar(i8 zeroext %i) nounwind readnone ssp noredzone {
+entry:
+ %conv = zext i8 %i to i32
+ %shl = shl i32 %conv, 8
+ %or = or i32 %shl, %conv
+ %shl5 = shl i32 %or, 16
+ %or6 = or i32 %shl5, %or
+ ret i32 %or6
+}
-__Z1fd:
- subl $12, %esp
- movsd 16(%esp), %xmm0
- movsd %xmm0, (%esp)
- movl $4, 8(%esp)
- call L___powidf2$stub
- addl $12, %esp
- ret
+or even i*0x01010101, depending on the speed of the multiplier. The best way to
+handle this is to canonicalize it to a multiply in IR and have codegen handle
+lowering multiplies to shifts on cpus where shifts are faster.
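
A sketch of the canonical multiply form mentioned above (baz is an invented
name):

unsigned int baz(unsigned char i) {
  return i * 0x01010101U;   /* splat the byte with a single multiply */
}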
-GCC produces:
+//===---------------------------------------------------------------------===//
-__Z1fd:
- subl $12, %esp
- movsd 16(%esp), %xmm0
- mulsd %xmm0, %xmm0
- mulsd %xmm0, %xmm0
- movsd %xmm0, (%esp)
- fldl (%esp)
- addl $12, %esp
- ret
+We do a number of simplifications in simplify libcalls to strength reduce
+standard library functions, but we don't currently merge them together. For
+example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy. This can only
+be done safely if "b" isn't modified between the strlen and memcpy of course.
//===---------------------------------------------------------------------===//
//===---------------------------------------------------------------------===//
-We miss some instcombines for stuff like this:
-void bar (void);
-void foo (unsigned int a) {
- /* This one is equivalent to a >= (3 << 2). */
- if ((a >> 2) >= 3)
- bar ();
-}
-
-A few other related ones are in GCC PR14753.
-
-//===---------------------------------------------------------------------===//
-
Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mul lo (cheaper). Testcase:
true();
}
-I think this basically amounts to a dag combine to simplify comparisons against
-multiply hi's into a comparison against the mullo.
+This is equivalent to the following, where 2863311531 is the multiplicative
+inverse of 3, and 1431655766 is ((2^32)-1)/3+1:
+void bar(unsigned n) {
+ if (n * 2863311531U < 1431655766U)
+ true();
+}
+
+The same transformation can work with an even modulo with the addition of a
+rotate: rotate the result of the multiply to the right by the number of bits
+which need to be zero for the condition to be true, and shrink the compare RHS
+by the same amount. Unless the target supports rotates, though, that
+transformation probably isn't worthwhile.
+
+The transformation can also easily be made to work with non-zero equality
+comparisons: just transform, for example, "n % 3 == 1" to "(n-1) % 3 == 0".
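
To illustrate how the trick generalizes to other odd divisors, here is a sketch
for divisor 5, with constants computed by hand (3435973837 is the inverse of 5
mod 2^32, and 858993460 is ((2^32)-1)/5+1):

int divisible_by_5(unsigned n) {
  return n * 3435973837U < 858993460U;   /* same as: n % 5 == 0 */
}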
//===---------------------------------------------------------------------===//
//===---------------------------------------------------------------------===//
-Instcombine will merge comparisons like (x >= 10) && (x < 20) by producing (x -
-10) u< 10, but only when the comparisons have matching sign.
-
-This could be converted with a similiar technique. (PR1941)
-
-define i1 @test(i8 %x) {
- %A = icmp uge i8 %x, 5
- %B = icmp slt i8 %x, 20
- %C = and i1 %A, %B
- ret i1 %C
-}
-
-//===---------------------------------------------------------------------===//
-
These functions perform the same computation, but produce different assembly.
define i8 @select(i8 %x) readnone nounwind {
return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}
Both should combine to ((a|b) & (c-1)) != 0. Currently not optimized with
-"clang -emit-llvm-bc | opt -std-compile-opts".
+"clang -emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
}
The expression should optimize to something like
"!((start|end)&~PMD_MASK). Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
-
-//===---------------------------------------------------------------------===//
-
-From GCC Bug 15241:
-unsigned int
-foo (unsigned int a, unsigned int b)
-{
- if (a <= 7 && b <= 7)
- baz ();
-}
-Should combine to "(a|b) <= 7". Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
-
-//===---------------------------------------------------------------------===//
-
-From GCC Bug 3756:
-int
-pn (int n)
-{
- return (n >= 0 ? 1 : -1);
-}
-Should combine to (n >> 31) | 1. Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts | llc".
-
-//===---------------------------------------------------------------------===//
-
-From GCC Bug 28685:
-int test(int a, int b)
-{
- int lt = a < b;
- int eq = a == b;
-
- return (lt || eq);
-}
-Should combine to "a <= b". Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts | llc".
-
-//===---------------------------------------------------------------------===//
-
-void a(int variable)
-{
- if (variable == 4 || variable == 6)
- bar();
-}
-This should optimize to "if ((variable | 2) == 6)". Currently not
-optimized with "clang -emit-llvm-bc | opt -std-compile-opts | llc".
+-emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
return (abs(x)) >= 0;
}
This should optimize to x != INT_MIN. (With -fwrapv.) Currently not
-optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
+optimized with "clang -emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
if ((a >> 2) > 5)
bar ();
}
+
All should simplify to a single comparison. All of these are
currently not optimized with "clang -emit-llvm-bc | opt
--std-compile-opts".
+-O3".
//===---------------------------------------------------------------------===//
From GCC Bug 32605:
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0. Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).
-
-//===---------------------------------------------------------------------===//
-
-int a(unsigned char* b) {return *b > 99;}
-There's an unnecessary zext in the generated code with "clang
--emit-llvm-bc | opt -std-compile-opts".
+-emit-llvm-bc | opt -O3" (although llc can optimize it).
//===---------------------------------------------------------------------===//
int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1". Currently not optimized
-with "clang -emit-llvm-bc | opt -std-compile-opts".
+with "clang -emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)". Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
-
-//===---------------------------------------------------------------------===//
-
-unsigned a(unsigned a) {return ((a | 1) & 3) | (a & -4);}
-Should combine to "a | 1". Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
+-emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)". Currently not optimized with
-"clang -emit-llvm-bc | opt -std-compile-opts".
+"clang -emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b". Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
+-emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a". Currently not optimized with "clang -emit-llvm-bc
-| opt -std-compile-opts".
+| opt -O3".
//===---------------------------------------------------------------------===//
int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane. Currently not
-optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
+optimized with "clang -emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c). Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
+-emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8. Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
+-emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8. Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
-
-//===---------------------------------------------------------------------===//
-
-int a(int x) {return (x & 8) == 0 ? -1 : -9;}
-Should combine to (x | -9) ^ 8. Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
-
-//===---------------------------------------------------------------------===//
-
-int a(int x) {return (x & 8) == 0 ? -9 : -1;}
-Should combine to x | -9. Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
+-emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9. Currently not optimized with "clang
--emit-llvm-bc | opt -std-compile-opts".
+-emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31". Currently not optimized
-with "clang -emit-llvm-bc | opt -std-compile-opts".
+with "clang -emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
--emit-llvm-bc | opt -std-compile-opts".
+-emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)". Currently not
-optimized with "clang -emit-llvm-bc | opt -std-compile-opts".
+optimized with "clang -emit-llvm-bc | opt -O3".
+
+//===---------------------------------------------------------------------===//
+
+int g(int x) { return (x - 10) < 0; }
+Should combine to "x <= 9" (the sub has nsw). Currently not
+optimized with "clang -emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
-We would like to do the following transform in the instcombiner:
+int g(int x) { return (x + 10) < 0; }
+Should combine to "x < -10" (the add has nsw). Currently not
+optimized with "clang -emit-llvm-bc | opt -O3".
- -X/C -> X/-C
+//===---------------------------------------------------------------------===//
+
+int f(int i, int j) { return i < j + 1; }
+int g(int i, int j) { return j > i - 1; }
+Should combine to "i <= j" (the add/sub has nsw). Currently not
+optimized with "clang -emit-llvm-bc | opt -O3".
+
+//===---------------------------------------------------------------------===//
-However, this isn't valid if (-X) overflows. We can implement this when we
-have the concept of a "C signed subtraction" operator that which is undefined
-on overflow.
+unsigned f(unsigned x) { return ((x & 7) + 1) & 15; }
+The & 15 part should be optimized away, it doesn't change the result. Currently
+not optimized with "clang -emit-llvm-bc | opt -O3".
//===---------------------------------------------------------------------===//
//===---------------------------------------------------------------------===//
+[STORE SINKING]
+
Store sinking: This code:
void f (int n, int *cond, int *res) {
It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.
+
+//===---------------------------------------------------------------------===//
+This simple function from 179.art:
+
+int winner, numf2s;
+struct { double y; int reset; } *Y;
+
+void find_match() {
+ int i;
+ winner = 0;
+ for (i=0;i<numf2s;i++)
+ if (Y[i].y > Y[winner].y)
+ winner =i;
+}
+
+Compiles into (with clang TBAA):
+
+for.body: ; preds = %for.inc, %bb.nph
+ %indvar = phi i64 [ 0, %bb.nph ], [ %indvar.next, %for.inc ]
+ %i.01718 = phi i32 [ 0, %bb.nph ], [ %i.01719, %for.inc ]
+ %tmp4 = getelementptr inbounds %struct.anon* %tmp3, i64 %indvar, i32 0
+ %tmp5 = load double* %tmp4, align 8, !tbaa !4
+ %idxprom7 = sext i32 %i.01718 to i64
+ %tmp10 = getelementptr inbounds %struct.anon* %tmp3, i64 %idxprom7, i32 0
+ %tmp11 = load double* %tmp10, align 8, !tbaa !4
+ %cmp12 = fcmp ogt double %tmp5, %tmp11
+ br i1 %cmp12, label %if.then, label %for.inc
+
+if.then: ; preds = %for.body
+ %i.017 = trunc i64 %indvar to i32
+ br label %for.inc
+
+for.inc: ; preds = %for.body, %if.then
+ %i.01719 = phi i32 [ %i.01718, %for.body ], [ %i.017, %if.then ]
+ %indvar.next = add i64 %indvar, 1
+ %exitcond = icmp eq i64 %indvar.next, %tmp22
+ br i1 %exitcond, label %for.cond.for.end_crit_edge, label %for.body
+
+
+It is good that we hoisted the reloads of numf2s and Y out of the loop and
+sunk the store to winner out.
+
+However, this is awful on several levels: the conditional truncate in the loop
+(-indvars at fault? why can't we completely promote the IV to i64?).
+
+Beyond that, we have a partially redundant load in the loop: if "winner" (aka
+%i.01718) isn't updated, we reload Y[winner].y the next time through the loop.
+Similarly, the addressing that feeds it (including the sext) is redundant. In
+the end we get this generated assembly:
+
+LBB0_2: ## %for.body
+ ## =>This Inner Loop Header: Depth=1
+ movsd (%rdi), %xmm0
+ movslq %edx, %r8
+ shlq $4, %r8
+ ucomisd (%rcx,%r8), %xmm0
+ jbe LBB0_4
+ movl %esi, %edx
+LBB0_4: ## %for.inc
+ addq $16, %rdi
+ incq %rsi
+ cmpq %rsi, %rax
+ jne LBB0_2
+
+All things considered this isn't too bad, but we shouldn't need the movslq or
+the shlq instruction, or the load folded into ucomisd every time through the
+loop.
+
+On an x86-specific topic, if the loop can't be restructured, the movl should be a
+cmov.
+
//===---------------------------------------------------------------------===//
+[STORE SINKING]
+
GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.
//===---------------------------------------------------------------------===//
-[PHI TRANSLATE GEPs]
+[LOAD PRE CRIT EDGE SPLITTING]
GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic. This could be handled by GVN with some crazy
%10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
%11 = load i32* %10, align 4
-%11 is fully redundant, and in BB2 it should have the value %8.
+%11 is partially redundant, and in BB2 it should have the value %8.
+
+GCC PR33344 and PR35287 are similar cases.
-GCC PR33344 is a similar case.
//===---------------------------------------------------------------------===//
+[LOAD PRE]
+
There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
-GCC testsuite. There are many pre testcases as ssa-pre-*.c
+GCC testsuite; ones we don't get yet are (checked through loadpre25):
-//===---------------------------------------------------------------------===//
+[CRIT EDGE BREAKING]
+predcom-4.c
-There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
-GCC testsuite. For example, predcom-1.c is:
-
- for (i = 2; i < 1000; i++)
- fib[i] = (fib[i-1] + fib[i - 2]) & 0xffff;
-
-which compiles into:
-
-bb1: ; preds = %bb1, %bb1.thread
- %indvar = phi i32 [ 0, %bb1.thread ], [ %0, %bb1 ]
- %i.0.reg2mem.0 = add i32 %indvar, 2
- %0 = add i32 %indvar, 1 ; <i32> [#uses=3]
- %1 = getelementptr [1000 x i32]* @fib, i32 0, i32 %0
- %2 = load i32* %1, align 4 ; <i32> [#uses=1]
- %3 = getelementptr [1000 x i32]* @fib, i32 0, i32 %indvar
- %4 = load i32* %3, align 4 ; <i32> [#uses=1]
- %5 = add i32 %4, %2 ; <i32> [#uses=1]
- %6 = and i32 %5, 65535 ; <i32> [#uses=1]
- %7 = getelementptr [1000 x i32]* @fib, i32 0, i32 %i.0.reg2mem.0
- store i32 %6, i32* %7, align 4
- %exitcond = icmp eq i32 %0, 998 ; <i1> [#uses=1]
- br i1 %exitcond, label %return, label %bb1
-
-This is basically:
- LOAD fib[i+1]
- LOAD fib[i]
- STORE fib[i+2]
-
-instead of handling this as a loop or other xform, all we'd need to do is teach
-load PRE to phi translate the %0 add (i+1) into the predecessor as (i'+1+1) =
-(i'+2) (where i' is the previous iteration of i). This would find the store
-which feeds it.
-
-predcom-2.c is apparently the same as predcom-1.c
-predcom-3.c is very similar but needs loads feeding each other instead of
-store->load.
-predcom-4.c seems the same as the rest.
+[PRE OF READONLY CALL]
+loadpre5.c
+
+[TURN SELECT INTO BRANCH]
+loadpre14.c loadpre15.c
+actually a conditional increment: loadpre18.c loadpre19.c
//===---------------------------------------------------------------------===//
-Other simple load PRE cases:
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35287 [LPRE crit edge splitting]
+[LOAD PRE / STORE SINKING / SPEC HACK]
+
+This is a chunk of code from 456.hmmer:
+
+int f(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim, int *dpp,
+ int *tpdm, int xmb, int *bp, int *ms) {
+ int k, sc;
+ for (k = 1; k <= M; k++) {
+ mc[k] = mpp[k-1] + tpmm[k-1];
+ if ((sc = ip[k-1] + tpim[k-1]) > mc[k]) mc[k] = sc;
+ if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k]) mc[k] = sc;
+ if ((sc = xmb + bp[k]) > mc[k]) mc[k] = sc;
+ mc[k] += ms[k];
+ }
+}
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34677 (licm does this, LPRE crit edge)
- llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as | opt -mem2reg -simplifycfg -gvn | llvm-dis
+It is very profitable for this benchmark to turn the conditional stores to mc[k]
+into a conditional move (select instr in IR) and allow the final store to do the
+store. See GCC PR27313 for more details. Note that this is valid to xform even
+with the new C++ memory model, since mc[k] is previously loaded and later
+stored.
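
A sketch of the source-level shape we would like to reach, computing the max in
a local so the three compares become selects and only one store remains
(f_selects is an invented name, not the actual pass output):

void f_selects(int M, int *mc, int *mpp, int *tpmm, int *ip, int *tpim,
               int *dpp, int *tpdm, int xmb, int *bp, int *ms) {
  int k, sc, m;
  for (k = 1; k <= M; k++) {
    m = mpp[k-1] + tpmm[k-1];
    if ((sc = ip[k-1] + tpim[k-1]) > m) m = sc;   /* select */
    if ((sc = dpp[k-1] + tpdm[k-1]) > m) m = sc;  /* select */
    if ((sc = xmb + bp[k]) > m) m = sc;           /* select */
    mc[k] = m + ms[k];                            /* single unconditional store */
  }
}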
//===---------------------------------------------------------------------===//
-Type based alias analysis:
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705
+[SCALAR PRE]
+There are many PRE testcases in testsuite/gcc.dg/tree-ssa/ssa-pre-*.c in the
+GCC testsuite.
//===---------------------------------------------------------------------===//
-When GVN/PRE finds a store of float* to a must aliases pointer when expecting
-an int*, it should turn it into a bitcast. This is a nice generalization of
-the SROA hack that would apply to other cases, e.g.:
+There are some interesting cases in testsuite/gcc.dg/tree-ssa/pred-comm* in the
+GCC testsuite. For example, we get the first example in predcom-1.c, but
+miss the second one:
-int foo(int C, int *P, float X) {
- if (C) {
- bar();
- *P = 42;
- } else
- *(float*)P = X;
+unsigned fib[1000];
+unsigned avg[1000];
- return *P;
+__attribute__ ((noinline))
+void count_averages(int n) {
+ int i;
+ for (i = 1; i < n; i++)
+ avg[i] = (((unsigned long) fib[i - 1] + fib[i] + fib[i + 1]) / 3) & 0xffff;
}
+which compiles into two loads instead of one in the loop.
+
+predcom-2.c is the same as predcom-1.c
+
+predcom-3.c is very similar but needs loads feeding each other instead of
+store->load.
-One example (that requires crazy phi translation) is:
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16799 [BITCAST PHI TRANS]
//===---------------------------------------------------------------------===//
-A/B get pinned to the stack because we turn an if/then into a select instead
-of PRE'ing the load/store. This may be fixable in instcombine:
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892
+[ALIAS ANALYSIS]
+
+Type based alias analysis:
+http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705
+We should do better analysis of posix_memalign. At the least it should mark
+its pointer argument nocapture; at best, we should know that the out-value
+result doesn't point to anything (like malloc). One example of this is in
+SingleSource/Benchmarks/Misc/dt.c
+//===---------------------------------------------------------------------===//
Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629
With: llvm-gcc t2.c -S -o - -O0 -emit-llvm | llvm-as |
opt -mem2reg -gvn -instcombine | llvm-dis
-we miss it because we need 1) GEP PHI TRAN, 2) CRIT EDGE 3) MULTIPLE DIFFERENT
+we miss it because we need 1) CRIT EDGE 2) MULTIPLE DIFFERENT
VALS PRODUCED BY ONE BLOCK OVER DIFFERENT PATHS
//===---------------------------------------------------------------------===//
simplifylibcalls should do several optimizations for strspn/strcspn:
-strcspn(x, "") -> strlen(x)
-strcspn("", x) -> 0
-strspn("", x) -> 0
-strspn(x, "") -> strlen(x)
-strspn(x, "a") -> strchr(x, 'a')-x
-
strcspn(x, "a") -> inlined loop for up to 3 letters (similarly for strspn):
size_t __strcspn_c3 (__const char *__s, int __reject1, int __reject2,
//===---------------------------------------------------------------------===//
-"gas" uses this idiom:
+simplifylibcalls should turn these snprintf idioms into memcpy (GCC PR47917)
+
+char buf1[6], buf2[6], buf3[4], buf4[4];
+int i;
+
+int foo (void) {
+ int ret = snprintf (buf1, sizeof buf1, "abcde");
+ ret += snprintf (buf2, sizeof buf2, "abcdef") * 16;
+ ret += snprintf (buf3, sizeof buf3, "%s", i++ < 6 ? "abc" : "def") * 256;
+ ret += snprintf (buf4, sizeof buf4, "%s", i++ > 10 ? "abcde" : "defgh")*4096;
+ return ret;
+}
+
+//===---------------------------------------------------------------------===//
+
+"gas" uses this idiom:
else if (strchr ("+-/*%|&^:[]()~", *intel_parser.op_string))
..
else if (strchr ("<>", *intel_parser.op_string)
-Those should be turned into a switch.
+Those should be turned into a switch. SimplifyLibCalls only gets the second
+case.
//===---------------------------------------------------------------------===//
This is interesting for a couple reasons. First, in this:
- %3073 = call i8* @strcpy(i8* %3072, i8* %3071) nounwind
- %strlen = call i32 @strlen(i8* %3072)
-
-The strlen could be replaced with: %strlen = sub %3072, %3073, because the
-strcpy call returns a pointer to the end of the string. Based on that, the
-endptr GEP just becomes equal to 3073, which eliminates a strlen call and GEP.
-
-Second, the memcpy+strlen strlen can be replaced with:
+The strlen in the memcpy+strlen sequence can be replaced with:
%3074 = call i32 @strlen([5 x i8]* @"\01LC42") nounwind readonly
//===---------------------------------------------------------------------===//
-186.crafty contains this interesting pattern:
-
-%77 = call i8* @strstr(i8* getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0),
- i8* %30)
-%phitmp648 = icmp eq i8* %77, getelementptr ([6 x i8]* @"\01LC5", i32 0, i32 0)
-br i1 %phitmp648, label %bb70, label %bb76
-
-bb70: ; preds = %OptionMatch.exit91, %bb69
- %78 = call i32 @strlen(i8* %30) nounwind readonly align 1 ; <i32> [#uses=1]
-
-This is basically:
- cststr = "abcdef";
- if (strstr(cststr, P) == cststr) {
- x = strlen(P);
- ...
-
-The strstr call would be significantly cheaper written as:
-
-cststr = "abcdef";
-if (memcmp(P, str, strlen(P)))
- x = strlen(P);
-
-This is memcmp+strlen instead of strstr. This also makes the strlen fully
-redundant.
-
-//===---------------------------------------------------------------------===//
-
-186.crafty also contains this code:
-
-%1906 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
-%1907 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1906
-%1908 = call i8* @strcpy(i8* %1907, i8* %1905) nounwind align 1
-%1909 = call i32 @strlen(i8* getelementptr ([32 x i8]* @pgn_event, i32 0,i32 0))
-%1910 = getelementptr [32 x i8]* @pgn_event, i32 0, i32 %1909
-
-The last strlen is computable as 1908-@pgn_event, which means 1910=1908.
-
-//===---------------------------------------------------------------------===//
-
186.crafty has this interesting pattern with the "out.4543" variable:
call void @llvm.memcpy.i32(
//===---------------------------------------------------------------------===//
-#include <math.h>
-double foo(double a) { return sin(a); }
-
-This compiles into this on x86-64 Linux:
-foo:
- subq $8, %rsp
- call sin
- addq $8, %rsp
- ret
-vs:
-
-foo:
- jmp sin
-
-//===---------------------------------------------------------------------===//
-
The arg promotion pass should make use of nocapture to make its alias analysis
stuff much more precise.
//===---------------------------------------------------------------------===//
-Instcombine should replace the load with a constant in:
+int func(int a, int b) { if (a & 0x80) b |= 0x80; else b &= ~0x80; return b; }
+
+Generates this:
+
+define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
+entry:
+ %0 = and i32 %a, 128 ; <i32> [#uses=1]
+ %1 = icmp eq i32 %0, 0 ; <i1> [#uses=1]
+ %2 = or i32 %b, 128 ; <i32> [#uses=1]
+ %3 = and i32 %b, -129 ; <i32> [#uses=1]
+ %b_addr.0 = select i1 %1, i32 %3, i32 %2 ; <i32> [#uses=1]
+ ret i32 %b_addr.0
+}
+
+However, it's functionally equivalent to:
+
+ b = (b & ~0x80) | (a & 0x80);
+
+Which generates this:
+
+define i32 @func(i32 %a, i32 %b) nounwind readnone ssp {
+entry:
+ %0 = and i32 %b, -129 ; <i32> [#uses=1]
+ %1 = and i32 %a, 128 ; <i32> [#uses=1]
+ %2 = or i32 %0, %1 ; <i32> [#uses=1]
+ ret i32 %2
+}
+
+This can be generalized for other forms:
+
+ b = (b & ~0x80) | (a & 0x40) << 1;
+
+//===---------------------------------------------------------------------===//
+
+These two functions produce different code. They shouldn't:
+
+#include <stdint.h>
+
+uint8_t p1(uint8_t b, uint8_t a) {
+ b = (b & ~0xc0) | (a & 0xc0);
+ return (b);
+}
+
+uint8_t p2(uint8_t b, uint8_t a) {
+ b = (b & ~0x40) | (a & 0x40);
+ b = (b & ~0x80) | (a & 0x80);
+ return (b);
+}
+
+define zeroext i8 @p1(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
+entry:
+ %0 = and i8 %b, 63 ; <i8> [#uses=1]
+ %1 = and i8 %a, -64 ; <i8> [#uses=1]
+ %2 = or i8 %1, %0 ; <i8> [#uses=1]
+ ret i8 %2
+}
+
+define zeroext i8 @p2(i8 zeroext %b, i8 zeroext %a) nounwind readnone ssp {
+entry:
+ %0 = and i8 %b, 63 ; <i8> [#uses=1]
+ %.masked = and i8 %a, 64 ; <i8> [#uses=1]
+ %1 = and i8 %a, -128 ; <i8> [#uses=1]
+ %2 = or i8 %1, %0 ; <i8> [#uses=1]
+ %3 = or i8 %2, %.masked ; <i8> [#uses=1]
+ ret i8 %3
+}
+
+//===---------------------------------------------------------------------===//
+
+IPSCCP does not currently propagate argument dependent constants through
+functions where it does not know all of the callers. This includes functions
+with normal external linkage as well as templates, C99 inline functions, etc.
+Specifically, it does nothing to:
+
+define i32 @test(i32 %x, i32 %y, i32 %z) nounwind {
+entry:
+ %0 = add nsw i32 %y, %z
+ %1 = mul i32 %0, %x
+ %2 = mul i32 %y, %z
+ %3 = add nsw i32 %1, %2
+ ret i32 %3
+}
+
+define i32 @test2() nounwind {
+entry:
+ %0 = call i32 @test(i32 1, i32 2, i32 4) nounwind
+ ret i32 %0
+}
+
+It would be interesting to extend IPSCCP to be able to handle simple cases like
+this, where all of the arguments to a call are constant. Because IPSCCP runs
+before inlining, trivial templates and inline functions are not yet inlined.
+The results for a function + set of constant arguments should be memoized in a
+map.
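
For illustration, the same situation at the C level, with the result IPSCCP
could produce (14 is just the constant-folded (2+4)*1 + 2*4):

static int test(int x, int y, int z) { return (y + z) * x + y * z; }

int test2(void) {
  return test(1, 2, 4);   /* could be folded to: return 14; */
}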
+
+//===---------------------------------------------------------------------===//
+
+The libcall constant folding stuff should be moved out of SimplifyLibcalls into
+libanalysis' constantfolding logic. This would allow IPSCCP to be able to
+handle simple things like this:
+
+static int foo(const char *X) { return strlen(X); }
+int bar() { return foo("abcd"); }
+
+//===---------------------------------------------------------------------===//
+
+functionattrs doesn't know much about memcpy/memset. This function should be
+marked readnone rather than readonly, since it only twiddles local memory, but
+functionattrs doesn't handle memset/memcpy/memmove aggressively:
+
+struct X { int *p; int *q; };
+int foo() {
+ int i = 0, j = 1;
+ struct X x, y;
+ int **p;
+ y.p = &i;
+ x.q = &j;
+ p = __builtin_memcpy (&x, &y, sizeof (int *));
+ return **p;
+}
+
+This can be seen at:
+$ clang t.c -S -o - -mkernel -O0 -emit-llvm | opt -functionattrs -S
+
+
+//===---------------------------------------------------------------------===//
+
+Missed instcombine transformation:
+define i1 @a(i32 %x) nounwind readnone {
+entry:
+ %cmp = icmp eq i32 %x, 30
+ %sub = add i32 %x, -30
+ %cmp2 = icmp ugt i32 %sub, 9
+ %or = or i1 %cmp, %cmp2
+ ret i1 %or
+}
+This should be optimized to a single compare. Testcase derived from gcc.
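
A sketch of the single-compare form, with the constant derived by hand (the or
is false exactly for x in [31,39]):

int a_folded(int x) {
  return ((unsigned)x - 31) > 8;   /* same as: x == 30 || (unsigned)(x-30) > 9 */
}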
+
+//===---------------------------------------------------------------------===//
+
+Missed instcombine or reassociate transformation:
+int a(int a, int b) { return (a==12)&(b>47)&(b<58); }
+
+The sgt and slt should be combined into a single comparison. Testcase derived
+from gcc.
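
A sketch of the combined form, using the usual subtract-and-unsigned-compare
range check (b > 47 && b < 58 is b in [48,57]):

int a_combined(int a, int b) {
  return (a == 12) & ((unsigned)b - 48 < 10);
}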
+
+//===---------------------------------------------------------------------===//
+
+Missed instcombine transformation:
+
+ %382 = srem i32 %tmp14.i, 64 ; [#uses=1]
+ %383 = zext i32 %382 to i64 ; [#uses=1]
+ %384 = shl i64 %381, %383 ; [#uses=1]
+ %385 = icmp slt i32 %tmp14.i, 64 ; [#uses=1]
+
+The srem can be transformed to an and because if %tmp14.i is negative, the
+shift is undefined. Testcase derived from 403.gcc.
- static const char x[4] = {'a', 'b', 'c', 'd'};
+//===---------------------------------------------------------------------===//
+
+This is a range comparison on a divided result (from 403.gcc):
+
+ %1337 = sdiv i32 %1336, 8 ; [#uses=1]
+ %.off.i208 = add i32 %1336, 7 ; [#uses=1]
+ %1338 = icmp ult i32 %.off.i208, 15 ; [#uses=1]
- unsigned int y(void) {
- return *(unsigned int *)x;
- }
+We already catch this (removing the sdiv) if there isn't an add; we should
+handle the 'add' as well. This is a common idiom with its builtin_alloca code.
+C testcase:
+
+int a(int x) { return (unsigned)(x/16+7) < 15; }
-It currently only does this transformation when the size of the constant
-is the same as the size of the integer (so, try x[5]) and the last byte
-is a null (making it a C string). There's no need for these restrictions.
+Another similar case involves truncations on 64-bit targets:
+
+ %361 = sdiv i64 %.046, 8 ; [#uses=1]
+ %362 = trunc i64 %361 to i32 ; [#uses=2]
+...
+ %367 = icmp eq i32 %362, 0 ; [#uses=1]
//===---------------------------------------------------------------------===//
-InstCombine's "turn load from constant into constant" optimization should be
-more aggressive in the presence of bitcasts. For example, because of unions,
-this code:
+Missed instcombine/dagcombine transformation:
+define void @lshift_lt(i8 zeroext %a) nounwind {
+entry:
+ %conv = zext i8 %a to i32
+ %shl = shl i32 %conv, 3
+ %cmp = icmp ult i32 %shl, 33
+ br i1 %cmp, label %if.then, label %if.end
-union vec2d {
- double e[2];
- double v __attribute__((vector_size(16)));
-};
-typedef union vec2d vec2d;
+if.then:
+ tail call void @bar() nounwind
+ ret void
+
+if.end:
+ ret void
+}
+declare void @bar() nounwind
+
+The shift should be eliminated. Testcase derived from gcc.
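
A sketch of the expected result at the source level: since the zext'd value is
at most 255, (a << 3) < 33 is simply a < 5 (lshift_lt_folded is an invented
name):

extern void bar(void);

void lshift_lt_folded(unsigned char a) {
  if (a < 5)   /* equivalent to ((unsigned)a << 3) < 33 for an 8-bit a */
    bar();
}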
+
+//===---------------------------------------------------------------------===//
+
+These compile into different code, one gets recognized as a switch and the
+other doesn't due to phase ordering issues (PR6212):
+
+int test1(int mainType, int subType) {
+ if (mainType == 7)
+ subType = 4;
+ else if (mainType == 9)
+ subType = 6;
+ else if (mainType == 11)
+ subType = 9;
+ return subType;
+}
+
+int test2(int mainType, int subType) {
+ if (mainType == 7)
+ subType = 4;
+ if (mainType == 9)
+ subType = 6;
+ if (mainType == 11)
+ subType = 9;
+ return subType;
+}
+
+//===---------------------------------------------------------------------===//
+
+The following test case (from PR6576):
+
+define i32 @mul(i32 %a, i32 %b) nounwind readnone {
+entry:
+ %cond1 = icmp eq i32 %b, 0 ; <i1> [#uses=1]
+ br i1 %cond1, label %exit, label %bb.nph
+bb.nph: ; preds = %entry
+ %tmp = mul i32 %b, %a ; <i32> [#uses=1]
+ ret i32 %tmp
+exit: ; preds = %entry
+ ret i32 0
+}
+
+could be reduced to:
+
+define i32 @mul(i32 %a, i32 %b) nounwind readnone {
+entry:
+ %tmp = mul i32 %b, %a
+ ret i32 %tmp
+}
+
+//===---------------------------------------------------------------------===//
+
+We should use DSE + llvm.lifetime.end to delete dead vtable pointer updates.
+See GCC PR34949
+
+Another interesting case is that something related could be used for variables
+that go const after their ctor has finished. In these cases, globalopt (which
+can statically run the constructor) could mark the global const (so it gets put
+in the readonly section). A testcase would be:
+
+#include <complex>
+using namespace std;
+const complex<char> should_be_in_rodata (42,-42);
+complex<char> should_be_in_data (42,-42);
+complex<char> should_be_in_bss;
+
+Where we currently evaluate the ctors but the globals don't become const because
+the optimizer doesn't know they "become const" after the ctor is done. See
+GCC PR4131 for more examples.
+
+//===---------------------------------------------------------------------===//
+
+In this code:
+
+long foo(long x) {
+ return x > 1 ? x : 1;
+}
+
+LLVM emits a comparison with 1 instead of 0. 0 would be equivalent
+and cheaper on most targets.
+
+LLVM prefers comparisons with zero over non-zero in general, but in this
+case it chooses instead to keep the max operation obvious.
+
+//===---------------------------------------------------------------------===//
+
+define void @a(i32 %x) nounwind {
+entry:
+ switch i32 %x, label %if.end [
+ i32 0, label %if.then
+ i32 1, label %if.then
+ i32 2, label %if.then
+ i32 3, label %if.then
+ i32 5, label %if.then
+ ]
+if.then:
+ tail call void @foo() nounwind
+ ret void
+if.end:
+ ret void
+}
+declare void @foo()
+
+Generated code on x86-64 (other platforms give similar results):
+a:
+ cmpl $5, %edi
+ ja LBB2_2
+ cmpl $4, %edi
+ jne LBB2_3
+.LBB0_2:
+ ret
+.LBB0_3:
+ jmp foo # TAILCALL
+
+If we wanted to be really clever, we could simplify the whole thing to
+something like the following, which eliminates a branch:
+ xorl $1, %edi
+ cmpl $4, %edi
+ ja .LBB0_2
+ ret
+.LBB0_2:
+ jmp foo # TAILCALL
+
+//===---------------------------------------------------------------------===//
+
+We compile this:
+
+int foo(int a) { return (a & (~15)) / 16; }
+
+Into:
-static vec2d a={{1,2}}, b={{3,4}};
-
-vec2d foo () {
- return (vec2d){ .v = a.v + b.v * (vec2d){{5,5}}.v };
+define i32 @foo(i32 %a) nounwind readnone ssp {
+entry:
+ %and = and i32 %a, -16
+ %div = sdiv i32 %and, 16
+ ret i32 %div
+}
+
+but this code (X & -A)/A is X >> log2(A) when A is a power of 2, so this case
+should be instcombined into just "a >> 4".
+
+We do get this at the codegen level, so something knows about it, but
+instcombine should catch it earlier:
+
+_foo: ## @foo
+## BB#0: ## %entry
+ movl %edi, %eax
+ sarl $4, %eax
+ ret
+
+//===---------------------------------------------------------------------===//
+
+This code (from GCC PR28685):
+
+int test(int a, int b) {
+ int lt = a < b;
+ int eq = a == b;
+ if (lt)
+ return 1;
+ return eq;
}
-Compiles into:
+Is compiled to:
+
+define i32 @test(i32 %a, i32 %b) nounwind readnone ssp {
+entry:
+ %cmp = icmp slt i32 %a, %b
+ br i1 %cmp, label %return, label %if.end
+
+if.end: ; preds = %entry
+ %cmp5 = icmp eq i32 %a, %b
+ %conv6 = zext i1 %cmp5 to i32
+ ret i32 %conv6
+
+return: ; preds = %entry
+ ret i32 1
+}
+
+it could be:
+
+define i32 @test__(i32 %a, i32 %b) nounwind readnone ssp {
+entry:
+ %0 = icmp sle i32 %a, %b
+ %retval = zext i1 %0 to i32
+ ret i32 %retval
+}
-@a = internal constant %0 { [2 x double]
- [double 1.000000e+00, double 2.000000e+00] }, align 16
-@b = internal constant %0 { [2 x double]
- [double 3.000000e+00, double 4.000000e+00] }, align 16
+//===---------------------------------------------------------------------===//
+
+This code can be seen in viterbi:
+
+ %64 = call noalias i8* @malloc(i64 %62) nounwind
...
-define void @foo(%struct.vec2d* noalias nocapture sret %agg.result) nounwind {
+ %67 = call i64 @llvm.objectsize.i64(i8* %64, i1 false) nounwind
+ %68 = call i8* @__memset_chk(i8* %64, i32 0, i64 %62, i64 %67) nounwind
+
+llvm.objectsize.i64 should be taught about malloc/calloc, allowing it to
+fold to %62. This is a security win (overflows of malloc will get caught)
+and also a performance win by exposing more memsets to the optimizer.
+
+This occurs several times in viterbi.
+
+Note that this would change the semantics of @llvm.objectsize which by its
+current definition always folds to a constant. We also should make sure that
+we remove checking in code like
+
+ char *p = malloc(strlen(s)+1);
+ __strcpy_chk(p, s, __builtin_objectsize(p, 0));
+
+//===---------------------------------------------------------------------===//
+
+clang -O3 currently compiles this code
+
+int g(unsigned int a) {
+ unsigned int c[100];
+ c[10] = a;
+ c[11] = a;
+ unsigned int b = c[10] + c[11];
+ if(b > a*2) a = 4;
+ else a = 8;
+ return a + 7;
+}
+
+into
+
+define i32 @g(i32 a) nounwind readnone {
+ %add = shl i32 %a, 1
+ %mul = shl i32 %a, 1
+ %cmp = icmp ugt i32 %add, %mul
+ %a.addr.0 = select i1 %cmp, i32 11, i32 15
+ ret i32 %a.addr.0
+}
+
+The icmp should fold to false. This CSE opportunity is only available
+after GVN and InstCombine have run.
+
+//===---------------------------------------------------------------------===//
+
+memcpyopt should turn this:
+
+define i8* @test10(i32 %x) {
+ %alloc = call noalias i8* @malloc(i32 %x) nounwind
+ call void @llvm.memset.p0i8.i32(i8* %alloc, i8 0, i32 %x, i32 1, i1 false)
+ ret i8* %alloc
+}
+
+into a call to calloc. We should make sure that we analyze calloc as
+aggressively as malloc though.
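
At the source level the rewrite is roughly this sketch (test10_equiv is an
invented name):

#include <stdlib.h>

void *test10_equiv(int x) {
  return calloc(1, x);   /* malloc followed by a full-length memset of 0 */
}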
+
+//===---------------------------------------------------------------------===//
+
+clang -O3 doesn't optimize this:
+
+void f1(int* begin, int* end) {
+ std::fill(begin, end, 0);
+}
+
+into a memset. This is PR8942.
+
+//===---------------------------------------------------------------------===//
+
+clang -O3 -fno-exceptions currently compiles this code:
+
+void f(int N) {
+ std::vector<int> v(N);
+
+ extern void sink(void*); sink(&v);
+}
+
+into
+
+define void @_Z1fi(i32 %N) nounwind {
+entry:
+ %v2 = alloca [3 x i32*], align 8
+ %v2.sub = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 0
+ %tmpcast = bitcast [3 x i32*]* %v2 to %"class.std::vector"*
+ %conv = sext i32 %N to i64
+ store i32* null, i32** %v2.sub, align 8, !tbaa !0
+ %tmp3.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 1
+ store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
+ %tmp4.i.i.i.i.i = getelementptr inbounds [3 x i32*]* %v2, i64 0, i64 2
+ store i32* null, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
+ %cmp.i.i.i.i = icmp eq i32 %N, 0
+ br i1 %cmp.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i, label %cond.true.i.i.i.i
+
+_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.thread.i.i: ; preds = %entry
+ store i32* null, i32** %v2.sub, align 8, !tbaa !0
+ store i32* null, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
+ %add.ptr.i5.i.i = getelementptr inbounds i32* null, i64 %conv
+ store i32* %add.ptr.i5.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
+ br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
+
+cond.true.i.i.i.i: ; preds = %entry
+ %cmp.i.i.i.i.i = icmp slt i32 %N, 0
+ br i1 %cmp.i.i.i.i.i, label %if.then.i.i.i.i.i, label %_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i
+
+if.then.i.i.i.i.i: ; preds = %cond.true.i.i.i.i
+ call void @_ZSt17__throw_bad_allocv() noreturn nounwind
+ unreachable
+
+_ZNSt12_Vector_baseIiSaIiEEC2EmRKS0_.exit.i.i: ; preds = %cond.true.i.i.i.i
+ %mul.i.i.i.i.i = shl i64 %conv, 2
+ %call3.i.i.i.i.i = call noalias i8* @_Znwm(i64 %mul.i.i.i.i.i) nounwind
+ %0 = bitcast i8* %call3.i.i.i.i.i to i32*
+ store i32* %0, i32** %v2.sub, align 8, !tbaa !0
+ store i32* %0, i32** %tmp3.i.i.i.i.i, align 8, !tbaa !0
+ %add.ptr.i.i.i = getelementptr inbounds i32* %0, i64 %conv
+ store i32* %add.ptr.i.i.i, i32** %tmp4.i.i.i.i.i, align 8, !tbaa !0
+ call void @llvm.memset.p0i8.i64(i8* %call3.i.i.i.i.i, i8 0, i64 %mul.i.i.i.i.i, i32 4, i1 false)
+ br label %_ZNSt6vectorIiSaIiEEC1EmRKiRKS0_.exit
+
+This is just the handling of the construction of the vector. Most surprising
+here is the fact that all three null stores in %entry are dead (because we do
+no cross-block DSE).
+
+Also surprising is that %conv isn't simplified to 0 in %....exit.thread.i.i.
+This is because the client of LazyValueInfo doesn't simplify all instruction
+operands, just selected ones.
+
+//===---------------------------------------------------------------------===//
+
+clang -O3 -fno-exceptions currently compiles this code:
+
+void f(char* a, int n) {
+ __builtin_memset(a, 0, n);
+ for (int i = 0; i < n; ++i)
+ a[i] = 0;
+}
+
+into:
+
+define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
+entry:
+ %conv = sext i32 %n to i64
+ tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
+ %cmp8 = icmp sgt i32 %n, 0
+ br i1 %cmp8, label %for.body.lr.ph, label %for.end
+
+for.body.lr.ph: ; preds = %entry
+ %tmp10 = add i32 %n, -1
+ %tmp11 = zext i32 %tmp10 to i64
+ %tmp12 = add i64 %tmp11, 1
+ call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %tmp12, i32 1, i1 false)
+ ret void
+
+for.end: ; preds = %entry
+ ret void
+}
+
+This shouldn't need the ((zext (%n - 1)) + 1) game, and it should ideally fold
+the two memsets together.
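+
+Since the unconditional memset of %conv bytes already covers everything the
+loop writes, the whole function could presumably shrink to:
+
+define void @_Z1fPci(i8* nocapture %a, i32 %n) nounwind {
+entry:
+  %conv = sext i32 %n to i64
+  tail call void @llvm.memset.p0i8.i64(i8* %a, i8 0, i64 %conv, i32 1, i1 false)
+  ret void
+}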
+
+The issue with the addition only occurs in 64-bit mode, and appears to be at
+least partially caused by Scalar Evolution not keeping its cache updated: it
+returns the "wrong" result immediately after indvars runs, but figures out the
+expected result if it is run from scratch on IR resulting from running indvars.
+
+//===---------------------------------------------------------------------===//
+
+clang -O3 -fno-exceptions currently compiles this code:
+
+struct S {
+ unsigned short m1, m2;
+ unsigned char m3, m4;
+};
+
+void f(int N) {
+ std::vector<S> v(N);
+ extern void sink(void*); sink(&v);
+}
+
+into poor code for zero-initializing 'v' when N is >0. The problem is that
+S is only 6 bytes, but each element is 8 byte-aligned. We generate a loop and
+4 stores on each iteration. If the struct were 8 bytes, this gets turned into
+a memset.
+
+In order to handle this we have to:
+ A) Teach clang to generate metadata for memsets of structs that have holes in
+ them.
+ B) Teach clang to use such a memset for zero init of this struct (since it has
+ a hole), instead of doing elementwise zeroing.
+
+//===---------------------------------------------------------------------===//
+
+clang -O3 currently compiles this code:
+
+extern const int magic;
+double f() { return 0.0 * magic; }
+
+into
+
+@magic = external constant i32
+
+define double @_Z1fv() nounwind readnone {
entry:
- %0 = load <2 x double>* getelementptr (%struct.vec2d*
- bitcast (%0* @a to %struct.vec2d*), i32 0, i32 0), align 16
- %1 = load <2 x double>* getelementptr (%struct.vec2d*
- bitcast (%0* @b to %struct.vec2d*), i32 0, i32 0), align 16
+ %tmp = load i32* @magic, align 4, !tbaa !0
+ %conv = sitofp i32 %tmp to double
+ %mul = fmul double %conv, 0.000000e+00
+ ret double %mul
+}
+
+We should be able to fold away this fmul to 0.0. More generally, fmul(x,0.0)
+can be folded to 0.0 if we can prove that the LHS is not a NaN, not an INF, and
+does not have its sign bit set (i.e. is neither negative nor -0.0). The
+CannotBeNegativeZero predicate in value tracking should be
+extended to support general "fpclassify" operations that can return
+yes/no/unknown for each of these predicates.
-Instcombine should be able to optimize away the loads (and thus the globals).
+
+In this predicate, we know that uitofp is trivially never NaN or -0.0, and
+we know that it isn't +/-Inf if the floating point type has enough exponent bits
+to represent the largest integer value as < inf.
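+
+For example (a hand-written sketch, not existing compiler output), the
+extended predicate should let us fold this to 'ret double 0.000000e+00':
+
+define double @fmul_zero(i32 %x) nounwind readnone {
+  %conv = uitofp i32 %x to double   ; never NaN, never -0.0, finite for i32 -> double
+  %mul = fmul double %conv, 0.000000e+00
+  ret double %mul
+}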
+
+//===---------------------------------------------------------------------===//
+
+When optimizing a transformation that can change the sign of 0.0 (such as the
+0.0*val -> 0.0 transformation above), it might be provable that the sign of the
+expression doesn't matter. For example, by the above rules, we can't transform
+fmul(sitofp(x), 0.0) into 0.0, because x might be -1 and the result of the
+expression is defined to be -0.0.
+
+If we look at the uses of the fmul, we might be able to prove that all of them
+don't care about the sign of zero. For example, if we have:
+
+ fadd(fmul(sitofp(x), 0.0), 2.0)
+
+Since we know that x+2.0 doesn't care about the sign of any zeros in X, we can
+transform the fmul to 0.0, and then the fadd to 2.0.
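+
+In IR terms, a sketch of the case above:
+
+  %s = sitofp i32 %x to double
+  %m = fmul double %s, 0.000000e+00    ; may be -0.0 when %x is negative
+  %a = fadd double %m, 2.000000e+00    ; 2.0 for either sign of the zero
+
+so %m could be rewritten to 0.0 and %a folded to 2.0.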
+
+//===---------------------------------------------------------------------===//
+
+We should enhance memcpy/memmove/memset to allow a metadata node on them
+indicating that some bytes of the transfer are undefined. This is useful for
+frontends like clang when lowering struct copies, when some elements of the
+struct are undefined. Consider something like this:
+
+struct x {
+ char a;
+ int b[4];
+};
+void foo(struct x*P);
+struct x testfunc() {
+ struct x V1, V2;
+ foo(&V1);
+ V2 = V1;
+
+ return V2;
+}
+
+We currently compile this to:
+$ clang t.c -S -o - -O0 -emit-llvm | opt -scalarrepl -S
+
+
+%struct.x = type { i8, [4 x i32] }
+
+define void @testfunc(%struct.x* sret %agg.result) nounwind ssp {
+entry:
+ %V1 = alloca %struct.x, align 4
+ call void @foo(%struct.x* %V1)
+ %tmp1 = bitcast %struct.x* %V1 to i8*
+ %0 = bitcast %struct.x* %V1 to i160*
+ %srcval1 = load i160* %0, align 4
+ %tmp2 = bitcast %struct.x* %agg.result to i8*
+ %1 = bitcast %struct.x* %agg.result to i160*
+ store i160 %srcval1, i160* %1, align 4
+ ret void
+}
+
+This happens because SRoA sees that the temp alloca is being memcpy'd into and
+out of, and that it has holes, so it has to be conservative. If we knew about
+the holes, then this could be much, much better.
+
+Having information about these holes would also improve memcpy (etc) lowering at
+llc time when it gets inlined, because we can use smaller transfers. This also
+avoids partial register stalls in some important cases.
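+
+A rough sketch of what such an annotation might look like for the copy of V1
+(the metadata name and encoding here are invented purely for illustration):
+
+  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %tmp2, i8* %tmp1, i64 20, i32 4, i1 false), !undef.bytes !0
+
+!0 = metadata !{i32 1, i32 3}   ; hypothetical: 3 bytes starting at offset 1
+                                ; (the padding after 'a') are undefined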
+
+//===---------------------------------------------------------------------===//
+
+We don't fold (icmp (add) (add)) unless the two adds only have a single use.
+There are a lot of cases that we're refusing to fold because of this; for
+example, in 256.bzip2:
+
+ %indvar.next90 = add i64 %indvar89, 1 ;; Has 2 uses
+ %tmp96 = add i64 %tmp95, 1 ;; Has 1 use
+ %exitcond97 = icmp eq i64 %indvar.next90, %tmp96
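+
+Since both adds add the same constant 1 (and equality is unaffected by the
+common add), this could presumably become:
+
+  %exitcond97 = icmp eq i64 %indvar89, %tmp95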
+
+We don't fold this because we don't want to introduce an overlapped live range
+of the induction variable. However, we could make this more aggressive without
+causing performance problems in two ways:
+
+1. If *either* the LHS or RHS has a single use, we can definitely do the
+ transformation. In the overlapping liverange case we're trading one register
+ use for one fewer operation, which is a reasonable trade. Before doing this
+ we should verify that the llc output actually shrinks for some benchmarks.
+2. If both ops have multiple uses, we can still fold it if the operations are
+ both sinkable to *after* the icmp (e.g. in a subsequent block) which doesn't
+ increase register pressure.
+
+There are a ton of icmp's we aren't simplifying because of the reg pressure
+concern. Care is warranted here though because many of these are induction
+variables and other cases that matter a lot to performance, like the above.
+Here's a blob of code that you can drop into the bottom of visitICmp to see some
+missed cases:
+
+ { Value *A, *B, *C, *D;
+ if (match(Op0, m_Add(m_Value(A), m_Value(B))) &&
+ match(Op1, m_Add(m_Value(C), m_Value(D))) &&
+ (A == C || A == D || B == C || B == D)) {
+ errs() << "OP0 = " << *Op0 << " U=" << Op0->getNumUses() << "\n";
+ errs() << "OP1 = " << *Op1 << " U=" << Op1->getNumUses() << "\n";
+ errs() << "CMP = " << I << "\n\n";
+ }
+ }
+
+//===---------------------------------------------------------------------===//
+
+define i1 @test1(i32 %x) nounwind {
+ %and = and i32 %x, 3
+ %cmp = icmp ult i32 %and, 2
+ ret i1 %cmp
+}
+
+Can be folded to (x & 2) == 0.
+
+define i1 @test2(i32 %x) nounwind {
+ %and = and i32 %x, 3
+ %cmp = icmp ugt i32 %and, 1
+ ret i1 %cmp
+}
+
+Can be folded to (x & 2) != 0.
+
+SimplifyDemandedBits shrinks the "and" constant to 2 but instcombine misses the
+icmp transform.
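+
+For reference, a sketch of the folded forms:
+
+define i1 @test1(i32 %x) nounwind {
+  %and = and i32 %x, 2
+  %cmp = icmp eq i32 %and, 0
+  ret i1 %cmp
+}
+
+define i1 @test2(i32 %x) nounwind {
+  %and = and i32 %x, 2
+  %cmp = icmp ne i32 %and, 0
+  ret i1 %cmp
+}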
+
+//===---------------------------------------------------------------------===//
+
+This code:
+
+typedef struct {
+  int f1:1;
+  int f2:1;
+  int f3:1;
+  int f4:29;
+} t1;
+
+typedef struct {
+  int f1:1;
+  int f2:1;
+  int f3:30;
+} t2;
+
+t1 s1;
+t2 s2;
+
+void func1(void)
+{
+  s1.f1 = s2.f1;
+  s1.f2 = s2.f2;
+}
+
+Compiles into this IR (on x86-64 at least):
+
+%struct.t1 = type { i8, [3 x i8] }
+@s2 = global %struct.t1 zeroinitializer, align 4
+@s1 = global %struct.t1 zeroinitializer, align 4
+define void @func1() nounwind ssp noredzone {
+entry:
+ %0 = load i32* bitcast (%struct.t1* @s2 to i32*), align 4
+ %bf.val.sext5 = and i32 %0, 1
+ %1 = load i32* bitcast (%struct.t1* @s1 to i32*), align 4
+ %2 = and i32 %1, -4
+ %3 = or i32 %2, %bf.val.sext5
+ %bf.val.sext26 = and i32 %0, 2
+ %4 = or i32 %3, %bf.val.sext26
+ store i32 %4, i32* bitcast (%struct.t1* @s1 to i32*), align 4
+ ret void
+}
+
+The two ands of %0 and the two ors should each be merged into one.
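+
+i.e. roughly (hand-written sketch):
+
+  %0 = load i32* bitcast (%struct.t1* @s2 to i32*), align 4
+  %bf.val = and i32 %0, 3
+  %1 = load i32* bitcast (%struct.t1* @s1 to i32*), align 4
+  %2 = and i32 %1, -4
+  %3 = or i32 %2, %bf.val
+  store i32 %3, i32* bitcast (%struct.t1* @s1 to i32*), align 4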
+
+//===---------------------------------------------------------------------===//
+
+Machine level code hoisting can be useful in some cases. For example, PR9408
+is about:
+
+typedef union {
+ void (*f1)(int);
+ void (*f2)(long);
+} funcs;
+
+void foo(funcs f, int which) {
+ int a = 5;
+ if (which) {
+ f.f1(a);
+ } else {
+ f.f2(a);
+ }
+}
+
+which we compile to:
+
+foo: # @foo
+# BB#0: # %entry
+ pushq %rbp
+ movq %rsp, %rbp
+ testl %esi, %esi
+ movq %rdi, %rax
+ je .LBB0_2
+# BB#1: # %if.then
+ movl $5, %edi
+ callq *%rax
+ popq %rbp
+ ret
+.LBB0_2: # %if.else
+ movl $5, %edi
+ callq *%rax
+ popq %rbp
+ ret
+
+Note that the two tail blocks (%if.then and %if.else) are identical. This
+isn't caught at the IR level because one call is passing an i32 and the other
+is passing an i64.
+
+//===---------------------------------------------------------------------===//
+
+I see this sort of pattern in 176.gcc in a few places (e.g. the start of
+store_bit_field). The rem should be replaced with a multiply and subtract:
+
+ %3 = sdiv i32 %A, %B
+ %4 = srem i32 %A, %B
+
+Similarly for udiv/urem. Note that this shouldn't be done on X86 or ARM,
+which can do this in a single operation (instruction or libcall). It is
+probably best to do this in the code generator.
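+
+In IR terms, the equivalent computation is roughly (sketch):
+
+  %3 = sdiv i32 %A, %B
+  %mul = mul i32 %3, %B
+  %4 = sub i32 %A, %mul        ; %A srem %B == %A - (%A sdiv %B) * %B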
+
+//===---------------------------------------------------------------------===//
+
+unsigned foo(unsigned x, unsigned y) { return (x & y) == 0 || x == 0; }
+should fold to (x & y) == 0.
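+
+In IR the pattern looks roughly like (sketch):
+
+  %and = and i32 %x, %y
+  %cmp1 = icmp eq i32 %and, 0
+  %cmp2 = icmp eq i32 %x, 0
+  %or = or i1 %cmp1, %cmp2      ; should simplify to just %cmp1
+
+since %x == 0 already implies (%x & %y) == 0.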
+
+//===---------------------------------------------------------------------===//
+unsigned foo(unsigned x, unsigned y) { return x > y && x != 0; }
+should fold to x > y.
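+
+Similarly, in IR (sketch):
+
+  %cmp1 = icmp ugt i32 %x, %y
+  %cmp2 = icmp ne i32 %x, 0
+  %and = and i1 %cmp1, %cmp2    ; should simplify to just %cmp1
+
+since %x > %y (unsigned) already implies %x != 0.
+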
//===---------------------------------------------------------------------===//