Target Independent Opportunities:
//===---------------------------------------------------------------------===//

We should make the various targets' "IMPLICIT_DEF" instructions be a single
target-independent opcode like TargetInstrInfo::INLINEASM.  This would allow
us to eliminate the TargetInstrDesc::isImplicitDef() method, and would allow
us to avoid having to define this for every target for every register class.

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call'
instructions so that the .td files don't list all the call-clobbered
registers as implicit defs.  Instead, these should be added by the code
generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call
   instructions for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different
   clobber sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber
   sets of calls.

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)

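For reference, a sketch of the expansion (hypot_fast is a hypothetical name;
unlike hypot, this ignores errno and can overflow for extreme inputs, hence
the -ffast-math requirement):

#include <math.h>

double hypot_fast(double x, double y) {
  return sqrt(x*x + y*y);   /* the sqrt call lowers to llvm.sqrt */
}
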
//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:
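
The shape of testcase involved is something like this (a sketch, not the
original: any store to one global whose stored value loads both it and a
second global exhibits the problem):

int X, Y;

void test() {
  X = X | (Y << 3);   /* the store to X needs loads of both X and Y */
}
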
The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack.  But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//

Reassociate should turn: X*X*X*X -> t=(X*X) (t*t) to eliminate a multiply.
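
In source terms (hypothetical example; two multiplies instead of three):

int pow4(int X) {
  int t = X * X;   /* compute X*X once */
  return t * t;    /* (X*X)*(X*X) == X*X*X*X */
}
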
//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x + y + x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

this could be done in SelectionDAGISel.cpp, along with other special cases,
for now.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
sequences as well.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is
equal to the type size.  It works but can be overly conservative as the
alignment of specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of emitting a compare and a branch around the increment ("je LBB16_2
#cond_next"), the increment can often be done branch-free, e.g. with a
cmp+adc/sbb sequence on x86.
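
In source terms (hypothetical example; both functions compute the same
thing):

int count;

void tick(int cond) {
  if (cond)               /* today: compare + branch around the increment */
    count++;
}

void tick_branchfree(int cond) {
  count += (cond != 0);   /* desired: materialize the flag, no branch */
}
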
//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
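
An example of the first transform (hypothetical function; the two calls
share an argument and could become a single sincos call):

#include <math.h>

void polar_to_xy(double r, double theta, double *x, double *y) {
  *x = r * cos(theta);   /* cos(theta) ...                          */
  *y = r * sin(theta);   /* ... and sin(theta) could be one sincos  */
}
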
//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret long cst':

	%struct.X = type { i32, i32 }
	%struct.Y = type { %struct.X }

	define i64 @bar() {
	  %retval = alloca %struct.Y, align 8
	  %tmp12 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 0
	  store i32 0, i32* %tmp12
	  %tmp15 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 1
	  store i32 1, i32* %tmp15
	  %retval.upgrd.1 = bitcast %struct.Y* %retval to i64*
	  %retval.upgrd.2 = load i64* %retval.upgrd.1
	  ret i64 %retval.upgrd.2
	}

SROA should be extended to do so.

//===---------------------------------------------------------------------===//

-scalarrepl should promote this to be a vector scalar.

	%struct..0anon = type { <4 x float> }

	define void @test1(<4 x float> %V, float* %P) {
	  %u = alloca %struct..0anon, align 16
	  %tmp = getelementptr %struct..0anon* %u, i32 0, i32 0
	  store <4 x float> %V, <4 x float>* %tmp
	  %tmp1 = bitcast %struct..0anon* %u to [4 x float]*
	  %tmp.upgrd.1 = getelementptr [4 x float]* %tmp1, i32 0, i32 1
	  %tmp.upgrd.2 = load float* %tmp.upgrd.1
	  %tmp3 = mul float %tmp.upgrd.2, 2.000000e+00
	  store float %tmp3, float* %P
	  ret void
	}

//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

	void %test(uint* %P) {
	  %tmp = load uint* %P
	  %tmp14 = or uint %tmp, 3305111552
	  %tmp15 = and uint %tmp14, 3321888767
	  store uint %tmp15, uint* %P
	  ret void
	}

(3305111552 is 0xC5000000 and 3321888767 is 0xC5FFFFFF, so the net effect is
to set only the top byte, to 0xC5.)

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

This is valid because clz of a 32-bit value is 32 only when x == 0, and bit 5
of the count is set only in that case.  Code that computes
"int t = __builtin_clz(x);" and then shifts t right by 5 should simplify to
the compare.

//===---------------------------------------------------------------------===//

Legalize should lower cttz like this:
   cttz(x) = popcnt((x-1) & ~x)
on targets that have popcnt but not cttz.  itanium, what else?  (The formula
as previously written claimed to compute ctlz, but (x-1) & ~x isolates the
bits below the lowest set bit, so it counts trailing zeros; ctlz needs the
high bit smeared down first.)
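
Both lowerings as C sketches, assuming 32-bit values:

/* cttz via popcnt: (x-1) & ~x sets exactly the bits below the lowest set
   bit of x; for x == 0 it is all-ones, giving 32. */
unsigned cttz32(unsigned x) {
  return __builtin_popcount((x - 1) & ~x);
}

/* ctlz via popcnt: smear the highest set bit into every lower position,
   then count the zeros that remain above it. */
unsigned ctlz32(unsigned x) {
  x |= x >> 1;  x |= x >> 2;  x |= x >> 4;
  x |= x >> 8;  x |= x >> 16;
  return __builtin_popcount(~x);
}
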
//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
	{
	  /* Flip the target bit of each basis state */
	  reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
	}

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be
just so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias
reg->node[i], but that is a separate alias-analysis improvement.

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine:

unsigned int swap_32(unsigned int v) {
  v = ((v & 0x00ff00ffU) << 8) | ((v & 0xff00ff00U) >> 8);
  v = ((v & 0x0000ffffU) << 16) | ((v & 0xffff0000U) >> 16);
  return v;
}

Nor is this (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
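
For reference, the memcpy idiom below is the portable way to spell the single
(possibly unaligned) 16-bit load these should become; it matches read_16_le
on little-endian hosts and read_16_be on big-endian ones:

#include <string.h>

unsigned short read_16_native(const unsigned char *adr) {
  unsigned short r;
  memcpy(&r, adr, 2);   /* compiles down to one 16-bit load */
  return r;
}
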
//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X / C1 ), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.  See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples
of this construct.
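
The payoff on a concrete unsigned case (hypothetical functions; the divide
becomes a subtract-and-compare range check):

int before(unsigned x) { return x / 10 == 5; }  /* udiv + icmp               */
int after(unsigned x)  { return x - 50 < 10; }  /* true exactly on [50,60)   */
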
//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level.  We need a "loops to
memcpy" pass.

//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;

int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit
systems, we don't eliminate the computation of the top half of
effective_addr2 because we don't have whole-function selection dags.  On x86,
this means we use one extra register for the function when effective_addr2 is
declared as U64 than when it is declared U32.

//===---------------------------------------------------------------------===//

Promote for i32 bswap can use i64 bswap + shr.  Useful on targets with 64-bit
regs and bswap, like itanium.
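
The identity as a C sketch (zero-extend, byte-swap the i64, take the high
half):

unsigned bswap32_via_64(unsigned x) {
  /* the four bytes of x end up reversed in the top half of the i64 */
  return (unsigned)(__builtin_bswap64((unsigned long long)x) >> 32);
}
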
//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has.  This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two identical IV's (after promotion) on PPC/ARM:

LBB1_1: @bb.preheader
	...
	add r1, r1, #1  <- [0,+,1]
	...
	add r2, r2, #1  <- [0,+,1]
	...

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
	%tmp.1 = and i32 %a, 1		; <i32> [#uses=1]
	%tmp.2 = icmp ne i32 %tmp.1, 0		; <i1> [#uses=1]
	br i1 %tmp.2, label %then.0, label %else.0

then.0:		; preds = %entry
	%tmp.5 = add i32 %a, -1		; <i32> [#uses=1]
	%tmp.3 = call i32 @t4( i32 %tmp.5 )		; <i32> [#uses=1]
	br label %return

else.0:		; preds = %entry
	%tmp.7 = icmp ne i32 %a, 0		; <i1> [#uses=1]
	br i1 %tmp.7, label %then.1, label %return

then.1:		; preds = %else.0
	%tmp.11 = add i32 %a, -2		; <i32> [#uses=1]
	%tmp.9 = call i32 @t4( i32 %tmp.11 )		; <i32> [#uses=1]
	br label %return

return:		; preds = %then.1, %else.0, %then.0
	%result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
	                    [ %tmp.9, %then.1 ]
	ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination is not transforming this function, because it is
returning n, which fails the isDynamicConstant check in the accumulator
recursion elimination routine.

long long fib(const long long n) {
  switch(n) {
    case 0:
    case 1:
      return n;
    default:
      return fib(n-1) + fib(n-2);
  }
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative.  "return foo() << 1" can be tail recursion eliminated.
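
For reference, a sketch of what the eliminated pow2m1 could look like (not
the pass's actual output):

int pow2m1_iter(int n) {
  int acc = 0;
  while (n-- > 0)
    acc = (acc << 1) | 1;   /* 2*acc + 1, using the shift form */
  return acc;
}
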
//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
	%tmp = load i32* %x		; <i32> [#uses=0]
	%tmp.foo = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
	%tmp3 = call i32 @foo( i32* %x )		; <i32> [#uses=1]
	ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

"basicaa" should know how to look through "or" instructions that act like add
instructions.  For example in this code, the x*4+1 is turned into x*4 | 1,
and basicaa can't analyze the array subscript, leading to duplicated loads in
the generated code:

void test(int X, int Y, int a[]) {
  int i;
  for (i=2; i<1000; i+=4) {
    a[i+0] = a[i-1+0]*a[i-2+0];
    a[i+1] = a[i-1+1]*a[i-2+1];
    a[i+2] = a[i-1+2]*a[i-2+2];
    a[i+3] = a[i-1+3]*a[i-2+3];
  }
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass.  Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

The code we emit tests x and branches ("je LBB1_2  # cond_true") to the
assertion block.  The PIC base computation (call+popl) is only used on one
path through the code, but is currently always computed in the entry block.
It would be better to sink the picbase computation down into the block for
the assertion, as it is the only one that uses it.  This happens for a lot
of code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html
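
A sketch of the idea on a toy sparse switch (hypothetical keys; 7 works as a
trivial "perfect hash" here only because {10,100,1000} happen to be distinct
mod 7):

int classify(int x) {              /* was: switch (x) { case 10: ... } */
  static const int key[7] = { -1, -1, 100, 10, -1, -1, 1000 };
  static const int val[7] = {  0,  0,   2,  1,  0,  0,    3 };
  unsigned h = (unsigned)x % 7u;   /* the hash                         */
  return key[h] == x ? val[h] : 0; /* one compare replaces the chain   */
}
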
//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations.  On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar.  On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable.  For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer):

	movb _m_HotKey+3, %cl
	movb _m_HotKey+4, %dl
	movb _m_HotKey+2, %ch
	...
	movzwl _m_HotKey+4, %edx
	...

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

	%struct.THotKey = type { i16, i8, i8, i8 }
	define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
	  ...
	  %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
	  %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
	  %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
	  %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2
	  ...

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
the same time.

//===---------------------------------------------------------------------===//

We should extend parameter attributes to capture more information about
pointer parameters for alias analysis.  Some ideas:

1. Add a "nocapture" attribute, which indicates that the callee does not
   store the address of the parameter into a global or any other memory
   location visible to the caller.  This can be used to make basicaa and
   other analyses more powerful.  It is true for things like memcpy, strcat,
   and many other things, including structs passed by value, most C++
   references, etc.
2. Generalize readonly to be set on parameters.  This is important mod/ref
   info for the function, which is important for basicaa and others.  It can
   also be used by the inliner to avoid inserting a memcpy for byval
   arguments when the function is inlined.

These attributes can be inferred by various analysis passes such as the
globalsmodrefaa pass.  Note that getting #2 right is actually really tricky.
Consider this code:

struct S { int field; };  S G;
void caller(S byvalarg) { G.field = 1; ... }
void callee() { caller(G); }

The fact that the caller does not modify byval arg is not enough, we need
to know that it doesn't modify G either.  This is very tricky.

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

This GCC bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34043
contains a testcase that compiles down to:

	%struct.XMM128 = type { <4 x float> }
	...
	%src = alloca %struct.XMM128
	...
	%tmp6263 = bitcast %struct.XMM128* %src to <2 x i64>*
	%tmp65 = getelementptr %struct.XMM128* %src, i32 0, i32 0
	store <2 x i64> %tmp5899, <2 x i64>* %tmp6263, align 16
	%tmp66 = load <4 x float>* %tmp65, align 16
	%tmp71 = add <4 x float> %tmp66, %tmp66

If the mid-level optimizer turned the bitcast of pointer + store of tmp5899
into a bitcast of the vector value and a store to the pointer, then the
store->load could be easily removed.

//===---------------------------------------------------------------------===//

Consider a local array with a large constant initializer:

	long long input[8] = {1,1,1,1,1,1,1,1};

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able.  This is good, but the
memcpy gets lowered to load/stores in the code generator.  This is also ok,
except that the codegen lowering for memcpy doesn't handle the case when the
source is a constant global.  This gives us atrocious code like this (each
load is followed by a store of %ecx into the stack copy):

	movl _C.0.1444-"L1$pb"+32(%eax), %ecx
	movl _C.0.1444-"L1$pb"+20(%eax), %ecx
	movl _C.0.1444-"L1$pb"+36(%eax), %ecx
	movl _C.0.1444-"L1$pb"+44(%eax), %ecx
	movl _C.0.1444-"L1$pb"+40(%eax), %ecx
	movl _C.0.1444-"L1$pb"+12(%eax), %ecx
	movl _C.0.1444-"L1$pb"+4(%eax), %ecx
	...

when it could just store the constant words directly.

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The following code should compile into "ret int undef".  Instead, LLVM
produces "ret int 0":

int f() {
  int i;
  return i;
}

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop.  One trivial example is:

#include <stdio.h>
int main() {
    int nRet = 17;
    int nLoop;
    for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
        if ( nLoop & 1 )
            nRet += 2;
        else
            nRet -= 1;
    }
    return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size.  The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc,
x86, etc.  On X86, we miss a bunch of 'rotate by variable' cases because the
rotate matching code in dag combine doesn't look through truncates
aggressively enough.  Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y) {
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x) {
  int y = 9;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x) {
  int y = 10;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:  return (x << 8) | ((y >> 48) & 0xffull);
  case 2:  return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:  return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:  return (x << 32) | ((y >> 24) & 0xffffffffull);
  default: return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f3/f4 right.  On x86-32, several of these
generate truly horrible code, instead of using shld and friends.  On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness.  PPC64 misses f, f5 and f6.  CellSPU aborts in isel.

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together.  For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy.  This can
only be done safely if "b" isn't modified between the strlen and memcpy of
course.
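
A sketch of the source pattern (hypothetical function; written with
strlen(b)+1 so the copy includes the terminator and is exactly strcpy):

#include <string.h>

char *copy(char *a, const char *b) {
  size_t n = strlen(b);
  memcpy(a, b, n + 1);   /* strlen + memcpy ...                */
  return a;              /* ... could merge into strcpy(a, b)  */
}
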
//===---------------------------------------------------------------------===//

We should be able to evaluate this loop:

int test(int x_offs) {
  while (x_offs > 4)
    x_offs -= 4;
  return x_offs;
}

//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.
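
What a balanced expansion of X**8 looks like (a sketch of the intended
result: three multiplies instead of seven):

int factorial_balanced(int X) {
  int x2 = X * X;      /* X^2 */
  int x4 = x2 * x2;    /* X^4 */
  return x4 * x4;      /* X^8 */
}
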
//===---------------------------------------------------------------------===//

We generate a horrible libcall for llvm.powi.  For example, we compile:

double f(double a) { return std::pow(a, 4); }

into a call to L___powidf2$stub: the argument is loaded ("movsd 16(%esp),
%xmm0"), spilled, and passed to the runtime function, when the pow could
simply be expanded into two multiplies.

//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
1.821u 0.003s 0:01.82 100.0%	0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
0.821u 0.001s 0:00.82 100.0%	0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

We miss some instcombines for stuff like this:

void bar();
void foo(unsigned int a) {
  /* This one is equivalent to a >= (3 << 2). */
  if ((a >> 2) >= 3)
    bar();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mullo (cheaper).  Testcase:

void f();
void bar(unsigned n) {
  if (n % 3 == 0)
    f();
}

I think this basically amounts to a dag combine to simplify comparisons
against multiply hi's into a comparison against the mullo.
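
A sketch of the mullo form for the n % 3 == 0 test (0xAAAAAAAB is the
multiplicative inverse of 3 mod 2^32; multiples of 3 map into
[0, 0x55555555] and everything else maps above it):

int divisible_by_3(unsigned n) {
  return n * 0xAAAAAAABu <= 0x55555555u;   /* one mullo + compare, no mulhi */
}
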
//===---------------------------------------------------------------------===//

SROA is not promoting the union on the stack in this example; we should end
up with no memory traffic on the union at all:

union vec2d {
    double e[2];
    double v __attribute__((vector_size(16)));
};
typedef union vec2d vec2d;

static vec2d a={{1,2}}, b={{3,4}};

vec2d foo () {
    return (vec2d){ .v = a.v + b.v * (vec2d){{5,5}}.v };
}

//===---------------------------------------------------------------------===//