Target Independent Opportunities:

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs.  Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling conventions (e.g. x86) which have different
   clobber sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)
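
For illustration (assuming -ffast-math semantics, so errno and extra precision
need not be preserved; the function name is made up):

#include <math.h>
double dist(double x, double y) {
  return hypot(x, y);        /* intended to become sqrt(x*x + y*y) */
}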

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:
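
A sketch of the kind of code in question (assume two global ints that are each
loaded, or'd with the same bit, and stored back):

int X, Y;
void fn(int i) {
  int x = X;                /* load X */
  int y = Y;                /* load Y */
  X = x | (1 << i);         /* store X */
  Y = y | (1 << i);         /* store Y */
}

On x86, each read-modify-write should ideally fold into a single or-to-memory
instruction.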

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that X and Y
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack.  But that is a short-term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;

into:
 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
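
In C terms this is roughly (which byte holds the sign bit depends on
endianness):

int is_negative(int *P) {
  return *P < 0;     /* only the byte containing the sign bit needs to be loaded */
}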

//===---------------------------------------------------------------------===//

Reassociate should turn X*X*X*X into t=(X*X); t*t, eliminating a multiply.
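
A sketch of the intended rewrite in C (names are illustrative):

int pow4(int X) {
  int t = X * X;     /* one multiply */
  return t * t;      /* two multiplies total, instead of three for X*X*X*X */
}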

//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
cases too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

typedef float v4sf __attribute__((vector_size(16)));   /* assumed definition of v4sf */

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of emitting a compare and a conditional branch (je) around the increment, a
branchless sequence should be emitted when that is profitable.
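
A minimal C example of the pattern (names are illustrative):

int counter;
void maybe_bump(int x) {
  if (x != 0)
    counter++;        /* ideally compiled without a taken branch */
}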

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      void sincos(double x, double *sin, double *cos);
      void sincosf(float x, float *sin, float *cos);
      void sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
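
A sketch of the intended expansion (GNU's sincos interface is assumed;
_GNU_SOURCE is needed to declare it):

#define _GNU_SOURCE
#include <math.h>

void use(double x, double *out) {
  double s, c;
  sincos(x, &s, &c);      /* before: &s and &c escape, so s and c live in memory */
  out[0] = s;
  out[1] = c;
}

Rewriting the call as  s = sin(x); c = cos(x);  means &s and &c no longer
escape, so SROA/mem2reg can promote s and c to registers.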

//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret long cst':

%struct.X = type { i32, i32 }
%struct.Y = type { %struct.X }

define i64 @bar() {
	%retval = alloca %struct.Y, align 8
	%tmp12 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 0
	store i32 0, i32* %tmp12
	%tmp15 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 1
	store i32 1, i32* %tmp15
	%retval.upgrd.1 = bitcast %struct.Y* %retval to i64*
	%retval.upgrd.2 = load i64* %retval.upgrd.1
	ret i64 %retval.upgrd.2
}

It should be extended to do so.
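
The C analogue is roughly (struct and function names are illustrative):

struct X { int a, b; };
struct Y { struct X x; };
struct Y make(void) {
  struct Y r;
  r.x.a = 0;
  r.x.b = 1;
  return r;       /* should become a single constant i64 return */
}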

//===---------------------------------------------------------------------===//

-scalarrepl should promote this to be a vector scalar.

%struct..0anon = type { <4 x float> }

define void @test1(<4 x float> %V, float* %P) {
	%u = alloca %struct..0anon, align 16
	%tmp = getelementptr %struct..0anon* %u, i32 0, i32 0
	store <4 x float> %V, <4 x float>* %tmp
	%tmp1 = bitcast %struct..0anon* %u to [4 x float]*
	%tmp.upgrd.1 = getelementptr [4 x float]* %tmp1, i32 0, i32 1
	%tmp.upgrd.2 = load float* %tmp.upgrd.1
	%tmp3 = mul float %tmp.upgrd.2, 2.000000e+00
	store float %tmp3, float* %P
	ret void
}

//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

void %test(uint* %P) {
	%tmp = load uint* %P
	%tmp14 = or uint %tmp, 3305111552
	%tmp15 = and uint %tmp14, 3321888767
	store uint %tmp15, uint* %P
	ret void
}
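
In C terms (3305111552 is 0xC5000000 and 3321888767 is 0xC5FFFFFF; the byte
offset below assumes a little-endian layout):

void set_top_byte(unsigned *P) {
  *P = (*P | 0xC5000000u) & 0xC5FFFFFFu;  /* net effect: the top byte becomes 0xC5,     */
                                          /* i.e. a one-byte store to ((unsigned char *)P)[3] */
}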

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.  This comes up for code of
the form  int t = __builtin_clz(x);  followed by a use of t >> 5, which for
32-bit x is 1 exactly when x is zero.
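
A small illustration (note __builtin_clz(0) is formally undefined in C; the
combine relies on the target's ctlz giving 32 for a zero input):

int is_zero(unsigned x) {
  return __builtin_clz(x) >> 5;     /* should fold to x == 0 */
}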

//===---------------------------------------------------------------------===//

Legalize should lower cttz like this:
   cttz(x) = popcnt((x-1) & ~x)
on targets that have popcnt but not cttz (Itanium, what else?).
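
A quick C check of the identity (__builtin_popcount is used only for
illustration):

/* (x-1) & ~x sets exactly the bits below the lowest set bit of x, so its
   population count equals the number of trailing zeros (32 when x == 0). */
unsigned cttz32(unsigned x) {
  return __builtin_popcount((x - 1) & ~x);
}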

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

   for(i=0; i<reg->size; i++)
     {
       /* Flip the target bit of each basis state */
       reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
     }

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be
just so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);

   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but that requires better alias analysis.

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine:

unsigned int swap_32(unsigned int v) {
  v = ((v & 0x00ff00ffU) << 8)  | ((v & 0xff00ff00U) >> 8);
  v = ((v & 0x0000ffffU) << 16) | ((v & 0xffff0000U) >> 16);
  return v;
}

Nor is this (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
machines:

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.  See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.
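
An illustrative instance of the construct (the constants are made up):

int in_bucket(unsigned x) {
  return x / 100 == 7;      /* foldable to the range check 700 <= x && x <= 799 */
}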

//===---------------------------------------------------------------------===//

Instcombine misses several of these cases (see the testcase in the patch):
http://gcc.gnu.org/ml/gcc-patches/2006-10/msg01519.html

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level.  We need a "loops to memcpy"
pass.
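
The shape of loop in question is roughly (a made-up example):

void copy_hist(short *dst, const short *src, int n) {
  int i;
  for (i = 0; i < n; i++)
    dst[i] = src[i];        /* -> memcpy(dst, src, n * sizeof(short)) when the ranges don't overlap */
}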

//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
more register for the function when effective_addr2 is declared as U64 than
when it is declared as U32.

//===---------------------------------------------------------------------===//

Promote for i32 bswap can use i64 bswap + shr.  Useful on targets with 64-bit
regs and bswap, like Itanium.
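
A sketch of the idea in C (__builtin_bswap64 stands in for the target's 64-bit
bswap; the zero-extended 32-bit value ends up in the high half of the swap):

unsigned bswap32_via_64(unsigned x) {
  return (unsigned)(__builtin_bswap64((unsigned long long)x) >> 32);
}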

//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has.  This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two identical IVs (after promotion) on PPC/ARM:

LBB1_1: @bb.preheader
	...
	add r1, r1, #1		<- [0,+,1]
	...
	add r2, r2, #1		<- [0,+,1]
	...

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
;RUN: llvm-upgrade < %s | llvm-as | opt -tailcallelim | llvm-dis | not grep call

int %t4(int %a) {
entry:
	%tmp.1 = and int %a, 1
	%tmp.2 = cast int %tmp.1 to bool
	br bool %tmp.2, label %then.0, label %else.0

then.0:
	%tmp.5 = add int %a, -1
	%tmp.3 = call int %t4( int %tmp.5 )
	br label %return

else.0:
	%tmp.7 = setne int %a, 0
	br bool %tmp.7, label %then.1, label %return

then.1:
	%tmp.11 = add int %a, -2
	%tmp.9 = call int %t4( int %tmp.11 )
	br label %return

return:
	%result.0 = phi int [ 0, %else.0 ], [ %tmp.3, %then.0 ],
	                    [ %tmp.9, %then.1 ]
	ret int %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination is not transforming this function, because it is
returning n, which fails the isDynamicConstant check in the accumulator
recursion transformation.

long long fib(const long long n) {
  switch(n) {
    case 0:
    case 1:
      return n;
    default:
      return fib(n-1) + fib(n-2);
  }
}

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-upgrade < %s | llvm-as | opt -argpromotion | llvm-dis | grep x.val

implementation ; Functions:

internal int %foo(int* %x) {
entry:
	%tmp = load int* %x
	%tmp.foo = call int %foo(int *%x)
	ret int %tmp.foo
}

int %bar(int* %x) {
entry:
	%tmp3 = call int %foo( int* %x)                ; <int> [#uses=1]
	ret int %tmp3
}

//===---------------------------------------------------------------------===//

"basicaa" should know how to look through "or" instructions that act like add
instructions.  For example in this code, the x*4+1 is turned into x*4 | 1, and
basicaa can't analyze the array subscript, leading to duplicated loads in the
generated code:

void test(int X, int Y, int a[]) {
   int i;
   for (i=2; i<1000; i+=4) {
       a[i+0] = a[i-1+0]*a[i-2+0];
       a[i+1] = a[i-1+1]*a[i-2+1];
       a[i+2] = a[i-1+2]*a[i-2+2];
       a[i+3] = a[i-1+3]*a[i-2+3];
   }
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass.  Consider this silly
example in pic mode: a function whose only conditional code is an assertion
on its argument.  The generated entry block computes the PIC base, compares
the argument against zero, and branches (je) to the assertion block.
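
A minimal C sketch of that shape (the function name and body are illustrative):

#include <assert.h>
void foo(int x) {
  assert(x);      /* the PIC base is only needed on the assertion-failure path,
                     to materialize the addresses of the message strings */
}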

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block.  It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it.  This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function isel would also handle it.

//===---------------------------------------------------------------------===//