//===---------------------------------------------------------------------===//
// Random ideas for the X86 backend.
//===---------------------------------------------------------------------===//

Add MUL2U and MUL2S nodes to represent a multiply that returns both the
Hi and Lo parts (combination of MUL and MULH[SU] into one node).  Add these to
X86, and make the dag combiner produce them when needed.  This will eliminate
one imul from the code generated for:

long long test(long long X, long long Y) { return X*Y; }

by using the EAX result from the mul.  We should add a similar node for
DIV.

Another example:

long long test(int X, int Y) { return (long long)X*Y; }

... which should only be one imul instruction.
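
For reference, the Hi result that MULH[SU] (and the proposed MUL2[SU])
produces is the upper half of the double-width product.  A C sketch of the
semantics, assuming 32-bit int and 64-bit long long (the function names are
illustrative only):

/* Illustrative only: the upper 32 bits of the 64-bit product. */
unsigned mulhu(unsigned X, unsigned Y) {
  return (unsigned)(((unsigned long long)X * Y) >> 32);
}
int mulhs(int X, int Y) {
  return (int)(((long long)X * Y) >> 32);
}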

//===---------------------------------------------------------------------===//

This should be one DIV/IDIV instruction, not a libcall:

unsigned test(unsigned long long X, unsigned Y) {
	return X/Y;
}

This can be done trivially with a custom legalizer.  What about overflow
though?  http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224

//===---------------------------------------------------------------------===//

Some targets (e.g. Athlons) prefer ffreep to fstp ST(0):
http://gcc.gnu.org/ml/gcc-patches/2004-04/msg00659.html

//===---------------------------------------------------------------------===//

This should use fiadd on chips where it is profitable:

double foo(double P, int *I) { return P+*I; }

//===---------------------------------------------------------------------===//

The FP stackifier needs to be global.  Also, it should handle simple
permutations to reduce the number of shuffle instructions, e.g. turning:

fld P	->		fld Q
fld Q			fld P
fxch

Ideas:
http://gcc.gnu.org/ml/gcc-patches/2004-11/msg02410.html

//===---------------------------------------------------------------------===//

Improvements to the multiply -> shift/add algorithm:
http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html

//===---------------------------------------------------------------------===//

Improve code like this (occurs fairly frequently, e.g. in LLVM):

long long foo(int x) { return 1LL << x; }

http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html

Another useful one would be ~0ULL >> X and ~0ULL << X.
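
For reference, the semantics the expansion has to implement on a 32-bit
target, as a C sketch of the two halves (shl1 is a made-up name; this shows
what must be computed, not the suggested instruction sequence):

/* 1LL << x with x in [0,63], split into 32-bit halves. */
unsigned long long shl1(unsigned x) {
  unsigned lo = x < 32 ? 1u << x : 0;
  unsigned hi = x < 32 ? 0 : 1u << (x - 32);
  return ((unsigned long long)hi << 32) | lo;
}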

//===---------------------------------------------------------------------===//

Should support emission of the bswap instruction, probably by adding a new
DAG node for byte swapping.  Also useful on PPC which has byte-swapping loads.
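
For reference, the semantics of a 32-bit byte swap as a C sketch; a BSWAP
DAG node would turn this whole function into a single instruction on the
i486 and later:

unsigned bswap32(unsigned x) {
  return (x << 24) | ((x & 0xff00u) << 8) |
         ((x >> 8) & 0xff00u) | (x >> 24);
}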

//===---------------------------------------------------------------------===//

Compile this:

_Bool f(_Bool a) { return a!=1; }

Since a is either 0 or 1, this is just 'return a^1;' and should codegen to a
single xor.

//===---------------------------------------------------------------------===//

Some isel ideas:

1. Dynamic programming based approach when compile time is not an
   issue.
2. Code duplication (addressing mode) during isel.
3. Other ideas from "Register-Sensitive Selection, Duplication, and
   Sequencing of Instructions".

//===---------------------------------------------------------------------===//

Should we promote i16 to i32 to avoid partial register update stalls?
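
For example (hypothetical testcase), 16-bit arithmetic like this writes only
%ax, leaving the upper half of %eax live, which stalls processors that track
partial register writes; doing the add in 32 bits and truncating the result
would avoid the partial update:

short f(short a, short b) { return a + b; }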

//===---------------------------------------------------------------------===//

Leave any_extend as a pseudo instruction and hint to the register
allocator.  Delay codegen until post register allocation.

//===---------------------------------------------------------------------===//

Add a target specific hook to the DAG combiner to handle SINT_TO_FP and
FP_TO_SINT when the source operand is already in memory.
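
e.g. (hypothetical testcase) when the integer is already in memory, the
conversion could use the memory operand directly (fild on the FP stack)
instead of first loading it into an integer register:

double f(int *P) { return (double)*P; }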

//===---------------------------------------------------------------------===//

Check if load folding would add a cycle in the dag.

//===---------------------------------------------------------------------===//

Model X86 EFLAGS as a real register to avoid redundant cmp / test.  e.g.

	setg %al
	testb %al, %al               # unnecessary
	jne .BB7

//===---------------------------------------------------------------------===//

Count leading zeros and count trailing zeros:

int clz(int X) { return __builtin_clz(X); }
int ctz(int X) { return __builtin_ctz(X); }

$ gcc t.c -S -o - -O3 -fomit-frame-pointer -masm=intel
clz:
        bsr     %eax, DWORD PTR [%esp+4]
        xor     %eax, 31
        ret
ctz:
        bsf     %eax, DWORD PTR [%esp+4]
        ret

however, check that these are defined for 0 and 32.  Our intrinsics are, GCC's
aren't.

//===---------------------------------------------------------------------===//

Use push/pop instructions in prolog/epilog sequences instead of stores off
ESP (certain code size win, perf win on some [which?] processors).

//===---------------------------------------------------------------------===//

Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor.  They are slower on the P4 due to only updating some processor
flags.

//===---------------------------------------------------------------------===//

Open code rint, floor, ceil, trunc:
http://gcc.gnu.org/ml/gcc-patches/2004-08/msg02006.html
http://gcc.gnu.org/ml/gcc-patches/2004-08/msg02011.html

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).
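
A sketch of the pattern to match, assuming the C library provides sincos (a
GNU extension; not all libms have it):

#include <math.h>

void f(double x, double *s, double *c) {
  *s = sin(x);   /* two libcalls today...            */
  *c = cos(x);   /* ...could be one sincos(x, s, c)  */
}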

//===---------------------------------------------------------------------===//

For all targets, not just X86:
When llvm.memcpy, llvm.memset, or llvm.memmove are lowered, they should be
optimized to a few store instructions if the source is constant and the length
is smallish (< 8).  This will greatly help some tests like Shootout/strcat.c.
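
For example (hypothetical testcase), this call copies 8 constant bytes and
could become two 32-bit immediate stores instead of a call to memcpy:

#include <string.h>

void f(char *p) {
  memcpy(p, "abcdefg", 8);   /* 7 chars + nul = 8 bytes */
}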

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
	movl Y, %eax
	shll $3, %eax
	orl X, %eax
	movl %eax, X
	ret

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

//===---------------------------------------------------------------------===//

The instruction selector sometimes misses folding a load into a compare.  The
pattern is written as (cmp reg, (load p)).  Because the compare isn't
commutative, it is not matched with the load on both sides.  The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
when it can invert the result of the compare for free.
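
For example (hypothetical testcase), these two functions are equivalent, but
only the second has the load on the RHS where the (cmp reg, (load p)) pattern
can fold it; canonicalizing the first into the second just swaps the compare
operands and the condition:

int f(int *P, int X) { return *P < X; }   /* load on the LHS */
int g(int *P, int X) { return X > *P; }   /* load on the RHS */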

//===---------------------------------------------------------------------===//

LSR should be turned on for the X86 backend and tuned to take advantage of its
addressing modes.

//===---------------------------------------------------------------------===//

When compiled with unsafemath enabled, "main" should enable SSE DAZ mode and
other fast SSE modes.
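
A minimal sketch of what such a startup hook might do, assuming SSE is
available (DAZ is MXCSR bit 6 and needs SSE2-class hardware, FTZ is bit 15;
a real implementation would check CPUID first):

#include <xmmintrin.h>

static void enable_fast_sse_modes(void) {
  _mm_setcsr(_mm_getcsr() | 0x8040);   /* set FTZ (0x8000) and DAZ (0x0040) */
}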

//===---------------------------------------------------------------------===//

Think about doing i64 math in SSE regs.

//===---------------------------------------------------------------------===//

The DAG Isel doesn't fold the loads into the adds in this testcase.  The
pattern selector does.  This is because the chain value of the load gets
selected first, and the loads aren't checking to see if they are only used by
an add.

int %test(int* %x, int* %y, int* %z) {
	%X = load int* %x
	%Y = load int* %y
	%Z = load int* %z
	%a = add int %X, %Y
	%b = add int %a, %Z
	ret int %b
}

This is bad for register pressure, though the dag isel is producing a
better schedule. :)

//===---------------------------------------------------------------------===//

This testcase should have no SSE instructions in it, and only one load from
a constant pool:

double %test3(bool %B) {
	%C = select bool %B, double 123.412, double 523.01123123
	ret double %C
}

Currently, the select is being lowered, which prevents the dag combiner from
turning 'select (load CPI1), (load CPI2)' -> 'load (select CPI1, CPI2)'

The pattern isel got this one right.

//===---------------------------------------------------------------------===//

We need to lower switch statements to tablejumps when appropriate instead of
always into binary branch trees.
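
For example (hypothetical testcase), a dense switch like this should compile
to an indexed jump through a table rather than a tree of compares and
branches:

int f(int x) {
  switch (x) {
  case 0: return 10;
  case 1: return 11;
  case 2: return 12;
  case 3: return 13;
  case 4: return 14;
  default: return 0;
  }
}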

//===---------------------------------------------------------------------===//

SSE doesn't have [mem] op= reg instructions.  If we have an SSE instruction
like this:

  X = add Y, [mem]

and the register allocator decides to spill X, it is cheaper to emit this as:

  Y = add Y, [mem]
  store Y -> spill slot

than this:

  X = add Y, [mem]
  store X -> spill slot

..and this uses one fewer register (so this should be done at load folding
time, not at spiller time).  *Note* however that this can only be done
if Y is dead.  Here's a testcase:

%.str_3 = external global [15 x sbyte]		; <[15 x sbyte]*> [#uses=0]
implementation   ; Functions:
declare void %printf(int, ...)
void %main() {
build_tree.exit:
	br label %no_exit.i7
no_exit.i7:		; preds = %no_exit.i7, %build_tree.exit
	%tmp.0.1.0.i9 = phi double [ 0.000000e+00, %build_tree.exit ], [ %tmp.34.i18, %no_exit.i7 ]		; <double> [#uses=1]
	%tmp.0.0.0.i10 = phi double [ 0.000000e+00, %build_tree.exit ], [ %tmp.28.i16, %no_exit.i7 ]		; <double> [#uses=1]
	%tmp.28.i16 = add double %tmp.0.0.0.i10, 0.000000e+00
	%tmp.34.i18 = add double %tmp.0.1.0.i9, 0.000000e+00
	br bool false, label %Compute_Tree.exit23, label %no_exit.i7
Compute_Tree.exit23:		; preds = %no_exit.i7
	tail call void (int, ...)* %printf( int 0 )
	store double %tmp.34.i18, double* null
	ret void
}

The loop body currently codegens (in part) to:

.BBmain_1:	# no_exit.i7
***	movsd %XMM2, QWORD PTR [%ESP + 8]
***	addsd %XMM2, %XMM1
***	movsd QWORD PTR [%ESP + 8], %XMM2
	jmp .BBmain_1	# no_exit.i7

This is a bugpoint reduced testcase, which is why the testcase doesn't make
much sense (e.g. it's an infinite loop). :)

//===---------------------------------------------------------------------===//

None of the FPStack instructions are handled in
X86RegisterInfo::foldMemoryOperand, which prevents the spiller from
folding spill code into the instructions.

//===---------------------------------------------------------------------===//

In many cases, LLVM generates code like this:

_test:
	movl 8(%esp), %eax
	cmpl %eax, 4(%esp)
	setl %al
	movzbl %al, %eax
	ret

on some processors (which ones?), it is more efficient to do this:

_test:
	movl 8(%esp), %ebx
	xor %eax, %eax
	cmpl %ebx, 4(%esp)
	setl %al
	ret

Doing this correctly is tricky though, as the xor clobbers the flags.