//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend.
//===---------------------------------------------------------------------===//

Reimplement 'select' in terms of 'SEL'.

* We would really like to support UXTAB16, but we need to prove that the
  add cannot carry from the low 16-bit chunk into the high one (see the
  sketch after this list).

* Implement predication support.
* Implement pre/post increment support. (e.g. PR935)
* Coalesce stack slots!
* Implement smarter constant generation for binops with large immediates.

* Consider materializing FP constants like 0.0f and 1.0f using integer
  immediate instructions and then copying to the FPU. Slower than a load
  into the FPU?
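
For the UXTAB16 bullet above, a hedged C model of the instruction (the
function name is illustrative, not something the backend matches today): each
16-bit lane is computed independently, so a plain 32-bit add only matches
UXTAB16 when the low half-sum provably cannot carry into the high half.

unsigned uxtab16_like(unsigned x, unsigned y) {
  /* rd[15:0]  = rn[15:0]  + zext(rm[7:0])
     rd[31:16] = rn[31:16] + zext(rm[23:16]), no carry between lanes */
  unsigned lo = ((x & 0xFFFFu) + (y & 0xFFu)) & 0xFFFFu;
  unsigned hi = ((x >> 16) + ((y >> 16) & 0xFFu)) & 0xFFFFu;
  return (hi << 16) | lo;
}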

//===---------------------------------------------------------------------===//

The constant island pass is extremely naive. If a constant pool entry is
out of range, it *always* splits a block and inserts a copy of the cp
entry inline. It should:

1. Check to see if there is already a copy of this constant nearby. If so,
   reuse it.
2. Instead of always splitting blocks to insert the constant, insert it in
   nearby 'water'.
3. Constant island references should be ref counted. If a constant reference
   is out-of-range, and the last reference to a constant is relocated, the
   dead constant should be removed (a minimal sketch follows below).

This pass has all the framework needed to implement this, but it hasn't
been done.
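
A minimal sketch of the ref counting in point 3, assuming a per-entry count
maintained by the pass; all names here are hypothetical, not existing LLVM
API:

struct CPEntry {
  unsigned RefCount;   /* number of references still using this copy */
  /* ... the constant data, its offset in the island, etc. ... */
};

void removeDeadEntry(struct CPEntry *E);   /* hypothetical: unlink the copy */

/* Called when a reference is relocated away from this copy of the constant. */
void releaseCPEntry(struct CPEntry *E) {
  if (--E->RefCount == 0)
    removeDeadEntry(E);   /* last reference gone; the copy is dead */
}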

//===---------------------------------------------------------------------===//

We need to start generating predicated instructions. The .td files have a way
to express this now (see the PPC conditional return instruction), but the
branch folding pass (or a new if-cvt pass) should start producing these, at
least in the trivial case.

Among the obvious wins, doing so can eliminate the need to custom expand
copysign (i.e. we won't need to custom expand it to get the conditional
negate).

This allows us to eliminate one instruction from:

define i32 @_Z6slow4bii(i32 %x, i32 %y) {
        %tmp = icmp sgt i32 %x, %y
        %retval = select i1 %tmp, i32 %x, i32 %y
        ret i32 %retval
}

//===---------------------------------------------------------------------===//

Implement long long "X-3" with instructions that fold the immediate in. These
were disabled due to badness with the ARM carry flag on subtracts.
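
For reference, the source pattern is just a 64-bit subtract of a small
immediate; on ARM it wants a subs/sbc pair with the immediate folded in,
which is where the carry-flag trouble came from:

long long sub3(long long x) { return x - 3; }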

//===---------------------------------------------------------------------===//

We currently compile abs:
int foo(int p) { return p < 0 ? -p : p; }

into a negate / compare / conditional-move sequence.

This is very, uh, literal. This could be a 3 operation sequence:

  t = (p sign extend)
  x = (t ^ p)
  return (x - t)

Which would be better. This occurs in png decode.
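
The same sequence as a C sketch (assuming 32-bit int and an arithmetic right
shift of negative values, which is implementation-defined in C):

int abs3(int p) {
  int t = p >> 31;   /* sign extend: 0 if p >= 0, -1 if p < 0 */
  int x = t ^ p;     /* bitwise-negate p only when it was negative */
  return x - t;      /* subtracting -1 adds the missing 1 */
}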

//===---------------------------------------------------------------------===//

More load / store optimizations:
1) Look past instructions without side-effects (not load, store, branch, etc.)
when forming the list of loads / stores to optimize.

2) Smarter register allocation?
We are probably missing some opportunities to use ldm / stm. Consider:

        ldr r5, [r0]
        ldr r4, [r0, #4]

This cannot be merged into a ldm, since ldm requires ascending register
numbers for ascending addresses. Perhaps we will need to do the transformation
before register allocation. Then teach the register allocator to allocate a
chunk of consecutive registers.

3) Better representation for block transfer? Olden/power copies a block of
doubles with a long sequence of fldd / fstd pairs through a single D register.
If we can spare the registers, it would be better to use fldm and fstm here.
Need major register allocator enhancement though.

4) Can we recognize the relative position of constantpool entries? i.e. treat
a series of ldr's from adjacent constantpool entries as loads from a single
base entry at increasing offsets. Then the ldr's can be combined into a single
ldm. See Olden/power.

Note that for ARM v4, gcc uses ldmia to load a pair of 32-bit values to
represent a double 64-bit FP constant (an adr of the literal followed by an
ldmia into a register pair).

5) Can we make use of ldrd and strd? Instead of generating ldm / stm, use
ldrd / strd instead if there are only two destination registers that form an
even/odd pair. However, we would probably pay a penalty if the address is not
aligned on an 8-byte boundary. This requires more information on load / store
nodes (and MIs?) than we currently carry. (A sketch of the source pattern
follows this list.)
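
For point 5, a hedged sketch of a source pattern that could become a single
ldrd once the alignment and even/odd register pairing are known (the function
name is illustrative):

unsigned long long load_pair(const unsigned *p) {
  /* two adjacent word loads; with p 8-byte aligned and the destinations
     allocated to an even/odd register pair, these could be one ldrd */
  unsigned lo = p[0];
  unsigned hi = p[1];
  return ((unsigned long long)hi << 32) | lo;
}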

//===---------------------------------------------------------------------===//

* Consider this silly example:

double bar(double x) {
  double r = foo(x);
  return x+r;
}

Ignore the prologue and epilogue stuff for a second. The generated code copies
the incoming argument to callee-save registers:

        mov r4, r0
        mov r5, r1

and these copies are only used by the fmdrr instruction after the call. It
would have been better had the fmdrr been scheduled before the call, placing
the result in a callee-save DPR register; the two mov ops would not have been
necessary.

//===---------------------------------------------------------------------===//

Calling convention related stuff:

* gcc's parameter passing implementation is terrible and we suffer as a result:

e.g.
struct s {
  double d1;
  int s1;
};

void foo(struct s S) {
  printf("%g, %d\n", S.d1, S.s1);
}

'S' is passed via registers r0, r1, r2. But gcc stores them to the stack, and
then reloads them to r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):

        stmia sp, {r0, r1, r2}
        ldmia sp, {r1-r2}
        ldr r3, [sp, #8]

Instead of a stmia, ldmia, and a ldr, wouldn't it be better to do three moves?

* Returning an aggregate type is even worse:

e.g.
struct s foo(void) {
  struct s S = {1.1, 2};
  return S;
}

        ...
        @ lr needed for prologue
        ldmia r0, {r0, r1, r2}
        stmia sp, {r0, r1, r2}
        stmia ip, {r0, r1, r2}
        ...

r0 (and later ip) is the hidden parameter from the caller in which to store
the value. The first ldmia loads the constants into r0, r1, r2. The last stmia
stores r0, r1, r2 into the address passed in. However, there is one additional
stmia that stores r0, r1, and r2 to some stack location. The store is dead.

The llvm-gcc generated code looks like this:

csretcc void %foo(%struct.s* %agg.result) {
entry:
        %S = alloca %struct.s, align 4          ; <%struct.s*> [#uses=1]
        %memtmp = alloca %struct.s              ; <%struct.s*> [#uses=1]
        cast %struct.s* %S to sbyte*            ; <sbyte*>:0 [#uses=2]
        call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
        cast %struct.s* %agg.result to sbyte*   ; <sbyte*>:1 [#uses=2]
        call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
        cast %struct.s* %memtmp to sbyte*       ; <sbyte*>:2 [#uses=1]
        call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
        ret void
}

llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from the
constantpool). Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower memcpy (of small size)
to be ldmia / stmia. I think option 2 is better but the current register
allocator cannot allocate a chunk of registers at a time.

A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes (see the sketch after
this list).

* ARM CSRet calling convention requires the hidden argument to be returned by
  the callee.
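
Returning to the memcpy lowering above, a hand-written sketch of what option 2
would aim for on the 12-byte struct (the function is illustrative only):

void copy12(unsigned *dst, const unsigned *src) {
  /* three word moves instead of a memcpy call; given three consecutive
     scratch registers these become one ldmia / stmia pair */
  dst[0] = src[0];
  dst[1] = src[1];
  dst[2] = src[2];
}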

//===---------------------------------------------------------------------===//

We can definitely do a better job on BB placements to eliminate some branches.
It's very common to see llvm generated assembly code that looks like this:

LBB4:
        ...
        beq LBB3
        b LBB2

If BB4 is the only predecessor of BB3, then we can emit BB3 after BB4. We can
then eliminate the beq and turn the unconditional branch to LBB2 into a bne.

See McCat/18-imp/ComputeBoundingBoxes for an example.

//===---------------------------------------------------------------------===//

We need register scavenging. Currently, the 'ip' register is reserved in case
frame indexes are too big. This means that we generate extra code for stuff
like this:

void foo(unsigned x, unsigned y, unsigned z, unsigned *a, unsigned *b, unsigned *c) {
  short Rconst = (short) (16384.0f * 1.40200 + 0.5);
  *a = x * Rconst;
  *b = y * Rconst;
  *c = z * Rconst;
}

The marked instructions show the extra save / restore of r4:

*** stmfd sp!, {r4, r7}
        ...
        orr r4, r4, #89, 24 @ 22784
        ...
*** ldmfd sp!, {r4, r7}

This is apparently all because we couldn't use ip here.

//===---------------------------------------------------------------------===//

Pre-/post- indexed load / stores:

1) We should not make the pre/post-indexed load/store transform if the base
ptr is guaranteed to be live beyond the load/store. This can happen if the
base ptr is live out of the block we are performing the optimization on (see
the sketch after this list).

In most cases, this is just a wasted optimization. However, sometimes it can
negatively impact the performance because two-address code is more restrictive
when it comes to scheduling.

Unfortunately, liveout information is currently unavailable during DAG combine
time.

2) Consider splitting an indexed load / store into a pair of add/sub +
load/store to solve #1 (in TwoAddressInstructionPass.cpp).

3) Enhance LSR to generate more opportunities for indexed ops.

4) Once we add support for multiple result patterns, write indexed load
patterns instead of C++ instruction selection code.

5) Use FLDM / FSTM to emulate indexed FP load / store.
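
A hedged illustration of point 1 (names hypothetical): the base pointer is
live out of the loop, so folding the increment into a post-indexed ldr buys
nothing and yields more restrictive two-address code:

int sum_and_peek(int *p, int n) {
  int s = 0;
  for (int i = 0; i < n; ++i) {
    s += *p;      /* candidate for a post-indexed ldr ... */
    p += 1;       /* ... with this increment folded in ... */
  }
  return s + *p;  /* ... but p is still live here, after the loop */
}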

//===---------------------------------------------------------------------===//

We should add i64 support to take advantage of the 64-bit load / stores.
We can add a pseudo i64 register class containing pseudo registers that are
register pairs. All other ops (e.g. add, sub) would be expanded as usual.

We need to add pseudo instructions (i.e. gethi / getlo) to extract i32
registers from the i64 register. These are single moves which can be
eliminated if the destination register is a sub-register of the source. We
should implement proper subreg support in the register allocator to coalesce
these away.

There are other minor issues, such as needing multiple instructions for a
spill / restore of a register pair.
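
The kind of code that would benefit, as a sketch: the i64 load can become an
ldm / ldrd on a register pair, and the add expands to adds/adc as usual:

long long addmem(long long *p, long long x) {
  return *p + x;   /* i64 load + i64 add on a 32-bit target */
}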

//===---------------------------------------------------------------------===//

Implement support for some more tricky ways to materialize immediates. For
example, to get 0xffff8000, we can use:

        mov r9, #0x3f8000
        sub r9, r9, #0x400000
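
This works because 0x3f8000 - 0x400000 = -0x8000 = 0xffff8000, and both
constants are valid rotated 8-bit immediates. A quick host-side sanity check
of the arithmetic (assumes a 32-bit unsigned int):

#include <stdio.h>

int main(void) {
  unsigned r = 0x3f8000u - 0x400000u;   /* wraps modulo 2^32 */
  printf("%#x\n", r);                   /* prints 0xffff8000 */
  return 0;
}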

//===---------------------------------------------------------------------===//

We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand. In some cases, perhaps it might be better to load the
value from a constantpool instead.

//===---------------------------------------------------------------------===//

GCC generates significantly better code for this function.

int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
    int i = 0;
    while (StackPtr != 0 && i < (((LineLen) < (32768))? (LineLen) : (32768)))
        Line[i++] = Stack[--StackPtr];
    while (StackPtr != 0 && i < LineLen) {
        i++;
        --StackPtr;
    }
    return StackPtr;
}

//===---------------------------------------------------------------------===//

This should compile to the mlas instruction:
int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }

//===---------------------------------------------------------------------===//

At some point, we should triage these to see if they still apply to us:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663

http://www.inf.u-szeged.hu/gcc-arm/
http://citeseer.ist.psu.edu/debus04linktime.html

//===---------------------------------------------------------------------===//