* implement do-loop -> bdnz transform
* implement powerpc-64 for darwin
* use stfiwx in float->int
* Fold add and sub with constant into non-extern, non-weak addresses so this:
        lis r2, ha16(l2__ZTV4Cell)
        la r2, lo16(l2__ZTV4Cell)(r2)
        addi r2, r2, 8
  becomes:
        lis r2, ha16(l2__ZTV4Cell+8)
        la r2, lo16(l2__ZTV4Cell+8)(r2)
* Teach LLVM how to codegen this:

  unsigned short foo(float a) { return a; }

  without needing the final halfword mask in the generated code:

        rlwinm r3, r2, 0, 16, 31
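For reference, a quick sketch (mine, not part of the original note) of the C
semantics being codegen'd here: an in-range float is truncated toward zero,
then returned as an unsigned short (out-of-range inputs are undefined
behavior in C).

```c
/* Same shape as the function in the note above: float -> unsigned short.
   The conversion truncates toward zero for in-range values. */
unsigned short foo(float a) { return a; }
```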
* Support 'update' load/store instructions.  These are cracked on the G5, but
  are still a codesize win.
* should hint to the branch select pass that it doesn't need to print the
  second unconditional branch, so we don't end up with things like:
        b .LBBl42__2E_expand_function_8_674     ; loopentry.24
        b .LBBl42__2E_expand_function_8_42      ; NewDefault
        b .LBBl42__2E_expand_function_8_42      ; NewDefault
===-------------------------------------------------------------------------===

   if (X == 0x12345678) bar();

===-------------------------------------------------------------------------===
Lump the constant pool for each function into ONE pic object, and reference
pieces of it as offsets from the start.  For functions like this (contrived
to have lots of constants obviously):

double X(double Y) { return (Y*1.23 + 4.512)*2.34 + 14.38; }

We currently generate a separate high/low address pair for each constant:

        lis r2, ha16(.CPI_X_0)
        lfd f0, lo16(.CPI_X_0)(r2)
        lis r2, ha16(.CPI_X_1)
        lfd f2, lo16(.CPI_X_1)(r2)
        lis r2, ha16(.CPI_X_2)
        lfd f1, lo16(.CPI_X_2)(r2)
        lis r2, ha16(.CPI_X_3)
        lfd f2, lo16(.CPI_X_3)(r2)

It would be better to materialize .CPI_X into a register, then use immediates
off of the register to avoid the lis's.  This is even more important in PIC
mode.

===-------------------------------------------------------------------------===
Implement the Newton-Raphson method for improving estimate instructions to the
correct accuracy, and implement divide as multiply by reciprocal when the
reciprocal has more than one use.  Itanium will want this too.
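As a sketch of the idea (mine, not from the note): the hardware estimate
instructions (fres/frsqrte) give only a few accurate bits, and each
Newton-Raphson step x' = x*(2 - d*x) roughly doubles the number of accurate
bits.  In C, with the hardware estimate stood in for by a deliberately
low-precision initial guess:

```c
/* One Newton-Raphson refinement step for y ~= 1/d. */
double refine_recip(double d, double y) {
  return y * (2.0 - d * y);
}

/* Approximate a/b as a * (1/b), refining a crude reciprocal estimate.
   The float division stands in for a low-precision estimate instruction;
   two refinement steps bring ~24 accurate bits up to full double precision. */
double approx_div(double a, double b) {
  double y = (double)(1.0f / (float)b);  /* crude stand-in "estimate" */
  y = refine_recip(b, y);
  y = refine_recip(b, y);
  return a * y;
}
```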
===-------------------------------------------------------------------------===

#define ARRAY_LENGTH 16

union bitfield {
  struct {
#ifndef __ppc__
    unsigned int field0 : 6;
    unsigned int field1 : 6;
    unsigned int field2 : 6;
    unsigned int field3 : 6;
    unsigned int field4 : 3;
    unsigned int field5 : 4;
    unsigned int field6 : 1;
#else
    unsigned int field6 : 1;
    unsigned int field5 : 4;
    unsigned int field4 : 3;
    unsigned int field3 : 6;
    unsigned int field2 : 6;
    unsigned int field1 : 6;
    unsigned int field0 : 6;
#endif
  } bitfields;
};

typedef struct program_t {
  union bitfield array[ARRAY_LENGTH];
} program;

void AdjustBitfields(program* prog, unsigned int fmt1)
{
  unsigned int shift = 0;
  unsigned int texCount = 0;
  unsigned int i;

  for (i = 0; i < 8; i++)
  {
    prog->array[i].bitfields.field0 = texCount;
    prog->array[i].bitfields.field1 = texCount + 1;
    prog->array[i].bitfields.field2 = texCount + 2;
    prog->array[i].bitfields.field3 = texCount + 3;

    texCount += (fmt1 >> shift) & 0x7;
  }
}
In the loop above, the bitfield adds get generated as
(add (shl bitfield, C1), (shl C2, C1)) where C2 is 1, 2 or 3.

Since the input to the (or and, and) is an (add) rather than a (shl), the shift
doesn't get folded into the rlwimi instruction.  We should ideally see through
things like this, rather than forcing llvm to generate the equivalent of
(shl (add bitfield, C2), C1) with some kind of mask.
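The rewrite described above leans on the identity
(shl (add x, C2), C1) == (add (shl x, C1), (shl C2, C1)), which always holds
in wrapping (two's-complement) arithmetic since shifts distribute over
addition modulo 2^32.  A quick check (example mine, not from the note):

```c
/* add (shl x C1) (shl C2 C1) -- the form the codegen currently sees. */
unsigned form_a(unsigned x, unsigned c2, unsigned c1) {
  return (x << c1) + (c2 << c1);
}

/* shl (add x C2) C1 -- the form it should see through.  Both compute
   the same value for every input, since unsigned arithmetic wraps. */
unsigned form_b(unsigned x, unsigned c2, unsigned c1) {
  return (x + c2) << c1;
}
```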
===-------------------------------------------------------------------------===

Compile this:

int %f1(int %a, int %b) {
        %tmp.1 = and int %a, 15         ; <int> [#uses=1]
        %tmp.3 = and int %b, 240        ; <int> [#uses=1]
        %tmp.4 = or int %tmp.3, %tmp.1  ; <int> [#uses=1]
        ret int %tmp.4
}

into a single rlwimi, without a copy.  We make this currently:

        rlwinm r2, r4, 0, 24, 27
        rlwimi r2, r3, 0, 28, 31

The two-addr pass or RA needs to learn when it is profitable to commute an
instruction to avoid a copy AFTER the 2-addr instruction.  The 2-addr pass
currently only commutes to avoid inserting a copy BEFORE the two addr instr.
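For reference, the C equivalent of %f1 (example mine): the or of two values
masked with disjoint masks is exactly a bitfield insert, which is why a
single rlwimi suffices.

```c
/* Same computation as the IR above: bits 4-7 taken from b, merged with
   the low nibble of a.  The masks 240 and 15 are disjoint, so the or is
   a pure bitfield insert (one rlwimi on PPC). */
int f1(int a, int b) {
  return (b & 240) | (a & 15);
}
```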
===-------------------------------------------------------------------------===

176.gcc contains a bunch of code like this (this occurs dozens of times):

int %test(uint %mode.0.i.0) {
        %tmp.79 = cast uint %mode.0.i.0 to sbyte        ; <sbyte> [#uses=1]
        %tmp.80 = cast sbyte %tmp.79 to int             ; <int> [#uses=1]
        %tmp.81 = shl int %tmp.80, ubyte 16             ; <int> [#uses=1]
        %tmp.82 = and int %tmp.81, 16711680
        ret int %tmp.82
}

which we compile to:

        extsb r2, r3
        rlwinm r3, r2, 16, 8, 15

The extsb is obviously dead.  This can be handled by a future thing like
MaskedValueIsZero that checks to see if bits are ever demanded (in this case,
the sign bits are never used, so we can fold the sext_inreg to nothing).

I'm seeing code like this:

        srwi ...
        extsb ...
        rlwimi r4, r3, 16, 8, 15

in which the extsb is preventing the srwi from being nuked.
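The claim that the extsb is dead can be checked in C (example mine): after
shifting left by 16 and masking to 0xFF0000, only the low byte of the input
ever reaches the result, so the sign extension changes nothing.

```c
/* With the sign extension, as the IR writes it.  The signed-char
   conversion is implementation-defined for values >= 128, but wraps on
   all mainstream compilers. */
unsigned with_sext(unsigned mode) {
  int sext = (signed char)(mode & 0xFF);       /* the extsb */
  return ((unsigned)sext << 16) & 0xFF0000u;
}

/* With the extsb dropped: the mask only keeps bits 16-23, which come
   from the low byte of mode either way. */
unsigned without_sext(unsigned mode) {
  return (mode << 16) & 0xFF0000u;
}
```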
===-------------------------------------------------------------------------===

Another example that occurs is:

uint %test(int %specbits.6.1) {
        %tmp.2540 = shr int %specbits.6.1, ubyte 11     ; <int> [#uses=1]
        %tmp.2541 = cast int %tmp.2540 to uint          ; <uint> [#uses=1]
        %tmp.2542 = shl uint %tmp.2541, ubyte 13        ; <uint> [#uses=1]
        %tmp.2543 = and uint %tmp.2542, 8192            ; <uint> [#uses=1]
        ret uint %tmp.2543
}

which we compile to:

        srawi r2, r3, 11
        rlwinm r3, r2, 13, 18, 18

the srawi can be nuked by turning the SAR into a logical SHR (the sext bits are
dead), which I think can then be folded into the rlwinm.
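The fold claimed here can be sanity-checked in C (example mine, assuming the
usual arithmetic right shift for signed ints): only bit 13 of the result
survives the mask, and it comes from bit 11 of the input, so the whole
sequence collapses to one logical shift left by 2 plus the mask.

```c
/* As written in the IR: arithmetic shift right 11, shift left 13,
   mask to bit 13. */
unsigned original(int specbits) {
  return (((unsigned)(specbits >> 11)) << 13) & 8192u;
}

/* After the fold: bit 11 of the input lands directly in bit 13. */
unsigned folded(int specbits) {
  return (((unsigned)specbits) << 2) & 8192u;
}
```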
===-------------------------------------------------------------------------===

Compile offsets from allocas:

        %X = alloca { int, int }
        %Y = getelementptr {int,int}* %X, int 0, uint 1

into a single add, not two:

--> important for C++.
===-------------------------------------------------------------------------===

int test3(int a, int b) { return (a < 0) ? a : 0; }

should be branch free code.  LLVM is turning it into < 1 because of the RHS.
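One branch-free form (a common trick, not from the note): with the usual
arithmetic right shift for signed ints, a >> 31 is all-ones exactly when a is
negative, so it can serve as a select mask.

```c
#include <stdint.h>

/* Branch-free equivalent of (a < 0) ? a : 0, assuming arithmetic right
   shift of signed values (true on PPC and all mainstream compilers):
   a >> 31 is -1 (all ones) when a < 0 and 0 otherwise. */
int32_t test3_branchfree(int32_t a) {
  return a & (a >> 31);
}
```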
===-------------------------------------------------------------------------===

No loads or stores of the constants should be needed:

struct foo { double X, Y; };
void xxx(struct foo F);
void bar() { struct foo R = { 1.0, 2.0 }; xxx(R); }
===-------------------------------------------------------------------------===

Darwin Stub LICM optimization:

loops like this:

  for (...)  bar();

have to go through an indirect stub if bar is external or linkonce.  It would
be better to compile it as:

  fp = &bar;
  for (...)  fp();

which only computes the address of bar once (instead of each time through the
stub).  This is Darwin specific and would have to be done in the code generator.
Probably not a win on x86.
===-------------------------------------------------------------------------===

PowerPC i1/setcc stuff (depends on subreg stuff):

Check out the PPC code we get for 'compare' in this testcase:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19672

oof.  on top of not doing the logical crnand instead of (mfcr, mfcr,
invert, invert, or), we then have to compare it against zero instead of
using the value already in a CR!

that should be something like:

        bne cr0, LBB_compare_4

instead of the current:

        rlwinm r7, r7, 30, 31, 31
        rlwinm r8, r8, 30, 31, 31
        ...
        bne cr0, LBB_compare_4  ; loopexit
===-------------------------------------------------------------------------===

Simple IPO for argument passing, change:
  void foo(int X, double Y, int Z) -> void foo(int X, int Z, double Y)

The Darwin ABI specifies that any integer arguments in the first 32 bytes worth
of arguments get assigned to r3 through r10.  That is, if you have a function
foo(int, double, int) you get r3, f1, r6, since the 64 bit double ate up the
argument bytes for r4 and r5.  The trick then would be to shuffle the argument
order for functions we can internalize so that the maximum number of
integers/pointers get passed in regs before you see any of the fp arguments.

Instead of implementing this, it would actually probably be easier to just
implement a PPC fastcc, where we could do whatever we wanted to the CC,
including having this work sanely.
===-------------------------------------------------------------------------===

Fix Darwin FP-In-Integer Registers ABI

Darwin passes doubles in structures in integer registers, which is very very
bad.  Add something like a BIT_CONVERT to LLVM, then do an i-p transformation
that percolates these things out of functions.

Check out how horrible this is:
http://gcc.gnu.org/ml/gcc/2005-10/msg01036.html

This is an extension of "interprocedural CC unmunging" that can't be done with
===-------------------------------------------------------------------------===

Code Gen IPO optimization:

Squish small scalar globals together into a single global struct, allowing the
address of the struct to be CSE'd, avoiding PIC accesses (also reduces the size
of the GOT on targets with one).
===-------------------------------------------------------------------------===

Generate lwbrx and other byteswapping load/store instructions when reasonable.
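lwbrx loads a word with its bytes reversed.  The portable C pattern a compiler
would want to recognize (example mine) is a byteswap of a loaded word,
expressed with shifts and masks:

```c
#include <stdint.h>

/* Swap the bytes of a 32-bit word.  Combined with an ordinary load,
   this is the pattern a single lwbrx performs on PPC. */
uint32_t bswap32(uint32_t x) {
  return (x >> 24) | ((x >> 8) & 0x0000FF00u) |
         ((x << 8) & 0x00FF0000u) | (x << 24);
}
```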
===-------------------------------------------------------------------------===

Implement TargetConstantVec, and set up PPC to custom lower ConstantVec into
TargetConstantVec's if it's one of the many forms that are algorithmically
computable using the spiffy altivec instructions.
===-------------------------------------------------------------------------===

Compile this:

double %test(double %X) {
        %Y = cast double %X to long
        %Z = cast long %Y to double
        ret double %Z
}

without the lwz/stw's.
===-------------------------------------------------------------------------===

Compile code containing:

        return b * 3;  // ignore the fact that this is always 3.

into something not this:

        rlwinm r2, r2, 29, 31, 31
        ...
        bgt cr0, LBB1_2 ; UnifiedReturnBlock
        rlwinm r2, r2, 0, 31, 31
        ...
LBB1_2: ; UnifiedReturnBlock

In particular, the two compares (marked 1) could be shared by reversing one.
This could be done in the dag combiner, by swapping a BR_CC when a SETCC of the
same operands (but backwards) exists.  In this case, this wouldn't save us
anything though, because the compares still wouldn't be shared.
===-------------------------------------------------------------------------===

The legalizer should lower this:

bool %test(ulong %x) {
        %tmp = setlt ulong %x, 4294967296
        ret bool %tmp
}

into "if x.high == 0", not a full 64-bit compare sequence.

Noticed in 2005-05-11-Popcount-ffs-fls.c.
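The equivalence being asked for, sketched in C (example mine): an unsigned
64-bit value is below 2^32 exactly when its high 32 bits are zero.

```c
#include <stdint.h>

/* The comparison as written in the IR: x < 2^32. */
int lt_2_32(uint64_t x) { return x < 4294967296ULL; }

/* What the legalizer should produce: test only the high word. */
int high_word_zero(uint64_t x) { return (uint32_t)(x >> 32) == 0; }
```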
===-------------------------------------------------------------------------===

We should custom expand setcc instead of pretending that we have it.  That
would allow us to expose the access of the crbit after the mfcr, allowing
that access to be trivially folded into other ops.  A simple example:

int foo(int a, int b) { return (a < b) << 4; }

currently compiles to a sequence that includes:

        rlwinm r2, r2, 29, 31, 31
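For reference, the observable behavior of the example (the C function as
given above): the setcc result is 0 or 1, so the shifted result is always 0
or 16 — ideally one compare, one crbit read, one shift.

```c
/* Same function as the example above: a boolean compare result
   (0 or 1) shifted left by 4. */
int foo(int a, int b) { return (a < b) << 4; }
```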