//===---------------------------------------------------------------------===//
// Random ideas for the X86 backend.
//===---------------------------------------------------------------------===//

- Support for SSE4: http://www.intel.com/software/penryn
http://softwarecommunity.intel.com/isn/Downloads/Intel%20SSE4%20Programming%20Reference.pdf

//===---------------------------------------------------------------------===//
Add MUL2U and MUL2S nodes to represent a multiply that returns both the
Hi and Lo parts (combination of MUL and MULH[SU] into one node). Add this to
X86, and make the dag combiner produce it when needed. This will eliminate one
imul from the code generated for:

long long test(long long X, long long Y) { return X*Y; }

by using the EAX result from the mul. We should add a similar node for
DIVREM.

Another case is:

long long test(int X, int Y) { return (long long)X*Y; }

... which should only be one imul instruction.

Or:

unsigned long long int t2(unsigned int a, unsigned int b) {
  return (unsigned long long)a * b;
}

... which should be one mul instruction.

This can be done with a custom expander, but it would be nice to move this to
generic code.

//===---------------------------------------------------------------------===//
CodeGen/X86/lea-3.ll:test3 should be a single LEA, not a shift/move. The X86
backend knows how to three-addressify this shift, but it appears the register
allocator isn't even asking it to do so in this case. We should investigate
why this isn't happening; it could have a significant impact on other important
cases for X86 as well.

//===---------------------------------------------------------------------===//
This should be one DIV/IDIV instruction, not a libcall:

unsigned test(unsigned long long X, unsigned Y) {
  return X/Y;
}

This can be done trivially with a custom legalizer. What about overflow
though? http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14224

//===---------------------------------------------------------------------===//
Improvements to the multiply -> shift/add algorithm:
http://gcc.gnu.org/ml/gcc-patches/2004-08/msg01590.html

//===---------------------------------------------------------------------===//
Improve code like this (occurs fairly frequently, e.g. in LLVM):

long long foo(int x) { return 1LL << x; }

http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01109.html
http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01128.html
http://gcc.gnu.org/ml/gcc-patches/2004-09/msg01136.html

Other useful ones would be ~0ULL >> X and ~0ULL << X.

One better solution for 1LL << x is to compute both halves branch-free with
setcc and the 32-bit shifts, but that requires good 8-bit subreg support.
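A C sketch of that expansion (an assumption about the intended sequence,
relying on the 32-bit shifts seeing only the low 5 bits of the count):

unsigned long long shl1_64(unsigned int x) {
  unsigned int lo = (x < 32);                /* sete on the count  */
  unsigned int hi = (x >= 32);               /* setne on the count */
  lo <<= (x & 31);                           /* sall %cl, %eax */
  hi <<= (x & 31);                           /* sall %cl, %edx */
  return ((unsigned long long)hi << 32) | lo;
}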
64-bit shifts (in general) expand to really bad code. Instead of using
cmovs, we should expand to a conditional branch like GCC produces.

//===---------------------------------------------------------------------===//
We generate suboptimal code for:

_Bool f(_Bool a) { return a!=1; }

//===---------------------------------------------------------------------===//
Some instruction selection ideas:

1. Dynamic programming based approach when compile time is not an
   issue.
2. Code duplication (addressing mode) during isel.
3. Other ideas from "Register-Sensitive Selection, Duplication, and
   Sequencing of Instructions".
4. Scheduling for reduced register pressure. E.g. "Minimum Register
   Instruction Sequence Problem: Revisiting Optimal Code Generation for DAGs"
   and other related papers.
   http://citeseer.ist.psu.edu/govindarajan01minimum.html

//===---------------------------------------------------------------------===//
Should we promote i16 to i32 to avoid partial register update stalls?

//===---------------------------------------------------------------------===//
Leave any_extend as a pseudo instruction and hint to the register
allocator. Delay codegen until post register allocation.

//===---------------------------------------------------------------------===//
Count leading zeros and count trailing zeros:

int clz(int X) { return __builtin_clz(X); }
int ctz(int X) { return __builtin_ctz(X); }

$ gcc t.c -S -o - -O3 -fomit-frame-pointer -masm=intel

        bsr %eax, DWORD PTR [%esp+4]
        bsf %eax, DWORD PTR [%esp+4]

however, check that these are defined for 0 and 32. Our intrinsics are, GCC's
aren't.

Another example (use predsimplify to eliminate a select):

int foo (unsigned long j) {
  if (j)
    return __builtin_ffs (j) - 1;
  else
    return 0;
}

//===---------------------------------------------------------------------===//
It appears icc uses push for parameter passing. Need to investigate.

//===---------------------------------------------------------------------===//
Only use inc/neg/not instructions on processors where they are faster than
add/sub/xor. They are slower on the P4 due to only updating some processor
flags.

//===---------------------------------------------------------------------===//
The instruction selector sometimes misses folding a load into a compare. The
pattern is written as (cmp reg, (load p)). Because the compare isn't
commutative, it is not matched with the load on both sides. The dag combiner
should be made smart enough to canonicalize the load into the RHS of a compare
when it can invert the result of the compare for free.
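As a hypothetical illustration, for a source pattern like:

int test(int x, int *p) {
  return *p > x;   /* load ends up on the LHS of the compare */
}

the combiner could rewrite (setgt (load p), x) into (setlt x, (load p)),
which lets the load fold into the cmp.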
//===---------------------------------------------------------------------===//
How about intrinsics? An example is:

  *res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));

This compiles into code including:

        pmuludq (%eax), %xmm0

The transformation probably requires an X86 specific pass or a DAG combiner
target specific hook.

//===---------------------------------------------------------------------===//
In many cases, LLVM generates code like this to materialize a boolean:

        movl 8(%esp), %eax
        cmpl %eax, 4(%esp)
        setl %al
        movzbl %al, %eax

On some processors (which ones?), it is more efficient to zero the full
register first and then set only the low byte:

        xorl %eax, %eax
        movl 8(%esp), %ecx
        cmpl %ecx, 4(%esp)
        setl %al

Doing this correctly is tricky though, as the xor clobbers the flags, so it
has to be scheduled before the compare.

//===---------------------------------------------------------------------===//
We should generate bts/btr/etc instructions on targets where they are cheap or
when codesize is important. e.g., for:

void setbit(int *target, int bit) {
  *target |= (1 << bit);
}
void clearbit(int *target, int bit) {
  *target &= ~(1 << bit);
}

//===---------------------------------------------------------------------===//
Instead of the following for memset char*, 1, 10:

        movl $16843009, 4(%edx)
        movl $16843009, (%edx)
        movw $257, 8(%edx)

it might be better to generate

        movl $16843009, %eax
        movl %eax, 4(%edx)
        movl %eax, (%edx)
        movw %ax, 8(%edx)

when we can spare a register. It reduces code size.

//===---------------------------------------------------------------------===//
Evaluate what the best way to codegen sdiv X, (2^C) is. For X/8, we currently
generate the usual bias-and-shift sequence. GCC knows several different ways
to codegen it; one of them is probably slower, but it's interesting at least :)
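For reference, a C sketch of the usual branch-free expansion (assuming the
arithmetic right shift of signed ints that x86 provides):

int sdiv8(int X) {
  int bias = (X >> 31) & 7;   /* 7 if X is negative, else 0 */
  return (X + bias) >> 3;     /* rounds toward zero, like sdiv */
}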
//===---------------------------------------------------------------------===//
The first BB of this code:

        %V = call bool %foo()
        br bool %V, label %T, label %F

currently branches using a xor and test. It would be better to emit
"cmp %al, 1".

//===---------------------------------------------------------------------===//
We are currently lowering large (1MB+) memmove/memcpy to rep/stosl and
rep/movsl. We should leave these as libcalls for everything over a much lower
threshold, since libc is hand tuned for medium and large mem ops (avoiding RFO
for large stores, TLB preheating, etc.)

//===---------------------------------------------------------------------===//
Optimize this into something reasonable:
 x * copysign(1.0, y) * copysign(1.0, z)
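One reasonable form, sketched in C under the assumption of IEEE-754 doubles:
the two copysign factors only flip x's sign bit by sign(y) XOR sign(z), so
the whole expression reduces to integer bit operations:

#include <stdint.h>
#include <string.h>

double opt(double x, double y, double z) {
  uint64_t xb, yb, zb;
  memcpy(&xb, &x, 8);
  memcpy(&yb, &y, 8);
  memcpy(&zb, &z, 8);
  xb ^= (yb ^ zb) & 0x8000000000000000ULL;   /* flip x's sign bit */
  memcpy(&x, &xb, 8);
  return x;
}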
//===---------------------------------------------------------------------===//
Optimize copysign(x, *y) to use an integer load from y.
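A sketch of the target form (hypothetical helper, IEEE-754 doubles assumed):
only y's sign bit is needed, so *y can be loaded as an integer and never
touch the FP unit:

#include <stdint.h>
#include <string.h>

double copysign_load(double x, const double *y) {
  uint64_t xb, yb;
  memcpy(&xb, &x, 8);
  memcpy(&yb, y, 8);                  /* integer load of *y */
  xb = (xb & ~(1ULL << 63)) | (yb & (1ULL << 63));
  memcpy(&x, &xb, 8);
  return x;
}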
//===---------------------------------------------------------------------===//
%X = weak global int 0

void %foo(int %N) {
entry:
        %N = cast int %N to uint
        %tmp.24 = setgt int %N, 0
        br bool %tmp.24, label %no_exit, label %return
no_exit:
        %indvar = phi uint [ 0, %entry ], [ %indvar.next, %no_exit ]
        %i.0.0 = cast uint %indvar to int
        volatile store int %i.0.0, int* %X
        %indvar.next = add uint %indvar, 1
        %exitcond = seteq uint %indvar.next, %N
        br bool %exitcond, label %return, label %no_exit
return:
        ret void
}

compiles into a loop that reloads the address of X on every iteration:

        jl LBB_foo_4    # return
LBB_foo_1:      # no_exit.preheader
LBB_foo_2:      # no_exit
        movl L_X$non_lazy_ptr, %edx
        ...
        jne LBB_foo_2   # no_exit
LBB_foo_3:      # return.loopexit

We should hoist "movl L_X$non_lazy_ptr, %edx" out of the loop after
rematerialization is implemented. This can be accomplished with 1) a target
dependent LICM pass or 2) making SelectionDAG represent the whole function.

//===---------------------------------------------------------------------===//
The following tests perform worse with LSR:

lambda, siod, optimizer-eval, ackermann, hash2, nestedloop, strcat, and
Treesort.

//===---------------------------------------------------------------------===//
We are generating far worse code than gcc:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

We produce:

LBB1_1: # bb.preheader
        movl L_X$non_lazy_ptr, %esi
        movl L_Y$non_lazy_ptr, %edi

vs. gcc, which hoists the address loads and computes i*4 with an lea:

        movl L_X$non_lazy_ptr-"L00000000001$pb"(%ebx), %esi
        movl L_Y$non_lazy_ptr-"L00000000001$pb"(%ebx), %ecx
        leal 0(,%edx,4), %eax

The problems:
1. Lack of post regalloc LICM.
2. LSR unable to reuse the IV for a different type (i16 vs. i32) even though
   the cast would be free.

//===---------------------------------------------------------------------===//
Teach the coalescer to coalesce vregs of different register classes, e.g.
FR32 / FR64 to VR128.

//===---------------------------------------------------------------------===//
Obviously it would have been better for the first mov (or any op) to store
directly to %esp[0] if there are no other uses.

//===---------------------------------------------------------------------===//
Adding to the list of cmp / test poor codegen issues:

int test(__m128 *A, __m128 *B) {
  if (_mm_comige_ss(*A, *B))
    return 3;
  else
    return 4;
}

Note that the generated setae, movzbl, cmpl, cmove sequence can be replaced
with a single cmovae. There are a number of issues. 1) We are introducing a
setcc between the result of the intrinsic call and the select. 2) The
intrinsic is expected to produce an i32 value so an any_extend (which becomes
a zero extend) is added.

We probably need some kind of target DAG combine hook to fix this.

//===---------------------------------------------------------------------===//
We generate significantly worse code for this than GCC:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21150
http://gcc.gnu.org/bugzilla/attachment.cgi?id=8701

There is also one case where we do worse on PPC.

//===---------------------------------------------------------------------===//
If shorter, we should use things like:

        movzwl %ax, %eax

instead of:

        andl $65535, %eax

The former can also be used when the two-addressy nature of the 'and' would
require a copy to be inserted (in X86InstrInfo::convertToThreeAddress).

//===---------------------------------------------------------------------===//
Consider:

typedef struct pair { float A, B; } pair;
void pairtest(pair P, float *FP) {
  *FP = P.A+P.B;
}

We currently generate much worse code for this with llvmgcc4 than we should.

The issue is that llvmgcc4 is forcing the struct to memory, then passing it as
integer chunks. It does this so that structs like {short,short} are passed in
a single 32-bit integer stack slot. We should handle the safe cases above much
nicer, while still handling the hard cases.

While true in general, in this specific case we could do better by promoting
load int + bitcast to float -> load float. This basically needs alignment info;
the code is already implemented (but disabled) in the dag combiner.

//===---------------------------------------------------------------------===//
Another instruction selector deficiency:

void %bar() {
        %tmp = load int (int)** %foo
        %tmp = tail call int %tmp( int 3 )
        ret void
}

This compiles to:

        movl L_foo$non_lazy_ptr, %eax
        movl (%eax), %eax
        call *%eax

The current isel scheme will not allow the load to be folded in the call since
the load's chain result is read by the callseq_start.

//===---------------------------------------------------------------------===//
For a function that returns x * 3, we currently produce:

        imull $3, 4(%esp), %eax

Perhaps this is what we really should generate. Is imull three or four
cycles? Note: ICC generates this:

        movl 4(%esp), %eax
        leal (%eax,%eax,2), %eax

The current instruction priority is based on pattern complexity. The former is
more "complex" because it folds a load, so the latter will not be emitted.

Perhaps we should use AddedComplexity to give LEA32r a higher priority? We
should always try to match LEA first since the LEA matching code does some
estimate to determine whether the match is profitable.

However, if we care more about code size, then imull is better. It's two bytes
shorter than movl + leal.

//===---------------------------------------------------------------------===//
Implement CTTZ, CTLZ with bsf and bsr. GCC produces good code for:

int ctz_(unsigned X) { return __builtin_ctz(X); }
int clz_(unsigned X) { return __builtin_clz(X); }
int ffs_(unsigned X) { return __builtin_ffs(X); }

//===---------------------------------------------------------------------===//
It appears gcc places string data with linkonce linkage in
.section __TEXT,__const_coal,coalesced instead of
.section __DATA,__const_coal,coalesced.
Take a look at darwin.h; there are other Darwin assembler directives that we
do not make use of.

//===---------------------------------------------------------------------===//
This code:

int %foo(int* %a, int %t) {
entry:
        br label %cond_true

cond_true:              ; preds = %cond_true, %entry
        %x.0.0 = phi int [ 0, %entry ], [ %tmp9, %cond_true ]
        %t_addr.0.0 = phi int [ %t, %entry ], [ %tmp7, %cond_true ]
        %tmp2 = getelementptr int* %a, int %x.0.0
        %tmp3 = load int* %tmp2         ; <int> [#uses=1]
        %tmp5 = add int %t_addr.0.0, %x.0.0     ; <int> [#uses=1]
        %tmp7 = add int %tmp5, %tmp3    ; <int> [#uses=2]
        %tmp9 = add int %x.0.0, 1       ; <int> [#uses=2]
        %tmp = setgt int %tmp9, 39      ; <bool> [#uses=1]
        br bool %tmp, label %bb12, label %cond_true

bb12:           ; preds = %cond_true
        ret int %tmp7
}

is pessimized by -loop-reduce and -indvars.

//===---------------------------------------------------------------------===//
u32 to float conversion improvement:

float uint32_2_float( unsigned u ) {
  float fl = (int) (u & 0xffff);
  float fh = (int) (u >> 16);
  return fh * 65536.0f + fl;
}

00000000        subl    $0x04,%esp
00000003        movl    0x08(%esp,1),%eax
00000007        movl    %eax,%ecx
00000009        shrl    $0x10,%ecx
0000000c        cvtsi2ss        %ecx,%xmm0
00000010        andl    $0x0000ffff,%eax
00000015        cvtsi2ss        %eax,%xmm1
00000019        mulss   0x00000078,%xmm0
00000021        addss   %xmm1,%xmm0
00000025        movss   %xmm0,(%esp,1)
0000002a        flds    (%esp,1)
0000002d        addl    $0x04,%esp

//===---------------------------------------------------------------------===//
When using the fastcc abi, align the stack slot of an argument of type double
on an 8-byte boundary to improve performance.

//===---------------------------------------------------------------------===//
Compile:

int f(int a, int b) {
  if (a == 4 || a == 6)
    b++;
  return b;
}

into a single compare: a == 4 || a == 6 is equivalent to (a|2) == 6.

//===---------------------------------------------------------------------===//
GCC's ix86_expand_int_movcc function (in i386.c) has a ton of interesting
simplifications for integer "x cmp y ? a : b". For example, it can turn
selects of constants into branch-free setcc/sbb arithmetic instead of cmovs.
We should audit it and steal the wins.

//===---------------------------------------------------------------------===//
Currently we don't have elimination of redundant stack manipulations. Consider
the code:

int %main() {
entry:
        call fastcc void %test1( )
        call fastcc void %test2( sbyte* cast (void ()* %test1 to sbyte*) )
        ret int 0
}

declare fastcc void %test1()

declare fastcc void %test2(sbyte*)

This currently compiles with %esp being readjusted between the two calls;
that add/sub pair is really unneeded here.

//===---------------------------------------------------------------------===//
We currently compile sign_extend_inreg into two shifts:

long foo(long X) {
  return (long)(signed char)X;
}

It could be a single sign-extending move (movsbl) instead.

//===---------------------------------------------------------------------===//
Consider the expansion of:

uint %test3(uint %X) {
        %tmp1 = rem uint %X, 255
        ret uint %tmp1
}

Currently the magic-number constant is materialized with
"movl $2155905153, %ecx", and a copy of the multiply result is then needed.
This could be "reassociated" to start the constant in %eax instead
("movl $2155905153, %eax") to avoid the copy. In fact, the existing
two-address stuff would do this except that mul isn't a commutative 2-addr
instruction. I guess this has to be done at isel time based on the #uses of
the mul?
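For context, the expansion under discussion, sketched in C (2155905153 is
0x80808081, the unsigned magic number for division by 255):

unsigned urem255(unsigned X) {
  /* X/255 == (X * 0x80808081) >> 39 for all 32-bit X */
  unsigned q = (unsigned)(((unsigned long long)X * 2155905153u) >> 39);
  return X - q * 255;   /* X % 255 */
}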
//===---------------------------------------------------------------------===//
Make sure the instruction which starts a loop does not cross a cacheline
boundary. This requires knowing the exact length of each machine instruction.
That is somewhat complicated, but doable. Example from 256.bzip2:

In the new trace, the hot loop has an instruction which crosses a cacheline
boundary. In addition to potential cache misses, this can't help decoding as I
imagine there has to be some kind of complicated decoder reset and realignment
to grab the bytes from the next cacheline.

532  532 0x3cfc movb (1809(%esp, %esi), %bl   <<<--- spans 2 64 byte lines
942  942 0x3d03 movl %dh, (1809(%esp, %esi)
937  937 0x3d0a incl %esi
3    3   0x3d0b cmpb %bl, %dl
27   27  0x3d0d jnz 0x000062db <main+11707>

//===---------------------------------------------------------------------===//
In C99 mode, the preprocessor doesn't like assembly comments like #TRUNCATE.

//===---------------------------------------------------------------------===//
This could be a single 16-bit load:

int f(char *p) {
  if ((p[0] == 1) & (p[1] == 2)) return 1;
  return 0;
}
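The desired form, sketched in C (assuming a little-endian target where the
unaligned 16-bit load is legal):

#include <stdint.h>
#include <string.h>

int f16(const char *p) {
  uint16_t v;
  memcpy(&v, p, 2);     /* one 16-bit load */
  return v == 0x0201;   /* p[0]==1 && p[1]==2 on little-endian */
}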
//===---------------------------------------------------------------------===//
We should inline lrintf and probably other libc functions.

//===---------------------------------------------------------------------===//
Start using the flags more. For example, compile:

int add_zf(int *x, int y, int a, int b) {
  if ((*x += y) == 0)
    return a;
  else
    return b;
}

so that the addl sets ZF itself and feeds a cmov directly, with no separate
test. Similarly for the sign-flag variant:

int add_zf(int *x, int y, int a, int b) {
  if ((*x += y) < 0)
    return a;
  else
    return b;
}

//===---------------------------------------------------------------------===//
This:

int foo(double X) { return isnan(X); }

currently compiles to a pxor (to materialize a zero) followed by a ucomisd
against it; the pxor is not needed, we could compare the value against itself.
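The self-compare form in C (NaN is the only value not equal to itself, so a
single ucomisd of X against X plus a setp suffices):

int isnan_selfcmp(double X) { return X != X; }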
//===---------------------------------------------------------------------===//
These two functions have identical effects:

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}

We currently compile f with a branch (jne LBB1_2 #UnifiedReturnBlock) and f2
with setcc arithmetic folded into an lea (leal 1(%ecx,%eax), %eax), both of
which are inferior to GCC's output.

//===---------------------------------------------------------------------===//
This:

int test(int X) {
  if (X) abort();
}

is currently compiled to:

_test:
        subl $12, %esp
        cmpl $0, 16(%esp)
        jne LBB1_1
        addl $12, %esp
        ret
LBB1_1:
        call L_abort$stub

It would be better to produce:

_test:
        subl $12, %esp
        cmpl $0, 16(%esp)
        jne L_abort$stub
        addl $12, %esp
        ret

This can be applied to any no-return function call that takes no arguments
etc. Alternatively, the stack save/restore logic could be shrink-wrapped,
producing something like this:

_test:
        cmpl $0, 4(%esp)
        jne LBB1_1
        ret
LBB1_1:
        subl $12, %esp
        call L_abort$stub

Both are useful in different situations. Finally, it could be shrink-wrapped
and tail called, like this:

_test:
        cmpl $0, 4(%esp)
        jne LBB1_1
        ret
LBB1_1:
        pop %eax   # realign stack.
        call L_abort$stub

Though this probably isn't worth it.

//===---------------------------------------------------------------------===//
We need to teach the codegen to convert two-address INC instructions to LEA
when the flags are dead (likewise dec). For example, on X86-64, compile:

int foo(int A, int B) {
  return A+1;
}

to:

        leal 1(%edi), %eax
        ret

instead of:

        incl %edi
        movl %edi, %eax
        ret

Another example is:

;; X's live range extends beyond the shift, so the register allocator
;; cannot coalesce it with Y. Because of this, a copy needs to be
;; emitted before the shift to save the register value before it is
;; clobbered. However, this copy is not needed if the register
;; allocator turns the shift into an LEA. This also occurs for ADD.

; Check that the shift gets turned into an LEA.
; RUN: llvm-upgrade < %s | llvm-as | llc -march=x86 -x86-asm-syntax=intel | \
; RUN:   not grep {mov E.X, E.X}

%G = external global int

int %test1(int %X, int %Y) {
        %Z = add int %X, %Y
        volatile store int %Y, int* %G
        volatile store int %Z, int* %G
        ret int %X
}

int %test2(int %X) {
        %Z = add int %X, 1      ;; inc
        volatile store int %Z, int* %G
        ret int %X
}

//===---------------------------------------------------------------------===//
This:

#include <xmmintrin.h>
unsigned test(float f) {
  return _mm_cvtsi128_si32( (__m128i) _mm_set_ss( f ));
}

currently compiles to:

        movss 4(%esp), %xmm0
        movd %xmm0, %eax
        ret

It should compile to a move from the stack slot directly into eax. DAGCombine
has this xform, but it is currently disabled until the alignment fields of
the load/store nodes are trustworthy.

//===---------------------------------------------------------------------===//
Sometimes it is better to codegen subtractions from a constant (e.g. 7-x) with
a neg instead of a sub instruction. Consider:

int test(char X) { return 7-X; }

we currently produce:

        movsbl 4(%esp), %ecx
        movl $7, %eax
        subl %ecx, %eax

We would use one fewer register if codegen'd as:

        movsbl 4(%esp), %eax
        negl %eax
        addl $7, %eax

Note that this isn't beneficial if the load can be folded into the sub. In
this case, we want a sub:

int test(int X) { return 7-X; }

        movl $7, %eax
        subl 4(%esp), %eax

//===---------------------------------------------------------------------===//
When one operand of a phi is undef, we get an implicit def on the undef side.
If the phi is spilled, we then get a store of the implicitly defined register.

It should be possible to teach the x86 backend to "fold" the store into the
implicitdef, which just deletes the implicit def.

These instructions should go away:

        movaps %xmm1, 192(%esp)
        movaps %xmm1, 224(%esp)
        movaps %xmm1, 176(%esp)

//===---------------------------------------------------------------------===//
This is a "commutable two-address" register coalescing deficiency:

define <4 x float> @test1(<4 x float> %V) {
entry:
        %tmp8 = shufflevector <4 x float> %V, <4 x float> undef,
                                        <4 x i32> < i32 3, i32 2, i32 1, i32 0 >
        %add = add <4 x float> %tmp8, %V
        ret <4 x float> %add
}

this currently compiles to:

        pshufd $27, %xmm0, %xmm1
        addps %xmm0, %xmm1
        movaps %xmm1, %xmm0
        ret

instead of:

        pshufd $27, %xmm0, %xmm1
        addps %xmm1, %xmm0
        ret

//===---------------------------------------------------------------------===//
Leaf functions that require one 4-byte spill slot have a prolog like this:

_foo:
        subl $4, %esp

and an epilog like this:

        addl $4, %esp
        ret

It would be smaller, and potentially faster, to push eax on entry and to
pop into a dummy register instead of using addl/subl of esp. Just don't pop
into any return registers :)

//===---------------------------------------------------------------------===//
The X86 backend should fold (branch (or (setcc, setcc))) into multiple
branches. We generate really poor code for:

double testf(double a) {
  return a == 0.0 ? 0.0 : (a > 0.0 ? 1.0 : -1.0);
}

For example, the entry BB does a ucomisd:

        movsd 24(%esp), %xmm1
        ucomisd %xmm0, %xmm1

and then materializes the comparison with a setcc/test sequence ending in:

        jne LBB1_5      # UnifiedReturnBlock

it would be better to replace the last four instructions with a jne/jp pair
branching directly on the ucomisd flags.

We also codegen the inner ?: into a diamond:

        cvtss2sd LCPI1_0(%rip), %xmm2
        cvtss2sd LCPI1_1(%rip), %xmm3
        ucomisd %xmm1, %xmm0
        ja LBB1_3       # cond_true

We should sink the load into xmm3 into the LBB1_2 block. This should
be pretty easy, and will nuke all the copies.

//===---------------------------------------------------------------------===//
This:

#include <algorithm>
inline std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }
bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }

should compile to an add followed by a setae (reading the carry flag), not an
add plus a separate compare of the sum against an operand.
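The flag-friendly form, sketched in C (the point being that the backend can
derive the boolean from the add's carry output instead of a separate cmp):

unsigned no_overflow_sketch(unsigned a, unsigned b) {
  unsigned s = a + b;   /* addl sets CF on unsigned overflow */
  return s >= a;        /* !carry: a single setae after the add */
}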
//===---------------------------------------------------------------------===//
Re-materialize MOV32r0 etc. with xor instead of changing them to moves if the
condition register is dead. xor reg, reg is shorter than mov reg, #0 (2 bytes
vs. 5).

//===---------------------------------------------------------------------===//
We aren't matching RMW instructions aggressively
enough. Here's a reduced testcase (more in PR1160):

define void @test(i32* %huge_ptr, i32* %target_ptr) {
        %A = load i32* %huge_ptr                ; <i32> [#uses=1]
        %B = load i32* %target_ptr              ; <i32> [#uses=1]
        %C = or i32 %A, %B              ; <i32> [#uses=1]
        store i32 %C, i32* %target_ptr
        ret void
}

$ llvm-as < t.ll | llc -march=x86-64

_test:
        movl (%rdi), %eax
        orl (%rsi), %eax
        movl %eax, (%rsi)
        ret

That should be something like:

_test:
        movl (%rdi), %eax
        orl %eax, (%rsi)
        ret

//===---------------------------------------------------------------------===//
This code:

bb114.preheader:                ; preds = %cond_next94
        %tmp231232 = sext i16 %tmp62 to i32             ; <i32> [#uses=1]
        %tmp233 = sub i32 32, %tmp231232                ; <i32> [#uses=1]
        %tmp245246 = sext i16 %tmp65 to i32             ; <i32> [#uses=1]
        %tmp252253 = sext i16 %tmp68 to i32             ; <i32> [#uses=1]
        %tmp254 = sub i32 32, %tmp252253                ; <i32> [#uses=1]
        %tmp553554 = bitcast i16* %tmp37 to i8*         ; <i8*> [#uses=2]
        %tmp583584 = sext i16 %tmp98 to i32             ; <i32> [#uses=1]
        %tmp585 = sub i32 32, %tmp583584                ; <i32> [#uses=1]
        %tmp614615 = sext i16 %tmp101 to i32            ; <i32> [#uses=1]
        %tmp621622 = sext i16 %tmp104 to i32            ; <i32> [#uses=1]
        %tmp623 = sub i32 32, %tmp621622                ; <i32> [#uses=1]

compiles to:

LBB3_5: # bb114.preheader
        movswl -68(%ebp), %eax
        movl $32, %ecx
        movl %ecx, -80(%ebp)
        subl %eax, -80(%ebp)
        movswl -52(%ebp), %eax
        movl %ecx, -84(%ebp)
        subl %eax, -84(%ebp)
        movswl -70(%ebp), %eax
        movl %ecx, -88(%ebp)
        subl %eax, -88(%ebp)
        movswl -50(%ebp), %eax
        movl %ecx, -76(%ebp)
        movswl -42(%ebp), %eax
        movl %eax, -92(%ebp)
        movswl -66(%ebp), %eax
        movl %eax, -96(%ebp)

This appears to be bad because the RA is not folding the store to the stack
slot into the movl. The above instructions could be:

        movl $32, -80(%ebp)
        movl $32, -84(%ebp)

This seems like a cross between remat and spill folding.

This also has redundant subtractions of %eax from a stack slot. However, %ecx
doesn't change, so we could simply subtract %eax from %ecx first and then use
%ecx (or vice-versa).

//===---------------------------------------------------------------------===//
This loop:

cond_next603:           ; preds = %bb493, %cond_true336, %cond_next599
        %v.21050.1 = phi i32 [ %v.21050.0, %cond_next599 ], [ %tmp344, %cond_true336 ], [ %v.2, %bb493 ]               ; <i32> [#uses=1]
        %maxz.21051.1 = phi i32 [ %maxz.21051.0, %cond_next599 ], [ 0, %cond_true336 ], [ %maxz.2, %bb493 ]            ; <i32> [#uses=2]
        %cnt.01055.1 = phi i32 [ %cnt.01055.0, %cond_next599 ], [ 0, %cond_true336 ], [ %cnt.0, %bb493 ]               ; <i32> [#uses=2]
        %byteptr.9 = phi i8* [ %byteptr.12, %cond_next599 ], [ %byteptr.0, %cond_true336 ], [ %byteptr.10, %bb493 ]    ; <i8*> [#uses=9]
        %bitptr.6 = phi i32 [ %tmp5571104.1, %cond_next599 ], [ %tmp4921049, %cond_true336 ], [ %bitptr.7, %bb493 ]    ; <i32> [#uses=4]
        %source.5 = phi i32 [ %tmp602, %cond_next599 ], [ %source.0, %cond_true336 ], [ %source.6, %bb493 ]            ; <i32> [#uses=7]
        %tmp606 = getelementptr %struct.const_tables* @tables, i32 0, i32 0, i32 %cnt.01055.1           ; <i8*> [#uses=1]
        %tmp607 = load i8* %tmp606, align 1             ; <i8> [#uses=1]

compiles to code that reloads the address of @tables inside the loop:

LBB4_70:        # cond_next603
        movl -20(%ebp), %esi
        movl L_tables$non_lazy_ptr-"L4$pb"(%esi), %esi

However, ICC caches this information before the loop and produces this:

        movl 88(%esp), %eax             #481.12

//===---------------------------------------------------------------------===//
This:

        %tmp659 = icmp slt i16 %tmp654, 0               ; <i1> [#uses=1]
        br i1 %tmp659, label %cond_true662, label %cond_next715

produces:

        testw %cx, %cx
        jns LBB4_109    # cond_next715

Shark tells us that using %cx in the testw instruction is sub-optimal. It
suggests using the 32-bit register (which is what ICC uses).

//===---------------------------------------------------------------------===//
rdar://5506677 - We compile this:

define i32 @foo(double %x) {
        %x14 = bitcast double %x to i64         ; <i64> [#uses=1]
        %tmp713 = trunc i64 %x14 to i32         ; <i32> [#uses=1]
        %tmp8 = and i32 %tmp713, 2147483647     ; <i32> [#uses=1]
        ret i32 %tmp8
}

to:

_foo:
        subl $12, %esp
        fldl 16(%esp)
        fstpl (%esp)
        movl $2147483647, %eax
        andl (%esp), %eax
        addl $12, %esp
        ret

It would be much better to eliminate the fldl/fstpl by folding the bitcast
into the load SDNode. That would give us:

_foo:
        movl $2147483647, %eax
        andl 4(%esp), %eax
        ret

//===---------------------------------------------------------------------===//
We compile this:

void compare (long long foo) {
  if (foo < 4294967297LL)
    abort();
}

to a branchless chain of compares and selects that ends in a single:

        je LBB1_2       # cond_true

(also really horrible code on ppc). This is due to the expand code for 64-bit
compares. GCC produces multiple branches, which is much nicer.
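A C sketch of the branchy decomposition (word-at-a-time, assuming an
arithmetic shift for the high-word extraction; 4294967297LL is 0x100000001,
so both the high and low words are compared against 1):

int lt_split(long long foo) {
  int hi = (int)(foo >> 32);       /* signed high word  */
  unsigned lo = (unsigned)foo;     /* unsigned low word */
  if (hi != 1) return hi < 1;      /* decided by the high word alone */
  return lo < 1;                   /* hi == 1: compare the low word  */
}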
//===---------------------------------------------------------------------===//