Target Independent Opportunities:
===-------------------------------------------------------------------------===
FreeBench/mason contains code like this:

static p_type m0u(p_type p) {
  int m[]={0, 8, 1, 2, 16, 5, 13, 7, 14, 9, 3, 4, 11, 12, 15, 10, 17, 6};
  /* ... loads from m ... */
}

We currently compile this into a memcpy from a static array into 'm', then
a bunch of loads from m. It would be better to avoid the memcpy and just do
loads from the static array.
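A sketch of the desired form (m_table and lookup are illustrative names): with
the table static const, each access is a load straight from the global array,
and no per-call copy onto the stack is needed.

static const int m_table[] = {0, 8, 1, 2, 16, 5, 13, 7, 14, 9,
                              3, 4, 11, 12, 15, 10, 17, 6};

/* each read is a direct load from the global; no memcpy */
static int lookup(int i) { return m_table[i]; }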
//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//
Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :)
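A sketch of the expansion (hypot_fast is a hypothetical name; this form is
only valid when errno and precision don't matter):

#include <math.h>

double hypot_fast(double x, double y) {
  /* what hypot(x, y) would expand to under -ffast-math; sqrt() here
     corresponds to the llvm.sqrt intrinsic */
  return sqrt(x * x + y * y);
}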
//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:
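A minimal sketch of the kind of code this describes (globals X and Y, as
referenced below: a store to X fed by loads of both X and Y):

int X, Y;

void fn1(void) {
  /* the store to X is chained on a TokenFactor of the loads of X and Y */
  X = X | (Y << 3);
}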
The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in cases where it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.
//===---------------------------------------------------------------------===//

Turn this into a signed shift right in instcombine:

  return x >> 31 ? -1 : 0;

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25600
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg01492.html
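A sketch of the combine, assuming x is unsigned (so x >> 31 is the sign bit
as 0/1); sign_mask is an illustrative name:

int sign_mask(unsigned x) {
  /* before: x >> 31 ? -1 : 0
     after:  a single arithmetic shift that smears the sign bit
     (right shift of a negative int is implementation-defined in C,
     but is an arithmetic shift on mainstream targets) */
  return (int)x >> 31;
}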
//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

  ...
  for (i = ...; ++i, tmp+=tmp)
  ...

This would be a win on ppc32, but not x86 or ppc64.
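A sketch of this kind of strength reduction, assuming a loop that computes
x = 1ULL << i each iteration (powers_of_two/out/n are illustrative names):

void powers_of_two(unsigned long long *out, int n) {
  unsigned long long tmp = 1;   /* strength-reduced 1ULL << i */
  int i;
  for (i = 0; i != n; ++i, tmp += tmp)  /* tmp += tmp replaces the shift */
    out[i] = tmp;                       /* instead of out[i] = 1ULL << i */
}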
//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
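In C terms, the shrink says a sign test of a 32-bit load only needs the byte
that holds the sign bit ('Phi' above standing for that byte's address); a
sketch, assuming a little-endian 32-bit target:

int is_negative(const int *P) {
  /* equivalent to *P < 0: only the sign byte matters, and byte 3 holds
     the sign bit on a little-endian 32-bit target */
  return ((const signed char *)P)[3] < 0;
}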
//===---------------------------------------------------------------------===//

Reassociate should turn: X*X*X*X -> t=(X*X) (t*t) to eliminate a multiply.
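A sketch of what Reassociate should produce (pow4 is a hypothetical name):

int pow4(int x) {
  int t = x * x;   /* X*X computed once ...                  */
  return t * t;    /* ... then squared: 2 multiplies, not 3  */
}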
//===---------------------------------------------------------------------===//

Interesting? testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}

int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}
//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases.
//===---------------------------------------------------------------------===//

int rot(unsigned char b) { int a = ((b>>1) ^ (b<<7)) & 0xff; return a; }

Can be improved in two ways:

1. The instcombiner should eliminate the type conversions.
2. The X86 backend should turn this into a rotate by one bit (see the sketch
   below).
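A sketch of the form the backend should recognize, an 8-bit rotate right by
one (the ^ in rot() is equivalent to | here since the shifted halves don't
overlap; rot1 is an illustrative name):

unsigned char rot1(unsigned char b) {
  /* candidate for a single x86 ror-by-1 on an 8-bit register */
  return (unsigned char)((b >> 1) | (b << 7));
}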
//===---------------------------------------------------------------------===//

Add LSR exit value substitution. It'll probably be a win for Ackermann, etc.
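A sketch of what exit value substitution does (work/count are hypothetical
names): the trip count gives the induction variable's final value, so the
use after the loop needn't keep the variable alive.

void work(int i);   /* hypothetical side-effecting call */

int count(int n) {
  int i;
  for (i = 0; i < n; ++i)
    work(i);
  return i;   /* substitutable with the exit value: return n > 0 ? n : 0 */
}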
//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
cases as well.
//===---------------------------------------------------------------------===//

For packed types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative as the alignment of
specific packed types is target dependent.
//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

typedef float v4sf __attribute__((vector_size(16)));

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}
//===---------------------------------------------------------------------===//

We should constant fold packed type casts at the LLVM level, regardless of the
cast. Currently we cannot fold some casts because we don't have TargetData
information in the constant folder, so we don't know the endianness of the
target.
//===---------------------------------------------------------------------===//

unsigned short swap_16(unsigned short v) { return (v>>8) | (v<<8); }

Compiled with the ppc backend:

        ...
        rlwinm r3, r2, 0, 16, 31
        ...

The rlwinm (an 'and' with 65535) is dead. The dag combiner should propagate
known bits well enough to see this.
//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns. Instead
of emitting a compare and a branch around the increment:

        ...
        je LBB16_2      #cond_next
        ...

emit a branchless compare/add sequence.
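In source terms (inc_if and foo are hypothetical names), the branchless
version folds the condition into the add:

int foo;

void inc_if(int c) {
  /* instead of: if (c) foo++;   (cmp + je + inc)
     emit a setcc feeding the add */
  foo += (c != 0);
}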
//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
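A sketch of the combine direction, assuming the target's C library provides
a GNU-style sincos() (both is an illustrative name):

#include <math.h>

void both(double x, double *a, double *b) {
  *a = sin(x);   /* these two calls over the same x ...  */
  *b = cos(x);   /* ... can become: sincos(x, a, b);     */
}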
//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret long cst':

        %struct.X = type { int, int }
        %struct.Y = type { %struct.X }

        %retval = alloca %struct.Y, align 8              ; <%struct.Y*> [#uses=3]
        %tmp12 = getelementptr %struct.Y* %retval, int 0, uint 0, uint 0 ; <int*> [#uses=1]
        store int 0, int* %tmp12
        %tmp15 = getelementptr %struct.Y* %retval, int 0, uint 0, uint 1 ; <int*> [#uses=1]
        store int 1, int* %tmp15
        %retval = cast %struct.Y* %retval to ulong*      ; <ulong*> [#uses=1]
        %retval = load ulong* %retval                    ; <ulong> [#uses=1]
        ret ulong %retval

It should be extended to do so.
//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

void %test(uint* %P) {
        %tmp = load uint* %P
        %tmp14 = or uint %tmp, 3305111552
        %tmp15 = and uint %tmp14, 3321888767
        store uint %tmp15, uint* %P
        ret void
}
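In C terms (3305111552 is 0xC5000000 and 3321888767 is 0xC5FFFFFF, so the
or/and pair just forces the top byte to 0xC5), a sketch assuming a
little-endian target:

void test(unsigned int *P) {
  /* the other 3 bytes are untouched, so no load is needed; byte 3 is
     the most significant byte on a little-endian target */
  ((unsigned char *)P)[3] = 0xC5;
}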
//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

  int t = __builtin_clz(x);
  ...
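A sketch of why the identity holds (is_zero is a hypothetical name): for
32-bit x, clz(x) is 32 exactly when x == 0, so clz(x) >> 5 is 1 iff x == 0.
Note __builtin_clz(0) is undefined in C, though the DAG-level ctlz node can
define it as 32.

unsigned is_zero(unsigned x) {
  /* guarded since __builtin_clz(0) is undefined in C; the combined
     form is simply (x == 0) */
  return x == 0 ? 1u : (unsigned)__builtin_clz(x) >> 5;
}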