[AArch64] Improve codegen of store lane instructions by avoiding GPR usage.
author    Ahmed Bougacha <ahmed.bougacha@gmail.com>
          Mon, 5 Jan 2015 17:10:26 +0000 (17:10 +0000)
committer Ahmed Bougacha <ahmed.bougacha@gmail.com>
          Mon, 5 Jan 2015 17:10:26 +0000 (17:10 +0000)
commit 3c9fb6e1adfbb220b1c60ebf78025f1a745ce6c8
tree   aa35c4a1e9ca5fc368826607c016de6846fa914c
parent c52cd839b99cd7577ae871c46d478c1957972c03
[AArch64] Improve codegen of store lane instructions by avoiding GPR usage.

We used to generate code similar to:

  umov.b        w8, v0[2]
  strb  w8, [x0, x1]

because the STR*ro* patterns were preferred to ST1*.
Instead, we can avoid going through GPRs entirely and generate:

  add   x8, x0, x1
  st1.b { v0 }[2], [x8]

This patch increases the ST1* AddedComplexity to achieve that.
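For illustration, IR along the following lines (function and value names are hypothetical, written in the style of the arm64-st1.ll tests) is the kind of input that now selects the ST1 lane-store form:

```llvm
; Extract byte lane 2 of a v16i8 and store it at base + offset.
; With the raised ST1* AddedComplexity, the extractelement + store
; pair selects "add x8, x0, x1; st1.b { v0 }[2], [x8]" rather than
; the GPR round-trip "umov.b w8, v0[2]; strb w8, [x0, x1]".
define void @st1lane_ro_16b(<16 x i8> %v, i8* %base, i64 %off) {
  %addr = getelementptr i8* %base, i64 %off
  %lane = extractelement <16 x i8> %v, i32 2
  store i8 %lane, i8* %addr
  ret void
}
```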

rdar://16372710
Differential Revision: http://reviews.llvm.org/D6202

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@225183 91177308-0d34-0410-b5e6-96231b3b80d8
lib/Target/AArch64/AArch64InstrInfo.td
test/CodeGen/AArch64/arm64-st1.ll