Add intrinsics @llvm.arm.neon.vmulls and @llvm.arm.neon.vmullu.* back.
authorEvan Cheng <evan.cheng@apple.com>
Tue, 29 Mar 2011 23:06:19 +0000 (23:06 +0000)
committerEvan Cheng <evan.cheng@apple.com>
Tue, 29 Mar 2011 23:06:19 +0000 (23:06 +0000)
commit92e3916c3b750f7eb4f41e14e401434b713e558b
tree0c10c6ff7bc874bb3f979faab5d232c3aae3ed71
parent75c7563f834b06cfc71ef53bd4c37e58d2d96ff6
Add intrinsics @llvm.arm.neon.vmulls and @llvm.arm.neon.vmullu.* back. Frontends
were lowering them to sext / zext + mul instructions. Unfortunately, the
optimization passes may hoist the extensions out of the loop and separate them
from the multiply. When that happens, the long multiplication can no longer be
selected as a single instruction and is instead broken into several scalar
instructions, causing a significant performance issue.

Note that the vmla and vmls intrinsics are not added back. Frontends will codegen
them as vmull* intrinsics + add / sub. Also note that the isel optimizations for
matching mul + sext / zext are not changed either.
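To illustrate the difference, here is a hedged IR sketch (function names and the v4i32/v4i16 overload chosen for illustration): the intrinsic form keeps the widening multiply as one operation that isel can select directly to a NEON vmull, while the expanded form the frontend previously emitted is fragile once the extensions are hoisted away from the mul.

```llvm
; Intrinsic form: stays a single long multiply through the optimizer.
declare <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16>, <4 x i16>)

define <4 x i32> @smull_intrinsic(<4 x i16> %a, <4 x i16> %b) {
  %r = call <4 x i32> @llvm.arm.neon.vmulls.v4i32(<4 x i16> %a, <4 x i16> %b)
  ret <4 x i32> %r
}

; Expanded form: if the sexts get hoisted out of a loop, isel may no longer
; see the sext + mul pattern together and falls back to scalar code.
define <4 x i32> @smull_expanded(<4 x i16> %a, <4 x i16> %b) {
  %sa = sext <4 x i16> %a to <4 x i32>
  %sb = sext <4 x i16> %b to <4 x i32>
  %r  = mul <4 x i32> %sa, %sb
  ret <4 x i32> %r
}
```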

First part of rdar://8832507, rdar://9203134

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@128502 91177308-0d34-0410-b5e6-96231b3b80d8
include/llvm/IntrinsicsARM.td
lib/Target/ARM/ARMISelLowering.cpp
lib/VMCore/AutoUpgrade.cpp
test/Bitcode/neon-intrinsics.ll
test/CodeGen/ARM/vmul.ll