[X86] Improve shift combining
author    Michael Kuperstein <michael.m.kuperstein@intel.com>
          Wed, 16 Dec 2015 11:22:37 +0000 (11:22 +0000)
committer Michael Kuperstein <michael.m.kuperstein@intel.com>
          Wed, 16 Dec 2015 11:22:37 +0000 (11:22 +0000)
commit    586219957f81f6dfb53f65847aee9838d2977716
tree      5e83ad539e784b43188fcc8ecb0ba54008e3f7e6
parent    f04cdf9dd9078a3c10cb1d821e929183235e06a7

This folds (ashr (shl a, [56,48,32,24,16]), SarConst)
into       (shl (sext a), [56,48,32,24,16] - SarConst)
or into    (ashr (sext a), SarConst - [56,48,32,24,16]),
depending on the sign of (SarConst - [56,48,32,24,16]).

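For concreteness, here is a small standalone C++ sanity check of the arithmetic
behind this rewrite (not part of the patch; the names and the choice of
ShlConst = 56, the i8 case, are illustrative). It takes the right-shift form to
be an arithmetic shift of the sign-extended value:

    #include <cassert>
    #include <cstdint>

    // Original pattern: (ashr (shl a, 56), SarConst).
    static int64_t before(int64_t a, unsigned SarConst) {
      return (int64_t)((uint64_t)a << 56) >> SarConst;
    }

    // Rewritten pattern: sign-extend the low byte (the MOVSX), then
    // shl by (56 - SarConst), or ashr by (SarConst - 56) when that is >= 0.
    static int64_t after(int64_t a, unsigned SarConst) {
      int64_t s = (int8_t)(uint8_t)a;  // sign extension of the low 8 bits
      if (SarConst < 56)
        return (int64_t)((uint64_t)s << (56 - SarConst));
      return s >> (SarConst - 56);     // arithmetic shift right
    }

    int main() {
      const int64_t vals[] = {0, 1, -1, 0x7f, 0x80, 0xff,
                              INT64_MIN, 0x123456789abcdef0};
      for (int64_t a : vals)
        for (unsigned SarConst = 0; SarConst < 64; ++SarConst)
          assert(before(a, SarConst) == after(a, SarConst));
      return 0;
    }
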
Sign extensions (sexts) on x86 are MOVs (movsx).
The MOVs have the same code size as the shifts above (only a shift by 1 has a smaller encoding).
However, the MOVs have two advantages over shifts on x86 (see the sketch below):
1. MOVs can write to a destination register that differs from the source.
2. MOVs accept memory operands.

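As a purely illustrative C++ sketch (not taken from the patch or its tests; the
function name and constants are made up), this is the kind of situation where
those two properties help: the shifted value is loaded from memory and is also
needed unmodified, so a sign-extending load that writes a fresh destination
register can avoid the extra copy or reload an in-place shift sequence would
require.

    #include <cstdint>

    // Hypothetical example only. v computes (ashr (shl *p, 56), 60), which
    // the combine can rewrite into a sign extension of the low byte of *p
    // followed by a single arithmetic right shift by 4.
    int64_t scaled_low_byte_plus_original(const int64_t *p) {
      int64_t v = (int64_t)((uint64_t)*p << 56) >> 60;
      return v + *p;  // *p is also used as-is, so shifting the loaded value
                      // in place would force it to be copied or reloaded
    }
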
This fixes PR24373.

Patch by: evgeny.v.stupachenko@intel.com
Differential Revision: http://reviews.llvm.org/D13161

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255761 91177308-0d34-0410-b5e6-96231b3b80d8
lib/Target/X86/X86ISelLowering.cpp
test/CodeGen/X86/2009-05-23-dagcombine-shifts.ll
test/CodeGen/X86/sar_fold.ll [new file with mode: 0644]
test/CodeGen/X86/sar_fold64.ll [new file with mode: 0644]
test/CodeGen/X86/vector-sext.ll