[avx] Adjust the VINSERTF128rm pattern to allow for unaligned loads.
author    Chad Rosier <mcrosier@apple.com>
          Tue, 20 Mar 2012 17:08:51 +0000 (17:08 +0000)
committer Chad Rosier <mcrosier@apple.com>
          Tue, 20 Mar 2012 17:08:51 +0000 (17:08 +0000)
commit    33e528d44d8c9c9ad2ae49816a7ddb234446c08e
tree      3bd3cb7dd63060ee16e39d5f878bdf6b32387828
parent    5c062ad92672f22e61a4b20a9954af3db3b72bd6
[avx] Adjust the VINSERTF128rm pattern to allow for unaligned loads.

This allows a sequence such as

vmovups 16(%rdi), %xmm0
vinsertf128 $1, %xmm0, %ymm0, %ymm0

to be folded into the single instruction

vinsertf128 $1, 16(%rdi), %ymm0, %ymm0

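For illustration, IR along these lines exercises the fold: an unaligned (align 1) 128-bit load whose result is inserted into the upper lane of a 256-bit vector via shufflevector. This is a hypothetical sketch in 2012-era IR syntax, not the commit's actual regression test; the function and value names are invented.

```llvm
; Unaligned <4 x float> load feeding a 256-bit upper-lane insert.
; Before this change the align-1 load blocked the VINSERTF128rm
; pattern, so codegen emitted vmovups + vinsertf128 (register form).
define <8 x float> @insert_unaligned(<8 x float> %a, <4 x float>* %p) {
  %v = load <4 x float>* %p, align 1
  ; Widen the loaded 128-bit value to 256 bits.
  %w = shufflevector <4 x float> %v, <4 x float> undef,
         <8 x i32> <i32 0, i32 1, i32 2, i32 3,
                    i32 undef, i32 undef, i32 undef, i32 undef>
  ; Keep the low lane of %a, insert %v into the high lane.
  %r = shufflevector <8 x float> %a, <8 x float> %w,
         <8 x i32> <i32 0, i32 1, i32 2, i32 3,
                    i32 8, i32 9, i32 10, i32 11>
  ret <8 x float> %r
}
```

This folding is safe because the memory form of VINSERTF128 has no alignment requirement on its 128-bit memory operand, unlike the aligned vector-move instructions.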
rdar://11076953

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153092 91177308-0d34-0410-b5e6-96231b3b80d8
lib/Target/X86/X86InstrSSE.td
test/CodeGen/X86/avx-vinsertf128.ll