arm64: spinlock: retry trylock operation if strex fails on free lock
author Catalin Marinas <catalin.marinas@arm.com>
Fri, 31 May 2013 15:30:58 +0000 (16:30 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 9 Jan 2014 20:24:20 +0000 (12:24 -0800)
commit 4ecf7ccb1973fd826456b6ab1e6dfafe9023c753 upstream.

An exclusive store instruction may fail for reasons other than lock
contention (e.g. a cache eviction during the critical section) so, in
line with other architectures using similar exclusive instructions
(alpha, mips, powerpc), retry the trylock operation if the lock appears
to be free but the strex reported failure.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Tony Thompson <anthony.thompson@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/arm64/include/asm/spinlock.h

index 7065e920149d3d0ec6e9ed549e71a5040aa0a5d2..0defa0728a9b85f6c82d104f8c3bdb6ea3bf716f 100644
@@ -59,9 +59,10 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
        unsigned int tmp;
 
        asm volatile(
-       "       ldaxr   %w0, %1\n"
+       "2:     ldaxr   %w0, %1\n"
        "       cbnz    %w0, 1f\n"
        "       stxr    %w0, %w2, %1\n"
+       "       cbnz    %w0, 2b\n"
        "1:\n"
        : "=&r" (tmp), "+Q" (lock->lock)
        : "r" (1)