x86/ticketlock: Fix spin_unlock_wait() livelock
author     Oleg Nesterov <oleg@redhat.com>
           Mon, 1 Dec 2014 21:34:17 +0000 (22:34 +0100)
committer  Ingo Molnar <mingo@kernel.org>
           Mon, 8 Dec 2014 10:36:44 +0000 (11:36 +0100)
arch_spin_unlock_wait() looks very suboptimal, to the point that
I think it is just wrong and can lead to livelock: if the lock is
heavily contended, new lockers keep bumping .tail, so we may
never observe head == tail even though every individual owner
drops the lock in finite time.
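
For illustration, a minimal user-space model of the old loop (the
toy type and helper names are made up for this sketch; it is not
the kernel code):

	#include <stdatomic.h>
	#include <stdbool.h>

	/* toy ticket lock: lock bumps .tail, unlock bumps .head */
	struct toy_tickets {
		atomic_ushort head;	/* ticket currently being served */
		atomic_ushort tail;	/* next ticket to hand out */
	};

	static bool toy_is_locked(struct toy_tickets *t)
	{
		return atomic_load(&t->head) != atomic_load(&t->tail);
	}

	/* the old arch_spin_unlock_wait() logic: wait for an idle lock */
	static void old_unlock_wait(struct toy_tickets *t)
	{
		/*
		 * Under heavy contention new lockers keep bumping .tail,
		 * so head == tail may never be observed and this loop can
		 * spin forever, even though each owner does unlock in
		 * finite time.
		 */
		while (toy_is_locked(t))
			;	/* cpu_relax() in the kernel */
	}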

But we do not need to wait for arch_spin_is_locked() to become
false. If the lock is held, we only need to wait until the
current owner drops it. So we could simply spin until old_head !=
lock->tickets.head in that case, but .head can overflow, so the
"unlocked" check cannot be done only once before the main loop;
it has to be rechecked on every iteration.
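
To see why, here is the naive variant that this rules out (again
a user-space sketch reusing toy_tickets from the sketch above,
not the kernel code):

	/* naive: check "unlocked" once, then only watch .head */
	static void broken_unlock_wait(struct toy_tickets *t)
	{
		unsigned short head = atomic_load(&t->head);

		if (head == atomic_load(&t->tail))
			return;		/* observed unlocked, done */

		/*
		 * In the kernel .head is a narrow (8- or 16-bit) unsigned
		 * type; if 2^N lock hand-offs happen between two of our
		 * reads, .head wraps back to the sampled value.  Should
		 * the lock then go idle and stay idle, .head never changes
		 * again and this spins forever on a free lock.  Hence the
		 * patch rechecks "unlocked" inside the loop as well.
		 */
		while (atomic_load(&t->head) == head)
			;	/* cpu_relax() in the kernel */
	}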

Also, the "unlocked" check can ignore the TICKET_SLOWPATH_FLAG
bit.
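
The flag is only ever set in .tail by the paravirt slowpath and
.head never carries it, so masking it off .tail is all the test
needs. For reference (quoted from memory, so treat the exact
values as an assumption rather than part of this patch), the
definitions in arch/x86/include/asm/spinlock_types.h look roughly
like:

	#ifdef CONFIG_PARAVIRT_SPINLOCKS
	#define __TICKET_LOCK_INC	2
	#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)
	#else
	#define __TICKET_LOCK_INC	1
	#define TICKET_SLOWPATH_FLAG	((__ticket_t)0)
	#endif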

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <Waiman.Long@hp.com>
Link: http://lkml.kernel.org/r/20141201213417.GA5842@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/include/asm/spinlock.h

index bf156ded74b56006a76cc02b8917984117af8afd..abc34e95398d561bad992f626db3642ca4b54c3d 100644
@@ -184,8 +184,20 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
-       while (arch_spin_is_locked(lock))
+       __ticket_t head = ACCESS_ONCE(lock->tickets.head);
+
+       for (;;) {
+               struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
+               /*
+                * We need to check "unlocked" in a loop, tmp.head == head
+                * can be false positive because of overflow.
+                */
+               if (tmp.head == (tmp.tail & ~TICKET_SLOWPATH_FLAG) ||
+                   tmp.head != head)
+                       break;
+
                cpu_relax();
+       }
 }
 
 /*