locking/percpu-rwsem: Clean up the lockdep annotations in percpu_down_read()
author Oleg Nesterov <oleg@redhat.com>
Fri, 21 Aug 2015 17:43:03 +0000 (19:43 +0200)
committer Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tue, 6 Oct 2015 18:25:40 +0000 (11:25 -0700)
Based on Peter Zijlstra's earlier patch.

Change percpu_down_read() to use __down_read(); this way we can
do rwsem_acquire_read() unconditionally at the start, which makes
the code more symmetric and clean.

Originally-From: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
kernel/locking/percpu-rwsem.c

index 02a726dd9adc1f35e6481646beb83b91b3db70be..f231e0bb311ce0827d281d34f737a3a06405c072 100644 (file)
@@ -70,14 +70,14 @@ static bool update_fast_ctr(struct percpu_rw_semaphore *brw, unsigned int val)
 void percpu_down_read(struct percpu_rw_semaphore *brw)
 {
        might_sleep();
-       if (likely(update_fast_ctr(brw, +1))) {
-               rwsem_acquire_read(&brw->rw_sem.dep_map, 0, 0, _RET_IP_);
+       rwsem_acquire_read(&brw->rw_sem.dep_map, 0, 0, _RET_IP_);
+
+       if (likely(update_fast_ctr(brw, +1)))
                return;
-       }
 
-       down_read(&brw->rw_sem);
+       /* Avoid rwsem_acquire_read() and rwsem_release() */
+       __down_read(&brw->rw_sem);
        atomic_inc(&brw->slow_read_ctr);
-       /* avoid up_read()->rwsem_release() */
        __up_read(&brw->rw_sem);
 }
 EXPORT_SYMBOL_GPL(percpu_down_read);