sched: hmp: fix spinlock recursion in active migration
author     Kevin Hilman <khilman@linaro.org>
           Wed, 8 Apr 2015 21:32:07 +0000 (14:32 -0700)
committer  Jon Medhurst <tixy@linaro.org>
           Tue, 14 Apr 2015 11:08:16 +0000 (12:08 +0100)
Commit cd5c2cc93d3d (hmp: Remove potential for task_struct access
race) added a put_task_struct() to prevent races, but in doing so
introduced potential spinlock recursion.  (This change was later
consolidated in commit 0baa5811bacf "sched: hmp: unify active
migration code".)

Unfortunately, the put_task_struct() is done while the runqueue
spinlock is held, yet put_task_struct() can itself trigger a
reschedule, which then tries to acquire the runqueue lock recursively.

To fix, move the put_task_struct() outside the runqueue spinlock.
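
In outline, the change reorders the cleanup as follows (a simplified
sketch, not the exact code: busiest_rq and p are the locals used in
__do_active_load_balance_cpu_stop(), and the migration work plus the
!check_sd_lb_flag condition around the reference drop are elided):

    /* Before: the task reference is dropped with busiest_rq->lock held.
     * If this is the last reference, put_task_struct() can reschedule
     * and try to re-acquire busiest_rq->lock -> spinlock recursion.
     */
    raw_spin_lock_irq(&busiest_rq->lock);
    /* ... active migration work ... */
    put_task_struct(p);
    busiest_rq->active_balance = 0;
    raw_spin_unlock_irq(&busiest_rq->lock);

    /* After: release the runqueue lock first, then drop the reference,
     * so any reschedule triggered by put_task_struct() cannot recurse
     * on busiest_rq->lock.
     */
    raw_spin_lock_irq(&busiest_rq->lock);
    /* ... active migration work ... */
    busiest_rq->active_balance = 0;
    raw_spin_unlock_irq(&busiest_rq->lock);
    put_task_struct(p);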

Reported-by: Victor Lixin <victor.lixin@hisilicon.com>
Cc: Jorge Ramirez-Ortiz <jorge.ramirez-ortiz@linaro.org>
Cc: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Reviewed-by: Jon Medhurst <tixy@linaro.org>
Reviewed-by: Alex Shi <alex.shi@linaro.org>
Reviewed-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
kernel/sched/fair.c

index fd57f0be5b4e95fc97f83f5798a1c95e685a6a77..22ce83eb73f8df7aab49aedb705a902bae67ba80 100644
@@ -6462,10 +6462,10 @@ static int __do_active_load_balance_cpu_stop(void *data, bool check_sd_lb_flag)
        rcu_read_unlock();
        double_unlock_balance(busiest_rq, target_rq);
 out_unlock:
-       if (!check_sd_lb_flag)
-               put_task_struct(p);
        busiest_rq->active_balance = 0;
        raw_spin_unlock_irq(&busiest_rq->lock);
+       if (!check_sd_lb_flag)
+               put_task_struct(p);
        return 0;
 }