From: Kevin Hilman
Date: Wed, 8 Apr 2015 21:32:07 +0000 (-0700)
Subject: sched: hmp: fix spinlock recursion in active migration
X-Git-Tag: firefly_0821_release~3680^2~16^2^2
X-Git-Url: http://plrg.eecs.uci.edu/git/?a=commitdiff_plain;ds=sidebyside;h=c1f0c1f51bf7b9111de27c3cdbea9b647351bf7b;p=firefly-linux-kernel-4.4.55.git

sched: hmp: fix spinlock recursion in active migration

Commit cd5c2cc93d3d (hmp: Remove potential for task_struct access race)
introduced a put_task_struct() to prevent races, but in doing so
introduced potential spinlock recursion. (This change was further
consolidated in commit 0baa5811bacf -- sched: hmp: unify active
migration code.)

Unfortunately, the put_task_struct() is done while the runqueue
spinlock is held, but put_task_struct() can also cause a reschedule
causing the runqueue lock to be acquired recursively.

To fix, move the put_task_struct() outside the runqueue spinlock.

Reported-by: Victor Lixin
Cc: Jorge Ramirez-Ortiz
Cc: Liviu Dudau
Signed-off-by: Kevin Hilman
Reviewed-by: Jon Medhurst
Reviewed-by: Alex Shi
Reviewed-by: Chris Redpath
Signed-off-by: Jon Medhurst
---

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fd57f0be5b4e..22ce83eb73f8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6462,10 +6462,10 @@ static int __do_active_load_balance_cpu_stop(void *data, bool check_sd_lb_flag)
 	rcu_read_unlock();
 	double_unlock_balance(busiest_rq, target_rq);
 out_unlock:
-	if (!check_sd_lb_flag)
-		put_task_struct(p);
 	busiest_rq->active_balance = 0;
 	raw_spin_unlock_irq(&busiest_rq->lock);
+	if (!check_sd_lb_flag)
+		put_task_struct(p);
 	return 0;
 }
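
For illustration only, a minimal user-space C sketch of the same ordering rule
follows; it is an analogy, not part of the patch. The names (struct obj,
obj_put, balance_lock, finish_migration) are hypothetical stand-ins for
task_struct, put_task_struct() and the runqueue lock. The point it shows is
the fixed ordering: clear the state and release the lock first, then drop the
reference, so the release path can never need a lock the caller still holds.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical refcounted object standing in for task_struct. */
struct obj {
	atomic_int refcount;
};

/* Stands in for the runqueue spinlock (busiest_rq->lock). */
static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Stand-in for put_task_struct(): drop a reference, free on the last put.
 * In the kernel case the final put can reschedule and re-acquire the
 * runqueue lock, which is why it must not run with that lock held.
 */
static void obj_put(struct obj *p)
{
	if (atomic_fetch_sub(&p->refcount, 1) == 1)
		free(p);
}

/* Mirrors the fixed ordering in __do_active_load_balance_cpu_stop(). */
static void finish_migration(struct obj *p)
{
	pthread_mutex_lock(&balance_lock);
	/* ... clear active_balance-style state while the lock is held ... */
	pthread_mutex_unlock(&balance_lock);

	/* Drop the reference only after the lock has been released. */
	obj_put(p);
}

int main(void)
{
	struct obj *p = malloc(sizeof(*p));

	if (!p)
		return 1;
	atomic_init(&p->refcount, 1);
	finish_migration(p);
	printf("reference dropped outside the lock\n");
	return 0;
}

The same reasoning applies in the patch: once busiest_rq->lock has been
released, a reschedule triggered by the final put_task_struct() can no longer
recurse on that lock.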