UPSTREAM: sched/fair: Propagate asynchronous detach
Author:     Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate: Tue, 8 Nov 2016 09:53:46 +0000 (10:53 +0100)
Commit:     Amit Pundir <amit.pundir@linaro.org>
CommitDate: Wed, 21 Jun 2017 11:07:38 +0000 (16:37 +0530)
A task can be asynchronously detached from a cfs_rq when migrating
between CPUs. The load of the migrated task is then removed from the
source cfs_rq during its next update. We use this event to set the
propagation flag.
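
set_tg_cfs_propagate() is introduced by the companion patch in this
series ("sched/fair: Propagate load during synchronous attach/detach").
As a minimal sketch of what the calls added below amount to, it only
marks the cfs_rq as having pending deltas to push up the hierarchy
(propagate_avg being the flag added by that patch):

  static inline void set_tg_cfs_propagate(struct cfs_rq *cfs_rq)
  {
          /* Record that load was detached; consumed later when
           * update_load_avg() runs on the owning group entity. */
          cfs_rq->propagate_avg = 1;
  }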

During load balancing, we take advantage of the update of blocked
load to propagate any pending changes up the task-group hierarchy.
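
For reference, a simplified sketch (again based on the companion
propagation patch) of how update_load_avg() consumes the flag when it
is called on a group entity, as in the update_blocked_averages() hunk
below:

  static inline int
  test_and_clear_tg_cfs_propagate(struct sched_entity *se)
  {
          struct cfs_rq *cfs_rq = group_cfs_rq(se);

          if (!cfs_rq->propagate_avg)
                  return 0;

          cfs_rq->propagate_avg = 0;
          return 1;
  }

  static inline int propagate_entity_load_avg(struct sched_entity *se)
  {
          struct cfs_rq *cfs_rq;

          /* Tasks own no child cfs_rq, so there is nothing to pull up */
          if (entity_is_task(se))
                  return 0;

          if (!test_and_clear_tg_cfs_propagate(se))
                  return 0;

          cfs_rq = cfs_rq_of(se);

          /* Mark the parent so the delta keeps moving up when the
           * leaf_cfs_rq_list walk reaches it. */
          set_tg_cfs_propagate(cfs_rq);

          /* Fold the child's utilization and load deltas into se and
           * its parent cfs_rq. */
          update_tg_cfs_util(cfs_rq, se);
          update_tg_cfs_load(cfs_rq, se);

          return 1;
  }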

The propagation relies on patch:

  "sched: Fix hierarchical order in rq->leaf_cfs_rq_list"

... which orders children before their parents in rq->leaf_cfs_rq_list,
so that a single pass over the list suffices: by the time a parent
cfs_rq is visited, its children's pending deltas have already been
folded in.

Change-Id: I33782e35fc4711f5901e8c23d6aa7ec5f2ff7ee5
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: kernellwp@gmail.com
Cc: pjt@google.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1478598827-32372-6-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 4e5160766fcc9f41bbd38bac11f92dce993644aa)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a8f86f214b45f2958da453ab4c401d93e15fdf9f..e49408d0f59033dfd824fb1a53f4849e6ad96e62 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3131,6 +3131,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
                sub_positive(&sa->load_avg, r);
                sub_positive(&sa->load_sum, r * LOAD_AVG_MAX);
                removed = 1;
+               set_tg_cfs_propagate(cfs_rq);
        }
 
        if (atomic_long_read(&cfs_rq->removed_util_avg)) {
@@ -3138,6 +3139,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
                sub_positive(&sa->util_avg, r);
                sub_positive(&sa->util_sum, r * LOAD_AVG_MAX);
                removed_util = 1;
+               set_tg_cfs_propagate(cfs_rq);
        }
 
        decayed = __update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
@@ -7347,6 +7349,10 @@ static void update_blocked_averages(int cpu)
                if (update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq,
                                           true))
                        update_tg_load_avg(cfs_rq, 0);
+
+               /* Propagate pending load changes to the parent */
+               if (cfs_rq->tg->se[cpu])
+                       update_load_avg(cfs_rq->tg->se[cpu], 0);
        }
        raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
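
Note that in the last hunk update_load_avg() is called on the group
entity unconditionally (whenever it exists), not only when
update_cfs_rq_load_avg() reports a change: a cfs_rq can have its
propagation flag set by a child's update earlier in the walk even
though its own averages did not change on this update.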