DEBUG: sched/fair: Fix sched_load_avg_cpu events for task_groups
Author:     Brendan Jackman <brendan.jackman@arm.com>
AuthorDate: Tue, 10 Jan 2017 11:31:01 +0000 (11:31 +0000)
Commit:     Amit Pundir <amit.pundir@linaro.org>
CommitDate: Mon, 16 Jan 2017 09:33:08 +0000 (15:03 +0530)
The current sched_load_avg_cpu event traces the load for any cfs_rq that is
updated. This is not representative of the CPU load; instead, we should only
trace this event when the cfs_rq being updated is in the root_task_group.

Change-Id: I345c2f13f6b5718cb4a89beb247f7887ce97ed6b
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ad1507e420e807d6f34809d9889a5cc3cc5226e4..3331f453a17f00716ddb20366d535f2fafcb192c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2757,7 +2757,9 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
        cfs_rq->load_last_update_time_copy = sa->last_update_time;
 #endif
 
-       trace_sched_load_avg_cpu(cpu_of(rq_of(cfs_rq)), cfs_rq);
+       /* Trace CPU load, unless cfs_rq belongs to a non-root task_group */
+       if (cfs_rq == &rq_of(cfs_rq)->cfs)
+               trace_sched_load_avg_cpu(cpu_of(rq_of(cfs_rq)), cfs_rq);
 
        return decayed || removed;
 }
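
For context on the check above: each CPU's struct rq embeds its root cfs_rq
by value, while the cfs_rqs of non-root task_groups are allocated separately,
so a plain pointer comparison is enough to identify the root. The following
is a minimal sketch, using simplified, hypothetical struct definitions and a
helper name (cfs_rq_is_root) that does not exist in the kernel; it is not
part of the patch.

    /*
     * Simplified sketch, not the real kernel definitions: struct rq
     * embeds the root cfs_rq by value, so the root cfs_rq of a CPU is
     * the one whose address equals &rq->cfs. cfs_rqs for non-root
     * task_groups are allocated separately and never compare equal.
     */
    #include <stdbool.h>

    struct cfs_rq {
            unsigned long load_avg;         /* load-tracking fields elided */
    };

    struct rq {
            int cpu;
            struct cfs_rq cfs;              /* root cfs_rq, embedded by value */
            /* ... remaining fields elided ... */
    };

    /* Hypothetical helper: true iff cfs_rq is the root cfs_rq of rq. */
    static inline bool cfs_rq_is_root(struct rq *rq, struct cfs_rq *cfs_rq)
    {
            return cfs_rq == &rq->cfs;
    }

With such a helper, the condition in the hunk above would read
if (cfs_rq_is_root(rq_of(cfs_rq), cfs_rq)); the patch simply open-codes the
same pointer comparison.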