sched: Call tick_check_idle before __irq_enter
author  Venkatesh Pallipadi <venki@google.com>
Tue, 5 Oct 2010 00:03:23 +0000 (17:03 -0700)
committer  Ingo Molnar <mingo@elte.hu>
Mon, 18 Oct 2010 18:52:29 +0000 (20:52 +0200)
When a CPU is idle and the first interrupt arrives, irq_enter() calls
tick_check_idle() to note the exit from idle. There is a problem, however,
if this call is made after __irq_enter(): the routines invoked from
__irq_enter() may then see stale time, because tick_check_idle() has not
run yet.

Specifically, this affects trace calls in __irq_enter() when they use the
global clock, and also the account_system_vtime() change in this patch,
which wants to use sched_clock_cpu() for proper irq time accounting.
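
For context, a sketch of what __irq_enter() expands to (based on
include/linux/hardirq.h of that era, slightly simplified): both the
accounting hook and the tracing hook sample time, which is why running
them before tick_check_idle() on idle exit hands them a stale clock.

    /* Sketch of the __irq_enter() helper of that era (simplified). */
    #define __irq_enter()                              \
        do {                                           \
            account_system_vtime(current);             \
            add_preempt_count(HARDIRQ_OFFSET);         \
            trace_hardirq_enter();                     \
        } while (0)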

However, tick_check_idle() was intentionally moved after __irq_enter() to
avoid unneeded ksoftirqd wakeups, by commit ee5f80a:

    irq: call __irq_enter() before calling the tick_idle_check
    Impact: avoid spurious ksoftirqd wakeups

Moving tick_check_idle() before __irq_enter() and wrapping it in a
local_bh_disable()/_local_bh_enable() pair solves both problems.
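
The bh-disabled section is enough to keep the ksoftirqd wakeup away because
raise_softirq() only wakes ksoftirqd outside of interrupt context, and
disabled bottom halves count as interrupt context. A simplified sketch,
based on kernel/softirq.c of that era:

    /*
     * With local_bh_disable() in effect, in_interrupt() is non-zero, so a
     * softirq raised from tick_check_idle() is only marked pending and is
     * serviced on the normal return-from-interrupt path instead of waking
     * ksoftirqd.
     */
    inline void raise_softirq_irqoff(unsigned int nr)
    {
            __raise_softirq_irqoff(nr);

            if (!in_interrupt())
                    wakeup_softirqd();
    }

Note that the new code uses _local_bh_enable(), the variant that re-enables
bottom halves without running pending softirqs, so anything raised by
tick_check_idle() is still handled later on irq_exit().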

Fixed-by: Yong Zhang <yong.zhang0@gmail.com>
Signed-off-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1286237003-12406-9-git-send-email-venki@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/sched.c
kernel/softirq.c

index bff9ef537df0467943d88be20ebc227e06d8a590..567f5cb9808cb1dc94c11f5e9d8c34aa465fd876 100644
@@ -1974,8 +1974,8 @@ void account_system_vtime(struct task_struct *curr)
 
        local_irq_save(flags);
 
-       now = sched_clock();
        cpu = smp_processor_id();
+       now = sched_clock_cpu(cpu);
        delta = now - per_cpu(irq_start_time, cpu);
        per_cpu(irq_start_time, cpu) = now;
        /*
index 267f7b763ebb390defa77a3667d26a8aefc1e095..79ee8f1fc0e71a343cf7de924c8ae5fa69b554b3 100644
@@ -296,10 +296,16 @@ void irq_enter(void)
 
        rcu_irq_enter();
        if (idle_cpu(cpu) && !in_interrupt()) {
-               __irq_enter();
+               /*
+                * Prevent raise_softirq from needlessly waking up ksoftirqd
+                * here, as softirq will be serviced on return from interrupt.
+                */
+               local_bh_disable();
                tick_check_idle(cpu);
-       } else
-               __irq_enter();
+               _local_bh_enable();
+       }
+
+       __irq_enter();
 }
 
 #ifdef __ARCH_IRQ_EXIT_IRQS_DISABLED