firefly-linux-kernel-4.4.55.git
fixup! sched: scheduler-driven cpu frequency selection
Juri Lelli [Fri, 11 Dec 2015 11:55:51 +0000 (11:55 +0000)]
fixup! sched: scheduler-driven cpu frequency selection

Signed-off-by: Juri Lelli <juri.lelli@arm.com>
sched: rt scheduler sets capacity requirement
Vincent Guittot [Mon, 26 Oct 2015 17:14:50 +0000 (18:14 +0100)]
sched: rt scheduler sets capacity requirement

Unlike deadline tasks, RT tasks don't provide any running constraints
beyond their priority. The only currently usable input to estimate the
capacity needed by RT tasks is the rt_avg metric. We use it to estimate
the CPU capacity needed by the RT scheduler class.

In order to monitor the evolution of RT task load, we must
periodically check it during the tick.

We then use the estimated capacity of the last activity to estimate
the next one. This cannot be very accurate, but it is a good starting
point that has no impact on the wake-up path of RT tasks.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Steve Muckle <smuckle@linaro.org>
sched: deadline: use deadline bandwidth in scale_rt_capacity
Vincent Guittot [Tue, 3 Nov 2015 09:39:01 +0000 (10:39 +0100)]
sched: deadline: use deadline bandwidth in scale_rt_capacity

Instead of monitoring the exec time of deadline tasks to evaluate the
CPU capacity consumed by the deadline scheduler class, we can directly
calculate it from the sum of the utilization of deadline tasks on the
CPU.  We can then remove deadline tasks from the rt_avg metric and use
the average bandwidth of the deadline scheduler directly in
scale_rt_capacity.

Based in part on a similar patch from Luca Abeni <luca.abeni@unitn.it>.
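
For illustration, the utilization of a deadline task can be expressed as a
fixed-point bandwidth (runtime / period), and the per-cpu sum of these
bandwidths converted into capacity units. A userspace sketch of that
arithmetic (the shift value, helper names and task parameters below are
illustrative, not taken from the patch):

  #include <stdio.h>

  #define BW_SHIFT             20
  #define SCHED_CAPACITY_SCALE 1024ULL

  /* per-task bandwidth as a 20-bit fixed-point fraction: runtime / period */
  static unsigned long long dl_bw(unsigned long long runtime_ns,
                                  unsigned long long period_ns)
  {
      return (runtime_ns << BW_SHIFT) / period_ns;
  }

  int main(void)
  {
      /* two hypothetical deadline tasks on one cpu: 3ms/10ms and 1ms/10ms */
      unsigned long long total_bw = dl_bw(3000000, 10000000) +
                                    dl_bw(1000000, 10000000);

      /* capacity consumed by the deadline class, in SCHED_CAPACITY_SCALE units */
      printf("dl capacity: %llu / %llu\n",
             (total_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT,
             SCHED_CAPACITY_SCALE);   /* ~409 / 1024, i.e. about 40% */
      return 0;
  }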

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Steve Muckle <smuckle@linaro.org>
sched: remove call of sched_avg_update from sched_rt_avg_update
Vincent Guittot [Tue, 20 Oct 2015 08:46:26 +0000 (10:46 +0200)]
sched: remove call of sched_avg_update from sched_rt_avg_update

rt_avg is only used to scale the CPU capacity available to CFS
tasks.  As the update of this scaling is done during periodic load
balance, we only have to ensure that sched_avg_update has been called
before any periodic load balancing. This requirement is already
fulfilled by __update_cpu_load, so the call in sched_rt_avg_update,
which is part of the hot path, is useless.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Steve Muckle <smuckle@linaro.org>
sched/cpufreq_sched: add trace events
Steve Muckle [Wed, 25 Nov 2015 23:59:25 +0000 (15:59 -0800)]
sched/cpufreq_sched: add trace events

Trace events will aid in debugging, profiling and tuning.

Signed-off-by: Steve Muckle <smuckle@linaro.org>
sched/fair: jump to max OPP when crossing UP threshold
Steve Muckle [Thu, 25 Jun 2015 13:12:33 +0000 (14:12 +0100)]
sched/fair: jump to max OPP when crossing UP threshold

Since the true utilization of a long-running task is not detectable
while it is running and might exceed the current cpu capacity, create
maximum cpu capacity headroom by requesting the maximum cpu capacity
once the cpu usage plus the capacity margin exceeds the current
capacity. This also keeps the harm to the task's performance to a
minimum.

Original fair-class only version authored by Juri Lelli
<juri.lelli@arm.com>.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Steve Muckle <smuckle@linaro.org>
sched/fair: cpufreq_sched triggers for load balancing
Juri Lelli [Thu, 25 Jun 2015 13:37:27 +0000 (14:37 +0100)]
sched/fair: cpufreq_sched triggers for load balancing

As we don't trigger frequency changes from {en,de}queue_task_fair()
during load balancing, we need to do so explicitly on the load-balancing
paths.

[smuckle@linaro.org: move update_capacity_of calls so rq lock is held]

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Steve Muckle <smuckle@linaro.org>
sched/{core,fair}: trigger OPP change request on fork()
Juri Lelli [Fri, 26 Jun 2015 11:14:23 +0000 (12:14 +0100)]
sched/{core,fair}: trigger OPP change request on fork()

Patch "sched/fair: add triggers for OPP change requests" introduced OPP
change triggers for enqueue_task_fair(), but the trigger was operating only
for wakeups. In fact, it also makes sense to consider wakeup_new (i.e.,
fork()): as we don't know anything about a newly created task, we most
certainly want to jump to max OPP to not harm performance too much.

However, it is not currently possible (or at least it wasn't evident to me
how to do so :/) to tell new wakeups from other (non wakeup) operations.

This patch introduces an additional flag in sched.h that is only set at
fork() time and it is then consumed in enqueue_task_fair() for our purpose.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Steve Muckle <smuckle@linaro.org>
sched/fair: add triggers for OPP change requests
Juri Lelli [Wed, 19 Aug 2015 18:47:12 +0000 (19:47 +0100)]
sched/fair: add triggers for OPP change requests

Each time a task is {en,de}queued we might need to adapt the current
frequency to the new usage. Add triggers on {en,de}queue_task_fair() for
this purpose.  Only trigger a freq request if we are effectively waking up
or going to sleep.  Filter out load balancing related calls to reduce the
number of triggers.

[smuckle@linaro.org: resolve merge conflicts, define task_new,
 use renamed static key sched_freq]

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Steve Muckle <smuckle@linaro.org>
sched: scheduler-driven cpu frequency selection
Michael Turquette [Tue, 30 Jun 2015 11:45:48 +0000 (12:45 +0100)]
sched: scheduler-driven cpu frequency selection

Scheduler-driven CPU frequency selection hopes to exploit both
per-task and global information in the scheduler to improve frequency
selection policy, achieving lower power consumption, improved
responsiveness/performance, and less reliance on heuristics and
tunables. For further discussion on the motivation of this integration
see [0].

This patch implements a shim layer between the Linux scheduler and the
cpufreq subsystem. The interface accepts capacity requests from the
CFS, RT and deadline sched classes. The requests from each sched class
are summed on each CPU with a margin applied to the CFS and RT
capacity requests to provide some headroom. Deadline requests are
expected to be precise enough given their nature to not require
headroom. The maximum total capacity request for a CPU in a frequency
domain drives the requested frequency for that domain.
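
As a rough sketch of the aggregation just described (a userspace model with
made-up margin, utilization numbers and frequencies, not the shim's actual
code): each cpu's request is the sum of its per-class requests, headroom is
added to the CFS and RT parts, and the largest request in the frequency
domain picks the frequency.

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024UL
  #define CAPACITY_MARGIN      1280UL  /* illustrative ~25% headroom (in 1024ths) */

  struct cpu_request { unsigned long cfs, rt, dl; };

  /* margin on CFS and RT requests; deadline requests are taken as-is */
  static unsigned long cpu_capacity_request(const struct cpu_request *r)
  {
      return (r->cfs + r->rt) * CAPACITY_MARGIN / SCHED_CAPACITY_SCALE + r->dl;
  }

  int main(void)
  {
      /* two cpus sharing one frequency domain (illustrative utilizations) */
      struct cpu_request reqs[2] = { { 300, 100, 0 }, { 600, 0, 100 } };
      unsigned long max_freq_khz = 2000000UL, max_request = 0;

      for (int i = 0; i < 2; i++) {
          unsigned long req = cpu_capacity_request(&reqs[i]);
          if (req > max_request)
              max_request = req;
      }
      if (max_request > SCHED_CAPACITY_SCALE)
          max_request = SCHED_CAPACITY_SCALE;

      /* the dominating capacity request drives the domain's frequency */
      printf("requested frequency: %lu kHz\n",
             max_freq_khz * max_request / SCHED_CAPACITY_SCALE);  /* 1660156 kHz */
      return 0;
  }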

Policy is determined by both the sched classes and this shim layer.

Note that this algorithm is event-driven. There is no polling loop to
check cpu idle time nor any other method which is unsynchronized with
the scheduler, aside from a throttling mechanism to ensure frequency
changes are not attempted faster than the hardware can accommodate them.

Thanks to Juri Lelli <juri.lelli@arm.com> for contributing design ideas,
code and test results, and to Ricky Liang <jcliang@chromium.org>
for initialization and static key inc/dec fixes.

[0] http://article.gmane.org/gmane.linux.kernel/1499836

[smuckle@linaro.org: various additions and fixes, revised commit text]

CC: Ricky Liang <jcliang@chromium.org>
Signed-off-by: Michael Turquette <mturquette@baylibre.com>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Steve Muckle <smuckle@linaro.org>
cpufreq: introduce cpufreq_driver_is_slow
Michael Turquette [Tue, 30 Jun 2015 11:45:27 +0000 (12:45 +0100)]
cpufreq: introduce cpufreq_driver_is_slow

Some architectures and platforms perform CPU frequency transitions
through a non-blocking method, while some might block or sleep. Even
when frequency transitions do not block or sleep they may be very slow.
This distinction is important when trying to change frequency from
a non-interruptible context in a scheduler hot path.

Describe this distinction with a cpufreq driver flag,
CPUFREQ_DRIVER_FAST. The default is to not have this flag set,
thus erring on the side of caution.

cpufreq_driver_is_slow() is also introduced in this patch. Setting
the above flag will allow this function to return false.
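
A small userspace model of the flag semantics (the flag name comes from the
text above; the bit value, driver names and the argument to the predicate are
illustrative, the real helper operates on the registered driver):

  #include <stdbool.h>
  #include <stdio.h>

  #define CPUFREQ_DRIVER_FAST (1U << 6)   /* illustrative bit value */

  struct cpufreq_driver { const char *name; unsigned int flags; };

  /* default is "slow": only drivers that explicitly claim to be fast are not */
  static bool driver_is_slow(const struct cpufreq_driver *drv)
  {
      return !(drv->flags & CPUFREQ_DRIVER_FAST);
  }

  int main(void)
  {
      struct cpufreq_driver i2c_pmic  = { "i2c-pmic",  0 };
      struct cpufreq_driver fast_mmio = { "fast-mmio", CPUFREQ_DRIVER_FAST };

      printf("%s slow: %d\n", i2c_pmic.name,  driver_is_slow(&i2c_pmic));   /* 1 */
      printf("%s slow: %d\n", fast_mmio.name, driver_is_slow(&fast_mmio));  /* 0 */
      return 0;
  }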

[smuckle@linaro.org: change flag/API to include drivers that are too
 slow for scheduler hot paths, in addition to those that block/sleep]

Cc: Rafael J. Wysocki <rafael@kernel.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Michael Turquette <mturquette@baylibre.com>
Signed-off-by: Steve Muckle <smuckle@linaro.org>
sched: Consider misfit tasks when load-balancing
Morten Rasmussen [Thu, 25 Feb 2016 12:51:35 +0000 (12:51 +0000)]
sched: Consider misfit tasks when load-balancing

With the new group_misfit_task load-balancing scenario, additional policy
conditions are needed when load-balancing. Misfit-task balancing only
makes sense between a source group with lower capacity than the target
group. If capacities are the same, fall back to normal group_other
balancing. The aim is to balance tasks such that no task has its
throughput hindered by compute capacity while a cpu with more capacity is
available. Load-balancing is generally based on average load in the
sched_groups, but for misfitting tasks it is necessary to introduce
exceptions to migrate tasks against the usual metrics and optimize
throughput.

This patch ensures the following load-balance for mixed capacity systems
(e.g. ARM big.LITTLE) for always-running tasks:

1. Place a task on each cpu starting in order from cpus with highest
capacity to lowest until all cpus are in use (i.e. one task on each
cpu).

2. Once all cpus are in use balance according to compute capacity such
that load per capacity is approximately the same regardless of the
compute capacity (i.e. big cpus get more tasks than little cpus).

Necessary changes are introduced in find_busiest_group(),
calculate_imbalance(), and find_busiest_queue(). This includes passing
the group_type on to find_busiest_queue() through struct lb_env, which
currently only considers the imbalance amount and not the imbalance
situation (group_type).

To avoid taking remote rq locks to examine source sched_groups for
misfit tasks, each cpu is responsible for tracking its own misfit tasks
and updating the rq->misfit_task flag. This means checking task
utilization when tasks are scheduled and on sched_tick.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Add group_misfit_task load-balance type
Morten Rasmussen [Thu, 25 Feb 2016 12:47:54 +0000 (12:47 +0000)]
sched: Add group_misfit_task load-balance type

To maximize throughput in systems with reduced-capacity cpus (e.g.
high RT/IRQ load and/or ARM big.LITTLE), load-balancing has to consider
task and cpu utilization as well as per-cpu compute capacity, in
addition to the current average-load-based load-balancing policy. Tasks
that are scheduled on a reduced-capacity cpu need to be identified and
migrated to a higher-capacity cpu if possible.

To implement this additional policy an additional group_type
(load-balance scenario) is added: group_misfit_task. This represents
scenarios where a sched_group has tasks that are not suitable for its
per-cpu capacity. group_misfit_task is only considered if the system is
not overloaded in any other way (group_imbalanced or group_overloaded).

Identifying misfit tasks requires the rq lock to be held. To avoid
taking remote rq locks to examine source sched_groups for misfit tasks,
each cpu is responsible for tracking its own misfit tasks and updating
the rq->misfit_task flag. This means checking task utilization when
tasks are scheduled and on sched_tick.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Add per-cpu max capacity to sched_group_capacity
Morten Rasmussen [Thu, 25 Feb 2016 12:43:49 +0000 (12:43 +0000)]
sched: Add per-cpu max capacity to sched_group_capacity

struct sched_group_capacity currently represents the compute capacity
sum of all cpus in the sched_group. Unless it is divided by the
group_weight to get the average capacity per cpu, it hides differences
in cpu capacity for mixed-capacity systems (e.g. high RT/IRQ utilization
or ARM big.LITTLE). But even the average may not be sufficient if the
group covers cpus of different capacities. Instead, by extending struct
sched_group_capacity to indicate the maximum per-cpu capacity in the
group, a suitable group for a given task utilization can easily be found,
such that cpus with reduced capacity can be avoided for tasks with high
utilization (not implemented by this patch).

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Do eas idle balance regardless of the rq avg idle value
Dietmar Eggemann [Wed, 13 Jan 2016 15:49:44 +0000 (15:49 +0000)]
sched: Do eas idle balance regardless of the rq avg idle value

EAS relies on idle balance to migrate a misfit task towards a cpu with
higher capacity.

When such a cpu becomes idle, idle balance should happen even if the rq
avg idle is smaller than the sched migration cost (default 500us).

The rq avg idle is updated during the wakeup of a task in case the rq has
a non-null idle_stamp. This value stays unchanged and valid until the next
task wakes up on this cpu after an idle period.

So rq avg idle could be smaller than sched migration cost preventing the
idle balance from happening. In this case we would be at the mercy of
wakeup, periodic or nohz-idle load balancing to put another task on this
cpu.

To break this dependency, make the EAS idle balance independent of the
requirement that rq avg idle be larger than the sched migration cost.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
arm64: Enable max freq invariant scheduler load-tracking and capacity support
Dietmar Eggemann [Fri, 25 Sep 2015 16:34:15 +0000 (17:34 +0100)]
arm64: Enable max freq invariant scheduler load-tracking and capacity support

Maximum Frequency Invariance has to be part of Cpu Invariance because
Frequency Invariance deals only with differences in load-tracking
introduced by Dynamic Frequency Scaling and not with limiting the
possible range of cpu frequency.

By placing Maximum Frequency Invariance into Cpu Invariance,
load-tracking is scaled via arch_scale_cpu_capacity()
in __update_load_avg() and cpu capacity is scaled via
arch_scale_cpu_capacity() in update_cpu_capacity().

To be able to save the extra multiplication in the scheduler hotpath
(__update_load_avg()) we could:

  1 Inform cpufreq about base cpu capacity at boot and let it handle
    scale_cpu_capacity() as well.
  2 Use the cpufreq policy callback which would update a per-cpu current
    cpu_scale and this value would be returned in scale_cpu_capacity().
  3 Use per-cpu current max_freq_scale and current cpu_scale with the
    current patch.

Including <linux/cpufreq.h> in topology.h like for the arm arch doesn't
work because of CONFIG_COMPAT=y (Kernel support for 32-bit EL0).
That's why cpufreq_scale_max_freq_capacity() has to be declared extern
in topology.h.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
arm: Enable max freq invariant scheduler load-tracking and capacity support
Dietmar Eggemann [Wed, 23 Sep 2015 16:59:55 +0000 (17:59 +0100)]
arm: Enable max freq invariant scheduler load-tracking and capacity support

Maximum Frequency Invariance has to be part of Cpu Invariance because
Frequency Invariance deals only with differences in load-tracking
introduced by Dynamic Frequency Scaling and not with limiting the
possible range of cpu frequency.

By placing Maximum Frequency Invariance into Cpu Invariance,
load-tracking is scaled via arch_scale_cpu_capacity()
in __update_load_avg() and cpu capacity is scaled via
arch_scale_cpu_capacity() in update_cpu_capacity().

To be able to save the extra multiplication in the scheduler hotpath
(__update_load_avg()) we could:

 1 Inform cpufreq about base cpu capacity at boot and let it handle
   scale_cpu_capacity() as well.
 2 Use the cpufreq policy callback which would update a per-cpu current
   cpu_scale and this value would be returned in scale_cpu_capacity().
 3 Use per-cpu current max_freq_scale and current cpu_scale with the
   current patch.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
sched: Update max cpu capacity in case of max frequency constraints
Dietmar Eggemann [Sat, 26 Sep 2015 17:19:54 +0000 (18:19 +0100)]
sched: Update max cpu capacity in case of max frequency constraints

Wakeup balancing uses cpu capacity awareness and needs to know the
system-wide maximum cpu capacity.

Patch "sched: Store system-wide maximum cpu capacity in root domain"
finds the system-wide maximum cpu capacity during scheduler domain
hierarchy setup. This is sufficient as long as maximum frequency
invariance is not enabled.

If it is enabled, the system-wide maximum cpu capacity can change
between scheduler domain hierarchy setups due to frequency capping.

The cpu capacity is changed in update_cpu_capacity() which is called in
load balance on the lowest scheduler domain hierarchy level. To be able
to know if a change in cpu capacity for a certain cpu also has an effect
on the system-wide maximum cpu capacity it is normally necessary to
iterate over all cpus. This would be way too costly. That's why this
patch follows a different approach.

The unsigned long max_cpu_capacity value in struct root_domain is
replaced with a struct max_cpu_capacity, containing value (the
max_cpu_capacity) and cpu (the cpu index of the cpu providing the
maximum cpu_capacity).

Changes to the system-wide maximum cpu capacity and the cpu index are
made if:

 1 System-wide maximum cpu capacity < cpu capacity
 2 System-wide maximum cpu capacity > cpu capacity and cpu index == cpu

There are no changes to the system-wide maximum cpu capacity in all
other cases.

Atomic read and write access to the pair (max_cpu_capacity.val,
max_cpu_capacity.cpu) is enforced by max_cpu_capacity.lock.
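
The update rules above can be sketched in a few lines (userspace model;
locking is omitted and the field and function names are illustrative):

  #include <stdio.h>

  struct max_cpu_capacity { unsigned long val; int cpu; };

  static void update_max_cpu_capacity(struct max_cpu_capacity *m,
                                      int cpu, unsigned long capacity)
  {
      if (capacity > m->val) {
          /* rule 1: this cpu now provides the system-wide maximum */
          m->val = capacity;
          m->cpu = cpu;
      } else if (cpu == m->cpu && capacity < m->val) {
          /* rule 2: the cpu holding the maximum got capped; follow it down */
          m->val = capacity;
      }
  }

  int main(void)
  {
      struct max_cpu_capacity m = { 0, -1 };

      update_max_cpu_capacity(&m, 0, 430);    /* little cpu */
      update_max_cpu_capacity(&m, 4, 1024);   /* big cpu becomes the maximum */
      update_max_cpu_capacity(&m, 4, 800);    /* big cpu capped by a thermal limit */
      printf("max capacity %lu on cpu %d\n", m.val, m.cpu);  /* 800 on cpu 4 */
      return 0;
  }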

The access to max_cpu_capacity.val in task_fits_max() is still performed
without taking the max_cpu_capacity.lock.

The code to set max cpu capacity in build_sched_domains() has been
removed because the whole functionality is now provided by
update_cpu_capacity() instead.

This approach can temporarily introduce errors, e.g. in case the cpu
currently providing the max cpu capacity has its cpu capacity lowered
due to frequency capping and calls update_cpu_capacity() before any cpu
which might now provide the maximum.

There is also an outstanding question:

Should the cpu capacity of a cpu going idle be set to a very small
value?

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
cpufreq: Max freq invariant scheduler load-tracking and cpu capacity support
Dietmar Eggemann [Tue, 22 Sep 2015 15:47:48 +0000 (16:47 +0100)]
cpufreq: Max freq invariant scheduler load-tracking and cpu capacity support

Implements cpufreq_scale_max_freq_capacity() to provide the scheduler
with a maximum frequency scaling correction factor for more accurate
load-tracking and cpu capacity handling by being able to deal with
frequency capping.

This scaling factor describes the influence on load tracking and cpu
capacity of running a cpu with a current maximum frequency lower than
the absolute possible maximum frequency.

The factor is:

(current_max_freq(cpu) << SCHED_CAPACITY_SHIFT) / max_freq(cpu)

In fact, max_freq_scale should be a struct cpufreq_policy data member.
But this would require the scheduler hot path (__update_load_avg()) to
grab the cpufreq lock. This can be avoided by using per-cpu data for
max_freq_scale, initialized to SCHED_CAPACITY_SCALE.
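
A worked example of the factor as a userspace sketch (the frequencies are
illustrative):

  #include <stdio.h>

  #define SCHED_CAPACITY_SHIFT 10

  /* (current policy max << SCHED_CAPACITY_SHIFT) / absolute hardware max */
  static unsigned long max_freq_scale(unsigned long policy_max_khz,
                                      unsigned long cpuinfo_max_khz)
  {
      return (policy_max_khz << SCHED_CAPACITY_SHIFT) / cpuinfo_max_khz;
  }

  int main(void)
  {
      /* a cpu thermally capped to 1.2 GHz out of a possible 2.0 GHz */
      unsigned long scale = max_freq_scale(1200000, 2000000);

      printf("max_freq_scale=%lu, capped capacity=%lu\n",
             scale, 1024UL * scale >> SCHED_CAPACITY_SHIFT);  /* 614, 614 */
      return 0;
  }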

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
arm64, topology: Updates to use DT bindings for EAS costing data
Robin Randhawa [Tue, 9 Jun 2015 14:10:00 +0000 (15:10 +0100)]
arm64, topology: Updates to use DT bindings for EAS costing data

With the bindings and the associated accessors to extract data from the
bindings in place, remove the static hard-coded data from topology.c and
use the accessors instead.

Signed-off-by: Robin Randhawa <robin.randhawa@arm.com>
sched: Support for extracting EAS energy costs from DT
Robin Randhawa [Mon, 29 Jun 2015 17:01:58 +0000 (18:01 +0100)]
sched: Support for extracting EAS energy costs from DT

This patch implements support for extracting energy cost data from DT.
The data should conform to the DT bindings for energy cost data needed
by EAS (energy aware scheduling).

Signed-off-by: Robin Randhawa <robin.randhawa@arm.com>
Documentation: DT bindings for energy model cost data required by EAS
Robin Randhawa [Mon, 29 Jun 2015 16:56:20 +0000 (17:56 +0100)]
Documentation: DT bindings for energy model cost data required by EAS

EAS (energy aware scheduling) provides the scheduler with an alternative
objective - energy efficiency - as opposed to its current performance
oriented objectives. EAS relies on a simple platform energy cost model
to guide scheduling decisions. The model only considers the CPU
subsystem.

This patch adds documentation describing DT bindings that should be used to
supply the scheduler with an energy cost model.

Signed-off-by: Robin Randhawa <robin.randhawa@arm.com>
sched: Disable energy-unfriendly nohz kicks
Morten Rasmussen [Tue, 3 Feb 2015 13:54:11 +0000 (13:54 +0000)]
sched: Disable energy-unfriendly nohz kicks

With energy-aware scheduling enabled nohz_kick_needed() generates many
nohz idle-balance kicks which lead to nothing when multiple tasks get
packed on a single cpu to save energy. This causes unnecessary wake-ups
and hence wastes energy. Make these conditions depend on !energy_aware()
for now until the energy-aware nohz story gets sorted out.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Consider a not over-utilized energy-aware system as balanced
Dietmar Eggemann [Sun, 10 May 2015 14:17:32 +0000 (15:17 +0100)]
sched: Consider a not over-utilized energy-aware system as balanced

In case the system operates below the tipping point indicator,
introduced in ("sched: Add over-utilization/tipping point
indicator"), bail out in find_busiest_group after the dst and src
group statistics have been checked.

There is simply no need to move usage around because all involved
cpus still have spare cycles available.

For an energy-aware system below its tipping point,  we rely on the
task placement of the wakeup path. This works well for short running
tasks.

The existence of long running tasks on one of the involved cpus lets
the system operate over its tipping point. To be able to move such
a task (whose load can't be used to average the load among the cpus)
from a src cpu with lower capacity than the dst_cpu, an additional
rule has to be implemented in need_active_balance.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
sched: Energy-aware wake-up task placement
Morten Rasmussen [Sat, 9 May 2015 19:03:19 +0000 (20:03 +0100)]
sched: Energy-aware wake-up task placement

Let available compute capacity and estimated energy impact select the
wake-up target cpu when energy-aware scheduling is enabled and the
system is not over-utilized (above the tipping point).

energy_aware_wake_cpu() attempts to find group of cpus with sufficient
compute capacity to accommodate the task and find a cpu with enough spare
capacity to handle the task within that group. Preference is given to
cpus with enough spare capacity at the current OPP. Finally, the energy
impact of the new target and the previous task cpu is compared to select
the wake-up target cpu.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Determine the current sched_group idle-state
Dietmar Eggemann [Tue, 27 Jan 2015 14:04:17 +0000 (14:04 +0000)]
sched: Determine the current sched_group idle-state

To estimate the energy consumption of a sched_group in
sched_group_energy() it is necessary to know which idle-state the group
is in when it is idle. For now, it is assumed that this is the current
idle-state (though it might be wrong). Based on the individual cpu
idle-states group_idle_state() finds the group idle-state.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
sched, cpuidle: Track cpuidle state index in the scheduler
Morten Rasmussen [Tue, 27 Jan 2015 13:48:07 +0000 (13:48 +0000)]
sched, cpuidle: Track cpuidle state index in the scheduler

The idle-state of each cpu is currently pointed to by rq->idle_state, but
there isn't any information in struct cpuidle_state that can be used to
look up the idle-state energy model data stored in struct
sched_group_energy. For this purpose it is necessary to store the idle
state index as well. Ideally, the idle-state data should be unified.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Add over-utilization/tipping point indicator
Morten Rasmussen [Sat, 9 May 2015 15:49:57 +0000 (16:49 +0100)]
sched: Add over-utilization/tipping point indicator

Energy-aware scheduling is only meant to be active while the system is
_not_ over-utilized. That is, there are spare cycles available to shift
tasks around based on their actual utilization to get a more
energy-efficient task distribution without depriving any tasks. When
above the tipping point, task placement is done the traditional way based
on load_avg, spreading the tasks across as many cpus as possible based
on priority-scaled load to preserve smp_nice. Below the tipping point we
want to use util_avg instead. We need to define a criterion for when we
make the switch.

The util_avg for each cpu converges towards 100% (1024) regardless of
how many additional tasks we may put on it. If we define
over-utilized as:

sum_{cpus}(rq.cfs.avg.util_avg) + margin > sum_{cpus}(rq.capacity)

some individual cpus may be over-utilized running multiple tasks even
when the above condition is false. That should be okay as long as we try
to spread the tasks out to avoid per-cpu over-utilization as much as
possible and if all tasks have the _same_ priority. If the latter isn't
true, we have to consider priority to preserve smp_nice.

For example, we could have n_cpus nice=-10 util_avg=55% tasks and
n_cpus/2 nice=0 util_avg=60% tasks. Balancing based on util_avg we are
likely to end up with the nice=-10 tasks sharing cpus and the nice=0
tasks getting their own, as we have 1.5*n_cpus tasks in total and
55%+55% is less over-utilized than 55%+60% for those cpus that have to
be shared. The system utilization is only 85% of the system capacity,
but we are breaking smp_nice.

To be sure not to break smp_nice, we have defined over-utilization
conservatively as when any cpu in the system is fully utilized at its
highest frequency instead:

cpu_rq(any).cfs.avg.util_avg + margin > cpu_rq(any).capacity

IOW, as soon as one cpu is (nearly) 100% utilized, we switch to load_avg
to factor in priority to preserve smp_nice.

With this definition, we can skip periodic load-balance as no cpu has an
always-running task when the system is not over-utilized. All tasks will
be periodic and we can balance them at wake-up. This conservative
condition does however mean that some scenarios that could benefit from
energy-aware decisions even if one cpu is fully utilized would not get
those benefits.

For systems where some cpus might have reduced capacity (RT pressure
and/or big.LITTLE), we want periodic load-balance checks as soon as just
a single cpu is fully utilized, as it might be one of those with reduced
capacity, in which case we want to migrate the task.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Estimate energy impact of scheduling decisions
Morten Rasmussen [Tue, 6 Jan 2015 17:34:05 +0000 (17:34 +0000)]
sched: Estimate energy impact of scheduling decisions

Adds a generic energy-aware helper function, energy_diff(), that
calculates energy impact of adding, removing, and migrating utilization
in the system.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Extend sched_group_energy to test load-balancing decisions
Morten Rasmussen [Fri, 2 Jan 2015 14:21:56 +0000 (14:21 +0000)]
sched: Extend sched_group_energy to test load-balancing decisions

Extended sched_group_energy() to support energy prediction with usage
(tasks) added/removed from a specific cpu or migrated between a pair of
cpus. Useful for load-balancing decision making.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Calculate energy consumption of sched_group
Morten Rasmussen [Thu, 18 Dec 2014 14:47:18 +0000 (14:47 +0000)]
sched: Calculate energy consumption of sched_group

For energy-aware load-balancing decisions it is necessary to know the
energy consumption estimates of groups of cpus. This patch introduces a
basic function, sched_group_energy(), which estimates the energy
consumption of the cpus in the group and any resources shared by the
members of the group.

NOTE: The function has five levels of indentation and breaks the
80-character limit. Refactoring is necessary.
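
A very rough userspace sketch of the kind of estimate involved (one capacity
state and one idle state; the names, weighting and numbers are illustrative,
the real function walks the energy model data of every relevant sched_group):

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024UL

  struct cap_state  { unsigned long cap, power; };  /* capacity / busy power */
  struct idle_state { unsigned long power; };       /* idle power */

  /* charge the busy fraction (util / capacity) at the busy power of the
   * current capacity state and the remainder at the idle-state power */
  static unsigned long group_energy(unsigned long group_util,
                                    struct cap_state cs, struct idle_state is)
  {
      unsigned long busy = group_util * SCHED_CAPACITY_SCALE / cs.cap;

      return (busy * cs.power +
              (SCHED_CAPACITY_SCALE - busy) * is.power) / SCHED_CAPACITY_SCALE;
  }

  int main(void)
  {
      struct cap_state  cs = { 1024, 600 };  /* full capacity, 600 units busy */
      struct idle_state is = { 10 };         /* 10 units idle */

      printf("energy at 25%% util: %lu\n", group_energy(256, cs, is)); /* 157 */
      return 0;
  }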

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Highest energy aware balancing sched_domain level pointer
Morten Rasmussen [Fri, 2 Jan 2015 17:08:52 +0000 (17:08 +0000)]
sched: Highest energy aware balancing sched_domain level pointer

Add another member to the family of per-cpu sched_domain shortcut
pointers. This one, sd_ea, points to the highest level at which energy
model is provided. At this level and all levels below all sched_groups
have energy model data attached.

Partial energy model information is possible but restricted to providing
energy model data for lower level sched_domains (sd_ea and below) and
leaving load-balancing on levels above to non-energy-aware
load-balancing. For example, it is possible to apply energy-aware
scheduling within each socket on a multi-socket system and let normal
scheduling handle load-balancing between sockets.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Relocated cpu_util() and change return type
Morten Rasmussen [Thu, 11 Dec 2014 15:25:29 +0000 (15:25 +0000)]
sched: Relocated cpu_util() and change return type

Move cpu_util() to an earlier position in fair.c and change return
type to unsigned long as negative usage doesn't make much sense. All
other load and capacity related functions use unsigned long including
the caller of cpu_util().

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Compute cpu capacity available at current frequency
Morten Rasmussen [Tue, 13 Jan 2015 14:11:28 +0000 (14:11 +0000)]
sched: Compute cpu capacity available at current frequency

capacity_orig_of() returns the max available compute capacity of a cpu.
For scale-invariant utilization tracking and energy-aware scheduling
decisions it is useful to know the compute capacity available at the
current OPP of a cpu.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
arm64: Cpu invariant scheduler load-tracking and capacity support
Juri Lelli [Thu, 30 Apr 2015 10:53:48 +0000 (11:53 +0100)]
arm64: Cpu invariant scheduler load-tracking and capacity support

Provides the scheduler with a cpu scaling correction factor for more
accurate load-tracking and cpu capacity handling.

The Energy Model (EM) (in fact the capacity value of the last element
of the capacity states vector of the core (MC) level sched_group_energy
structure) is used as the source for this cpu scaling factor.

The cpu capacity value depends on the micro-architecture and the
maximum frequency of the cpu.

The maximum frequency part should not be confused with the frequency
invariant scheduler load-tracking support which deals with frequency
related scaling due to DVFS functionality.

Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
arm: Cpu invariant scheduler load-tracking and capacity support
Dietmar Eggemann [Fri, 10 Jul 2015 12:57:19 +0000 (13:57 +0100)]
arm: Cpu invariant scheduler load-tracking and capacity support

Provides the scheduler with a cpu scaling correction factor for more
accurate load-tracking and cpu capacity handling.

The Energy Model (EM) (in fact the capacity value of the last element
of the capacity states vector of the core (MC) level sched_group_energy
structure) is used instead of the arm arch specific cpu_efficiency and
dtb property 'clock-frequency' values as the source for this cpu
scaling factor.

The cpu capacity value depends on the micro-architecture and the
maximum frequency of the cpu.

The maximum frequency part should not be confused with the frequency
invariant scheduler load-tracking support which deals with frequency
related scaling due to DVFS functionality.

Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
sched: Introduce SD_SHARE_CAP_STATES sched_domain flag
Morten Rasmussen [Tue, 13 Jan 2015 13:50:46 +0000 (13:50 +0000)]
sched: Introduce SD_SHARE_CAP_STATES sched_domain flag

cpufreq is currently keeping it a secret which cpus are sharing
clock source. The scheduler needs to know about clock domains as well
to become more energy aware. The SD_SHARE_CAP_STATES domain flag
indicates whether cpus belonging to the sched_domain share capacity
states (P-states).

There is no connection with cpufreq (yet). The flag must be set by
the arch specific topology code.

cc: Russell King <linux@arm.linux.org.uk>
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Initialize energy data structures
Dietmar Eggemann [Fri, 14 Nov 2014 16:20:20 +0000 (16:20 +0000)]
sched: Initialize energy data structures

The sched_group_energy (sge) pointer of the first sched_group (sg) in
the sched_domain (sd) is initialized to point to the appropriate (in
terms of sd level and cpu) sge data defined in the arch and so to the
correct part of the Energy Model (EM).

Energy-aware scheduling allows that a system has only EM data up to a
certain sd level (so called highest energy aware balancing sd level).
A check in init_sched_energy() enforces that all sd's below this sd
level contain EM data.

The 'int cpu' parameter of sched_domain_energy_f requires that
check_sched_energy_data() makes sure that all cpus spanned by a sg
are provisioned with the same EM data.

This patch has also been tested with feature FORCE_SD_OVERLAP enabled.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
sched: Introduce energy data structures
Dietmar Eggemann [Fri, 14 Nov 2014 16:08:45 +0000 (16:08 +0000)]
sched: Introduce energy data structures

The struct sched_group_energy represents the per sched_group related
data which is needed for energy aware scheduling. It contains:

  (1) number of elements of the idle state array
  (2) pointer to the idle state array which comprises 'power consumption'
      for each idle state
  (3) number of elements of the capacity state array
  (4) pointer to the capacity state array which comprises 'compute
      capacity and power consumption' tuples for each capacity state

The struct sched_group obtains a pointer to a struct sched_group_energy.
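
A kernel-style sketch of the layout described above (the field names are
illustrative and may differ from the actual patch):

  struct idle_state {
      unsigned long power;          /* (2) power consumption in this idle state */
  };

  struct capacity_state {
      unsigned long cap;            /* (4) compute capacity at this P-state */
      unsigned long power;          /* (4) busy power consumption at this P-state */
  };

  struct sched_group_energy {
      unsigned int nr_idle_states;           /* (1) */
      struct idle_state *idle_states;        /* (2) */
      unsigned int nr_cap_states;            /* (3) */
      struct capacity_state *cap_states;     /* (4) */
  };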

The function pointer sched_domain_energy_f is introduced into struct
sched_domain_topology_level which will allow the arch to pass a particular
struct sched_group_energy from the topology shim layer into the scheduler
core.

The function pointer sched_domain_energy_f has an 'int cpu' parameter
since the folding of two adjacent sd levels via sd degenerate doesn't work
for all sd levels. I.e. it is not possible for example to use this feature
to provide per-cpu energy in sd level DIE on ARM's TC2 platform.

It was discussed that the folding of sd levels approach is preferable
over the cpu parameter approach, simply because the user (the arch
specifying the sd topology table) can introduce fewer errors. But since
it is not working, the 'int cpu' parameter is the only way out. It's
possible to use the folding of sd levels approach for
sched_domain_flags_f and the cpu parameter approach for the
sched_domain_energy_f at the same time though. With the use of the
'int cpu' parameter, an extra check function has to be provided to make
sure that all cpus spanned by a sched group are provisioned with the same
energy data.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
sched: Make energy awareness a sched feature
Morten Rasmussen [Tue, 13 Jan 2015 13:45:51 +0000 (13:45 +0000)]
sched: Make energy awareness a sched feature

This patch introduces the ENERGY_AWARE sched feature, which is
implemented using jump labels when SCHED_DEBUG is defined. It is
statically set false when SCHED_DEBUG is not defined. Hence this doesn't
allow energy awareness to be enabled without SCHED_DEBUG. This
sched_feature knob will be replaced later with a more appropriate
control knob when things have matured a bit.

ENERGY_AWARE is based on per-entity load-tracking, hence FAIR_GROUP_SCHED
must be enabled. This dependency isn't checked at compile time yet.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Documentation for scheduler energy cost model
Morten Rasmussen [Tue, 13 Jan 2015 13:43:28 +0000 (13:43 +0000)]
sched: Documentation for scheduler energy cost model

This documentation patch provides an overview of the experimental
scheduler energy costing model, associated data structures, and a
reference recipe on how platforms can be characterized to derive energy
models.

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Prevent unnecessary active balance of single task in sched group
Morten Rasmussen [Thu, 2 Jul 2015 16:16:34 +0000 (17:16 +0100)]
sched: Prevent unnecessary active balance of single task in sched group

Scenarios where the busiest group has just one task and the local group
is idle, on topologies with sched groups with different numbers of cpus,
manage to dodge all load-balance bailout conditions, resulting in the
nr_balance_failed counter being incremented. This eventually causes a
pointless active migration of the task. This patch prevents this by not
incrementing the counter when the busiest group only has one task.
ASYM_PACKING migrations and migrations due to reduced capacity should
still take place as these are explicitly captured by
need_active_balance().

A better solution would be to not attempt the load-balance in the first
place, but that requires significant changes to the order of bailout
conditions and statistics gathering.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Enable idle balance to pull single task towards cpu with higher capacity
Dietmar Eggemann [Mon, 26 Jan 2015 19:47:28 +0000 (19:47 +0000)]
sched: Enable idle balance to pull single task towards cpu with higher capacity

We do not want to miss out on the ability to pull a single remaining
task from a potential source cpu towards an idle destination cpu. Add an
extra criteria to need_active_balance() to kick off active load balance
if the source cpu is over-utilized and has lower capacity than the
destination cpu.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
sched: Consider spare cpu capacity at task wake-up
Morten Rasmussen [Mon, 6 Jul 2015 14:01:10 +0000 (15:01 +0100)]
sched: Consider spare cpu capacity at task wake-up

find_idlest_group() selects the wake-up target group purely
based on group load which leads to suboptimal choices in low load
scenarios. An idle group with reduced capacity (due to RT tasks or
different cpu type) isn't necessarily a better target than a lightly
loaded group with higher capacity.

The patch adds spare capacity as an additional group selection
parameter. The target group is now selected based on the following
criteria:

1. Return the group containing the cpu with the most spare capacity,
provided that spare capacity is significant, if such a group exists.
Significant spare capacity is currently defined as at least 20% to spare.

2. Return the group with the lowest load, unless it is the local group
in which case NULL is returned and the search is continued at the next
(lower) level.
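
A userspace sketch of these two selection rules (the 20% threshold
interpretation, the field names and the "-1 means local group" convention are
illustrative):

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024UL

  struct group { unsigned long max_spare_cap, load; int is_local; };

  /* returns the chosen group index, or -1 for "stay in the local group" */
  static int pick_group(const struct group *g, int nr)
  {
      int best = -1, least_loaded = 0;

      for (int i = 0; i < nr; i++) {
          /* rule 1: most spare capacity, if significant (>= ~20% here) */
          if (g[i].max_spare_cap >= SCHED_CAPACITY_SCALE / 5 &&
              (best < 0 || g[i].max_spare_cap > g[best].max_spare_cap))
              best = i;
          if (g[i].load < g[least_loaded].load)
              least_loaded = i;
      }
      if (best >= 0)
          return best;
      /* rule 2: lowest load, unless that is the local group */
      return g[least_loaded].is_local ? -1 : least_loaded;
  }

  int main(void)
  {
      /* local group has little spare capacity; remote group has plenty
       * despite a higher load */
      struct group groups[2] = { { 100, 300, 1 }, { 400, 900, 0 } };

      printf("chosen group: %d\n", pick_group(groups, 2));  /* 1 */
      return 0;
  }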

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Add cpu capacity awareness to wakeup balancing
Morten Rasmussen [Sat, 9 May 2015 18:53:49 +0000 (19:53 +0100)]
sched: Add cpu capacity awareness to wakeup balancing

Wakeup balancing is completely unaware of cpu capacity, cpu utilization
and task utilization. The task is preferably placed on a cpu which is
idle in the instant the wakeup happens. New tasks
(SD_BALANCE_{FORK,EXEC}) are placed on an idle cpu in the idlest group if
such can be found, otherwise it goes on the least loaded one. Existing
tasks (SD_BALANCE_WAKE) are placed on the previous cpu or an idle cpu
sharing the same last level cache unless the wakee_flips heuristic in
wake_wide() decides to fall back to considering cpus outside SD_LLC.
Hence existing tasks are not guaranteed to get a chance to migrate to a
different group at wakeup in case the current one has reduced cpu
capacity (due to RT/IRQ pressure or a different uarch, e.g. ARM big.LITTLE).
They may eventually get pulled by other cpus doing
periodic/idle/nohz_idle balance, but it may take quite a while before it
happens.

This patch adds capacity awareness to find_idlest_{group,queue} (used by
SD_BALANCE_{FORK,EXEC} and SD_BALANCE_WAKE under certain circumstances)
such that groups/cpus that can accommodate the waking task based on task
utilization are preferred. In addition, wakeup of existing tasks
(SD_BALANCE_WAKE) is sent through find_idlest_{group,queue} also if the
task doesn't fit the capacity of the previous cpu to allow it to escape
(override wake_affine) when necessary instead of relying on
periodic/idle/nohz_idle balance to eventually sort it out.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
sched: Store system-wide maximum cpu capacity in root domain
Dietmar Eggemann [Thu, 7 May 2015 17:46:15 +0000 (18:46 +0100)]
sched: Store system-wide maximum cpu capacity in root domain

To be able to compare the capacity of the target cpu with the highest
cpu capacity of the system in the wakeup path, store the system-wide
maximum cpu capacity in the root domain.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
arm: Update arch_scale_cpu_capacity() to reflect change to define
Morten Rasmussen [Tue, 14 Apr 2015 15:25:31 +0000 (16:25 +0100)]
arm: Update arch_scale_cpu_capacity() to reflect change to define

arch_scale_cpu_capacity() is no longer a weak function but a #define
instead. Include the #define in topology.h.

cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
arm64: Enable frequency invariant scheduler load-tracking support
Dietmar Eggemann [Fri, 25 Sep 2015 16:15:11 +0000 (17:15 +0100)]
arm64: Enable frequency invariant scheduler load-tracking support

Defines arch_scale_freq_capacity() to use cpufreq implementation.

Including <linux/cpufreq.h> in topology.h like for the arm arch doesn't
work because of CONFIG_COMPAT=y (Kernel support for 32-bit EL0).
That's why cpufreq_scale_freq_capacity() has to be declared extern in
topology.h.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
arm: Enable frequency invariant scheduler load-tracking support
Dietmar Eggemann [Wed, 23 Sep 2015 11:47:48 +0000 (12:47 +0100)]
arm: Enable frequency invariant scheduler load-tracking support

Defines arch_scale_freq_capacity() to use cpufreq implementation.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
cpufreq: Frequency invariant scheduler load-tracking support
Dietmar Eggemann [Thu, 17 Sep 2015 15:10:56 +0000 (16:10 +0100)]
cpufreq: Frequency invariant scheduler load-tracking support

Implements cpufreq_scale_freq_capacity() to provide the scheduler with a
frequency scaling correction factor for more accurate load-tracking.

The factor is:

(current_freq(cpu) << SCHED_CAPACITY_SHIFT) / max_freq(cpu)
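
For example (illustrative numbers), a cpu currently running at 500 MHz out of
a 1 GHz maximum yields a factor of (500000 << 10) / 1000000 = 512, i.e. half
of SCHED_CAPACITY_SCALE.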

In fact, freq_scale should be a struct cpufreq_policy data member. But
this would require the scheduler hot path (__update_load_avg()) to grab
the cpufreq lock. This can be avoided by using per-cpu data initialized
to SCHED_CAPACITY_SCALE for freq_scale.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
sched/fair: Fix new task's load avg removed from source CPU in wake_up_new_task()
Yuyang Du [Wed, 16 Dec 2015 23:34:27 +0000 (07:34 +0800)]
sched/fair: Fix new task's load avg removed from source CPU in wake_up_new_task()

If a newly created task is selected to go to a different CPU in fork
balance when it wakes up the first time, its load averages should
not be removed from the source CPU since they are never added to
it before. The same is also applicable to a never used group entity.

Fix it in remove_entity_load_avg(): when entity's last_update_time
is 0, simply return. This should precisely identify the case in
question, because in other migrations, the last_update_time is set
to 0 after remove_entity_load_avg().

Reported-by: Steve Muckle <steve.muckle@linaro.org>
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
[peterz: cfs_rq_last_update_time]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <Juri.Lelli@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Link: http://lkml.kernel.org/r/20151216233427.GJ28098@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge remote-tracking branch 'lsk/linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4
Mark Brown [Fri, 18 Mar 2016 09:48:58 +0000 (09:48 +0000)]
Merge remote-tracking branch 'lsk/linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4

Merge tag 'v4.4.5' into linux-linaro-lsk-v4.4
Mark Brown [Fri, 18 Mar 2016 09:45:54 +0000 (09:45 +0000)]
Merge tag 'v4.4.5' into linux-linaro-lsk-v4.4

This is the 4.4.5 stable release

# gpg: Signature made Wed 09 Mar 2016 23:36:03 GMT using RSA key ID 6092693E
# gpg: Good signature from "Greg Kroah-Hartman (Linux kernel stable release signing key) <greg@kroah.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 647F 2865 4894 E3BD 4571  99BE 38DB BDC8 6092 693E

Merge remote-tracking branch 'lsk/v4.4/topic/ro-vdso' into linux-linaro-lsk-v4.4
Mark Brown [Thu, 17 Mar 2016 18:54:33 +0000 (18:54 +0000)]
Merge remote-tracking branch 'lsk/v4.4/topic/ro-vdso' into linux-linaro-lsk-v4.4

ARM/vdso: Mark the vDSO code read-only after init
David Brown [Wed, 16 Mar 2016 23:10:28 +0000 (17:10 -0600)]
ARM/vdso: Mark the vDSO code read-only after init

commit 11bf9b865898961cee60a41c483c9f27ec76e12e upstream.

Although the ARM vDSO is cleanly separated by code/data with the code
being read-only in userspace mappings, the code page is still writable
from the kernel.

There have been exploits (such as http://itszn.com/blog/?p=21) that
take advantage of this on x86 to go from a bad kernel write to full
root.

Prevent this specific exploit class on ARM as well by putting the vDSO
code page in post-init read-only memory as well.

Before:
vdso: 1 text pages at base 80927000
root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
---[ Modules ]---
---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80600000           5M     ro x  SHD
0x80600000-0x80800000           2M     ro NX SHD
0x80800000-0xbe000000         984M     RW NX SHD

After:
vdso: 1 text pages at base 8072b000
root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
---[ Modules ]---
---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80600000           5M     ro x  SHD
0x80600000-0x80800000           2M     ro NX SHD
0x80800000-0xbe000000         984M     RW NX SHD

Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
PaX Team, Brad Spengler, and Kees Cook.

Signed-off-by: David Brown <david.brown@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1455748879-21872-8-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
x86/vdso: Mark the vDSO code read-only after init
Kees Cook [Wed, 16 Mar 2016 23:10:27 +0000 (17:10 -0600)]
x86/vdso: Mark the vDSO code read-only after init

commit 018ef8dcf3de5f62e2cc1a9273cc27e1c6ba8de5 upstream.

The vDSO does not need to be writable after __init, so mark it as
__ro_after_init. The result kills the exploit method of writing to the
vDSO from kernel space resulting in userspace executing the modified code,
as shown here to bypass SMEP restrictions: http://itszn.com/blog/?p=21

The memory map (with added vDSO address reporting) shows the vDSO moving
into read-only memory:

Before:
[    0.143067] vDSO @ ffffffff82004000
[    0.143551] vDSO @ ffffffff82006000
---[ High Kernel Mapping ]---
0xffffffff80000000-0xffffffff81000000      16M                         pmd
0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
0xffffffff81e00000-0xffffffff81e05000      20K   ro             GLB NX pte
0xffffffff81e05000-0xffffffff82000000    2028K   ro                 NX pte
0xffffffff82000000-0xffffffff8214f000    1340K   RW             GLB NX pte
0xffffffff8214f000-0xffffffff82281000    1224K   RW                 NX pte
0xffffffff82281000-0xffffffff82400000    1532K   RW             GLB NX pte
0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
0xffffffff83200000-0xffffffffc0000000     974M                         pmd

After:
[    0.145062] vDSO @ ffffffff81da1000
[    0.146057] vDSO @ ffffffff81da4000
---[ High Kernel Mapping ]---
0xffffffff80000000-0xffffffff81000000      16M                         pmd
0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
0xffffffff81e00000-0xffffffff81e0b000      44K   ro             GLB NX pte
0xffffffff81e0b000-0xffffffff82000000    2004K   ro                 NX pte
0xffffffff82000000-0xffffffff8214c000    1328K   RW             GLB NX pte
0xffffffff8214c000-0xffffffff8227e000    1224K   RW                 NX pte
0xffffffff8227e000-0xffffffff82400000    1544K   RW             GLB NX pte
0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
0xffffffff83200000-0xffffffffc0000000     974M                         pmd

Based on work by PaX Team and Brad Spengler.

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Andy Lutomirski <luto@kernel.org>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-7-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: David Brown <david.brown@linaro.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
lkdtm: Verify that '__ro_after_init' works correctly
Kees Cook [Wed, 16 Mar 2016 23:10:26 +0000 (17:10 -0600)]
lkdtm: Verify that '__ro_after_init' works correctly

commit 7cca071ccbd2a293ea69168ace6abbcdce53098e upstream.

The new __ro_after_init section should be writable before init, but
not after. Validate that it gets updated at init and can't be written
to afterwards.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-6-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: David Brown <david.brown@linaro.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
arch: Introduce post-init read-only memory
Kees Cook [Wed, 16 Mar 2016 23:10:25 +0000 (17:10 -0600)]
arch: Introduce post-init read-only memory

commit c74ba8b3480da6ddaea17df2263ec09b869ac496 upstream.

One of the easiest ways to protect the kernel from attack is to reduce
the internal attack surface exposed when a "write" flaw is available. By
making as much of the kernel read-only as possible, we reduce the
attack surface.

Many things are written to only during __init, and never changed
again. These cannot be made "const" since the compiler will do the wrong
thing (we do actually need to write to them). Instead, move these items
into a memory region that will be made read-only during mark_rodata_ro()
which happens after all kernel __init code has finished.

This introduces __ro_after_init as a way to mark such memory, and adds
some documentation about the existing __read_mostly marking.

This improves the security of the Linux kernel by marking formerly
read-write memory regions as read-only on a fully booted up system.
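
Under the hood the annotation just places the object in a dedicated
data section that mark_rodata_ro() later write-protects. A sketch along
those lines; the section name is an assumption here, not quoted from
the patch:

  /* e.g. in a generic header */
  #define __ro_after_init __attribute__((__section__(".data..ro_after_init")))

  /* usage: written once during __init, effectively const afterwards */
  static int max_slots __ro_after_init;

  static int __init setup_slots(void)
  {
          max_slots = 16;         /* still writable at this point */
          return 0;
  }
  early_initcall(setup_slots);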

Based on work by PaX Team and Brad Spengler.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-5-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: David Brown <david.brown@linaro.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
8 years agox86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option
Kees Cook [Wed, 16 Mar 2016 23:10:24 +0000 (17:10 -0600)]
x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option

commit 9ccaf77cf05915f51231d158abfd5448aedde758 upstream.

This removes the CONFIG_DEBUG_RODATA option and makes it always enabled.

This simplifies the code and also makes it clearer that read-only mapped
memory is just as fundamental a security feature in kernel-space as it is
in user-space.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-4-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: David Brown <david.brown@linaro.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
8 years agomm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kernel mappings
Kees Cook [Wed, 16 Mar 2016 23:10:23 +0000 (17:10 -0600)]
mm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kernel mappings

commit d2aa1acad22f1bdd0cfa67b3861800e392254454 upstream.

It may be useful to debug writes to the readonly sections of memory,
so provide a cmdline "rodata=off" to allow for this. This can be
expanded in the future to support "log" and "write" modes, but that
will need to be architecture-specific.

This also makes KDB software breakpoints more usable, as read-only
mappings can now be disabled on any kernel.
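
A sketch of how such a boot parameter is typically wired up; the names
below are close to, but not guaranteed to match, the actual patch:

  static bool rodata_enabled = true;

  static int __init set_debug_rodata(char *str)
  {
          return strtobool(str, &rodata_enabled);
  }
  __setup("rodata=", set_debug_rodata);

  /* mark_rodata_ro() is then only invoked when rodata_enabled is true */

With "rodata=off" on the kernel command line the write-protection of
kernel text/rodata is simply skipped, which is what makes KDB software
breakpoints usable again.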

Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-3-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: David Brown <david.brown@linaro.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
8 years agoasm-generic: Consolidate mark_rodata_ro()
Kees Cook [Wed, 16 Mar 2016 23:10:22 +0000 (17:10 -0600)]
asm-generic: Consolidate mark_rodata_ro()

commit e267d97b83d9cecc16c54825f9f3ac7f72dc1e1e upstream.

Instead of defining mark_rodata_ro() in each architecture, consolidate it.
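
Conceptually the consolidation moves the declaration into a common
header so each architecture only keeps its implementation; a sketch,
assuming the existing config symbol:

  /* common header (sketch) */
  #ifdef CONFIG_DEBUG_RODATA
  void mark_rodata_ro(void);
  #endif

  /* per-architecture code then only provides the definition */
  void mark_rodata_ro(void)
  {
          /* arch-specific: write-protect the rodata (and text) mappings */
  }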

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Gross <agross@codeaurora.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ashok Kumar <ashoks@broadcom.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Brown <david.brown@linaro.org>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Helge Deller <deller@gmx.de>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-parisc@vger.kernel.org
Link: http://lkml.kernel.org/r/1455748879-21872-2-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: David Brown <david.brown@linaro.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
8 years ago Merge tag 'v4.4.6' into linux-linaro-lsk-v4.4
Alex Shi [Thu, 17 Mar 2016 04:51:14 +0000 (12:51 +0800)]
 Merge tag 'v4.4.6' into linux-linaro-lsk-v4.4

 This is the 4.4.6 stable release

8 years agoLinux 4.4.6
Greg Kroah-Hartman [Wed, 16 Mar 2016 15:43:17 +0000 (08:43 -0700)]
Linux 4.4.6

8 years agold-version: Fix awk regex compile failure
James Hogan [Tue, 8 Mar 2016 16:47:53 +0000 (16:47 +0000)]
ld-version: Fix awk regex compile failure

commit 4b7b1ef2c2f83d702272555e8adb839a50ba0f8e upstream.

The ld-version.sh script fails on some versions of awk with the
following error, resulting in build failures for MIPS:

awk: scripts/ld-version.sh: line 4: regular expression compile failed (missing '(')

This is due to the regular expression ".*)", meant to strip off the
beginning of the ld version string up to the close bracket; however,
brackets have a meaning in regular expressions, so let's escape it so
that awk doesn't expect a corresponding open bracket.

Fixes: ccbef1674a15 ("Kbuild, lto: add ld-version and ld-ifversion ...")
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Tested-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
Cc: Michal Marek <mmarek@suse.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kbuild@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12838/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agotarget: Drop incorrect ABORT_TASK put for completed commands
Nicholas Bellinger [Sun, 6 Mar 2016 04:00:12 +0000 (20:00 -0800)]
target: Drop incorrect ABORT_TASK put for completed commands

commit 7f54ab5ff52fb0b91569bc69c4a6bc5cac1b768d upstream.

This patch fixes a recent ABORT_TASK regression associated
with commit febe562c, where a left-over target_put_sess_cmd()
would still be called when __target_check_io_state() detected
a command has already been completed, and explicit ABORT must
be avoided.

Note commit febe562c dropped the local kref_get_unless_zero()
check in core_tmr_abort_task(), but did not drop this extra
corresponding target_put_sess_cmd() in the failure path.

So go ahead and drop this now bogus target_put_sess_cmd(),
and avoid this potential use-after-free.

Reported-by: Dan Lane <dracodan@gmail.com>
Cc: Quinn Tran <quinn.tran@qlogic.com>
Cc: Himanshu Madhani <himanshu.madhani@qlogic.com>
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Andy Grover <agrover@redhat.com>
Cc: Mike Christie <mchristi@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoblock: don't optimize for non-cloned bio in bio_get_last_bvec()
Ming Lei [Sat, 12 Mar 2016 14:56:19 +0000 (22:56 +0800)]
block: don't optimize for non-cloned bio in bio_get_last_bvec()

commit 90d0f0f11588ec692c12f9009089b398be395184 upstream.

For a !BIO_CLONED bio we can use .bi_vcnt safely, but that doesn't
mean we can simply return .bi_io_vec[.bi_vcnt - 1], because the start
position may have been moved in the middle of the bvec, such as when
splitting in the middle of a bvec.
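
A conceptual sketch of the safe approach: walk the segments from
bi_iter so that a start position moved into the middle of a bvec is
honoured. This is illustrative, not the exact upstream helper:

  static void get_last_bvec_sketch(struct bio *bio, struct bio_vec *bv)
  {
          struct bvec_iter iter;
          struct bio_vec bvec;

          bio_for_each_segment(bvec, bio, iter)
                  *bv = bvec;     /* the last assignment wins */
  }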

Fixes: 7bcd79ac50d9(block: bio: introduce helpers to get the 1st and last bvec)
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoMIPS: smp.c: Fix uninitialised temp_foreign_map
James Hogan [Fri, 4 Mar 2016 10:10:51 +0000 (10:10 +0000)]
MIPS: smp.c: Fix uninitialised temp_foreign_map

commit d825c06bfe8b885b797f917ad47365d0e9c21fbb upstream.

When calculate_cpu_foreign_map() recalculates the cpu_foreign_map
cpumask it uses the local variable temp_foreign_map without initialising
it to zero. Since the calculation only ever sets bits in this cpumask
any existing bits at that memory location will remain set and find their
way into cpu_foreign_map too. This could potentially lead to cache
operations suboptimally doing smp calls to multiple VPEs in the same
core, even though the VPEs share primary caches.

Therefore initialise temp_foreign_map using cpumask_clear() before use.
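
A sketch of the shape of the fix; cpu_is_foreign() below is a
hypothetical predicate standing in for the real per-CPU test:

  static void recalc_foreign_map_sketch(void)
  {
          cpumask_t temp_foreign_map;
          int i;

          /* the missing step: start from an all-clear mask on the stack */
          cpumask_clear(&temp_foreign_map);

          for_each_online_cpu(i)
                  if (cpu_is_foreign(i))  /* hypothetical predicate */
                          cpumask_set_cpu(i, &temp_foreign_map);

          cpumask_copy(&cpu_foreign_map, &temp_foreign_map);
  }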

Fixes: cccf34e9411c ("MIPS: c-r4k: Fix cache flushing for MT cores")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12759/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoMIPS: Fix build error when SMP is used without GIC
Hauke Mehrtens [Sun, 6 Mar 2016 21:28:56 +0000 (22:28 +0100)]
MIPS: Fix build error when SMP is used without GIC

commit 7a50e4688dabb8005df39b2b992d76629b8af8aa upstream.

The MIPS_GIC_IPI should only be selected when MIPS_GIC is also
selected, otherwise it results in a compile error. smp-gic.c uses some
functions from include/linux/irqchip/mips-gic.h like
plat_ipi_call_int_xlate() which are only added to the header file when
MIPS_GIC is set. The Lantiq SoC does not use the GIC, but supports SMP.
The calls to the functions from smp-gic.c are already protected by
some #ifdefs.

The first part of this was introduced in commit 72e20142b2bf ("MIPS:
Move GIC IPI functions out of smp-cmp.c")

Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12774/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoovl: fix getcwd() failure after unsuccessful rmdir
Rui Wang [Fri, 8 Jan 2016 15:09:59 +0000 (23:09 +0800)]
ovl: fix getcwd() failure after unsuccessful rmdir

commit ce9113bbcbf45a57c082d6603b9a9f342be3ef74 upstream.

ovl_remove_upper() should do d_drop() only after it successfully
removes the dir, otherwise a subsequent getcwd() system call will
fail, breaking userspace programs.
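
The ordering that is needed, as a sketch (fragment, not the full
ovl_remove_upper()):

  err = is_dir ? vfs_rmdir(dir, upper) : vfs_unlink(dir, upper, NULL);

  /* only forget the dentry once the object is really gone */
  if (!err)
          d_drop(dentry);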

This is to fix: https://bugzilla.kernel.org/show_bug.cgi?id=110491

Signed-off-by: Rui Wang <rui.y.wang@intel.com>
Reviewed-by: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoovl: copy new uid/gid into overlayfs runtime inode
Konstantin Khlebnikov [Sun, 31 Jan 2016 13:21:29 +0000 (16:21 +0300)]
ovl: copy new uid/gid into overlayfs runtime inode

commit b81de061fa59f17d2730aabb1b84419ef3913810 upstream.

Overlayfs must update uid/gid after chown, otherwise functions
like inode_owner_or_capable() will check user against stale uid.
Caught by xfstests generic/087, which chowns a file and then calls utimes.
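
The shape of the fix is simply to propagate the new owner into the
overlay inode once the underlying setattr has succeeded; a sketch, with
the field accesses assumed from the description:

  err = notify_change(upperdentry, attr, NULL);
  if (!err) {
          /* keep the overlay inode's owner in sync with the upper inode */
          inode->i_uid = upperdentry->d_inode->i_uid;
          inode->i_gid = upperdentry->d_inode->i_gid;
  }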

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agouserfaultfd: don't block on the last VM updates at exit time
Linus Torvalds [Tue, 1 Mar 2016 19:56:22 +0000 (11:56 -0800)]
userfaultfd: don't block on the last VM updates at exit time

commit 39680f50ae54cbbb6e72ac38b8329dd3eb9105f4 upstream.

The exit path will do some final updates to the VM of an exiting process
to inform others of the fact that the process is going away.

That happens, for example, for robust futex state cleanup, but also if
the parent has asked for a TID update when the process exits (we clear
the child tid field in user space).

However, at the time we do those final VM accesses, we've already
stopped accepting signals, so the usual "stop waiting for userfaults on
signal" code in fs/userfaultfd.c no longer works, and the process can
become an unkillable zombie waiting for something that will never
happen.

To solve this, just make handle_userfault() abort any user fault
handling if we're already in the exit path past the signal handling
state being dead (marked by PF_EXITING).
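
The check itself is small; a simplified sketch of what is added near
the top of handle_userfault():

  /* exiting task: signals are already dead, never wait for userspace */
  if (current->flags & PF_EXITING)
          goto out;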

This VM special case is pretty ugly, and it is possible that we should
look at finalizing signals later (or move the VM final accesses
earlier).  But in the meantime this is a fairly minimally intrusive fix.

Reported-and-tested-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agopowerpc/powernv: Fix OPAL_CONSOLE_FLUSH prototype and usages
Russell Currey [Wed, 13 Jan 2016 01:04:32 +0000 (12:04 +1100)]
powerpc/powernv: Fix OPAL_CONSOLE_FLUSH prototype and usages

commit c88c5d43732a0356f99e5e4d1ad62ab1ea516b81 upstream.

The recently added OPAL API call, OPAL_CONSOLE_FLUSH, originally took no
parameters and returned nothing.  The call was updated to accept the
terminal number to flush, and returned various values depending on the
state of the output buffer.

The prototype has been updated and its usage in the OPAL kmsg dumper has
been modified to support its new behaviour as an incremental flush.
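
A sketch of the new call shape; the return-code semantics in the
comment are illustrative, since the exact values are not spelled out
here:

  int64_t opal_console_flush(uint64_t term_number);

  /* callers keep flushing while firmware reports buffered output left */
  do {
          rc = opal_console_flush(termno);
  } while (rc > 0);       /* illustrative: positive rc = more data pending */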

Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agopowerpc/powernv: Add a kmsg_dumper that flushes console output on panic
Russell Currey [Fri, 27 Nov 2015 06:23:07 +0000 (17:23 +1100)]
powerpc/powernv: Add a kmsg_dumper that flushes console output on panic

commit affddff69c55eb68969448f35f59054a370bc7c1 upstream.

On BMC machines, console output is controlled by the OPAL firmware and is
only flushed when its pollers are called.  When the kernel is in a panic
state, it no longer calls these pollers and thus console output does not
completely flush, causing some output from the panic to be lost.

Output is only actually lost when the kernel is configured to not power off
or reboot after panic (i.e. CONFIG_PANIC_TIMEOUT is set to 0) since OPAL
flushes the console buffer as part of its power down routines.  Before this
patch, however, only partial output would be printed during the timeout wait.

This patch adds a new kmsg_dumper which gets called at panic time to ensure
panic output is not lost.  It accomplishes this by calling OPAL_CONSOLE_FLUSH
in the OPAL API, and if that is not available, the pollers are called enough
times to (hopefully) completely flush the buffer.
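
A sketch of the mechanism: a dumper callback registered with the kmsg
infrastructure, invoked at panic time, where the OPAL console is pushed
out (the flush details are simplified away):

  static void opal_kmsg_dump_sketch(struct kmsg_dumper *dumper,
                                    enum kmsg_dump_reason reason)
  {
          if (reason != KMSG_DUMP_PANIC)
                  return;
          /* flush via OPAL_CONSOLE_FLUSH, or run the pollers repeatedly */
  }

  static struct kmsg_dumper opal_kmsg_dumper = {
          .dump = opal_kmsg_dump_sketch,
  };

  /* registered once at platform init time */
  kmsg_dump_register(&opal_kmsg_dumper);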

The flushing mechanism will only affect output printed at and before the
kmsg_dump call in kernel/panic.c:panic().  As such, the "end Kernel panic"
message may still be truncated as follows:

>Call Trace:
>[c000000f1f603b00] [c0000000008e9458] dump_stack+0x90/0xbc (unreliable)
>[c000000f1f603b30] [c0000000008e7e78] panic+0xf8/0x2c4
>[c000000f1f603bc0] [c000000000be4860] mount_block_root+0x288/0x33c
>[c000000f1f603c80] [c000000000be4d14] prepare_namespace+0x1f4/0x254
>[c000000f1f603d00] [c000000000be43e8] kernel_init_freeable+0x318/0x350
>[c000000f1f603dc0] [c00000000000bd74] kernel_init+0x24/0x130
>[c000000f1f603e30] [c0000000000095b0] ret_from_kernel_thread+0x5c/0xac
>---[ end Kernel panic - not

This functionality is implemented as a kmsg_dumper as it seems to be the
most sensible way to introduce platform-specific functionality to the
panic function.

Signed-off-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agopowerpc: Fix dedotify for binutils >= 2.26
Andreas Schwab [Fri, 5 Feb 2016 18:50:03 +0000 (19:50 +0100)]
powerpc: Fix dedotify for binutils >= 2.26

commit f15838e9cac8f78f0cc506529bb9d3b9fa589c1f upstream.

Since binutils 2.26 BFD is doing suffix merging on STRTAB sections.  But
dedotify modifies the symbol names in place, which can also modify
unrelated symbols with a name that matches a suffix of a dotted name.  To
remove the leading dot of a symbol name we can just increment the pointer
into the STRTAB section instead.
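
A sketch of the difference (the symbol-table walk is elided; the old
memmove() shown in the comment is a paraphrase of the previous code):

  char *name = strtab + syms[i].st_name;

  if (name[0] == '.') {
          /* old: memmove(name, name + 1, strlen(name)); -- rewrites the
           * string table in place and corrupts suffix-merged entries */
          syms[i].st_name++;      /* new: just skip the leading dot */
  }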

Backport to all stables to avoid breakage when people update their
binutils - mpe.

Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoRevert "drm/radeon/pm: adjust display configuration after powerstate"
Alex Deucher [Tue, 8 Mar 2016 16:31:00 +0000 (11:31 -0500)]
Revert "drm/radeon/pm: adjust display configuration after powerstate"

commit d74e766e1916d0e09b86e4b5b9d0f819628fd546 upstream.

This reverts commit 39d4275058baf53e89203407bf3841ff2c74fa32.

This caused a regression on some older hardware.

bug:
https://bugzilla.kernel.org/show_bug.cgi?id=113891

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agodrm/radeon: Fix error handling in radeon_flip_work_func.
Mario Kleiner [Tue, 1 Mar 2016 20:31:17 +0000 (21:31 +0100)]
drm/radeon: Fix error handling in radeon_flip_work_func.

commit 1e1490a38504419e349caa1b7d55d5c141a9bccb upstream.

This is a port of the patch "drm/amdgpu: Fix error handling in amdgpu_flip_work_func."
to fix the following problem for radeon as well which was
reported against amdgpu:

The patch e1d09dc0ccc6: "drm/amdgpu: Don't hang in
amdgpu_flip_work_func on disabled crtc." from Feb 19, 2016, leads to
the following static checker warning, as reported by Dan Carpenter in
https://lists.freedesktop.org/archives/dri-devel/2016-February/101987.html

drivers/gpu/drm/amd/amdgpu/amdgpu_display.c:127 amdgpu_flip_work_func()     warn: should this be 'repcnt == -1'
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c:136 amdgpu_flip_work_func() error: double unlock 'spin_lock:&crtc->dev->event_lock'
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c:136 amdgpu_flip_work_func() error: double unlock 'irqsave:flags'

This patch fixes both reported problems:

Change post-decrement of repcnt to pre-decrement, so
it can't underflow anymore, but still performs up to
three repetitions - three is the maximum one could
expect in practice.

Move the spin_unlock_irqrestore to where it actually
belongs.

Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Michel Dänzer <michel.daenzer@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agodrm/amdgpu: Fix error handling in amdgpu_flip_work_func.
Mario Kleiner [Tue, 1 Mar 2016 20:31:16 +0000 (21:31 +0100)]
drm/amdgpu: Fix error handling in amdgpu_flip_work_func.

commit 90e94b160c7f647ddffda707f5e3c0c66c170df8 upstream.

The patch e1d09dc0ccc6: "drm/amdgpu: Don't hang in
amdgpu_flip_work_func on disabled crtc." from Feb 19, 2016, leads to
the following static checker warning, as reported by Dan Carpenter in
https://lists.freedesktop.org/archives/dri-devel/2016-February/101987.html

drivers/gpu/drm/amd/amdgpu/amdgpu_display.c:127 amdgpu_flip_work_func() warn: should this be 'repcnt == -1'
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c:136 amdgpu_flip_work_func() error: double unlock 'spin_lock:&crtc->dev->event_lock'
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c:136 amdgpu_flip_work_func() error: double unlock 'irqsave:flags'

This patch fixes both reported problems:

Change post-decrement of repcnt to pre-decrement, so
it can't underflow anymore, but still performs up to
three repetitions - three is the maximum one could
expect in practice.

Move the spin_unlock_irqrestore to where it actually
belongs.

Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Michel Dänzer <michel.daenzer@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoRevert "drm/radeon: call hpd_irq_event on resume"
Linus Torvalds [Mon, 7 Mar 2016 21:15:09 +0000 (13:15 -0800)]
Revert "drm/radeon: call hpd_irq_event on resume"

commit 256faedcfd646161477d47a1a78c32a562d2e845 upstream.

This reverts commit dbb17a21c131eca94eb31136eee9a7fe5aff00d9.

It turns out that commit can cause problems for systems with multiple
GPUs, and causes X to hang on at least a HP Pavilion dv7 with hybrid
graphics.

This got noticed originally in 4.4.4, where this patch had already
gotten back-ported, but 4.5-rc7 was verified to have the same problem.

Alexander Deucher says:
 "It looks like you have a muxed system so I suspect what's happening is
  that one of the display is being reported as connected for both the
  IGP and the dGPU and then the desktop environment gets confused or
  there some sort problem in the detect functions since the mux is not
  switched to the dGPU.  I don't see an easy fix unless Dave has any
  ideas.  I'd say just revert for now"

Reported-by: Jörg-Volker Peetz <jvpeetz@web.de>
Acked-by: Alexander Deucher <Alexander.Deucher@amd.com>
Cc: Dave Airlie <airlied@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agox86/mm: Fix slow_virt_to_phys() for X86_PAE again
Dexuan Cui [Thu, 25 Feb 2016 09:58:12 +0000 (01:58 -0800)]
x86/mm: Fix slow_virt_to_phys() for X86_PAE again

commit bf70e5513dfea29c3682e7eb3dbb45f0723bac09 upstream.

"d1cd12108346: x86, pageattr: Prevent overflow in slow_virt_to_phys() for
X86_PAE" was unintentionally removed by the recent "34437e67a672: x86/mm: Fix
slow_virt_to_phys() to handle large PAT bit".

And, the variable 'phys_addr' was defined as "unsigned long" by mistake -- it should
be "phys_addr_t".

As a result, Hyper-V network driver in 32-PAE Linux guest can't work again.
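
The key point is that a physical address above 4G does not fit in a
32-bit unsigned long under PAE; a sketch of the corrected computation
(fragment, with pfn and vaddr coming from the page-table walk):

  phys_addr_t phys_addr;                  /* was: unsigned long */
  unsigned long offset = vaddr & ~PAGE_MASK;

  phys_addr = (phys_addr_t)pfn << PAGE_SHIFT;
  return phys_addr | offset;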

Fixes: commit 34437e67a672: "x86/mm: Fix slow_virt_to_phys() to handle large PAT bit"
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Reviewed-by: Toshi Kani <toshi.kani@hpe.com>
Cc: olaf@aepfle.de
Cc: jasowang@redhat.com
Cc: driverdev-devel@linuxdriverproject.org
Cc: linux-mm@kvack.org
Cc: apw@canonical.com
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: K. Y. Srinivasan <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Link: http://lkml.kernel.org/r/1456394292-9030-1-git-send-email-decui@microsoft.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agogpu: ipu-v3: Do not bail out on missing optional port nodes
Philipp Zabel [Mon, 4 Jan 2016 16:32:26 +0000 (17:32 +0100)]
gpu: ipu-v3: Do not bail out on missing optional port nodes

commit 17e0521750399205f432966e602e125294879cdd upstream.

The port nodes are documented as optional, treat them accordingly.

Reported-by: Martin Fuzzey <mfuzzey@parkeon.com>
Reported-by: Chris Healy <Chris.Healy@zii.aero>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Fixes: 304e6be652e2 ("gpu: ipu-v3: Assign of_node of child platform devices to corresponding ports")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agomac80211: Fix Public Action frame RX in AP mode
Jouni Malinen [Mon, 29 Feb 2016 22:29:00 +0000 (00:29 +0200)]
mac80211: Fix Public Action frame RX in AP mode

commit 1ec7bae8bec9b72e347e01330c745ab5cdd66f0e upstream.

Public Action frames use special rules for how the BSSID field (Address
3) is set. A wildcard BSSID is used in cases where the transmitter and
recipient are not members of the same BSS. As such, we need to accept
Public Action frames with wildcard BSSID.

Commit db8e17324553 ("mac80211: ignore frames between TDLS peers when
operating as AP") added a rule that drops Action frames to TDLS-peers
based on an Action frame having different DA (Address 1) and BSSID
(Address 3) values. This is not correct since it misses the possibility
of BSSID being a wildcard BSSID in which case the Address 1 would not
necessarily match.

Fix this by allowing mac80211 to accept wildcard BSSID in an Action
frame when in AP mode.

Fixes: db8e17324553 ("mac80211: ignore frames between TDLS peers when operating as AP")
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agomac80211: check PN correctly for GCMP-encrypted fragmented MPDUs
Johannes Berg [Fri, 26 Feb 2016 21:13:40 +0000 (22:13 +0100)]
mac80211: check PN correctly for GCMP-encrypted fragmented MPDUs

commit 9acc54beb474c81148e2946603d141cf8716b19f upstream.

Just like for CCMP we need to check that for GCMP the fragments
have PNs that increment by one; the spec was updated to fix this
security issue and now has the following text:

The receiver shall discard MSDUs and MMPDUs whose constituent
MPDU PN values are not incrementing in steps of 1.

Adapt the code for CCMP to work for GCMP as well, luckily the
relevant fields already alias each other so no code duplication
is needed (just check the aliasing with BUILD_BUG_ON.)

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agomac80211: minstrel_ht: fix a logic error in RTS/CTS handling
Felix Fietkau [Wed, 24 Feb 2016 11:07:17 +0000 (12:07 +0100)]
mac80211: minstrel_ht: fix a logic error in RTS/CTS handling

commit c36dd3eaf1a674a45b58b922258d6eaa8932e292 upstream.

RTS/CTS needs to be enabled if the rate is a fallback rate *or* if it's
a dual-stream rate and the sta is in dynamic SMPS mode.

Fixes: a3ebb4e1b763 ("mac80211: minstrel_ht: handle peers in dynamic SMPS")
Reported-by: Matías Richart <mrichart@fing.edu.uy>
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agomac80211: minstrel_ht: set default tx aggregation timeout to 0
Felix Fietkau [Thu, 18 Feb 2016 18:49:18 +0000 (19:49 +0100)]
mac80211: minstrel_ht: set default tx aggregation timeout to 0

commit 7a36b930e6ed4702c866dc74a5ad07318a57c688 upstream.

The value 5000 was put here with the addition of the timeout field to
ieee80211_start_tx_ba_session. It was originally added in mac80211 to
save resources for drivers like iwlwifi, which only supports a limited
number of concurrent aggregation sessions.

Since iwlwifi does not use minstrel_ht and other drivers don't need
this, 0 is a better default - especially since there have been
recent reports of aggregation setup related issues reproduced with
ath9k. This should improve stability without causing any adverse
effects.

Acked-by: Avery Pennarun <apenwarr@gmail.com>
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agomac80211: fix use of uninitialised values in RX aggregation
Chris Bainbridge [Wed, 27 Jan 2016 15:46:18 +0000 (15:46 +0000)]
mac80211: fix use of uninitialised values in RX aggregation

commit f39ea2690bd61efec97622c48323f40ed6e16317 upstream.

Use kzalloc instead of kmalloc for struct tid_ampdu_rx to
initialize the "removed" field (all others are initialized
manually). That fixes:

UBSAN: Undefined behaviour in net/mac80211/rx.c:932:29
load of value 2 is not a valid value for type '_Bool'
CPU: 3 PID: 1134 Comm: kworker/u16:7 Not tainted 4.5.0-rc1+ #265
Workqueue: phy0 rt2x00usb_work_rxdone
 0000000000000004 ffff880254a7ba50 ffffffff8181d866 0000000000000007
 ffff880254a7ba78 ffff880254a7ba68 ffffffff8188422d ffffffff8379b500
 ffff880254a7bab8 ffffffff81884747 0000000000000202 0000000348620032
Call Trace:
 [<ffffffff8181d866>] dump_stack+0x45/0x5f
 [<ffffffff8188422d>] ubsan_epilogue+0xd/0x40
 [<ffffffff81884747>] __ubsan_handle_load_invalid_value+0x67/0x70
 [<ffffffff82227b4d>] ieee80211_sta_reorder_release.isra.16+0x5ed/0x730
 [<ffffffff8222ca14>] ieee80211_prepare_and_rx_handle+0xd04/0x1c00
 [<ffffffff8222db03>] __ieee80211_rx_handle_packet+0x1f3/0x750
 [<ffffffff8222e4a7>] ieee80211_rx_napi+0x447/0x990

While at it, convert to use sizeof(*tid_agg_rx) instead.
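
The allocation change boils down to the usual pattern (sketch; the
error label is illustrative):

  /* kzalloc instead of kmalloc: "removed" and friends start out zeroed */
  tid_agg_rx = kzalloc(sizeof(*tid_agg_rx), GFP_KERNEL);
  if (!tid_agg_rx)
          goto end;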

Fixes: 788211d81bfdf ("mac80211: fix RX A-MPDU session reorder timer deletion")
Signed-off-by: Chris Bainbridge <chris.bainbridge@gmail.com>
[reword commit message, use sizeof(*tid_agg_rx)]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agomac80211: minstrel: Change expected throughput unit back to Kbps
Sven Eckelmann [Tue, 2 Feb 2016 07:12:26 +0000 (08:12 +0100)]
mac80211: minstrel: Change expected throughput unit back to Kbps

commit 212c5a5e6ba61678be6b5fee576e38bccb50b613 upstream.

The change from cur_tp to the function
minstrel_get_tp_avg/minstrel_ht_get_tp_avg changed the unit used for the
current throughput. For example in minstrel_ht the correct
conversion between them would be:

    mrs->cur_tp / 10 == minstrel_ht_get_tp_avg(..).

This factor 10 must also be included in the calculation of
minstrel_get_expected_throughput and minstrel_ht_get_expected_throughput to
return values with the unit [Kbps] instead of [10Kbps]. Otherwise routing
algorithms like B.A.T.M.A.N. V will make incorrect decision based on these
values. Its kernel based implementation expects expected_throughput always
to have the unit [Kbps] and not sometimes [10Kbps] and sometimes [Kbps].

The same requirement has iw or olsrdv2's nl80211 based statistics module
which retrieve the same data via NL80211_STA_INFO_TX_BITRATE.

Fixes: 6a27b2c40b48 ("mac80211: restructure per-rate throughput calculation into function")
Signed-off-by: Sven Eckelmann <sven@open-mesh.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoiwlwifi: mvm: inc pending frames counter also when txing non-sta
Liad Kaufman [Sun, 14 Feb 2016 13:32:58 +0000 (15:32 +0200)]
iwlwifi: mvm: inc pending frames counter also when txing non-sta

commit fb896c44f88a75843a072cd6961b1615732f7811 upstream.

Until this patch, when TXing non-sta the pending_frames counter
wasn't increased, but it WAS decreased in
iwl_mvm_rx_tx_cmd_single(), what makes it negative in certain
conditions. This in turn caused much trouble when we need to
remove the station since we won't be waiting forever until
pending_frames gets 0. In certain cases, we were exhausting
the station table even in BSS mode, because we had a lot of
stale stations.

Increase the counter also in iwl_mvm_tx_skb_non_sta() after a
successful TX to avoid this outcome.

Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agocan: gs_usb: fixed disconnect bug by removing erroneous use of kfree()
Maximilain Schneider [Tue, 23 Feb 2016 01:17:28 +0000 (01:17 +0000)]
can: gs_usb: fixed disconnect bug by removing erroneous use of kfree()

commit e9a2d81b1761093386a0bb8a4f51642ac785ef63 upstream.

gs_destroy_candev() erroneously calls kfree() on a struct gs_can *, which is
allocated through alloc_candev() and should instead be freed using
free_candev() alone.

The inappropriate use of kfree() causes the kernel to hang when
gs_destroy_candev() is called.

Only the struct gs_usb * which is allocated through kzalloc() should be freed
using kfree() when the device is disconnected.
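
The rule being applied: memory that came from alloc_candev() must go
back through free_candev(). A sketch of the corrected teardown (the
surrounding driver structure is simplified):

  /* netdev was obtained with alloc_candev(sizeof(struct gs_can), ...) */
  static void gs_destroy_candev_sketch(struct net_device *netdev)
  {
          unregister_candev(netdev);
          free_candev(netdev);    /* was (wrongly): kfree(netdev_priv(netdev)) */
  }

The struct gs_usb container that was kzalloc()ed at probe time is still
released with kfree() on disconnect, as before.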

Signed-off-by: Maximilian Schneider <max@schneidersoft.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agocfg80211/wext: fix message ordering
Johannes Berg [Wed, 27 Jan 2016 12:29:34 +0000 (13:29 +0100)]
cfg80211/wext: fix message ordering

commit cb150b9d23be6ee7f3a0fff29784f1c5b5ac514d upstream.

Since cfg80211 frequently takes actions from its netdev notifier
call, wireless extensions messages could still be ordered badly
since the wext netdev notifier, since wext is built into the
kernel, runs before the cfg80211 netdev notifier. For example,
the following can happen:

5: wlan1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default
    link/ether 02:00:00:00:01:00 brd ff:ff:ff:ff:ff:ff
5: wlan1: <BROADCAST,MULTICAST,UP>
    link/ether

when setting the interface down causes the wext message.

To also fix this, export the wireless_nlevent_flush() function
and also call it from the cfg80211 notifier.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agowext: fix message delay/ordering
Johannes Berg [Wed, 27 Jan 2016 11:37:52 +0000 (12:37 +0100)]
wext: fix message delay/ordering

commit 8bf862739a7786ae72409220914df960a0aa80d8 upstream.

Beniamino reported that he was getting an RTM_NEWLINK message for a
given interface, after the RTM_DELLINK for it. It turns out that the
message is a wireless extensions message, which was sent because the
interface had been connected and disconnection while it was deleted
caused a wext message.

For its netlink messages, wext uses RTM_NEWLINK, but the message is
without all the regular rtnetlink attributes, so "ip monitor link"
prints just rudimentary information:

5: wlan1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default
    link/ether 02:00:00:00:01:00 brd ff:ff:ff:ff:ff:ff
Deleted 5: wlan1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
    link/ether 02:00:00:00:01:00 brd ff:ff:ff:ff:ff:ff
5: wlan1: <BROADCAST,MULTICAST,UP>
    link/ether
(from my hwsim reproduction)

This can cause userspace to get confused since it doesn't expect an
RTM_NEWLINK message after RTM_DELLINK.

The reason for this is that wext schedules a worker to send out the
messages, and the scheduling delay can cause the messages to get out
to userspace in different order.

To fix this, have wext register a netdevice notifier and flush out
any pending messages when netdevice state changes. This fixes any
ordering whenever the original message wasn't sent by a notifier
itself.
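
A sketch of that mechanism, reusing the wireless_nlevent_flush() helper
mentioned in the cfg80211 patch above (close to, but not necessarily
identical to, the actual change):

  static int wext_netdev_notifier_call(struct notifier_block *nb,
                                       unsigned long state, void *ptr)
  {
          /* push queued wext messages out before rtnetlink ones race past */
          wireless_nlevent_flush();
          return NOTIFY_OK;
  }

  static struct notifier_block wext_netdev_notifier = {
          .notifier_call = wext_netdev_notifier_call,
  };

  static int __init wireless_nlevent_init(void)
  {
          return register_netdevice_notifier(&wext_netdev_notifier);
  }
  subsys_initcall(wireless_nlevent_init);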

Reported-by: Beniamino Galvani <bgalvani@redhat.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoovl: fix working on distributed fs as lower layer
Konstantin Khlebnikov [Sun, 31 Jan 2016 13:22:16 +0000 (16:22 +0300)]
ovl: fix working on distributed fs as lower layer

commit b5891cfab08fe3144a616e8e734df7749fb3b7d0 upstream.

This adds missing .d_select_inode into alternative dentry_operations.

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Fixes: 7c03b5d45b8e ("ovl: allow distributed fs as lower layer")
Fixes: 4bacc9c9234c ("overlayfs: Make f_path always point to the overlay and f_inode to the underlay")
Reviewed-by: Nikolay Borisov <kernel@kyup.com>
Tested-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoovl: ignore lower entries when checking purity of non-directory entries
Konstantin Khlebnikov [Sun, 31 Jan 2016 13:17:53 +0000 (16:17 +0300)]
ovl: ignore lower entries when checking purity of non-directory entries

commit 45d11738969633ec07ca35d75d486bf2d8918df6 upstream.

After a rename the file dentry still holds a reference to the lower dentry
from its previous location. This doesn't matter for data access because data
comes from the upper dentry. But this stale lower dentry taints the dentry at
its new location and turns it into a non-pure upper. Such a file leaves a
visible whiteout entry behind after removal, in a directory which shouldn't
have whiteouts at all.

Overlayfs already tracks pureness of file location in oe->opaque.  This
patch just uses that for detecting actual path type.

Comment from Vivek Goyal's patch:

Here are the details of the problem. Do following.

$ mkdir upper lower work merged upper/dir/
$ touch lower/test
$ sudo mount -t overlay overlay -olowerdir=lower,upperdir=upper,workdir=work merged
$ mv merged/test merged/dir/
$ rm merged/dir/test
$ ls -l merged/dir/
/usr/bin/ls: cannot access merged/dir/test: No such file or directory
total 0
c????????? ? ? ? ?            ? test

Basic problem seems to be that once a file has been unlinked, a whiteout
has been left behind which was not needed and hence it becomes visible.

Whiteout is visible because parent dir is of not type MERGE, hence
od->is_real is set during ovl_dir_open(). And that means ovl_iterate()
passes on iterate handling directly to underlying fs. Underlying fs does
not know/filter whiteouts so it becomes visible to user.

Why did we leave a whiteout to begin with when we should not have?
ovl_do_remove() checks for OVL_TYPE_PURE_UPPER() and does not leave
whiteout if file is pure upper. In this case file is not found to be pure
upper hence whiteout is left.

So why was the file not PURE_UPPER in this case? I think because the dentry is
still carrying some leftover state which was valid before rename. For
example, od->numlower was set to 1 as it was a lower file. After rename,
this state is not valid anymore as there is no such file in lower.

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Reported-by: Viktor Stanchev <me@viktorstanchev.com>
Suggested-by: Vivek Goyal <vgoyal@redhat.com>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=109611
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoASoC: wm8958: Fix enum ctl accesses in a wrong type
Takashi Iwai [Mon, 29 Feb 2016 17:01:12 +0000 (18:01 +0100)]
ASoC: wm8958: Fix enum ctl accesses in a wrong type

commit d0784829ae3b0beeb69b476f017d5c8a2eb95198 upstream.

"MBC Mode", "VSS Mode", "VSS HPF Mode" and "Enhanced EQ Mode" ctls in
wm8958 codec driver are enums, while the current driver wrongly
accesses them via value.integer.value[].  They have to be accessed via
value.enumerated.item[] instead.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoASoC: wm8994: Fix enum ctl accesses in a wrong type
Takashi Iwai [Mon, 29 Feb 2016 17:01:15 +0000 (18:01 +0100)]
ASoC: wm8994: Fix enum ctl accesses in a wrong type

commit 8019c0b37cd5a87107808300a496388b777225bf upstream.

The DRC Mode like "AIF1DRC1 Mode" and EQ Mode like "AIF1.1 EQ Mode" in
wm8994 codec driver are enum ctls, while the current driver wrongly
accesses them via value.integer.value[].  They have to be accessed via
value.enumerated.item[] instead.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoASoC: samsung: Use IRQ safe spin lock calls
Charles Keepax [Thu, 18 Feb 2016 15:47:13 +0000 (15:47 +0000)]
ASoC: samsung: Use IRQ safe spin lock calls

commit 316fa9e09ad76e095b9d7e9350c628b918370a22 upstream.

Lockdep warns of a potential lock inversion: i2s->lock is held numerous
times whilst we are under the substream lock (snd_pcm_stream_lock). If
we use the IRQ unsafe spin lock calls, you can also end up locking
snd_pcm_stream_lock whilst under i2s->lock (if an IRQ happens whilst we
are holding i2s->lock). This could result in deadlock.

[   18.147001]        CPU0                    CPU1
[   18.151509]        ----                    ----
[   18.156022]   lock(&(&pri_dai->spinlock)->rlock);
[   18.160701]                                local_irq_disable();
[   18.166622]                                lock(&(&substream->self_group.lock)->rlock);
[   18.174595]                                lock(&(&pri_dai->spinlock)->rlock);
[   18.181806]   <Interrupt>
[   18.184408]     lock(&(&substream->self_group.lock)->rlock);
[   18.190045]
[   18.190045]  *** DEADLOCK ***

This patch changes the driver to use the IRQ-safe spinlock calls to
avoid this issue.
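
The conversion is the standard one (sketch):

  unsigned long flags;

  /* was: spin_lock(&i2s->spinlock); */
  spin_lock_irqsave(&i2s->spinlock, flags);
  /* ... register and configuration updates ... */
  spin_unlock_irqrestore(&i2s->spinlock, flags);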

Fixes: ce8bcdbb61d9 ("ASoC: samsung: i2s: Protect more registers with a spinlock")
Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Tested-by: Anand Moon <linux.amoon@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoASoC: dapm: Fix ctl value accesses in a wrong type
Takashi Iwai [Mon, 29 Feb 2016 16:20:48 +0000 (17:20 +0100)]
ASoC: dapm: Fix ctl value accesses in a wrong type

commit 741338f99f16dc24d2d01ac777b0798ae9d10a90 upstream.

snd_soc_dapm_dai_link_get() and _put() access the associated ctl
values as value.integer.value[].  However, this is an enum ctl, and it
has to be accessed via value.enumerated.item[].  The former is long
while the latter is unsigned int, so they don't align.
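
For all three of these fixes the change is the same access-path swap;
a sketch for a get callback (the names and the stored value are
illustrative):

  static int some_enum_get_sketch(struct snd_kcontrol *kcontrol,
                                  struct snd_ctl_elem_value *ucontrol)
  {
          unsigned int cur = 0;   /* whatever the driver has selected */

          /* was: ucontrol->value.integer.value[0] = cur;   (long member) */
          ucontrol->value.enumerated.item[0] = cur;  /* unsigned int member */
          return 0;
  }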

Fixes: c66150824b8a ('ASoC: dapm: add code to configure dai link parameters')
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agoncpfs: fix a braino in OOM handling in ncp_fill_cache()
Al Viro [Tue, 8 Mar 2016 03:17:07 +0000 (22:17 -0500)]
ncpfs: fix a braino in OOM handling in ncp_fill_cache()

commit 803c00123a8012b3a283c0530910653973ef6d8f upstream.

Failing to allocate an inode for the child means that the cache for the
*parent* is incompletely populated.  So it's the parent directory inode
('dir') that needs the NCPI_DIR_CACHE flag removed, *not* the child inode
('inode', which
is what we'd failed to allocate in the first place).
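
A sketch of the corrected failure path (the allocation call and the
label are illustrative; the flag name is taken from the description
above):

  inode = ncp_iget(dir->i_sb, &finfo);    /* illustrative */
  if (!inode) {
          /* it is the *parent's* cache that is now incomplete */
          NCP_FINFO(dir)->flags &= ~NCPI_DIR_CACHE;
          goto out;               /* illustrative label */
  }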

Fucked-up-in: commit 5e993e25 ("ncpfs: get rid of d_validate() nonsense")
Fucked-up-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agojffs2: reduce the breakage on recovery from halfway failed rename()
Al Viro [Tue, 8 Mar 2016 04:07:10 +0000 (23:07 -0500)]
jffs2: reduce the breakage on recovery from halfway failed rename()

commit f93812846f31381d35c04c6c577d724254355e7f upstream.

d_instantiate(new_dentry, old_inode) is absolutely the wrong thing to
do - it will oops if new_dentry used to be positive, for starters.
What we need is to d_invalidate() the target and be done with that.
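
The recovery path then becomes, in sketch form:

  /* was: d_instantiate(new_dentry, old_inode); -- oopses if new_dentry
   * used to be positive, and wires the old inode into the target */
  d_invalidate(new_dentry);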

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agodmaengine: at_xdmac: fix residue computation
Ludovic Desroches [Thu, 10 Mar 2016 09:17:55 +0000 (10:17 +0100)]
dmaengine: at_xdmac: fix residue computation

commit 25c5e9626ca4d40928dc9c44f009ce2ed0a739e7 upstream.

When computing the residue we need two pieces of information: the current
descriptor and the remaining data of the current descriptor. To get
that information, we need to read two registers consecutively, but we
can't do it in an atomic way. For that reason, we have to check manually
that current descriptor has not changed.
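
Conceptually the fix re-reads the current-descriptor register around
the remaining-count read and retries if it moved; a sketch in which
chan_read() is a hypothetical stand-in for the driver's register
accessor and the retry bound is illustrative:

  int retry;
  u32 cur_nda, check_nda, cur_ubc;

  for (retry = 0; retry < 5; retry++) {
          check_nda = chan_read(atchan, AT_XDMAC_CNDA) & 0xfffffffc;
          cur_ubc   = chan_read(atchan, AT_XDMAC_CUBC);
          cur_nda   = chan_read(atchan, AT_XDMAC_CNDA) & 0xfffffffc;
          if (cur_nda == check_nda)
                  break;          /* descriptor did not move under us */
  }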

Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Suggested-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Reported-by: David Engraf <david.engraf@sysgo.com>
Tested-by: David Engraf <david.engraf@sysgo.com>
Fixes: e1f7c9eee707 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
8 years agotracing: Fix check for cpu online when event is disabled
Steven Rostedt (Red Hat) [Wed, 9 Mar 2016 16:58:41 +0000 (11:58 -0500)]
tracing: Fix check for cpu online when event is disabled

commit dc17147de328a74bbdee67c1bf37d2f1992de756 upstream.

Commit f37755490fe9b ("tracepoints: Do not trace when cpu is offline") added
a check to make sure that tracepoints only get called when the cpu is
online, as it uses rcu_read_lock_sched() for protection.

Commit 3a630178fd5f3 ("tracing: generate RCU warnings even when tracepoints
are disabled") added lockdep checks (including rcu checks) for events that
are not enabled to catch possible RCU issues that would only be triggered if
a trace event was enabled. Commit f37755490fe9b only stopped the warnings
when the trace event was enabled but did not prevent warnings if the trace
event was called when disabled.

To fix this, the cpu online check is moved to where the condition is added
to the trace event. This will place the cpu online check in all places that
it may be used now and in the future.
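
In effect the guard moves from inside the probe into the tracepoint's
own condition, so it also covers the disabled-but-lockdep-checked case;
a simplified sketch of the condition as evaluated at the call site:

  /* do_trace_event() is illustrative, not a real kernel symbol */
  if (cpu_online(raw_smp_processor_id()))
          do_trace_event(args);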

Fixes: f37755490fe9b ("tracepoints: Do not trace when cpu is offline")
Fixes: 3a630178fd5f3 ("tracing: generate RCU warnings even when tracepoints are disabled")
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>