firefly-linux-kernel-4.4.55.git
10 years agoarm64: simplify pgd_alloc
Mark Rutland [Wed, 5 Feb 2014 10:24:13 +0000 (10:24 +0000)]
arm64: simplify pgd_alloc

Currently pgd_alloc has a redundant NULL check in its return path that
can be removed with no ill effects. With that removed it's also possible
to return early and eliminate the new_pgd temporary variable.

This patch applies said modifications, making the logic of pgd_alloc
correspond 1-1 with that of pgd_free.
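
For illustration, the simplified allocator reduces to an early return per
case; a rough sketch of the resulting function (assuming the 3.10-era arm64
pgd handling, where PGD_SIZE may be smaller than a full page):

  pgd_t *pgd_alloc(struct mm_struct *mm)
  {
          /* one zeroed page, or a smaller kzalloc'd table */
          if (PGD_SIZE == PAGE_SIZE)
                  return (pgd_t *)get_zeroed_page(GFP_KERNEL);
          else
                  return kzalloc(PGD_SIZE, GFP_KERNEL);
  }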

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 883d50a0ed403446437444a495356ce31e1197a3)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: fix typo: s/SERRROR/SERROR/
Mark Rutland [Wed, 5 Feb 2014 10:24:12 +0000 (10:24 +0000)]
arm64: fix typo: s/SERRROR/SERROR/

Somehow SERROR has acquired an additional 'R' in a couple of headers.
This patch removes them before they spread further. As neither instance
is in use yet, no other sites need to be fixed up.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit bfb67a5606376bb32cb6f93dc05cda2e8c2038a5)
Signed-off-by: Mark Brown <broonie@linaro.org>
Conflicts:
arch/arm64/include/asm/kvm_arm.h

10 years agoarm64: mm: fix the function name in comment of cpu_do_switch_mm
Jingoo Han [Mon, 27 Jan 2014 07:19:32 +0000 (07:19 +0000)]
arm64: mm: fix the function name in comment of cpu_do_switch_mm

Fix the function name in the comment of cpu_do_switch_mm,
as cpu_do_switch_mm is the correct name.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 812944e91dbbfeadaeeb4443a5560a7f45648f0b)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: mm: fix the function name in comment of __flush_dcache_area
Jingoo Han [Tue, 21 Jan 2014 01:17:47 +0000 (01:17 +0000)]
arm64: mm: fix the function name in comment of __flush_dcache_area

Fix the function name in the comment of __flush_dcache_area,
as __flush_dcache_area is the correct name. Also, add the
missing 'size' parameter to the comment.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 03324e6e6e66ebd171d9b4b90fd6a2655980dc13)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: mm: use ubfm for dcache_line_size
Jingoo Han [Mon, 20 Jan 2014 05:00:21 +0000 (05:00 +0000)]
arm64: mm: use ubfm for dcache_line_size

Use 'ubfm', the unsigned bitfield move instruction, so that a
single instruction can be used instead of two when extracting
the minimum D-cache line size from the CTR_EL0 register.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit bd5f6dc304a054ccdc8dab43bef5e41d9a575b61)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Remove unused __data_loc variable
Geoff Levand [Sat, 14 Dec 2013 00:20:13 +0000 (00:20 +0000)]
arm64: Remove unused __data_loc variable

The __data_loc variable is an unused leftover from the 32-bit ARM
implementation. Remove that variable and adjust the __mmap_switched
startup routine accordingly.

Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit b22cf637bbaf99d4caf9908997a32f91cdcfae52)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Remove outdated comment
Liviu Dudau [Tue, 17 Dec 2013 18:19:46 +0000 (18:19 +0000)]
arm64: Remove outdated comment

Code referenced in the comment has moved to arch/arm64/kernel/cputable.c

Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 81cac699440fc3707fd80f16bf34a7e506d41487)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoARM64: fix framepointer check in unwind_frame
Konstantin Khlebnikov [Thu, 5 Dec 2013 13:30:16 +0000 (13:30 +0000)]
ARM64: fix framepointer check in unwind_frame

We need at least 24 bytes above the frame pointer.
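
The unwinder reads {fp, lr} at [fp] and [fp + 8] and advances sp to
fp + 0x10, so fp must sit at least 24 bytes (0x18) below the top of the
stack. A sketch of the tightened check, assuming the 3.10-era
unwind_frame():

  low  = frame->sp;
  high = ALIGN(low, THREAD_SIZE);

  /* reject frames without 24 bytes of headroom or with a misaligned fp */
  if (fp < low || fp > high - 0x18 || fp & 0xf)
          return -EINVAL;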

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 26920dd2da79a3207803da9453c0e6c82ac968ca)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoARM64: check stack pointer in get_wchan
Konstantin Khlebnikov [Thu, 5 Dec 2013 13:30:10 +0000 (13:30 +0000)]
ARM64: check stack pointer in get_wchan

get_wchan() is lockless. The task may wake up at any time and change
its own stack; thus each successive stack frame may be overwritten and
filled with random data.
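
A sketch of the defensive loop, assuming the 3.10-era stackframe API
(task_stack_page(), unwind_frame()); each frame's sp is bounds-checked
against the task's stack page before it is trusted:

  stack_page = (unsigned long)task_stack_page(p);
  do {
          /* bail out if the frame leaves the task's own stack */
          if (frame.sp < stack_page ||
              frame.sp >= stack_page + THREAD_SIZE ||
              unwind_frame(&frame))
                  return 0;
          if (!in_sched_functions(frame.pc))
                  return frame.pc;
  } while (count++ < 16);
  return 0;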

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 408c3658b0d49315974ce8b5aed385c8e1527595)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: kconfig: select HAVE_EFFICIENT_UNALIGNED_ACCESS
Will Deacon [Mon, 16 Dec 2013 17:50:08 +0000 (17:50 +0000)]
arm64: kconfig: select HAVE_EFFICIENT_UNALIGNED_ACCESS

ARMv8 CPUs can perform efficient unaligned memory accesses in hardware,
and this feature is relied upon by code such as the dcache
word-at-a-time name hashing.

This patch selects HAVE_EFFICIENT_UNALIGNED_ACCESS for arm64.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 50afc33a90e710c02d9bbf2f3673936365f0e690)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: dcache: select DCACHE_WORD_ACCESS for little-endian CPUs
Will Deacon [Wed, 6 Nov 2013 19:32:13 +0000 (19:32 +0000)]
arm64: dcache: select DCACHE_WORD_ACCESS for little-endian CPUs

DCACHE_WORD_ACCESS uses the word-at-a-time API for optimised string
comparisons in the vfs layer.

This patch implements support for load_unaligned_zeropad in much the
same way as has been done for ARM, although big-endian systems are also
supported.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 7bc13fd33adb9536bd73965cd46bbf7377df097c)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: futex: ensure .fixup entries are sufficiently aligned
Will Deacon [Wed, 6 Nov 2013 19:31:24 +0000 (19:31 +0000)]
arm64: futex: ensure .fixup entries are sufficiently aligned

AArch64 instructions must be 4-byte aligned, so make sure this is true
for the futex .fixup section.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 4da7a56c59f28e27e8dcff61b5d7b05f6e203606)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: use generic strnlen_user and strncpy_from_user functions
Will Deacon [Wed, 6 Nov 2013 17:20:22 +0000 (17:20 +0000)]
arm64: use generic strnlen_user and strncpy_from_user functions

This patch implements the word-at-a-time interface for arm64 using the
same algorithm as ARM. We use the fls64 macro, which expands to a clz
instruction via a compiler builtin. Big-endian configurations make use
of the implementation from asm-generic.

With this implemented, we can replace our byte-at-a-time strnlen_user
and strncpy_from_user functions with the optimised generic versions.
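
The core trick behind the generic word-at-a-time helpers is detecting a
zero byte in a word without a byte loop. A small, self-contained
user-space illustration of that detection (not kernel code; assumes a
64-bit unsigned long):

  #include <stdio.h>

  #define ONES  (~0UL / 0xff)    /* 0x0101...01 */
  #define HIGHS (ONES * 0x80)    /* 0x8080...80 */

  /* Nonzero iff some byte of v is zero (classic bit trick). */
  static unsigned long has_zero(unsigned long v)
  {
          return (v - ONES) & ~v & HIGHS;
  }

  int main(void)
  {
          unsigned long w = 0x7465730073616265UL; /* embeds a NUL byte */

          printf("%s\n", has_zero(w) ? "zero byte found" : "no zero byte");
          return 0;
  }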

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 12a0ef7b0ac38677bd2d85f33df5ca0a57868819)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: percpu: implement optimised pcpu access using tpidr_el1
Will Deacon [Tue, 5 Nov 2013 18:10:47 +0000 (18:10 +0000)]
arm64: percpu: implement optimised pcpu access using tpidr_el1

This patch implements optimised percpu variable accesses using the
el1 r/w thread register (tpidr_el1) along the same lines as arch/arm/.
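
A sketch of the accessor pair this change introduces (a hedged
reconstruction of the 3.10-era asm/percpu.h):

  static inline void set_my_cpu_offset(unsigned long off)
  {
          asm volatile("msr tpidr_el1, %0" :: "r" (off) : "memory");
  }

  static inline unsigned long __my_cpu_offset(void)
  {
          unsigned long off;

          /* each CPU keeps its per-cpu offset in the EL1 thread register */
          asm("mrs %0, tpidr_el1" : "=r" (off));
          return off;
  }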

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 7158627686f02319c50c8d9d78f75d4c8d126ff2)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoMerge remote-tracking branch 'lsk/v3.10/topic/arm64-cpu' into lsk-v3.10-arm64-misc
Mark Brown [Fri, 16 May 2014 14:36:15 +0000 (15:36 +0100)]
Merge remote-tracking branch 'lsk/v3.10/topic/arm64-cpu' into lsk-v3.10-arm64-misc

Conflicts:
arch/arm64/kernel/smp.c

10 years agoarm64: make default NR_CPUS 8
Rob Herring [Fri, 22 Nov 2013 21:07:31 +0000 (21:07 +0000)]
arm64: make default NR_CPUS 8

Rather than continue to add per platform defaults, make the default a
likely common core count. 8 is also the default for x86.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 62aceb8ff4b3f442575eb7e23629da36020dca77)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: ensure completion of TLB invalidation
Mark Rutland [Mon, 2 Dec 2013 16:11:00 +0000 (16:11 +0000)]
arm64: ensure completion of TLB invalidation

Currently there is no dsb between the tlbi in __cpu_setup and the write
to SCTLR_EL1 which enables the MMU in __turn_mmu_on. This means that the
TLB invalidation is not guaranteed to have completed at the point
address translation is enabled, leading to a number of possible issues
including incorrect translations and TLB conflict faults.

This patch moves the tlbi in __cpu_setup above an existing dsb used to
synchronise I-cache invalidation, ensuring that the TLBs have been
invalidated at the point the MMU is enabled.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 3cea71bc6b470372ae407881b87128aadf0afec0)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoARM64: /proc/interrupts: display IPIs of online CPUs only
Sudeep KarkadaNagesha [Thu, 7 Nov 2013 15:25:44 +0000 (15:25 +0000)]
ARM64: /proc/interrupts: display IPIs of online CPUs only

The non-IPI interrupts are displayed only for the online cpus from
show_interrupts in kernel/irq/proc.c before calling arch_show_interrupts().
As a result, the column headers and the IPI count don't match if any
CPU is offline.

This patch fixes show_ipi_list to display IPIs for online CPUs only.

Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 67317c2689567c24d18e0dd43ab6d409fd42dc6e)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: locks: Remove CONFIG_GENERIC_LOCKBREAK
Catalin Marinas [Wed, 6 Nov 2013 11:42:41 +0000 (11:42 +0000)]
arm64: locks: Remove CONFIG_GENERIC_LOCKBREAK

Commit 52ea2a560a9d (arm64: locks: introduce ticket-based spinlock
implementation) introduces the arch_spin_is_contended() function making
CONFIG_GENERIC_LOCKBREAK unnecessary.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 61c77e0802719efce8966619cdc4234de7f252c1)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: compat: Clear the IT state independent of the 32-bit ARM or Thumb-2 mode
T.J. Purtell [Tue, 5 Nov 2013 17:07:18 +0000 (17:07 +0000)]
arm64: compat: Clear the IT state independent of the 32-bit ARM or Thumb-2 mode

The ARM architecture reference specifies that the IT state bits in the
PSR must be all zeros in ARM mode or behavior is unspecified. If an ARM
function is registered as a signal handler, and that signal is delivered
inside a block of instructions following an IT instruction, some of the
instructions at the beginning of the signal handler may be skipped if
the IT state bits of the Program Status Register are not cleared by the
kernel.
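
A sketch of the shape of the fix in the compat signal-return setup,
assuming the COMPAT_PSR_IT_MASK and COMPAT_PSR_T_BIT definitions from
the arm64 ptrace headers of the time:

  /* IT bits must be zero on handler entry, in both ARM and Thumb mode */
  spsr &= ~COMPAT_PSR_IT_MASK;
  if (thumb)
          spsr |= COMPAT_PSR_T_BIT;
  else
          spsr &= ~COMPAT_PSR_T_BIT;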

Signed-off-by: T.J. Purtell <tj@mobisocial.us>
[catalin.marinas@arm.com: code comment and commit log updated]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit aa62c2091129af81a172350b718eb35d5448cebc)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Use 42-bit address space with 64K pages
Catalin Marinas [Wed, 23 Oct 2013 15:50:07 +0000 (16:50 +0100)]
arm64: Use 42-bit address space with 64K pages

This patch expands the VA_BITS to 42 when the 64K page configuration is
enabled allowing 2TB kernel linear mapping. Linux still uses 2 levels of
page tables in this configuration with pgd now being a full page.
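
The resulting definition in asm/memory.h looks roughly like this (a
sketch; 39-bit VAs for the 4K page configuration of that era):

  #ifdef CONFIG_ARM64_64K_PAGES
  #define VA_BITS         (42)
  #else
  #define VA_BITS         (39)
  #endif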

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 847264fb7e73ade5b5e4b6eea3daa243a1f5217e)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: fix access to preempt_count from assembly code
Marc Zyngier [Mon, 4 Nov 2013 20:14:58 +0000 (20:14 +0000)]
arm64: fix access to preempt_count from assembly code

preempt_count is defined as an int. Oddly enough, we access it
as a 64bit value. Things become interesting when running a BE
kernel, and looking at the current CPU number, which is stored
as an int next to preempt_count. Like in a per-cpu interrupt
handler, for example...

Using a 32bit access fixes the issue for good.

Cc: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 717321fcb58ed95169bf344ae47ac6098ba5dfbe)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: use generic RW_DATA_SECTION macro in linker script
Mark Salter [Mon, 4 Nov 2013 16:38:47 +0000 (16:38 +0000)]
arm64: use generic RW_DATA_SECTION macro in linker script

The .data section in the arm64 linker script currently lacks a
definition for page-aligned data. This leads to a .page_aligned
section being placed between the end of data and start of bss.
This patch corrects that by using the generic RW_DATA_SECTION
macro which includes support for page-aligned data.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 3c620626c0cd4cfca856d70a846398275b48a768)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: update 32-bit kuser helpers to ARMv8
Robin Murphy [Mon, 7 Oct 2013 17:30:34 +0000 (18:30 +0100)]
arm64: update 32-bit kuser helpers to ARMv8

This patch updates the barrier semantics in the kuser helper functions
to take advantage of the ARMv8 additions to AArch32, which are
guaranteed to be available in situations where these functions will be
called.

Note that this slightly changes the cmpxchg functions in that they are
no longer necessarily full barriers if they return 1. However, the
documentation only states they include their own barriers "as needed",
not that they are obligated to act as a full barrier for the caller.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
CC: Matthew Leach <matthew.leach@arm.com>
CC: Dave Martin <dave.martin@arm.com>
CC: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit d0f38f9130b7683e39611c5a661349e301ee43c8)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Export __copy_in_user() to modules
Catalin Marinas [Mon, 7 Oct 2013 15:42:05 +0000 (16:42 +0100)]
arm64: Export __copy_in_user() to modules

This function may be called from loadable modules, so it needs
exporting.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Loc Ho <lho@apm.com>
(cherry picked from commit 2a3f912c782f2364f5e5813ab66ca6c92fb43acb)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: cmpxchg: implement cmpxchg64_relaxed
Will Deacon [Wed, 9 Oct 2013 14:54:28 +0000 (15:54 +0100)]
arm64: cmpxchg: implement cmpxchg64_relaxed

This patch introduces cmpxchg64_relaxed for arm64 using the existing
cmpxchg_local macro, which performs a cmpxchg operation (up to 64 bits)
without barrier semantics.
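
Per the description, the wiring is essentially a one-line mapping onto
the existing barrier-free primitive; roughly:

  #define cmpxchg64_relaxed(ptr, o, n)    cmpxchg_local((ptr), (o), (n))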

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit cf10b79a7d88edc689479af989b3a88e9adf07ff)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: lockref: add support for lockless lockrefs using cmpxchg
Will Deacon [Wed, 9 Oct 2013 14:54:27 +0000 (15:54 +0100)]
arm64: lockref: add support for lockless lockrefs using cmpxchg

Our spinlocks are only 32-bit (2x16-bit tickets) and our cmpxchg can
deal with 8-bytes (as one would hope!).

This patch wires up the cmpxchg-based lockless lockref implementation
for arm64.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 5686b06cea34e31ec0a549d9b5ac00776e8e8d6d)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: locks: introduce ticket-based spinlock implementation
Will Deacon [Wed, 9 Oct 2013 14:54:26 +0000 (15:54 +0100)]
arm64: locks: introduce ticket-based spinlock implementation

This patch introduces a ticket lock implementation for arm64, along the
same lines as the implementation for arch/arm/.
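
The ticket idea in miniature, as a self-contained user-space C11 sketch
(the real arm64 code uses ldaxr/stxr exclusives and wfe to wait, none of
which appears here):

  #include <stdatomic.h>

  struct ticket_lock {
          atomic_uint next;   /* next ticket to hand out */
          atomic_uint owner;  /* ticket currently being served */
  };

  static void ticket_lock(struct ticket_lock *l)
  {
          unsigned int t = atomic_fetch_add(&l->next, 1);

          /* spin until our ticket comes up; acquire pairs with unlock */
          while (atomic_load_explicit(&l->owner, memory_order_acquire) != t)
                  ;
  }

  static void ticket_unlock(struct ticket_lock *l)
  {
          atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
  }

  int main(void)
  {
          struct ticket_lock l = { 0, 0 };

          ticket_lock(&l);
          ticket_unlock(&l);
          return 0;
  }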

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 52ea2a560a9dba57fe5fd6b4726b1089751accf2)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoclk: arm64: Add DTS clock entry for APM X-Gene Storm SoC
Loc Ho [Wed, 26 Jun 2013 17:56:10 +0000 (11:56 -0600)]
clk: arm64: Add DTS clock entry for APM X-Gene Storm SoC

Add DTS clock entries for the APM X-Gene Storm SoC: the reference
clock, PCP PLL, SoC PLL, and Ethernet clocks.

Signed-off-by: Loc Ho <lho@apm.com>
Signed-off-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Vinayak Kale <vkale@apm.com>
Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
(cherry picked from commit 3eb15d84e355f4ea1acf0eaba11ca173de342107)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Widen hwcap to be 64 bit
Steve Capper [Wed, 18 Sep 2013 15:14:28 +0000 (16:14 +0100)]
arm64: Widen hwcap to be 64 bit

Under arm64, elf_hwcap is a 32-bit quantity, but it is stored in
a 64-bit auxiliary ELF field and glibc reads hwcap as 64-bit.

This patch widens elf_hwcap to be 64 bit.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 25804e6a96681d5d2142058948e218999e4f547c)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Correctly report LR and SP for compat tasks
Catalin Marinas [Tue, 17 Sep 2013 17:49:46 +0000 (18:49 +0100)]
arm64: Correctly report LR and SP for compat tasks

When a task crashes and we print debugging information, ensure that
compat tasks show the actual AArch32 LR and SP registers rather than the
AArch64 ones.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 6ca68e802612c87c31aa83d50c37ed0d88774a46)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Make do_bad_area() function static
Catalin Marinas [Mon, 16 Sep 2013 14:18:28 +0000 (15:18 +0100)]
arm64: Make do_bad_area() function static

This function is only called from arch/arm64/mm/fault.c.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 59f67e16e6b79697241c3fd030e3da300377893e)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: mm: permit use of tagged pointers at EL0
Will Deacon [Wed, 12 Jun 2013 15:28:04 +0000 (16:28 +0100)]
arm64: mm: permit use of tagged pointers at EL0

TCR.TBI0 can be used to cause hardware address translation to ignore the
top byte of userspace virtual addresses. Whilst not especially useful in
standard C programs, this can be used by JITs to `tag' pointers with
various pieces of metadata.

This patch enables this bit for AArch64 Linux, and adds a new file to
Documentation/arm64/ which describes some potential caveats when using
tagged virtual addresses.
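
A user-space illustration of the effect (runnable on an arm64 kernel
with this patch; the tag value 0x5a is arbitrary):

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          int *p = malloc(sizeof(*p));
          /* plant metadata in bits 63:56; translation ignores them */
          int *tagged = (int *)((uintptr_t)p | (0x5aUL << 56));

          *tagged = 42;
          printf("%d\n", *p);     /* prints 42: same memory location */
          free(p);                /* hand the untagged pointer back */
          return 0;
  }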

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit d50240a5f6ceaf690a77b0fccb17be51cfa151c2)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoMove the EM_ARM and EM_AARCH64 definitions to uapi/linux/elf-em.h
Dan Aloni [Wed, 28 Aug 2013 13:24:53 +0000 (14:24 +0100)]
Move the EM_ARM and EM_AARCH64 definitions to uapi/linux/elf-em.h

Signed-off-by: Dan Aloni <alonid@stratoscale.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 909e3ee4119f87b85c6e1b8534b2287ed1ea3ca2)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: delay: don't bother reporting bogomips in /proc/cpuinfo
Will Deacon [Fri, 30 Aug 2013 17:06:48 +0000 (18:06 +0100)]
arm64: delay: don't bother reporting bogomips in /proc/cpuinfo

We always use a timer-backed delay loop for arm64, so don't bother
reporting a bogomips value which appears to confuse some people.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 326b16db9f69fd0d279be873c6c00f88c0a4aad5)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Fix mapping of memory banks not ending on a PMD_SIZE boundary
Catalin Marinas [Fri, 23 Aug 2013 17:04:44 +0000 (18:04 +0100)]
arm64: Fix mapping of memory banks not ending on a PMD_SIZE boundary

The map_mem() function limits the current memblock limit to PGDIR_SIZE
(the initial swapper_pg_dir mapping) to avoid create_mapping()
allocating memory from unmapped areas. However, if the first block is
within PGDIR_SIZE and not ending on a PMD_SIZE boundary, when 4K page
configuration is enabled, create_mapping() will try to allocate a pte
page. Such a page may be returned by memblock_alloc() from the end of
such a bank (or any subsequent bank within PGDIR_SIZE) which is not
mapped yet.

The patch limits the current memblock limit to the aligned end of the
first bank and gradually increases it as more memory is mapped. It also
ensures that the start of the first bank is aligned to PMD_SIZE to avoid
pte page allocation for this mapping.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
(cherry picked from commit e25208f77c2dad5a9f2ab3d3df61252a90b71afa)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoMerge remote-tracking branch 'lsk/v3.10/topic/arm64-hugepages' into lsk-v3.10-arm64...
Mark Brown [Thu, 15 May 2014 19:00:16 +0000 (20:00 +0100)]
Merge remote-tracking branch 'lsk/v3.10/topic/arm64-hugepages' into lsk-v3.10-arm64-misc

10 years agoarm64: move elf notes into readonly segment
Mark Salter [Fri, 23 Aug 2013 15:16:42 +0000 (16:16 +0100)]
arm64: move elf notes into readonly segment

The current vmlinux.lds.S places the notes sections between the
end of rw data and start of bss. This means that _edata doesn't
really point to the end of data. Since notes are read-only, this
patch moves them to the read-only segment so that _edata does
point to the end of initialized rw data.

Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit c80b7ee8520606f77fbc8ced870c96659053269e)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Enable interrupts in the EL0 undef handler
Catalin Marinas [Thu, 22 Aug 2013 10:47:37 +0000 (11:47 +0100)]
arm64: Enable interrupts in the EL0 undef handler

do_undefinstr() has to be called with interrupts disabled since it may
read the instruction from the user address space which could lead to a
data abort and subsequent might_sleep() warning in do_page_fault().

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 2600e130b3c90c8d6c13229d3d3a14dcb898a87b)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Expand arm64 image header
Roy Franz [Wed, 14 Aug 2013 23:10:00 +0000 (00:10 +0100)]
arm64: Expand arm64 image header

Expand the arm64 image header to allow for co-existence with the
PE/COFF header required by the EFI stub.  The PE/COFF format
requires the "MZ" header to be at offset 0, and the offset
to the PE/COFF header to be at offset 0x3c.  The image
header is expanded to allow 2 instructions at the beginning
to accommodate a benign instruction at offset 0 that includes
the "MZ" header, a magic number, and the offset to the PE/COFF
header.

Signed-off-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 4370eec05a887b0cd4392cd5dc5b2713174745c0)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoARM64: include: asm: include "asm/types.h" in "pgtable-2level-types.h" and "pgtable...
Chen Gang [Wed, 21 Aug 2013 09:50:17 +0000 (10:50 +0100)]
ARM64: include: asm: include "asm/types.h" in "pgtable-2level-types.h" and "pgtable-3level-types.h"

Need include "asm/types.h", just like arm has done, or can not pass
compiling, the related error:

  In file included from arch/arm64/include/asm/page.h:37:0,
                   from drivers/staging/lustre/include/linux/lnet/linux/lib-lnet.h:42,
                   from drivers/staging/lustre/include/linux/lnet/lib-lnet.h:44,
                   from drivers/staging/lustre/lnet/lnet/api-ni.c:38:
  arch/arm64/include/asm/pgtable-2level-types.h:19:1: error: unknown type name 'u64'
  arch/arm64/include/asm/pgtable-2level-types.h:20:1: error: unknown type name 'u64'

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 360b35a874f130305715b1b854b67dc40826fc91)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoof: Specify initrd location using 64-bit
Santosh Shilimkar [Mon, 1 Jul 2013 18:20:35 +0000 (14:20 -0400)]
of: Specify initrd location using 64-bit

On some PAE architectures, the entire range of physical memory could reside
outside the 32-bit limit.  These systems need the ability to specify the
initrd location using 64-bit numbers.

This patch globally modifies the early_init_dt_setup_initrd_arch() function to
use 64-bit numbers instead of the current unsigned long.
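
The visible interface change is just the prototype (a sketch; exact
placement follows the device tree code of the time):

  /* before */
  void early_init_dt_setup_initrd_arch(unsigned long start, unsigned long end);
  /* after: u64 covers PAE systems whose RAM sits above the 32-bit limit */
  void early_init_dt_setup_initrd_arch(u64 start, u64 end);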

There has been quite a bit of debate about whether to use u64 or phys_addr_t.
It was concluded to stick to u64 to be consistent with rest of the device
tree code. As summarized by Geert, "The address to load the initrd is decided
by the bootloader/user and set at that point later in time. The dtb should not
be tied to the kernel you are booting"

More details on the discussion can be found here:
https://lkml.org/lkml/2013/6/20/690
https://lkml.org/lkml/2012/9/13/544

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
(cherry picked from commit 374d5c9964c10373ba39bbe934f4262eb87d7114)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: add '#ifdef CONFIG_COMPAT' for aarch32_break_handler()
Chen Gang [Mon, 24 Jun 2013 09:27:49 +0000 (10:27 +0100)]
arm64: add '#ifdef CONFIG_COMPAT' for aarch32_break_handler()

If 'COMPAT' is not defined, aarch32_break_handler() fails to compile.
Since it works independently of 'COMPAT', remove the dummy definition.

The related error:

  arch/arm64/kernel/debug-monitors.c:249:5: error: redefinition of 'aarch32_break_handler'
  In file included from arch/arm64/kernel/debug-monitors.c:29:0:
  /root/linux-next/arch/arm64/include/asm/debug-monitors.h:89:12: note: previous definition of 'aarch32_break_handler' was here

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit c783c2815e13bbb0c0b99997cc240bd7e91b6bb8)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Add initial DTS for APM X-Gene Storm SOC and APM Mustang board
Vinayak Kale [Wed, 24 Apr 2013 09:07:00 +0000 (10:07 +0100)]
arm64: Add initial DTS for APM X-Gene Storm SOC and APM Mustang board

This patch adds initial DTS files required for APM Mustang board.

Signed-off-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Loc Ho <lho@apm.com>
Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit ee877b5321c4dfee9dc9f2a12b19ddcd33149f6a)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Add defines for APM ARMv8 implementation
Vinayak Kale [Wed, 24 Apr 2013 09:06:59 +0000 (10:06 +0100)]
arm64: Add defines for APM ARMv8 implementation

This patch adds defines for APM CPU implementer ID and APM CPU part numbers in asm/cputype.h

Signed-off-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Loc Ho <lho@apm.com>
Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 4ad637a452d5683ca7ff9e9eb994ac4b7a517073)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Enable APM X-Gene SOC family in the defconfig
Vinayak Kale [Wed, 24 Apr 2013 09:06:58 +0000 (10:06 +0100)]
arm64: Enable APM X-Gene SOC family in the defconfig

This patch enables the APM X-Gene SOC family in the defconfig. It also enables the 8250 serial driver needed by the X-Gene SOC family.

Signed-off-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Loc Ho <lho@apm.com>
Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit c1db16dc9e74addba4ef8c954702f13e457f0c7e)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Add Kconfig option for APM X-Gene SOC family
Vinayak Kale [Wed, 24 Apr 2013 09:06:57 +0000 (10:06 +0100)]
arm64: Add Kconfig option for APM X-Gene SOC family

This patch adds arm64/Kconfig option for APM X-Gene SOC family.

Signed-off-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Loc Ho <lho@apm.com>
Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 159428538323188158a6058956c16c199909d844)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64/Makefile: provide vdso_install target
Kyle McMartin [Sun, 16 Jun 2013 19:32:44 +0000 (20:32 +0100)]
arm64/Makefile: provide vdso_install target

Provide a vdso_install target in the arm64 Makefile, as other architectures
with a vdso do.

Signed-off-by: Kyle McMartin <kyle@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 3c01742a8ac93a3abf9b099758db970410427afd)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: debug: consolidate software breakpoint handlers
Will Deacon [Sat, 16 Mar 2013 08:48:13 +0000 (08:48 +0000)]
arm64: debug: consolidate software breakpoint handlers

The software breakpoint handlers are hooked in directly from ptrace,
which makes it difficult to add additional handlers for things like
kprobes and kgdb.

This patch moves the handling code into debug-monitors.c, where we can
dispatch to different debug subsystems more easily.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 1442b6ed249d2b3d2cfcf45b65ac64393495c96c)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: extable: sort the exception table at build time
Will Deacon [Wed, 8 May 2013 16:29:24 +0000 (17:29 +0100)]
arm64: extable: sort the exception table at build time

As is done for other architectures, sort the exception table at
build-time rather than during boot.

Since sortextable appears to be a standalone C program relying on the
host elf.h to provide EM_AARCH64, I've had to add a conditional check in
order to allow cross-compilation on machines that aren't running a
bleeding-edge libc-dev.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit adace89562c7a9645b8dc84f6e1ac7ba8756094e)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: device: add iommu pointer to device archdata
Will Deacon [Mon, 10 Jun 2013 18:34:42 +0000 (19:34 +0100)]
arm64: device: add iommu pointer to device archdata

When using an IOMMU for device mappings, it is necessary to keep a
pointer between the device and the IOMMU to which it is attached in
order to obtain the correct IOMMU when attaching the device to a domain.

This patch adds an iommu pointer to the dev_archdata structure, in a
similar manner to other architectures (ARM, PowerPC, x86, ...).

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 73150c983ac1f9b7653cfd3823b1ad4a44aad3bf)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: pgtable: use pte_index instead of __pte_index
Will Deacon [Mon, 10 Jun 2013 18:34:41 +0000 (19:34 +0100)]
arm64: pgtable: use pte_index instead of __pte_index

pte_index is a useful helper outside of arch/arm64, for things like the
ARM SMMU driver, so rename __pte_index to pte_index to be consistent
with both arch/arm/ and also the definitions of pmd_index and pgd_index.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 9ab6d02fddc6831b166812956ff387d7112ff626)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: kernel: compiling issue, need delete read_current_timer()
Chen Gang [Tue, 21 May 2013 09:46:05 +0000 (10:46 +0100)]
arm64: kernel: compiling issue, need delete read_current_timer()

Under arm64, we calibrate the delay loop statically using a known
timer frequency, so delete read_current_timer(); otherwise it causes
compile errors with allmodconfig.

The related error:
  ERROR: "read_current_timer" [lib/rbtree_test.ko] undefined!
  ERROR: "read_current_timer" [lib/interval_tree_test.ko] undefined!
  ERROR: "read_current_timer" [fs/ext4/ext4.ko] undefined!
  ERROR: "read_current_timer" [crypto/tcrypt.ko] undefined!

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 6916b14ea140ff5c915895eefe9431888a39a84d)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: mm: don't bother invalidating the icache in switch_mm
Will Deacon [Wed, 5 Jun 2013 11:40:34 +0000 (12:40 +0100)]
arm64: mm: don't bother invalidating the icache in switch_mm

We don't support software broadcast of cache maintenance operations, so
this flush is not required (__sync_icache_dcache will always affect all
CPUs).

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 737c16dffc458e58ee7840556d43b874cb8e16a0)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Fix build for __PAGE_NONE define
Mark Brown [Mon, 14 Apr 2014 17:25:05 +0000 (18:25 +0100)]
arm64: Fix build for __PAGE_NONE define

Simple typo.

Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: mm: Add double logical invert to pte accessors
Steve Capper [Tue, 25 Feb 2014 11:38:53 +0000 (11:38 +0000)]
arm64: mm: Add double logical invert to pte accessors

Page table entries on ARM64 are 64 bits, and some pte functions such as
pte_dirty return a bitwise-and of a flag with the pte value. If the
flag to be tested resides in the upper 32 bits of the pte, then we run
into the danger of the result being dropped if downcast.

For example:
gather_stats(page, md, pte_dirty(*pte), 1);
where pte_dirty(*pte) is downcast to an int.

This patch adds a double logical invert to all the pte_ accessors to
ensure predictable downcasting.
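
The change itself is mechanical; taking pte_dirty as the example (a
sketch of the before/after accessor):

  /* before: a 64-bit mask result, silently truncated if stored in an int */
  #define pte_dirty(pte)          (pte_val(pte) & PTE_DIRTY)
  /* after: !! collapses the test to 0 or 1 before any downcast */
  #define pte_dirty(pte)          (!!(pte_val(pte) & PTE_DIRTY))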

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
10 years agoarm64: mm: Introduce PTE_WRITE
Steve Capper [Wed, 15 Jan 2014 14:07:13 +0000 (14:07 +0000)]
arm64: mm: Introduce PTE_WRITE

We have the following means for encoding writable or dirty ptes:

                                PTE_DIRTY       PTE_RDONLY
!pte_dirty && !pte_write        0               1
!pte_dirty && pte_write         0               1
pte_dirty && !pte_write         1               1
pte_dirty && pte_write          1               0

So we can't distinguish between writable clean ptes and read only
ptes. This can cause problems with ptes being incorrectly flagged as
read only when they are writable but not dirty.

This patch introduces a new software bit PTE_WRITE which allows us to
correctly identify writable ptes. PTE_RDONLY is now only clear for
valid ptes where a page is both writable and dirty.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Conflicts:
arch/arm64/include/asm/pgtable.h

10 years agoarm64: mm: Remove PTE_BIT_FUNC macro
Steve Capper [Wed, 15 Jan 2014 14:07:12 +0000 (14:07 +0000)]
arm64: mm: Remove PTE_BIT_FUNC macro

Expand out the pte manipulation functions. This makes our life easier
when using things like tags and cscope.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
10 years agomm/hugetlb.c: call MMU notifiers when copying a hugetlb page range
Andreas Sandberg [Tue, 21 Jan 2014 23:49:09 +0000 (15:49 -0800)]
mm/hugetlb.c: call MMU notifiers when copying a hugetlb page range

When copy_hugetlb_page_range() is called to copy a range of hugetlb
mappings, the secondary MMUs are not notified if there is a protection
downgrade, which breaks COW semantics in KVM.

This patch adds the necessary MMU notifier calls.

Signed-off-by: Andreas Sandberg <andreas@sandberg.pp.se>
Acked-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
10 years agoarm64: mm: Fix PMD_SECT_PROT_NONE definition
Steve Capper [Thu, 5 Dec 2013 12:04:51 +0000 (12:04 +0000)]
arm64: mm: Fix PMD_SECT_PROT_NONE definition

Modify the value of PMD_SECT_PROT_NONE to match that of PTE_NONE. This
should have been in commit 3676f9ef5481 (Move PTE_PROT_NONE higher up).

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Cc: <stable@vger.kernel.org> # 3.11+: 3676f9ef5481: arm64: Move PTE_PROT_NONE higher up
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
10 years agoarm64: Move PTE_PROT_NONE higher up
Catalin Marinas [Wed, 27 Nov 2013 16:59:27 +0000 (16:59 +0000)]
arm64: Move PTE_PROT_NONE higher up

PTE_PROT_NONE means that a pte is present but does not have any
read/write attributes. However, setting the memory type like
pgprot_writecombine() is allowed and such bits overlap with
PTE_PROT_NONE. This causes mmap/munmap issues in drivers that change the
vma->vm_page_prot on PROT_NONE mappings.

This patch reverts the PTE_FILE/PTE_PROT_NONE shift in commit
59911ca4325d (ARM64: mm: Move PTE_PROT_NONE bit) and moves PTE_PROT_NONE
together with the other software bits.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Steve Capper <steve.capper@linaro.org>
Cc: <stable@vger.kernel.org> # 3.11+
10 years agoARM64: mm: THP support.
Steve Capper [Fri, 19 Apr 2013 15:23:57 +0000 (16:23 +0100)]
ARM64: mm: THP support.

Bring Transparent HugePage support to ARM64. The size of a
transparent huge page depends on the normal page size. A
transparent huge page is always represented as a pmd.

If PAGE_SIZE is 4KB, THPs are 2MB.
If PAGE_SIZE is 64KB, THPs are 512MB.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
10 years agoARM64: mm: Raise MAX_ORDER for 64KB pages and THP.
Steve Capper [Thu, 25 Apr 2013 14:19:21 +0000 (15:19 +0100)]
ARM64: mm: Raise MAX_ORDER for 64KB pages and THP.

The buddy allocator has a default MAX_ORDER of 11, which is too
low to allocate enough memory for 512MB Transparent HugePages if
our base page size is 64KB.

This patch introduces MAX_ZONE_ORDER and sets it to 14 when 64KB
pages are used in conjunction with THP, otherwise the default value
of 11 is used.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
10 years agoARM64: mm: HugeTLB support.
Steve Capper [Wed, 10 Apr 2013 12:48:00 +0000 (13:48 +0100)]
ARM64: mm: HugeTLB support.

Add huge page support to ARM64; different huge page sizes are
supported depending on the size of normal pages:

PAGE_SIZE is 4KB:
   2MB - (pmds) these can be allocated at any time.
1024MB - (puds) usually allocated on bootup with the command line
         with something like: hugepagesz=1G hugepages=6

PAGE_SIZE is 64KB:
 512MB - (pmds) usually allocated on bootup via command line.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
10 years agoARM64: mm: Move PTE_PROT_NONE bit.
Steve Capper [Tue, 28 May 2013 12:35:51 +0000 (13:35 +0100)]
ARM64: mm: Move PTE_PROT_NONE bit.

Under ARM64, PTEs can be broadly categorised as follows:
   - Present and valid: Bit #0 is set. The PTE is valid and memory
     access to the region may fault.

   - Present and invalid: Bit #0 is clear and bit #1 is set.
     Represents present memory with PROT_NONE protection. The PTE
     is an invalid entry, and the user fault handler will raise a
     SIGSEGV.

   - Not present (file or swap): Bits #0 and #1 are clear.
     Memory represented has been paged out. The PTE is an invalid
     entry, and the fault handler will try and re-populate the
     memory where necessary.

Huge PTEs are block descriptors that have bit #1 clear. If we wish
to represent PROT_NONE huge PTEs we then run into a problem as
there is no way to distinguish between regular and huge PTEs if we
set bit #1.

To resolve this ambiguity this patch moves PTE_PROT_NONE from
bit #1 to bit #2 and moves PTE_FILE from bit #2 to bit #3. The
number of swap/file bits is reduced by 1 as a consequence, leaving
60 bits for file and swap entries.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
10 years agoARM64: mm: Make PAGE_NONE pages read only and no-execute.
Steve Capper [Thu, 2 May 2013 15:25:42 +0000 (16:25 +0100)]
ARM64: mm: Make PAGE_NONE pages read only and no-execute.

If we consider the following code sequence:

my_pte = pte_modify(entry, myprot);
x = pte_write(my_pte);
y = pte_exec(my_pte);

If myprot comes from a PROT_NONE page, then x and y will both be
true, which is undesirable behaviour.

This patch sets the no-execute and read-only bits for PAGE_NONE
such that the code above will return false for both x and y.
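
A hedged sketch of the resulting definition, assuming the 3.10-era
arm64 pgtable.h macros (the exact modifier form may differ):

  #define PAGE_NONE       __pgprot_modify(pgprot_default, PTE_TYPE_MASK, \
                                          PTE_PROT_NONE | PTE_RDONLY | \
                                          PTE_PXN | PTE_UXN)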

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
10 years agoARM64: mm: Restore memblock limit when map_mem finished.
Steve Capper [Tue, 30 Apr 2013 10:00:33 +0000 (11:00 +0100)]
ARM64: mm: Restore memblock limit when map_mem finished.

In paging_init the memblock limit is set to restrict any addresses
returned by early_alloc to fit within the initial direct kernel
mapping in swapper_pg_dir. This allows map_mem to allocate puds,
pmds and ptes from the initial direct kernel mapping.

The limit stays low after paging_init() though, meaning any
bootmem allocations will be from a restricted subset of memory.
Gigabyte huge pages, for instance, are normally allocated from
bootmem as their order (18) is too large for the default buddy
allocator (MAX_ORDER = 11).

This patch restores the memblock limit when map_mem has finished,
allowing gigabyte huge pages (and other objects) to be allocated
from all of bootmem.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
10 years agomm: thp: Correct the HPAGE_PMD_ORDER check.
Steve Capper [Tue, 7 May 2013 13:46:03 +0000 (14:46 +0100)]
mm: thp: Correct the HPAGE_PMD_ORDER check.

All Transparent Huge Pages are allocated by the buddy allocator.

A compile time check is in place that fails when the order of a
transparent huge page is too large to be allocated by the buddy
allocator. Unfortunately that compile time check passes when:
HPAGE_PMD_ORDER == MAX_ORDER
( which is incorrect as the buddy allocator can only allocate
memory of order strictly less than MAX_ORDER. )

This patch updates the compile time check to fail in the above
case.
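
The fix tightens the comparison so that equality also fails; a sketch,
assuming the check takes the preprocessor-guard form:

  #if HPAGE_PMD_ORDER >= MAX_ORDER
  #error "hugepages can't be allocated by the buddy allocator"
  #endif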

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
10 years agox86: mm: Remove general hugetlb code from x86.
Steve Capper [Tue, 30 Apr 2013 07:03:42 +0000 (08:03 +0100)]
x86: mm: Remove general hugetlb code from x86.

huge_pte_alloc, huge_pte_offset and follow_huge_p[mu]d have
already been copied over to mm.

This patch removes the x86 copies of these functions and activates
the general ones by enabling:
CONFIG_ARCH_WANT_GENERAL_HUGETLB

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
10 years agomm: hugetlb: Copy general hugetlb code from x86 to mm.
Steve Capper [Tue, 30 Apr 2013 07:02:03 +0000 (08:02 +0100)]
mm: hugetlb: Copy general hugetlb code from x86 to mm.

The huge_pte_alloc, huge_pte_offset and follow_huge_p[mu]d
functions in x86/mm/hugetlbpage.c do not rely on any architecture
specific knowledge other than the fact that pmds and puds can be
treated as huge ptes.

To allow other architectures to use this code (and reduce the need
for code duplication), this patch copies these functions into mm,
replaces the use of pud_large with pud_huge and provides a config
flag to activate them:
CONFIG_ARCH_WANT_GENERAL_HUGETLB

If CONFIG_ARCH_WANT_HUGE_PMD_SHARE is also active then the
huge_pmd_share code will be called by huge_pte_alloc (otherwise we
call pmd_alloc and skip the sharing code).

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
10 years agox86: mm: Remove x86 version of huge_pmd_share.
Steve Capper [Mon, 29 Apr 2013 13:29:48 +0000 (14:29 +0100)]
x86: mm: Remove x86 version of huge_pmd_share.

The huge_pmd_share code has been copied over to mm/hugetlb.c to
make it accessible to other architectures.

Remove the x86 copy of the huge_pmd_share code and enable the
ARCH_WANT_HUGE_PMD_SHARE config flag. That way we reference the
general one.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
10 years agomm: hugetlb: Copy huge_pmd_share from x86 to mm.
Steve Capper [Tue, 23 Apr 2013 11:35:02 +0000 (12:35 +0100)]
mm: hugetlb: Copy huge_pmd_share from x86 to mm.

Under x86, multiple puds can be made to reference the same bank of
huge pmds provided that they represent a full PUD_SIZE of shared
huge memory that is aligned to a PUD_SIZE boundary.

The code to share pmds does not require any architecture specific
knowledge other than the fact that pmds can be indexed, thus can
be beneficial to some other architectures.

This patch copies the huge pmd sharing (and unsharing) logic from
x86/ to mm/ and introduces a new config option to activate it:
CONFIG_ARCH_WANTS_HUGE_PMD_SHARE

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
10 years agoMerge remote-tracking branch 'lsk/v3.10/topic/arm64-lts' into lsk-v3.10-arm64-misc
Mark Brown [Thu, 15 May 2014 10:42:57 +0000 (11:42 +0100)]
Merge remote-tracking branch 'lsk/v3.10/topic/arm64-lts' into lsk-v3.10-arm64-misc

10 years agoarm64: Remove __flush_dcache_page()
Catalin Marinas [Wed, 1 May 2013 15:38:23 +0000 (16:38 +0100)]
arm64: Remove __flush_dcache_page()

This function is only used in __sync_icache_dcache(), so remove it and
call __flush_dcache_area() directly. The flush_icache_user_range()
function is not used in the arm64 kernel.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit ebd88367de80f9509bd30a09342d0a19c925b23e)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Make DMA coherent and strongly ordered mappings not executable
Catalin Marinas [Wed, 12 Mar 2014 16:07:06 +0000 (16:07 +0000)]
arm64: Make DMA coherent and strongly ordered mappings not executable

commit de2db7432917a82b62d55bb59635586eeca6d1bd upstream.

pgprot_{dmacoherent,writecombine,noncached} don't need to generate
executable mappings with side-effects like __sync_icache_dcache() being
called when the mapping is in user space.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Tested-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 73e8697b71fa956642403304b96442fd4b57ce97)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Do not synchronise I and D caches for special ptes
Catalin Marinas [Wed, 12 Mar 2014 16:28:09 +0000 (16:28 +0000)]
arm64: Do not synchronise I and D caches for special ptes

commit 71fdb6bf61bf0692f004f9daf5650392c0cfe300 upstream.

Special pte mappings are not intended to be executable and do not even
have an associated struct page. This patch ensures that we do not call
__sync_icache_dcache() on such ptes.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Steve Capper <Steve.Capper@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Tested-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit b1f2bfc9d8bb42b4e5a93470e8a0974cd4c04697)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoARM64: unwind: Fix PC calculation
Olof Johansson [Fri, 14 Feb 2014 19:35:15 +0000 (19:35 +0000)]
ARM64: unwind: Fix PC calculation

commit e306dfd06fcb44d21c80acb8e5a88d55f3d1cf63 upstream.

The frame PC value in the unwind code used to just take the saved LR
value and use that.  That's incorrect as a stack trace, since it shows
the return path stack, not the call path stack.

In particular, it shows faulty information in case the bl is done as
the very last instruction of one label, since the return point will be
in the next label. That can easily be seen with tail calls to panic(),
which is marked __noreturn and thus doesn't have anything useful after it.

Easiest here is to just correct the unwind code and do a -4, to get the
actual call site for the backtrace instead of the return site.
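
In the unwinder the change amounts to one line where the saved LR is
converted into a PC (a sketch of the 3.10-era unwind_frame()):

  frame->sp = fp + 0x10;
  frame->fp = *(unsigned long *)(fp);
  /*
   * -4: report the bl call site rather than the return address, so
   * tail calls to __noreturn functions unwind to the right label.
   */
  frame->pc = *(unsigned long *)(fp + 8) - 4;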

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit c56e0dc1b70f1537659acffc6ac8d8d615be2dae)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: add DSB after icache flush in __flush_icache_all()
Vinayak Kale [Wed, 5 Feb 2014 09:34:36 +0000 (09:34 +0000)]
arm64: add DSB after icache flush in __flush_icache_all()

commit 5044bad43ee573d0b6d90e3ccb7a40c2c7d25eb4 upstream.

Add DSB after icache flush to complete the cache maintenance operation.
The function __flush_icache_all() is used only for user space mappings
and an ISB is not required because of an exception return before executing
user instructions. An exception return would behave like an ISB.

Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 02599bad3774a5147eb8d98df7c9362fdc1a50c6)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: vdso: fix coarse clock handling
Nathan Lynch [Wed, 5 Feb 2014 05:53:04 +0000 (05:53 +0000)]
arm64: vdso: fix coarse clock handling

commit 069b918623e1510e58dacf178905a72c3baa3ae4 upstream.

When __kernel_clock_gettime is called with a CLOCK_MONOTONIC_COARSE or
CLOCK_REALTIME_COARSE clock id, it returns incorrectly to whatever the
caller has placed in x2 ("ret x2" to return from the fast path).  Fix
this by saving x30/LR to x2 only in code that will call
__do_get_tspec, restoring x30 afterward, and using a plain "ret" to
return from the routine.

Also: while the resulting tv_nsec value for CLOCK_REALTIME and
CLOCK_MONOTONIC must be computed using intermediate values that are
left-shifted by cs_shift (x12, set by __do_get_tspec), the results for
coarse clocks should be calculated using unshifted values
(xtime_coarse_nsec is in units of actual nanoseconds).  The current
code shifts intermediate values by x12 unconditionally, but x12 is
uninitialized when servicing a coarse clock.  Fix this by setting x12
to 0 once we know we are dealing with a coarse clock id.

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit fb569d15d867a06e89b1be8278404b6fbf6b5bde)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Invalidate the TLB when replacing pmd entries during boot
Catalin Marinas [Tue, 4 Feb 2014 16:01:31 +0000 (16:01 +0000)]
arm64: Invalidate the TLB when replacing pmd entries during boot

commit a55f9929a9b257f84b6cc7b2397379cabd744a22 upstream.

With the 64K page size configuration, __create_page_tables in head.S
maps enough memory to get started but using 64K pages rather than 512M
sections with a single pgd/pud/pmd entry pointing to a pte table.
create_mapping() may override the pgd/pud/pmd table entry with a block
(section) one if the RAM size is more than 512MB and aligned correctly.
For the end of this block to be accessible, the old TLB entry must be
invalidated.

Reported-by: Mark Salter <msalter@redhat.com>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit f35f27e775d6f8aa265fb698975c9c95f2757ef4)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: vdso: prevent ld from aligning PT_LOAD segments to 64k
Will Deacon [Tue, 4 Feb 2014 14:41:26 +0000 (14:41 +0000)]
arm64: vdso: prevent ld from aligning PT_LOAD segments to 64k

commit 40507403485fcb56b83d6ddfc954e9b08305054c upstream.

Whilst the text segment for our VDSO is marked as PT_LOAD in the ELF
headers, it is mapped by the kernel and not actually subject to
demand-paging. ld doesn't realise this, and emits a p_align field of 64k
(the maximum supported page size), which conflicts with the load address
picked by the kernel on 4k systems, which will be 4k aligned. This
causes GDB to fail with "Failed to read a valid object file image from
memory" when attempting to load the VDSO.

This patch passes the -n option to ld, which prevents it from aligning
PT_LOAD segments to the maximum page size.

Reported-by: Kyle McMartin <kyle@redhat.com>
Acked-by: Kyle McMartin <kyle@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 6737eaebff6297e5b6aeb458a4b4ad4b61df8f57)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: vdso: update wtm fields for CLOCK_MONOTONIC_COARSE
Nathan Lynch [Mon, 3 Feb 2014 19:48:52 +0000 (19:48 +0000)]
arm64: vdso: update wtm fields for CLOCK_MONOTONIC_COARSE

commit d4022a335271a48cce49df35d825897914fbffe3 upstream.

Update wall-to-monotonic fields in the VDSO data page
unconditionally.  These are used to service CLOCK_MONOTONIC_COARSE,
which is not guarded by use_syscall.
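
The corresponding update_vsyscall() change, sketched (field names
assumed from arch/arm64/kernel/vdso.c):

    /* CLOCK_MONOTONIC_COARSE reads these fields even when the fast
     * path is disabled, so update them unconditionally */
    vdso_data->wtm_clock_sec  = tk->wall_to_monotonic.tv_sec;
    vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec;

    if (!use_syscall) {
            /* fields consumed only by the vdso fast path */
    }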

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit b666f382900c74f23c78556777041c2ceb7c2b24)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Use Normal NonCacheable memory for writecombine
Catalin Marinas [Fri, 29 Nov 2013 10:56:14 +0000 (10:56 +0000)]
arm64: Use Normal NonCacheable memory for writecombine

commit 4f00130b70e5eee813cc7bc298e0f3fdf79673cc upstream.

This provides better performance compared to Device GRE and also allows
unaligned accesses. Such memory is intended to be used with standard RAM
(e.g. framebuffers) and not I/O.
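
In page table attribute terms the change roughly amounts to (a sketch;
the exact macro lives in arch/arm64/include/asm/pgtable.h):

    /* write-combine: Normal Non-Cacheable instead of Device GRE */
    #define pgprot_writecombine(prot) \
            __pgprot_modify(prot, PTE_ATTRINDX_MASK, \
                            PTE_ATTRINDX(MT_NORMAL_NC))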

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 3655a197b1ea3ce989d34868768c5f4b6205061c)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Do not flush the D-cache for anonymous pages
Catalin Marinas [Wed, 1 May 2013 15:34:22 +0000 (16:34 +0100)]
arm64: Do not flush the D-cache for anonymous pages

commit 7249b79f6b4cc3c2aa9138dca52e535a4c789107 upstream.

The D-cache on AArch64 is VIPT non-aliasing, so there is no need to
flush it for anonymous pages.
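
The check involved is essentially (a sketch, assumed shape):

    /* anonymous pages have no address_space mapping; with a VIPT
     * non-aliasing D-cache there is nothing to flush for them */
    if (!page_mapping(page))
            return;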

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit fc54900e08d840fcfec9e5d2fba2c6f233aa49b9)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Avoid cache flushing in flush_dcache_page()
Catalin Marinas [Wed, 1 May 2013 11:23:05 +0000 (12:23 +0100)]
arm64: Avoid cache flushing in flush_dcache_page()

commit b5b6c9e9149d8a7c3f1d7b9d0c046c6184e1dd17 upstream.

The flush_dcache_page() function is called when the kernel has modified
a page cache page. Since the D-cache on AArch64 does not have aliases,
this function can simply mark the page as dirty for later flushing via
set_pte_at()/__sync_icache_dcache() if the page is executable (to
ensure I-D cache coherency).
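
The simplified function is essentially (a sketch of the resulting
code):

    void flush_dcache_page(struct page *page)
    {
            /* mark the page dirty; the actual flush, if needed, is
             * deferred to __sync_icache_dcache() via set_pte_at() */
            if (test_bit(PG_dcache_clean, &page->flags))
                    clear_bit(PG_dcache_clean, &page->flags);
    }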

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 1e616427f20943c9966296dfff9e7a2b825846aa)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoclocksource: arch_timer: use virtual counters
Mark Rutland [Wed, 30 Jan 2013 17:51:26 +0000 (17:51 +0000)]
clocksource: arch_timer: use virtual counters

commit 0d651e4e65e96989f72236bf83bd4c6e55eb6ce4 upstream.

Switching between reading the virtual or physical counters is
problematic, as some core code wants a view of time before we're fully
set up. Using a function pointer and switching the source after the
first read can make time appear to go backwards, and having a check in
the read function is an unfortunate block on what we want to be a fast
path.

Instead, this patch makes us always use the virtual counters. If we're a
guest, or don't have hyp mode, we'll use the virtual timers, and as such
don't care about CNTVOFF as long as it doesn't change in such a way as
to make time appear to travel backwards. As the guest will use the
virtual timers, a (potential) KVM host must use the physical timers
(which can wake up the host even if they fire while a guest is
executing), and hence a host must have CNTVOFF set to zero so as to have
a consistent view of time between the physical timers and virtual
counters.
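
On arm64 the counter accessor then always reads the virtual counter;
roughly (a sketch):

    static inline u64 arch_counter_get_cntvct(void)
    {
            u64 cval;

            isb();          /* order the read against earlier code */
            asm volatile("mrs %0, cntvct_el0" : "=r" (cval));
            return cval;
    }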

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 714c21cb90951905b269870087a99c37f3a7af0c)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Remove unused cpu_name ascii in arch/arm64/mm/proc.S
Catalin Marinas [Mon, 2 Sep 2013 15:33:54 +0000 (16:33 +0100)]
arm64: Remove unused cpu_name ascii in arch/arm64/mm/proc.S

commit f3a1d7d53dccf51959aec16b574617cc6bfeca09 upstream.

This string has been moved to arch/arm64/kernel/cputable.c.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit ae5092fadd50cfd0d29e762771d30cbf7f2c2fec)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: dts: Reserve the memory used for secondary CPU release address
Catalin Marinas [Thu, 14 Nov 2013 15:15:37 +0000 (15:15 +0000)]
arm64: dts: Reserve the memory used for secondary CPU release address

commit df503ba7f653c590b475ab80bde788edf5af70d5 upstream.

With the spin-table SMP booting method, secondary CPUs poll a location
passed in the DT. The foundation-v8.dts file doesn't have this memory
reserved and there is a risk of Linux using it before secondary CPUs are
started.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 29f37087bb51dfe4ff61daad60bd93f7bcfa3d72)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: check for number of arguments in syscall_get/set_arguments()
AKASHI Takahiro [Thu, 3 Oct 2013 05:47:44 +0000 (06:47 +0100)]
arm64: check for number of arguments in syscall_get/set_arguments()

commit 7b22c03536a539142f931815528d55df455ffe2d upstream.

In ftrace_syscall_enter(),
    syscall_get_arguments(..., 0, n, ...)
        if (i == 0) { <handle orig_x0> ...; n--;}
        memcpy(..., n * sizeof(args[0]));
If the number of arguments (n) is zero and the argument index (i) is
also zero in syscall_get_arguments(), none of the arguments should be
copied by memcpy(). Otherwise 'n--' underflows to a huge positive
number (n is unsigned) and an unexpected amount of data is copied.
Tracing system calls which take no arguments, say sync(void), can hit
this case and eventually corrupt the system.
This patch fixes the issue both in syscall_get_arguments() and
syscall_set_arguments().
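
The added guard is essentially (a sketch, assumed shape of the fix):

    /* with n == 0 there is nothing to copy, and the i == 0 special
     * case below would underflow n, which is unsigned */
    if (n == 0)
            return;

    if (i == 0) {
            args[0] = regs->orig_x0;
            args++;
            i++;
            n--;
    }
    memcpy(args, &regs->regs[i], n * sizeof(args[0]));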

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit e2956ef5b5ddb3ede70962afe9297d8340787fa6)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: fix possible invalid FPSIMD initialization state
Jiang Liu [Fri, 27 Sep 2013 08:04:41 +0000 (09:04 +0100)]
arm64: fix possible invalid FPSIMD initialization state

commit 6db83cea1c975b9a102e17def7d2795814e1ae2b upstream.

If a context switch happens while fpsimd_flush_thread() is executing,
stale values in the FPSIMD registers will be saved into the current
thread's fpsimd_state by fpsimd_thread_switch(). That may leave the new
process with an invalid initialization state, so disable preemption
while executing fpsimd_flush_thread().
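
The fixed function, roughly (a sketch):

    void fpsimd_flush_thread(void)
    {
            /* a context switch here would save stale FPSIMD registers
             * into the state we are about to clear and reload */
            preempt_disable();
            memset(&current->thread.fpsimd_state, 0,
                   sizeof(struct fpsimd_state));
            fpsimd_load_state(&current->thread.fpsimd_state);
            preempt_enable();
    }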

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 2bf5861acf602fe6201ac1f82955ac7283e56b6f)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Change kernel stack size to 16K
Feng Kan [Tue, 23 Jul 2013 17:52:31 +0000 (18:52 +0100)]
arm64: Change kernel stack size to 16K

commit 845ad05ec31e0f3872a321e10dbeaf872022632c upstream.

Written by Catalin Marinas, tested by APM on the Storm platform. This
change is needed because of failures encountered when running the
SpecWeb benchmark.
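
The change itself is a pair of constants in asm/thread_info.h; roughly
(a sketch, neighbouring defines assumed):

    #define THREAD_SIZE             16384   /* was 8192 */
    #define THREAD_START_SP         (THREAD_SIZE - 16)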

Signed-off-by: Feng Kan <fkan@apm.com>
Acked-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 79f783f05539479676406d3d42c3d86bd203f083)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: virt: ensure visibility of __boot_cpu_mode
Mark Rutland [Tue, 9 Jul 2013 14:16:06 +0000 (15:16 +0100)]
arm64: virt: ensure visibility of __boot_cpu_mode

commit 82b2f495fba338d1e3098dde1df54944a9c19751 upstream.

Secondary CPUs write to __boot_cpu_mode with caches disabled, and thus a
cached value of __boot_cpu_mode may be incoherent with that in memory.
This could lead to a failure to detect mismatched boot modes.

This patch adds flushing to ensure that writes by secondaries to
__boot_cpu_mode are made visible before we test against it.
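
The added helper, roughly (a sketch):

    static inline void sync_boot_mode(void)
    {
            /* secondaries write __boot_cpu_mode with caches disabled,
             * so flush the line before comparing boot modes */
            __flush_dcache_area(__boot_cpu_mode,
                                sizeof(__boot_cpu_mode));
    }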

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 5fb08df3dd1f7b8e83936808b042725a8b067562)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: Only enable local interrupts after the CPU is marked online
Catalin Marinas [Fri, 19 Jul 2013 14:08:15 +0000 (15:08 +0100)]
arm64: Only enable local interrupts after the CPU is marked online

commit 53ae3acd4390ffeecb3a11dbd5be347b5a3d98f2 upstream.

There is a slight chance that (timer) interrupts are triggered before a
secondary CPU has been marked online, with implications for softirq
thread affinity.
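
The reordering in secondary_start_kernel() amounts to (a sketch):

    /* mark the CPU online before it can take any interrupts, so
     * softirq thread affinity sees a consistent online mask */
    set_cpu_online(cpu, true);
    complete(&cpu_running);

    local_irq_enable();
    local_fiq_enable();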

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Kirill Tkhai <tkhai@yandex.ru>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 81ca15a3e04961b54d4a0b395e681bffb6cfbc68)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: spinlock: retry trylock operation if strex fails on free lock
Catalin Marinas [Fri, 31 May 2013 15:30:58 +0000 (16:30 +0100)]
arm64: spinlock: retry trylock operation if strex fails on free lock

commit 4ecf7ccb1973fd826456b6ab1e6dfafe9023c753 upstream.

An exclusive store instruction may fail for reasons other than lock
contention (e.g. a cache eviction during the critical section) so, in
line with other architectures using similar exclusive instructions
(alpha, mips, powerpc), retry the trylock operation if the lock appears
to be free but the strex reported failure.
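
In C-like terms the trylock becomes (the real code is an LDAXR/STXR
loop; the exclusive-access helpers here are illustrative):

    do {
            if (load_exclusive(&lock->lock) != 0)
                    return 0;       /* genuinely held: fail */
            /* the exclusive store can fail for reasons other than
             * contention (e.g. cache eviction): retry in that case */
    } while (store_exclusive(&lock->lock, 1) != 0);
    return 1;                       /* lock acquired */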

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Tony Thompson <anthony.thompson@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 42b5bb47a62b7b2d8cd0b9af58f914c5e83ef76e)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: ptrace: avoid using HW_BREAKPOINT_EMPTY for disabled events
Will Deacon [Tue, 17 Dec 2013 17:09:08 +0000 (17:09 +0000)]
arm64: ptrace: avoid using HW_BREAKPOINT_EMPTY for disabled events

commit cdc27c27843248ae7eb0df5fc261dd004eaa5670 upstream.

Commit 8f34a1da35ae ("arm64: ptrace: use HW_BREAKPOINT_EMPTY type for
disabled breakpoints") fixed an issue with GDB trying to zero breakpoint
control registers. The problem there is that the arch hw_breakpoint code
will attempt to create a (disabled) execute breakpoint of length 0.

This fails validation and reports an unexpected failure to GDB. To avoid
this, we treated disabled breakpoints as HW_BREAKPOINT_EMPTY, but that
seems to have broken with recent kernels, causing watchpoints to be
treated as TYPE_INST in the core code and returning ENOSPC for any
further breakpoints.

This patch fixes the problem by prioritising the `enable' field of the
breakpoint: if it is cleared, we simply update the perf_event_attr to
indicate that the thing is disabled and don't bother changing either the
type or the length. This reinforces the behaviour that the breakpoint
control register is essentially read-only apart from the enable bit
when disabling a breakpoint.
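
The prioritisation looks roughly like this (a sketch, assumed shape):

    if (!ctrl.enabled) {
            /* leave bp_type/bp_len untouched: only the enable bit
             * of the control register is effectively writable */
            attr.disabled = 1;
    } else {
            err = arch_bp_generic_fields(ctrl, &attr.bp_len,
                                         &attr.bp_type);
            if (err)
                    return err;
            attr.disabled = 0;
    }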

Reported-by: Aaron Liu <liucy214@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 11b81802921618b02122855db021a63df7d9fd49)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: perf: fix ARMv8 EVTYPE_MASK to include NSH bit
Will Deacon [Tue, 20 Aug 2013 10:47:42 +0000 (11:47 +0100)]
arm64: perf: fix ARMv8 EVTYPE_MASK to include NSH bit

commit 178cd9ce377232518ec17ff2ecab2e80fa60784c upstream.

This is a port of f2fe09b055e2 ("ARM: 7663/1: perf: fix ARMv7 EVTYPE_MASK
to include NSH bit") to arm64, which fixes the broken evtype mask to
include the NSH bit, allowing profiling at EL2.
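
The mask change is a one-liner; the value below is assumed by analogy
with the ARMv7 fix:

    /* keep bit 27 (NSH) so EL2 profiling can be requested */
    #define ARMV8_EVTYPE_MASK       0xc80000ff      /* was 0xc00000ff */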

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 0fe9a0dc92a64c088e76fcd3d35b2ba36b4d7f3c)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: perf: fix group validation when using enable_on_exec
Will Deacon [Tue, 20 Aug 2013 10:47:41 +0000 (11:47 +0100)]
arm64: perf: fix group validation when using enable_on_exec

commit 8455e6ec70f33b0e8c3ffd47067e00481f09f454 upstream.

This is a port of cb2d8b342aa0 ("ARM: 7698/1: perf: fix group validation
when using enable_on_exec") to arm64, which fixes the event validation
checking so that events in the OFF state are still considered when
enable_on_exec is true.
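
The validation tweak, sketched after the ARM commit it ports:

    /* an OFF event that will be enabled on exec still needs a
     * hardware counter, so only skip permanently-off events */
    if (event->state == PERF_EVENT_STATE_OFF &&
        !event->attr.enable_on_exec)
            return 1;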

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 1f96d83b38937294584ede9473c2b2961528f287)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: perf: fix event validation for software group leaders
Will Deacon [Tue, 20 Aug 2013 10:47:40 +0000 (11:47 +0100)]
arm64: perf: fix event validation for software group leaders

commit ee7538a008a45050c8f706d38b600f55953169f9 upstream.

This is a port of c95eb3184ea1 ("ARM: 7809/1: perf: fix event validation
for software group leaders") to arm64, which fixes a panic in the arm64
perf backend found as a result of Vince's fuzzing tool.
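
The corresponding check, sketched after the ARM commit it ports:

    /* software events always schedule: never validate them against
     * hardware counters, even when they lead a hardware group */
    if (is_software_event(event))
            return 1;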

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 6c6f6c7166e9ee45e545eab850a2862a92c135b2)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: perf: fix array out of bounds access in armpmu_map_hw_event()
Will Deacon [Tue, 20 Aug 2013 10:47:39 +0000 (11:47 +0100)]
arm64: perf: fix array out of bounds access in armpmu_map_hw_event()

commit 868f6fea8fa63f09acbfa93256d0d2abdcabff79 upstream.

This is a port of d9f966357b14 ("ARM: 7810/1: perf: Fix array out of
bounds access in armpmu_map_hw_event()") to arm64, which fixes an oops
in the arm64 perf backend found as a result of Vince's fuzzing tool.
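
The added bounds check is essentially (a sketch of the resulting
function):

    static int
    armpmu_map_hw_event(const unsigned (*event_map)[PERF_COUNT_HW_MAX],
                        u64 config)
    {
            int mapping;

            /* config comes from userspace: validate it before using
             * it as an array index */
            if (config >= PERF_COUNT_HW_MAX)
                    return -EINVAL;

            mapping = (*event_map)[config];
            return mapping == HW_OP_UNSUPPORTED ? -ENOENT : mapping;
    }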

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 975a3cd4cc0dff06a635cb16b8fb25fd3ae88234)
Signed-off-by: Mark Brown <broonie@linaro.org>
10 years agoarm64: mm: don't treat user cache maintenance faults as writes
Will Deacon [Fri, 19 Jul 2013 14:37:12 +0000 (15:37 +0100)]
arm64: mm: don't treat user cache maintenance faults as writes

commit db6f41063cbdb58b14846e600e6bc3f4e4c2e888 upstream.

On arm64, cache maintenance faults appear as data aborts with the CM
bit set in the ESR. The WnR bit, usually used to distinguish between
faulting loads and stores, always reads as 1 and (slightly confusingly)
the instructions are treated as reads by the architecture.

This patch fixes our fault handling code to treat cache maintenance
faults in the same way as loads.
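
The fault-handler change, roughly (a sketch; CM is ESR bit 8 for data
aborts):

    #define ESR_CM          (1 << 8)        /* cache maintenance */

    /* WnR reads as 1 for CM operations, but architecturally they
     * are reads: don't demand a writable mapping for them */
    if ((esr & ESR_WRITE) && !(esr & ESR_CM)) {
            vm_flags = VM_WRITE;
            mm_flags |= FAULT_FLAG_WRITE;
    }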

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 88c0a794e5d9bcc29926e636cd1d6eb5c9dcb235)
Signed-off-by: Mark Brown <broonie@linaro.org>