arm64: mm: rewrite ASID allocator and MM context-switching code
author	Will Deacon <will.deacon@arm.com>
	Tue, 6 Oct 2015 17:46:24 +0000 (18:46 +0100)
committer	Catalin Marinas <catalin.marinas@arm.com>
	Wed, 7 Oct 2015 10:55:41 +0000 (11:55 +0100)
commit	5aec715d7d3122f77cabaa7578d9d25a0c1ed20e
tree	8d75ae3f1f72bfa8ee77fdea406b6c9dcfaf4e60
parent	8e63d38876691756f9bc6930850f1fb77809be1b
arm64: mm: rewrite ASID allocator and MM context-switching code

Our current switch_mm implementation suffers from a number of problems:

  (1) The ASID allocator relies on IPIs to synchronise the CPUs on a
      rollover event

  (2) Because of (1), we cannot allocate ASIDs with interrupts disabled
      and therefore make use of a TIF_SWITCH_MM flag to postpone the
      actual switch to finish_arch_post_lock_switch

  (3) We run context switch with a reserved (invalid) TTBR0 value, even
      though the ASID and pgd are updated atomically

  (4) We take a global spinlock (cpu_asid_lock) during context-switch

  (5) We use h/w broadcast TLB operations when they are not required
      (e.g. in flush_context)

This patch addresses these problems by rewriting the ASID algorithm to
match the bitmap-based arch/arm/ implementation more closely. This in
turn allows us to remove many of the complications surrounding switch_mm,
including the ugly thread flag.
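
To illustrate the scheme, here is a minimal, single-threaded user-space
sketch of a bitmap-based, generation-tagged allocator in the arch/arm/
style. It is not the kernel code (see arch/arm64/mm/context.c for that):
the 8-bit ASID size and the identifiers used here (NUM_ASIDS, asid_map,
check_and_switch_context, flush_all_tlbs) are illustrative assumptions,
and the real allocator additionally preserves the ASIDs of
currently-running tasks across a rollover and confines its locking and
TLB invalidation to the slow path.

/*
 * Simplified sketch, not kernel code: each mm caches a 64-bit cookie
 * (generation | ASID).  ASIDs are recycled only on rollover, when the
 * generation counter is bumped and the TLBs are flushed once.
 */
#include <stdint.h>
#include <string.h>

#define ASID_BITS      8                      /* assume an 8-bit h/w ASID */
#define NUM_ASIDS      (1u << ASID_BITS)
#define ASID_MASK      ((uint64_t)NUM_ASIDS - 1)
#define GENERATION_INC ((uint64_t)NUM_ASIDS)  /* generation lives above the ASID bits */

static uint64_t generation = GENERATION_INC;  /* cookie 0 == "never allocated" */
static uint8_t  asid_map[NUM_ASIDS];          /* 1 = ASID taken this generation */

static void flush_all_tlbs(void)
{
	/* Placeholder: invalidate all TLB entries after a rollover. */
}

/* Called on context switch with the mm's cached cookie; returns a live one. */
static uint64_t check_and_switch_context(uint64_t ctx)
{
	/* Fast path: cookie is from the live generation, reuse it as-is. */
	if (ctx && (ctx & ~ASID_MASK) == generation)
		return ctx;

	/* Slow path: grab a free ASID; ASID 0 stays reserved. */
	for (uint32_t asid = 1; asid < NUM_ASIDS; asid++) {
		if (!asid_map[asid]) {
			asid_map[asid] = 1;
			return generation | asid;
		}
	}

	/* Rollover: bump the generation, recycle every ASID, flush once. */
	generation += GENERATION_INC;
	memset(asid_map, 0, sizeof(asid_map));
	asid_map[1] = 1;
	flush_all_tlbs();
	return generation | 1;
}

The property the rewrite relies on is the fast path: when the cached
generation matches the live one, the switch needs no lock, no bitmap
access and no IPI, so it can run with interrupts disabled. Confining
synchronisation to the rollover path is what allows (2) and (4) above
to be fixed.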

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arch/arm64/include/asm/mmu.h
arch/arm64/include/asm/mmu_context.h
arch/arm64/include/asm/thread_info.h
arch/arm64/kernel/asm-offsets.c
arch/arm64/kernel/efi.c
arch/arm64/mm/context.c
arch/arm64/mm/proc.S