arm64: ensure completion of TLB invalidation
author Mark Rutland <mark.rutland@arm.com>
Mon, 2 Dec 2013 16:11:00 +0000 (16:11 +0000)
committer Mark Brown <broonie@linaro.org>
Thu, 15 May 2014 19:00:56 +0000 (20:00 +0100)
Currently there is no dsb between the tlbi in __cpu_setup and the write
to SCTLR_EL1, which enables the MMU in __turn_mmu_on. This means that
the TLB invalidation is not guaranteed to have completed by the time
address translation is enabled, leading to a number of possible issues,
including incorrect translations and TLB conflict faults.

This patch moves the tlbi in __cpu_setup above an existing dsb used to
synchronise I-cache invalidation, ensuring that the TLBs have been
invalidated by the time the MMU is enabled.
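
For reference, the ordering the architecture requires looks roughly like
the sketch below. This is illustrative only, not the kernel's actual
__cpu_setup/__turn_mmu_on code; the register choice and the SCTLR_EL1
read-modify-write sequence are assumptions made for the example:

        tlbi    vmalle1is               // invalidate I + D TLBs (Inner Shareable)
        dsb     sy                      // wait for the TLB invalidation to complete
        mrs     x0, sctlr_el1           // read the current system control register
        orr     x0, x0, #1              // set SCTLR_EL1.M (bit 0) to enable the MMU
        msr     sctlr_el1, x0           // enable address translation
        isb                             // ensure subsequent instructions see the new state

Without the dsb ordering the tlbi before the msr, the MMU could be enabled
while stale TLB entries are still being removed, which is exactly the
window this patch closes.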

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 3cea71bc6b470372ae407881b87128aadf0afec0)
Signed-off-by: Mark Brown <broonie@linaro.org>
arch/arm64/mm/proc.S

index b1b31bbc967b43cceaabaee71d850a6619d2a158..2fbd93e9be15ccca2a8988b62a8c835b351255d3 100644
@@ -111,12 +111,12 @@ ENTRY(__cpu_setup)
        bl      __flush_dcache_all
        mov     lr, x28
        ic      iallu                           // I+BTB cache invalidate
+       tlbi    vmalle1is                       // invalidate I + D TLBs
        dsb     sy
 
        mov     x0, #3 << 20
        msr     cpacr_el1, x0                   // Enable FP/ASIMD
        msr     mdscr_el1, xzr                  // Reset mdscr_el1
-       tlbi    vmalle1is                       // invalidate I + D TLBs
        /*
         * Memory region attributes for LPAE:
         *