arm64: mm: remove pointless PAGE_MASKing
author	Mark Rutland <mark.rutland@arm.com>
Wed, 9 Dec 2015 12:44:36 +0000 (12:44 +0000)
committer	Alex Shi <alex.shi@linaro.org>
Thu, 20 Oct 2016 08:23:46 +0000 (16:23 +0800)
As pgd_offset{,_k} shift the input address right by PGDIR_SHIFT, the
sub-page bits will always be shifted out. There is no need to apply
PAGE_MASK to the address beforehand.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit e2c30ee320eb96304896c7ab84499e5bc5e5fb6e)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 653735a8c58a86248e37593648de4a62f301893c..41b62ef828475bcbed5b95fa3853ab1a6032c087 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -280,7 +280,7 @@ static void __init create_mapping(phys_addr_t phys, unsigned long virt,
                        &phys, virt);
                return;
        }
-       __create_mapping(&init_mm, pgd_offset_k(virt & PAGE_MASK), phys, virt,
+       __create_mapping(&init_mm, pgd_offset_k(virt), phys, virt,
                         size, prot, early_alloc);
 }
 
@@ -301,7 +301,7 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
                return;
        }
 
-       return __create_mapping(&init_mm, pgd_offset_k(virt & PAGE_MASK),
+       return __create_mapping(&init_mm, pgd_offset_k(virt),
                                phys, virt, size, prot, late_alloc);
 }