[PATCH] fix extra page ref count in follow_hugetlb_page
author	Chen, Kenneth W <kenneth.w.chen@intel.com>
	Fri, 31 Mar 2006 10:29:57 +0000 (02:29 -0800)
committer	Linus Torvalds <torvalds@g5.osdl.org>
	Fri, 31 Mar 2006 20:18:49 +0000 (12:18 -0800)
git-commit: d5d4b0aa4e1430d73050babba999365593bdb9d2
"[PATCH] optimize follow_hugetlb_page" breaks mlock on hugepage areas.

I misinterpreted the "pages" argument and made get_page() unconditional.
It should only take a reference when the "pages" argument is non-NULL.

Credit goes to Adam Litke who spotted the bug.

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Acked-by: Adam Litke <agl@us.ibm.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
mm/hugetlb.c

index ebad6bbb35012570944117a4a495f2f6782e8f12..d87885eb4acc8eee8dab46ff84131ed712b1350d 100644
@@ -697,9 +697,10 @@ int follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
                pfn_offset = (vaddr & ~HPAGE_MASK) >> PAGE_SHIFT;
                page = pte_page(*pte);
 same_page:
-               get_page(page);
-               if (pages)
+               if (pages) {
+                       get_page(page);
                        pages[i] = page + pfn_offset;
+               }
 
                if (vmas)
                        vmas[i] = vma;
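
The contract the fix restores can be sketched in a standalone C model: a reference is taken on the page only when the caller supplied a "pages" array to receive it, so callers such as mlock (which pass NULL) no longer leak a ref per page. The struct and function names here are illustrative stand-ins, not the kernel API.

```c
#include <stddef.h>

/* Minimal stand-in for the kernel's struct page: just a ref count. */
struct page {
	int refcount;
};

/* Stand-in for the kernel's get_page(): take one reference. */
static void get_page(struct page *page)
{
	page->refcount++;
}

/*
 * Hypothetical single-page analogue of the fixed follow_hugetlb_page()
 * loop body: only take a reference when the caller passed a "pages"
 * array to hand the page back through.
 */
static void follow_one_page(struct page *page, struct page **pages, int i)
{
	if (pages) {
		get_page(page);		/* ref is owned by pages[i] */
		pages[i] = page;
	}
	/* With pages == NULL (e.g. an mlock-style walk), no ref is taken,
	 * which is exactly what the pre-fix unconditional get_page() broke. */
}
```

A caller that only needs the walk side effects passes NULL and leaves the ref count untouched; a caller that wants the pages back gets exactly one extra reference per returned page.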