mm/memory-failure.c: fix wrong num_poisoned_pages in handling memory error on thp
author	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Sat, 23 Feb 2013 00:34:05 +0000 (16:34 -0800)
committer	Linus Torvalds <torvalds@linux-foundation.org>
Sun, 24 Feb 2013 01:50:15 +0000 (17:50 -0800)
num_poisoned_pages counts the number of pages isolated by memory
errors.  But for a thp, only one subpage is isolated because the memory
error handler splits it, so it is wrong to add (1 << compound_trans_order)
pages to the counter.
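To illustrate the over-count (a sketch of the old behavior, not part of
the patch; the numbers assume x86-64 with 4kB base pages and 2MB thp):
for an error on a thp, the removed code did

	nr_pages = 1 << compound_trans_order(hpage);	/* 512 for a 2MB thp */
	atomic_long_add(nr_pages, &num_poisoned_pages);

but memory_failure() then splits the thp and poisons a single 4kB
subpage, so each thp error inflated the counter by 511 pages.  Only
hugetlbfs pages, which are kept intact and handled in hugepage units,
should account the full 1 << compound_order(hpage).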

[akpm@linux-foundation.org: tweak comment]
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/memory-failure.c

index 9cab165fd668e3caeeb2e2a72976432cdb938fff..1a56d63adf9c642eed15e5d9e276060e2f01a933 100644
@@ -1039,7 +1039,17 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
                return 0;
        }
 
-       nr_pages = 1 << compound_trans_order(hpage);
+       /*
+        * Currently errors on hugetlbfs pages are measured in hugepage units,
+        * so nr_pages should be 1 << compound_order.  OTOH when errors are on
+        * transparent hugepages, they are supposed to be split and error
+        * measurement is done in normal page units.  So nr_pages should be one
+        * in this case.
+        */
+       if (PageHuge(p))
+               nr_pages = 1 << compound_order(hpage);
+       else /* normal page or thp */
+               nr_pages = 1;
        atomic_long_add(nr_pages, &num_poisoned_pages);
 
        /*
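A minimal user-space sketch to observe the accounting (hypothetical
test, not part of the patch): it maps a 2MB region, asks for thp
backing, then injects poison into one base page with MADV_HWPOISON.
It assumes CONFIG_MEMORY_FAILURE=y, CAP_SYS_ADMIN, x86-64 page sizes,
and that the fault actually produced a thp.  Compare HardwareCorrupted
in /proc/meminfo, which reports num_poisoned_pages in kB: before this
fix a single thp error bumped it by 2048 kB, afterwards by 4 kB.

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	/* Fallback definitions; these are the upstream ABI values. */
	#ifndef MADV_HUGEPAGE
	#define MADV_HUGEPAGE	14
	#endif
	#ifndef MADV_HWPOISON
	#define MADV_HWPOISON	100
	#endif

	int main(void)
	{
		size_t len = 2UL << 20;	/* one 2MB extent */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		madvise(p, len, MADV_HUGEPAGE);	/* ask for thp backing */
		memset(p, 0, len);		/* fault the region in */
		/* Inject a memory error into a single 4kB subpage. */
		if (madvise(p, 4096, MADV_HWPOISON)) {
			perror("madvise(MADV_HWPOISON)");
			return 1;
		}
		/* Now inspect HardwareCorrupted in /proc/meminfo. */
		return 0;
	}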