THP: fix comment about memory barrier
author    Minchan Kim <minchan@kernel.org>
          Mon, 29 Apr 2013 22:08:15 +0000 (15:08 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Mon, 29 Apr 2013 22:54:37 +0000 (15:54 -0700)
Currently the memory barrier in __do_huge_pmd_anonymous_page doesn't
work: lru_cache_add_lru uses a pagevec, so it can easily skip taking
the spinlock.  That breaks the ordering rule the old comment relied
on, and userspace could see inconsistent data.

I was not the first person to point out the problem.  Mel and Peter
pointed it out a few months ago, and Peter further noted that even
spin_lock/unlock can't guarantee the ordering:

  http://marc.info/?t=134333512700004

In particular:

         *A = a;
         LOCK
         UNLOCK
         *B = b;

may occur as:

         LOCK, STORE *B, STORE *A, UNLOCK

Finally, Hugh pointed out that we don't even need a memory barrier
there, because __SetPageUptodate has already provided one since Nick's
commit 0ed361dec369 ("mm: fix PageUptodate data race").

So this patch fixes the comment on the THP path and adds the same
comment to do_anonymous_page, too, because everybody except Hugh was
missing this; it deserves a comment.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/huge_memory.c
mm/memory.c

index e2f7f5aaaafb77c848dcb21fcdc8168cdcf8d860..45eaae03062805899f559399691b42ca9554d829 100644 (file)
@@ -713,6 +713,11 @@ static int __do_huge_pmd_anonymous_page(struct mm_struct *mm,
                return VM_FAULT_OOM;
 
        clear_huge_page(page, haddr, HPAGE_PMD_NR);
+       /*
+        * The memory barrier inside __SetPageUptodate makes sure that
+        * clear_huge_page writes become visible before the set_pmd_at()
+        * write.
+        */
        __SetPageUptodate(page);
 
        spin_lock(&mm->page_table_lock);
@@ -724,12 +729,6 @@ static int __do_huge_pmd_anonymous_page(struct mm_struct *mm,
        } else {
                pmd_t entry;
                entry = mk_huge_pmd(page, vma);
-               /*
-                * The spinlocking to take the lru_lock inside
-                * page_add_new_anon_rmap() acts as a full memory
-                * barrier to be sure clear_huge_page writes become
-                * visible after the set_pmd_at() write.
-                */
                page_add_new_anon_rmap(page, vma, haddr);
                set_pmd_at(mm, haddr, pmd, entry);
                pgtable_trans_huge_deposit(mm, pgtable);
index ba94dec5b25900e47093544e11f8c765bf243f9d..f7a1fba85d140483242345e97acdbdf62eaafda8 100644 (file)
@@ -3244,6 +3244,11 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
        page = alloc_zeroed_user_highpage_movable(vma, address);
        if (!page)
                goto oom;
+       /*
+        * The memory barrier inside __SetPageUptodate makes sure that
+        * preceding stores to the page contents become visible before
+        * the set_pte_at() write.
+        */
        __SetPageUptodate(page);
 
        if (mem_cgroup_newpage_charge(page, mm, GFP_KERNEL))