slub: add missed accounting
authorShaohua Li <shaohua.li@intel.com>
Fri, 11 Nov 2011 06:54:14 +0000 (14:54 +0800)
committerPekka Enberg <penberg@kernel.org>
Tue, 13 Dec 2011 20:27:09 +0000 (22:27 +0200)
With the per-cpu partial list, a slab is added to the per-cpu partial list
first and only later moved to the node partial list. The add/remove_partial
calls in the __slab_free() code path are therefore almost never taken (except
with slub debug enabled). However, we forgot to account for add/remove_partial
when moving per-cpu partial pages to the node list, so the statistics for
those events were always 0. Add the corresponding accounting.

This is against the patch "slub: use correct parameter to add a page to
partial list tail"
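
Once the accounting is in place, the counters touched here can be inspected
per cache through sysfs, assuming the kernel was built with CONFIG_SLUB_STATS=y
(without it the stat() calls compile away). A sketch; "kmalloc-64" is only an
example cache name, and the values depend on runtime workload:

```shell
# Requires CONFIG_SLUB_STATS=y; "kmalloc-64" is an example cache name.
# Before this patch, these two counters stayed at 0 on the fast path.
cat /sys/kernel/slab/kmalloc-64/free_add_partial
cat /sys/kernel/slab/kmalloc-64/free_remove_partial
```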

Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
mm/slub.c

index 4056d29e66106d85a5b0eac54baeba9325af21c7..8284a206f48d2da3a712adb726ec4319c91c632f 100644 (file)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1901,11 +1901,14 @@ static void unfreeze_partials(struct kmem_cache *s)
                        }
 
                        if (l != m) {
-                               if (l == M_PARTIAL)
+                               if (l == M_PARTIAL) {
                                        remove_partial(n, page);
-                               else
+                                       stat(s, FREE_REMOVE_PARTIAL);
+                               } else {
                                        add_partial(n, page,
                                                DEACTIVATE_TO_TAIL);
+                                       stat(s, FREE_ADD_PARTIAL);
+                               }
 
                                l = m;
                        }