path: root/mm/slob.c
author	Vladimir Davydov <>	2016-12-12 16:41:32 -0800
committer	Linus Torvalds <>	2016-12-12 18:55:06 -0800
commit	89e364db71fb5e7fc8d93228152abfa67daf35fa (patch)
tree	7e70cba61d27fc6e7c7ebd21ec498b808ba2e132 /mm/slob.c
parent	13583c3d3224508582ec03d881d0b68dd3ee8e10 (diff)
slub: move synchronize_sched out of slab_mutex on shrink
synchronize_sched() is a heavy operation, and calling it for each cache owned by a memory cgroup being destroyed may take quite some time. What is worse, it's currently called under the slab_mutex, stalling all works doing cache creation/destruction.

Actually, there isn't much point in calling synchronize_sched() for each cache - it's enough to call it just once - after setting cpu_partial for all caches and before shrinking them. This way, we can also move it out of the slab_mutex, which we have to hold for iterating over the slab cache list.

Link:
Link:
Signed-off-by: Vladimir Davydov <>
Reported-by: Doug Smythies <>
Acked-by: Joonsoo Kim <>
Cc: Christoph Lameter <>
Cc: David Rientjes <>
Cc: Johannes Weiner <>
Cc: Michal Hocko <>
Cc: Pekka Enberg <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
Diffstat (limited to 'mm/slob.c')
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slob.c b/mm/slob.c
index 5ec158054ffe..eac04d4357ec 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -634,7 +634,7 @@ void __kmem_cache_release(struct kmem_cache *c)
-int __kmem_cache_shrink(struct kmem_cache *d, bool deactivate)
+int __kmem_cache_shrink(struct kmem_cache *d)
 	return 0;