path: root/mm/slob.c
author	Vladimir Davydov <>	2016-12-12 16:41:29 -0800
committer	Linus Torvalds <>	2016-12-12 18:55:06 -0800
commit13583c3d3224508582ec03d881d0b68dd3ee8e10 (patch)
tree48ca686076bb06069656111ed4321236c1e83944 /mm/slob.c
parentc62c38f6b91b87a013bccd3637c2a1850d8e590c (diff)
mm: memcontrol: use special workqueue for creating per-memcg caches
Creating a lot of cgroups at the same time might stall all worker threads with kmem cache creation work items, because kmem cache creation is done with the slab_mutex held. The problem was amplified by commits 801faf0db894 ("mm/slab: lockless decision to grow cache") in case of SLAB and 81ae6d03952c ("mm/slub.c: replace kick_all_cpus_sync() with synchronize_sched() in kmem_cache_shrink()") in case of SLUB, which increased the maximal time the slab_mutex can be held.

To prevent that from happening, let's use a special ordered single-threaded workqueue for kmem cache creation. This shouldn't introduce any functional changes regarding how kmem caches are created, as the work function holds the global slab_mutex for its whole runtime anyway, making it impossible to run more than one work item at a time. By using a single-threaded workqueue, we just avoid tying up a worker thread per work item. Ordering is required to avoid a situation where one cgroup's work is put off indefinitely because there are other cgroups to serve — in other words, to guarantee fairness.

Link:
Link:
Signed-off-by: Vladimir Davydov <>
Reported-by: Doug Smythies <>
Acked-by: Michal Hocko <>
Cc: Christoph Lameter <>
Cc: David Rientjes <>
Cc: Johannes Weiner <>
Cc: Joonsoo Kim <>
Cc: Pekka Enberg <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
Diffstat (limited to 'mm/slob.c')
0 files changed, 0 insertions, 0 deletions