path: root/mm/percpu-km.c
authorRoman Gushchin <>2020-08-11 18:30:17 -0700
committerLinus Torvalds <>2020-08-12 10:57:55 -0700
commit3c7be18ac9a06bc67196bfdabb7c21e1bbacdc13 (patch)
treed886a76c49c8bc157e600c07969ba0beb81d253b /mm/percpu-km.c
parent5b32af91b5de6f95ad99e4eaaf57777376af124f (diff)
mm: memcg/percpu: account percpu memory to memory cgroups
Percpu memory is becoming more and more widely used by various subsystems, and the total amount of memory controlled by the percpu allocator can make up a good part of total memory. As an example, bpf maps can consume a lot of percpu memory, and they are created by a user. Also, some cgroup internals (e.g. memory controller statistics) can be quite large. On a machine with many CPUs and a large number of cgroups they can consume hundreds of megabytes, so the lack of memcg accounting creates a breach in memory isolation. Similar to slab memory, percpu memory should be accounted by default.

To implement percpu accounting, take the slab memory accounting as a model to follow. Let's introduce two types of percpu chunks: root and memcg. What makes memcg chunks different is the additional space allocated to store memcg membership information. If __GFP_ACCOUNT is passed on allocation, a memcg chunk should be used. If the corresponding size can be charged to the target memory cgroup, the allocation is performed and the memcg ownership data is recorded. System-wide allocations are performed using root chunks, so there is no additional memory overhead.

To implement fast reparenting of percpu memory on memcg removal, we don't store mem_cgroup pointers directly: instead we use the obj_cgroup API, introduced for slab accounting.

[ fix CONFIG_MEMCG_KMEM=n build errors and warning]
[ move unreachable code, per Roman]
[ mm/percpu: fix 'defined but not used' warning]

Link:
Signed-off-by: Roman Gushchin <>
Signed-off-by: Bixuan Cui <>
Signed-off-by: Andrew Morton <>
Reviewed-by: Shakeel Butt <>
Acked-by: Dennis Zhou <>
Cc: Christoph Lameter <>
Cc: David Rientjes <>
Cc: Johannes Weiner <>
Cc: Joonsoo Kim <>
Cc: Mel Gorman <>
Cc: Michal Hocko <>
Cc: Pekka Enberg <>
Cc: Tejun Heo <>
Cc: Tobin C. Harding <>
Cc: Vlastimil Babka <>
Cc: Waiman Long <>
Cc: Bixuan Cui <>
Cc: Michal Koutný <>
Cc: Stephen Rothwell <>
Link:
Signed-off-by: Linus Torvalds <>
Diffstat (limited to 'mm/percpu-km.c')
1 file changed, 3 insertions, 2 deletions
diff --git a/mm/percpu-km.c b/mm/percpu-km.c
index 20d2b69a13b0..35c9941077ee 100644
--- a/mm/percpu-km.c
+++ b/mm/percpu-km.c
@@ -44,7 +44,8 @@ static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk,
 	/* nada */
 }
 
-static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp)
+static struct pcpu_chunk *pcpu_create_chunk(enum pcpu_chunk_type type,
+					    gfp_t gfp)
 {
 	const int nr_pages = pcpu_group_sizes[0] >> PAGE_SHIFT;
 	struct pcpu_chunk *chunk;
@@ -52,7 +53,7 @@ static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp)
 	unsigned long flags;
 	int i;
 
-	chunk = pcpu_alloc_chunk(gfp);
+	chunk = pcpu_alloc_chunk(type, gfp);
 	if (!chunk)
 		return NULL;