author	Aaron Lu <ziqianlu@bytedance.com>	2025-09-10 17:50:42 +0800
committer	Peter Zijlstra <peterz@infradead.org>	2025-09-15 09:38:37 +0200
commit	fcd394866e3db344cbe0bb485d7e3f741ac07245 (patch)
tree	70b7b00fd96d51fd132d0d43f2a7e82894956c32 /kernel
parent	fe8d238e646e16cc431b7a5899f8dda690258ee9 (diff)
sched/fair: update_cfs_group() for throttled cfs_rqs
With the task-based throttle model, tasks in a throttled hierarchy are allowed to continue to run if they are running in kernel mode. For this reason, the PELT clock is not stopped for cfs_rqs in a throttled hierarchy while they still have tasks running or queued.

Since the PELT clock is not stopped, it is a question whether update_cfs_group() should be allowed to do its job for cfs_rqs that are in a throttled hierarchy but still have tasks running/queued. The upside is that continuing to run update_cfs_group() keeps the weight of these cfs_rq entities up to date; an up-to-date weight helps derive an accurate load for the CPU and ensures fairness when tasks of different cgroups run on the same CPU. OTOH, as Benjamin Segall pointed out: when unthrottle comes around, the most likely correct distribution is the distribution we had at the time of throttle.

In reality, either way may not matter much if tasks in a throttled hierarchy don't run in kernel mode for too long. But in case that happens, letting these cfs_rq entities have an up-to-date weight seems the right thing to do.

Signed-off-by: Aaron Lu <ziqianlu@bytedance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
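For orientation, below is a minimal sketch of what update_cfs_group() looks like once the early return for throttled hierarchies is dropped, reconstructed from the hunk further down. The helper names (group_cfs_rq(), calc_group_shares(), reweight_entity(), cfs_rq_of()) appear in kernel/sched/fair.c; the local-variable declarations and the comment are assumed surrounding context that is not part of this diff.

	static void update_cfs_group(struct sched_entity *se)
	{
		struct cfs_rq *gcfs_rq = group_cfs_rq(se);	/* group cfs_rq this entity represents */
		long shares;

		if (!gcfs_rq || !gcfs_rq->load.weight)
			return;

		/*
		 * No early return for throttled_hierarchy(gcfs_rq) any more:
		 * with the task-based throttle model the PELT clock may keep
		 * running for a throttled cfs_rq that still has tasks running
		 * or queued, so keep the group entity's weight up to date.
		 */
		shares = calc_group_shares(gcfs_rq);
		if (unlikely(se->load.weight != shares))
			reweight_entity(cfs_rq_of(se), se, shares);
	}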
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched/fair.c | 3 -
1 file changed, 0 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f993de30e146..58f5349d3725 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3957,9 +3957,6 @@ static void update_cfs_group(struct sched_entity *se)
 	if (!gcfs_rq || !gcfs_rq->load.weight)
 		return;
 
-	if (throttled_hierarchy(gcfs_rq))
-		return;
-
 	shares = calc_group_shares(gcfs_rq);
 	if (unlikely(se->load.weight != shares))
 		reweight_entity(cfs_rq_of(se), se, shares);