author     Tejun Heo <tj@kernel.org>    2025-09-03 11:36:07 -1000
committer  Tejun Heo <tj@kernel.org>    2025-09-03 11:36:07 -1000
commit     a5bd6ba30b3364354269b81ac55c2edca9a96d6d
tree       14d4a1faba02b17d3c163d646a1eda88d0e5ac0f
parent     bcb7c2305682c77a8bfdbfe37106b314ac10110f
sched_ext: Use cgroup_lock/unlock() to synchronize against cgroup operations
SCX hooks into CPU cgroup controller operations and read-locks
scx_cgroup_rwsem to exclude them while enabling and disabling schedulers.
While this works, it's unnecessarily complicated given that
cgroup_[un]lock() are available and the cgroup operations can be locked
out that way instead.
Drop scx_cgroup_rwsem locking from the tg on/offline and cgroup [can_]attach
operations. Instead, grab cgroup_lock() from scx_cgroup_lock(). Drop
scx_cgroup_finish_attach() which is no longer necessary. Drop the now
unnecessary rcu locking and css ref bumping in scx_cgroup_init() and
scx_cgroup_exit().
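As a rough illustration, a minimal sketch of what the enable/disable side
locking could look like after this change (names follow this message; the
actual helpers live in kernel/sched/ext.c and may differ in detail):

	/* Hedged sketch, not the actual kernel/sched/ext.c code. */
	#include <linux/cgroup.h>
	#include <linux/percpu-rwsem.h>

	DEFINE_STATIC_PERCPU_RWSEM(scx_cgroup_ops_rwsem);

	static void scx_cgroup_lock(void)
	{
		/* Exclude cgroup creation/destruction and migration for the
		 * duration of scheduler enable/disable. */
		cgroup_lock();
		/* Keep write-locking the ops rwsem so that weight/bandwidth
		 * updates, which don't hold cgroup_lock(), stay excluded. */
		percpu_down_write(&scx_cgroup_ops_rwsem);
	}

	static void scx_cgroup_unlock(void)
	{
		percpu_up_write(&scx_cgroup_ops_rwsem);
		cgroup_unlock();
	}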
As scx_cgroup_set_weight/bandwidth() paths aren't protected by
cgroup_lock(), rename scx_cgroup_rwsem to scx_cgroup_ops_rwsem and retain
the locking there.
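For reference, a minimal sketch of how such a path might retain the rwsem;
scx_cgroup_set_weight() is named above but the signature and body shown
here are illustrative assumptions:

	/* Hedged sketch of a weight-update path that still read-locks
	 * the renamed rwsem; the real function may differ. */
	void scx_cgroup_set_weight(struct task_group *tg, unsigned long weight)
	{
		percpu_down_read(&scx_cgroup_ops_rwsem);
		/* ... forward the weight change to the BPF scheduler ... */
		percpu_up_read(&scx_cgroup_ops_rwsem);
	}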
This is overall simpler and will also allow enable/disable paths to
synchronize against cgroup changes independent of the CPU controller.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Acked-by: Andrea Righi <arighi@nvidia.com>