
Commit a59be66
sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug
jira LE-3262
Rebuild_History Non-Buildable kernel-5.14.0-570.22.1.el9_6
commit-author Vishal Chourasia <[email protected]>
commit af98d8a

CPU controller limits are not properly enforced during CPU hotplug
operations, particularly during CPU offline. When a CPU goes offline,
throttled processes are unintentionally unthrottled across all CPUs in
the system, allowing them to exceed their assigned quota limits.

Consider the example below: assign a 6.25% bandwidth limit to a cgroup
on an 8-CPU system, where the workload runs 8 threads for 20 seconds at
100% CPU utilization; the expected (user+sys) time is 10 seconds.

  $ cat /sys/fs/cgroup/test/cpu.max
  50000 100000

  $ ./ebizzy -t 8 -S 20        // non-hotplug case
  real 20.00 s
  user 10.81 s                 // intended behaviour
  sys   0.00 s

  $ ./ebizzy -t 8 -S 20        // hotplug case
  real 20.00 s
  user 14.43 s                 // workload is able to run for 14 secs
  sys   0.00 s                 // when it should have only run for 10 secs

During CPU hotplug, scheduler domains are rebuilt and cpu_attach_domain()
is called for every active CPU to update the root domain. That ends up
calling rq_offline_fair(), which unthrottles any throttled hierarchies.
Unthrottling should only occur for the CPU being hotplugged, to allow its
throttled processes to become runnable and get migrated to other CPUs.

With the current patch applied:

  $ ./ebizzy -t 8 -S 20        // hotplug case
  real 21.00 s
  user 10.16 s                 // intended behaviour
  sys   0.00 s

This also has another symptom: when a CPU goes offline and the cfs_rq is
not in the throttled state while runtime_remaining still has plenty left,
it gets reset to 1 here, causing the runtime_remaining of the cfs_rq to
be quickly depleted.

Note: the hotplug operation (online, offline) was performed in a while(1)
loop.

v3: https://lore.kernel.org/all/[email protected]
v2: https://lore.kernel.org/all/[email protected]
v1: https://lore.kernel.org/all/[email protected]

Suggested-by: Zhang Qiao <[email protected]>
Signed-off-by: Vishal Chourasia <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Vincent Guittot <[email protected]>
Tested-by: Madadi Vineeth Reddy <[email protected]>
Tested-by: Samir Mulani <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
(cherry picked from commit af98d8a)
Signed-off-by: Jonathan Maple <[email protected]>
1 parent 0c644ff commit a59be66

File tree: 1 file changed (+13, -7 lines)


kernel/sched/fair.c

Lines changed: 13 additions & 7 deletions
@@ -6304,6 +6304,10 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 
 	lockdep_assert_rq_held(rq);
 
+	// Do not unthrottle for an active CPU
+	if (cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
+		return;
+
 	/*
 	 * The rq clock has already been updated in the
 	 * set_rq_offline(), so we should skip updating
@@ -6318,19 +6322,21 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 		if (!cfs_rq->runtime_enabled)
 			continue;
 
-		/*
-		 * clock_task is not advancing so we just need to make sure
-		 * there's some valid quota amount
-		 */
-		cfs_rq->runtime_remaining = 1;
 		/*
 		 * Offline rq is schedulable till CPU is completely disabled
 		 * in take_cpu_down(), so we prevent new cfs throttling here.
 		 */
 		cfs_rq->runtime_enabled = 0;
 
-		if (cfs_rq_throttled(cfs_rq))
-			unthrottle_cfs_rq(cfs_rq);
+		if (!cfs_rq_throttled(cfs_rq))
+			continue;
+
+		/*
+		 * clock_task is not advancing so we just need to make sure
+		 * there's some valid quota amount
+		 */
+		cfs_rq->runtime_remaining = 1;
+		unthrottle_cfs_rq(cfs_rq);
 	}
 	rcu_read_unlock();
 
