
Commit 0387703

Frederic Weisbecker authored and KAGA-KOKO committed
timers: Fix removed self-IPI on global timer's enqueue in nohz_full
While running in nohz_full mode, a task may enqueue a timer while the tick is stopped. However, the only places where the timer wheel, together with the timer migration machinery's decision, may reprogram the next event according to that new timer's expiry are the idle loop or an IRQ tail. Neither the idle task nor an interrupt may run on the CPU if it resumes busy work in userspace for a long while in full dynticks mode.

To solve this, the timer enqueue path raises a self-IPI that re-evaluates the timer wheel on its IRQ tail. This asynchronous solution avoids a potential locking inversion.

This is supposed to happen for both local and global timers, but commit b2cf750 ("timers: Always queue timers on the local CPU") broke the global timers case by removing the ->is_idle field handling for the global base. As a result, a global timer enqueue may go unnoticed in nohz_full.

Fix this by restoring the idle tracking of the global timer's base, allowing self-IPIs again at enqueue time.

Fixes: b2cf750 ("timers: Always queue timers on the local CPU")
Reported-by: Paul E. McKenney <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
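For illustration only, here is a minimal sketch (not part of the commit) of the kind of enqueue the fix is about: a timer armed without the TIMER_PINNED flag ends up as a "global" timer, and arming it from a task on a nohz_full CPU whose tick is stopped is exactly the case that needs the self-IPI described above. The identifiers my_timer, my_timer_fn and arm_example_timer are made up for this example; timer_setup(), mod_timer() and TIMER_PINNED are the real kernel interfaces.

#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list my_timer;	/* hypothetical example timer */

static void my_timer_fn(struct timer_list *t)
{
	/* Runs from the timer softirq once the wheel has been re-evaluated. */
}

static void arm_example_timer(void)
{
	/*
	 * No TIMER_PINNED flag, so this is a global timer. When armed from a
	 * nohz_full CPU with the tick stopped, the enqueue path must raise a
	 * self-IPI so the new expiry is noticed, which is what this commit
	 * restores.
	 */
	timer_setup(&my_timer, my_timer_fn, 0);
	mod_timer(&my_timer, jiffies + HZ);
}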
1 parent f55acb1 commit 0387703

1 file changed (+11, -1 lines)

kernel/time/timer.c

Lines changed: 11 additions & 1 deletion
@@ -642,7 +642,8 @@ trigger_dyntick_cpu(struct timer_base *base, struct timer_list *timer)
 	 * the base lock:
 	 */
 	if (base->is_idle) {
-		WARN_ON_ONCE(!(timer->flags & TIMER_PINNED));
+		WARN_ON_ONCE(!(timer->flags & TIMER_PINNED ||
+			       tick_nohz_full_cpu(base->cpu)));
 		wake_up_nohz_cpu(base->cpu);
 	}
 }
@@ -2292,6 +2293,13 @@ static inline u64 __get_next_timer_interrupt(unsigned long basej, u64 basem,
 	 */
 	if (!base_local->is_idle && time_after(nextevt, basej + 1)) {
 		base_local->is_idle = true;
+		/*
+		 * Global timers queued locally while running in a task
+		 * in nohz_full mode need a self-IPI to kick reprogramming
+		 * in IRQ tail.
+		 */
+		if (tick_nohz_full_cpu(base_local->cpu))
+			base_global->is_idle = true;
 		trace_timer_base_idle(true, base_local->cpu);
 	}
 	*idle = base_local->is_idle;
@@ -2364,6 +2372,8 @@ void timer_clear_idle(void)
 	 * path. Required for BASE_LOCAL only.
 	 */
 	__this_cpu_write(timer_bases[BASE_LOCAL].is_idle, false);
+	if (tick_nohz_full_cpu(smp_processor_id()))
+		__this_cpu_write(timer_bases[BASE_GLOBAL].is_idle, false);
 	trace_timer_base_idle(false, smp_processor_id());
 
 	/* Activate without holding the timer_base->lock */
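Putting the three hunks together, the following userspace-compilable sketch models the control flow the patch restores. It is a simplified model, not kernel code: fake_base, stop_tick(), enqueue_global_timer() and tick_resumes() are invented names standing in for __get_next_timer_interrupt(), trigger_dyntick_cpu() and timer_clear_idle(), and the printf() calls stand in for wake_up_nohz_cpu().

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for struct timer_base; only the field the patch touches. */
struct fake_base {
	bool is_idle;
};

static struct fake_base base_local, base_global;
static bool cpu_is_nohz_full = true;	/* assume a nohz_full CPU */

/* Models __get_next_timer_interrupt(): the tick is about to be stopped. */
static void stop_tick(void)
{
	base_local.is_idle = true;
	/* The fix: also mark the global base idle on nohz_full CPUs. */
	if (cpu_is_nohz_full)
		base_global.is_idle = true;
}

/* Models trigger_dyntick_cpu() for a global (non-pinned) timer enqueue. */
static void enqueue_global_timer(void)
{
	if (base_global.is_idle)
		printf("self-IPI raised: timer wheel re-evaluated on IRQ tail\n");
	else
		printf("no IPI: the new global timer may go unnoticed\n");
}

/* Models timer_clear_idle(): the tick (or the IPI tail) runs again. */
static void tick_resumes(void)
{
	base_local.is_idle = false;
	if (cpu_is_nohz_full)
		base_global.is_idle = false;
}

int main(void)
{
	stop_tick();
	enqueue_global_timer();	/* with the fix: prints the self-IPI line */
	tick_resumes();
	return 0;
}

Dropping the nohz_full branch from stop_tick(), which corresponds to the removed ->is_idle handling for the global base, makes enqueue_global_timer() take the "no IPI" branch: that is the bug the commit fixes.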
