Commit 3ba7dfb
Merge tag 'rcu-next-v6.15' of git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux
Pull RCU updates from Boqun Feng:

 "Documentation:

   - Add broken-timing possibility to stallwarn.rst

   - Improve discussion of this_cpu_ptr(), add raw_cpu_ptr()

   - Document self-propagating callbacks

   - Point call_srcu() to call_rcu() for detailed memory ordering

   - Add CONFIG_RCU_LAZY delays to call_rcu() kernel-doc header

   - Clarify RCU_LAZY and RCU_LAZY_DEFAULT_OFF help text

   - Remove references to old grace-period-wait primitives

  srcu:

   - Introduce srcu_read_{un,}lock_fast(), which is similar to
     srcu_read_{un,}lock_lite(): avoid smp_mb()s in lock and unlock
     at the cost of calling synchronize_rcu() in synchronize_srcu().

     Moreover, by returning the percpu offset of the counter at
     srcu_read_lock_fast() time, srcu_read_unlock_fast() can avoid
     extra pointer dereferencing, which makes it faster than
     srcu_read_{un,}lock_lite().

     srcu_read_{un,}lock_fast() are intended to replace
     rcu_read_{un,}lock_trace() if possible.

  RCU torture:

   - Add get_torture_init_jiffies() to return the start time of the test

   - Add a test_boost_holdoff module parameter to allow delaying
     boosting tests when building rcutorture as built-in

   - Add grace period sequence number logging at the beginning and end
     of failure/close-call results

   - Switch to hexadecimal for the expedited grace period sequence
     number in the rcu_exp_grace_period trace point

   - Make cur_ops->format_gp_seqs take buffer length

   - Move RCU_TORTURE_TEST_{CHK_RDR_STATE,LOG_CPU} to bool

   - Complain when invalid SRCU reader_flavor is specified

   - Add FORCE_NEED_SRCU_NMI_SAFE Kconfig for testing, which forces SRCU
     to use atomics even when percpu ops are NMI safe, and use the
     Kconfig for SRCU lockdep testing

  Misc:

   - Split rcu_report_exp_cpu_mult() mask parameter and use for tracing

   - Remove READ_ONCE() for rdp->gpwrap access in __note_gp_changes()

   - Fix get_state_synchronize_rcu_full() GP-start detection

   - Move RCU Tasks self-tests to core_initcall()

   - Print segment lengths in show_rcu_nocb_gp_state()

   - Make RCU watch ct_kernel_exit_state() warning

   - Flush console log from kernel_power_off()

   - rcutorture: Allow a negative value for nfakewriters

   - rcu: Update TREE05.boot to test normal synchronize_rcu()

   - rcu: Use _full() API to debug synchronize_rcu()

  Make RCU handle PREEMPT_LAZY better:

   - Fix header guard for rcu_all_qs()

   - rcu: Rename PREEMPT_AUTO to PREEMPT_LAZY

   - Update __cond_resched comment about RCU quiescent states

   - Handle unstable rdp in rcu_read_unlock_strict()

   - Handle quiescent states for PREEMPT_RCU=n, PREEMPT_COUNT=y

   - osnoise: Provide quiescent states

   - Adjust rcutorture with possible PREEMPT_RCU=n && PREEMPT_COUNT=y
     combination

   - Limit PREEMPT_RCU configurations

   - Make rcutorture scenario TREE07 and scenario TREE10 use
     PREEMPT_LAZY=y"

* tag 'rcu-next-v6.15' of git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux: (59 commits)
  rcutorture: Make scenario TREE07 build CONFIG_PREEMPT_LAZY=y
  rcutorture: Make scenario TREE10 build CONFIG_PREEMPT_LAZY=y
  rcu: limit PREEMPT_RCU configurations
  rcutorture: Update ->extendables check for lazy preemption
  rcutorture: Update rcutorture_one_extend_check() for lazy preemption
  osnoise: provide quiescent states
  rcu: Use _full() API to debug synchronize_rcu()
  rcu: Update TREE05.boot to test normal synchronize_rcu()
  rcutorture: Allow a negative value for nfakewriters
  Flush console log from kernel_power_off()
  context_tracking: Make RCU watch ct_kernel_exit_state() warning
  rcu/nocb: Print segment lengths in show_rcu_nocb_gp_state()
  rcu-tasks: Move RCU Tasks self-tests to core_initcall()
  rcu: Fix get_state_synchronize_rcu_full() GP-start detection
  torture: Make SRCU lockdep testing use srcu_read_lock_nmisafe()
  srcu: Add FORCE_NEED_SRCU_NMI_SAFE Kconfig for testing
  rcutorture: Complain when invalid SRCU reader_flavor is specified
  rcutorture: Move RCU_TORTURE_TEST_{CHK_RDR_STATE,LOG_CPU} to bool
  rcutorture: Make cur_ops->format_gp_seqs take buffer length
  rcutorture: Add ftrace-compatible timestamp to GP# failure/close-call output
  ...
2 parents 2f2d529 + 467c890 commit 3ba7dfb

38 files changed: +718, -247 lines

Documentation/RCU/rcubarrier.rst (+1, -4)

@@ -329,10 +329,7 @@ Answer:
 	was first added back in 2005. This is because on_each_cpu()
 	disables preemption, which acted as an RCU read-side critical
 	section, thus preventing CPU 0's grace period from completing
-	until on_each_cpu() had dealt with all of the CPUs. However,
-	with the advent of preemptible RCU, rcu_barrier() no longer
-	waited on nonpreemptible regions of code in preemptible kernels,
-	that being the job of the new rcu_barrier_sched() function.
+	until on_each_cpu() had dealt with all of the CPUs.
 
 	However, with the RCU flavor consolidation around v4.20, this
 	possibility was once again ruled out, because the consolidated

Documentation/RCU/stallwarn.rst (+7)

@@ -96,6 +96,13 @@ warnings:
 	the ``rcu_.*timer wakeup didn't happen for`` console-log message,
 	which will include additional debugging information.
 
+-	A timer issue causes time to appear to jump forward, so that RCU
+	believes that the RCU CPU stall-warning timeout has been exceeded
+	when in fact much less time has passed. This could be due to
+	timer hardware bugs, timer driver bugs, or even corruption of
+	the "jiffies" global variable. These sorts of timer hardware
+	and driver bugs are not uncommon when testing new hardware.
+
 -	A low-level kernel issue that either fails to invoke one of the
 	variants of rcu_eqs_enter(true), rcu_eqs_exit(true), ct_idle_enter(),
 	ct_idle_exit(), ct_irq_enter(), or ct_irq_exit() on the one

Documentation/admin-guide/kernel-parameters.txt (+5)

@@ -5760,6 +5760,11 @@
 	rcutorture.test_boost_duration= [KNL]
 			Duration (s) of each individual boost test.
 
+	rcutorture.test_boost_holdoff= [KNL]
+			Holdoff time (s) from start of test to the start
+			of RCU priority-boost testing. Defaults to zero,
+			that is, no holdoff.
+
 	rcutorture.test_boost_interval= [KNL]
 			Interval (s) between each boost test.
 
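For example, a kernel with rcutorture built in could delay priority-boost testing past early-boot churn with a boot command line such as the following (the 30-second value is an arbitrary illustration, not a recommendation):

```
rcutorture.test_boost_holdoff=30
```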

Documentation/core-api/this_cpu_ops.rst (+16, -6)

@@ -138,12 +138,22 @@ get_cpu/put_cpu sequence requires. No processor number is
 available. Instead, the offset of the local per cpu area is simply
 added to the per cpu offset.
 
-Note that this operation is usually used in a code segment when
-preemption has been disabled. The pointer is then used to
-access local per cpu data in a critical section. When preemption
-is re-enabled this pointer is usually no longer useful since it may
-no longer point to per cpu data of the current processor.
-
+Note that this operation can only be used in code segments where
+smp_processor_id() may be used, for example, where preemption has been
+disabled. The pointer is then used to access local per cpu data in a
+critical section. When preemption is re-enabled this pointer is usually
+no longer useful since it may no longer point to per cpu data of the
+current processor.
+
+The special cases where it makes sense to obtain a per-CPU pointer in
+preemptible code are addressed by raw_cpu_ptr(), but such use cases need
+to handle cases where two different CPUs are accessing the same per cpu
+variable, which might well be that of a third CPU. These use cases are
+typically performance optimizations. For example, SRCU implements a pair
+of counters as a pair of per-CPU variables, and rcu_read_lock_nmisafe()
+uses raw_cpu_ptr() to get a pointer to some CPU's counter, and uses
+atomic_inc_long() to handle migration between the raw_cpu_ptr() and
+the atomic_inc_long().
 
 Per cpu variables and offsets
 -----------------------------

include/linux/printk.h (+6)

@@ -207,6 +207,7 @@ void printk_legacy_allow_panic_sync(void);
 extern bool nbcon_device_try_acquire(struct console *con);
 extern void nbcon_device_release(struct console *con);
 void nbcon_atomic_flush_unsafe(void);
+bool pr_flush(int timeout_ms, bool reset_on_progress);
 #else
 static inline __printf(1, 0)
 int vprintk(const char *s, va_list args)
@@ -315,6 +316,11 @@ static inline void nbcon_atomic_flush_unsafe(void)
 {
 }
 
+static inline bool pr_flush(int timeout_ms, bool reset_on_progress)
+{
+	return true;
+}
+
 #endif
 
 bool this_cpu_in_panic(void);

include/linux/rcupdate.h (+8, -17)

@@ -95,9 +95,9 @@ static inline void __rcu_read_lock(void)
 
 static inline void __rcu_read_unlock(void)
 {
-	preempt_enable();
 	if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD))
 		rcu_read_unlock_strict();
+	preempt_enable();
 }
 
 static inline int rcu_preempt_depth(void)
@@ -121,12 +121,6 @@ void rcu_init(void);
 extern int rcu_scheduler_active;
 void rcu_sched_clock_irq(int user);
 
-#ifdef CONFIG_TASKS_RCU_GENERIC
-void rcu_init_tasks_generic(void);
-#else
-static inline void rcu_init_tasks_generic(void) { }
-#endif
-
 #ifdef CONFIG_RCU_STALL_COMMON
 void rcu_sysrq_start(void);
 void rcu_sysrq_end(void);
@@ -806,11 +800,9 @@ do { \
  * sections, invocation of the corresponding RCU callback is deferred
  * until after the all the other CPUs exit their critical sections.
  *
- * In v5.0 and later kernels, synchronize_rcu() and call_rcu() also
- * wait for regions of code with preemption disabled, including regions of
- * code with interrupts or softirqs disabled. In pre-v5.0 kernels, which
- * define synchronize_sched(), only code enclosed within rcu_read_lock()
- * and rcu_read_unlock() are guaranteed to be waited for.
+ * Both synchronize_rcu() and call_rcu() also wait for regions of code
+ * with preemption disabled, including regions of code with interrupts or
+ * softirqs disabled.
 *
 * Note, however, that RCU callbacks are permitted to run concurrently
 * with new RCU read-side critical sections. One way that this can happen
@@ -865,11 +857,10 @@ static __always_inline void rcu_read_lock(void)
 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
 *
 * In almost all situations, rcu_read_unlock() is immune from deadlock.
- * In recent kernels that have consolidated synchronize_sched() and
- * synchronize_rcu_bh() into synchronize_rcu(), this deadlock immunity
- * also extends to the scheduler's runqueue and priority-inheritance
- * spinlocks, courtesy of the quiescent-state deferral that is carried
- * out when rcu_read_unlock() is invoked with interrupts disabled.
+ * This deadlock immunity also extends to the scheduler's runqueue
+ * and priority-inheritance spinlocks, courtesy of the quiescent-state
+ * deferral that is carried out when rcu_read_unlock() is invoked with
+ * interrupts disabled.
 *
 * See rcu_read_lock() for more information.
 */

include/linux/rcupdate_wait.h (+3)

@@ -16,6 +16,9 @@
 struct rcu_synchronize {
 	struct rcu_head head;
 	struct completion completion;
+
+	/* This is for debugging. */
+	struct rcu_gp_oldstate oldstate;
 };
 void wakeme_after_rcu(struct rcu_head *head);

include/linux/rcutree.h (+1, -1)

@@ -100,7 +100,7 @@ extern int rcu_scheduler_active;
 void rcu_end_inkernel_boot(void);
 bool rcu_inkernel_boot_has_ended(void);
 bool rcu_is_watching(void);
-#ifndef CONFIG_PREEMPTION
+#ifndef CONFIG_PREEMPT_RCU
 void rcu_all_qs(void);
 #endif
 

include/linux/srcu.h (+88, -14)

@@ -47,7 +47,13 @@ int init_srcu_struct(struct srcu_struct *ssp);
 #define SRCU_READ_FLAVOR_NORMAL	0x1	// srcu_read_lock().
 #define SRCU_READ_FLAVOR_NMI	0x2	// srcu_read_lock_nmisafe().
 #define SRCU_READ_FLAVOR_LITE	0x4	// srcu_read_lock_lite().
-#define SRCU_READ_FLAVOR_ALL	0x7	// All of the above.
+#define SRCU_READ_FLAVOR_FAST	0x8	// srcu_read_lock_fast().
+#define SRCU_READ_FLAVOR_ALL	(SRCU_READ_FLAVOR_NORMAL | SRCU_READ_FLAVOR_NMI | \
+				 SRCU_READ_FLAVOR_LITE | SRCU_READ_FLAVOR_FAST) // All of the above.
+#define SRCU_READ_FLAVOR_SLOWGP	(SRCU_READ_FLAVOR_LITE | SRCU_READ_FLAVOR_FAST)
+					// Flavors requiring synchronize_rcu()
+					// instead of smp_mb().
+void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
 
 #ifdef CONFIG_TINY_SRCU
 #include <linux/srcutiny.h>
@@ -60,15 +66,6 @@ int init_srcu_struct(struct srcu_struct *ssp);
 void call_srcu(struct srcu_struct *ssp, struct rcu_head *head,
 		void (*func)(struct rcu_head *head));
 void cleanup_srcu_struct(struct srcu_struct *ssp);
-int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp);
-#ifdef CONFIG_TINY_SRCU
-#define __srcu_read_lock_lite __srcu_read_lock
-#define __srcu_read_unlock_lite __srcu_read_unlock
-#else // #ifdef CONFIG_TINY_SRCU
-int __srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp);
-void __srcu_read_unlock_lite(struct srcu_struct *ssp, int idx) __releases(ssp);
-#endif // #else // #ifdef CONFIG_TINY_SRCU
 void synchronize_srcu(struct srcu_struct *ssp);
 
 #define SRCU_GET_STATE_COMPLETED 0x1
@@ -257,6 +254,51 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
 	return retval;
 }
 
+/**
+ * srcu_read_lock_fast - register a new reader for an SRCU-protected structure.
+ * @ssp: srcu_struct in which to register the new reader.
+ *
+ * Enter an SRCU read-side critical section, but for a light-weight
+ * smp_mb()-free reader. See srcu_read_lock() for more information.
+ *
+ * If srcu_read_lock_fast() is ever used on an srcu_struct structure,
+ * then none of the other flavors may be used, whether before, during,
+ * or after. Note that grace-period auto-expediting is disabled for _fast
+ * srcu_struct structures because auto-expedited grace periods invoke
+ * synchronize_rcu_expedited(), IPIs and all.
+ *
+ * Note that srcu_read_lock_fast() can be invoked only from those contexts
+ * where RCU is watching, that is, from contexts where it would be legal
+ * to invoke rcu_read_lock(). Otherwise, lockdep will complain.
+ */
+static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp) __acquires(ssp)
+{
+	struct srcu_ctr __percpu *retval;
+
+	srcu_check_read_flavor_force(ssp, SRCU_READ_FLAVOR_FAST);
+	retval = __srcu_read_lock_fast(ssp);
+	rcu_try_lock_acquire(&ssp->dep_map);
+	return retval;
+}
+
+/**
+ * srcu_down_read_fast - register a new reader for an SRCU-protected structure.
+ * @ssp: srcu_struct in which to register the new reader.
+ *
+ * Enter a semaphore-like SRCU read-side critical section, but for
+ * a light-weight smp_mb()-free reader. See srcu_read_lock_fast() and
+ * srcu_down_read() for more information.
+ *
+ * The same srcu_struct may be used concurrently by srcu_down_read_fast()
+ * and srcu_read_lock_fast().
+ */
+static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *ssp) __acquires(ssp)
+{
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi());
+	srcu_check_read_flavor_force(ssp, SRCU_READ_FLAVOR_FAST);
+	return __srcu_read_lock_fast(ssp);
+}
+
 /**
 * srcu_read_lock_lite - register a new reader for an SRCU-protected structure.
 * @ssp: srcu_struct in which to register the new reader.
@@ -278,7 +320,7 @@ static inline int srcu_read_lock_lite(struct srcu_struct *ssp) __acquires(ssp)
 {
 	int retval;
 
-	srcu_check_read_flavor_lite(ssp);
+	srcu_check_read_flavor_force(ssp, SRCU_READ_FLAVOR_LITE);
 	retval = __srcu_read_lock_lite(ssp);
 	rcu_try_lock_acquire(&ssp->dep_map);
 	return retval;
@@ -335,7 +377,8 @@ srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp)
 * srcu_down_read() nor srcu_up_read() may be invoked from an NMI handler.
 *
 * Calls to srcu_down_read() may be nested, similar to the manner in
- * which calls to down_read() may be nested.
+ * which calls to down_read() may be nested. The same srcu_struct may be
+ * used concurrently by srcu_down_read() and srcu_read_lock().
 */
 static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp)
 {
@@ -360,10 +403,41 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
 	__srcu_read_unlock(ssp, idx);
 }
 
+/**
+ * srcu_read_unlock_fast - unregister a old reader from an SRCU-protected structure.
+ * @ssp: srcu_struct in which to unregister the old reader.
+ * @scp: return value from corresponding srcu_read_lock_fast().
+ *
+ * Exit a light-weight SRCU read-side critical section.
+ */
+static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases(ssp)
+{
+	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
+	srcu_lock_release(&ssp->dep_map);
+	__srcu_read_unlock_fast(ssp, scp);
+}
+
+/**
+ * srcu_up_read_fast - unregister a old reader from an SRCU-protected structure.
+ * @ssp: srcu_struct in which to unregister the old reader.
+ * @scp: return value from corresponding srcu_read_lock_fast().
+ *
+ * Exit an SRCU read-side critical section, but not necessarily from
+ * the same context as the maching srcu_down_read_fast().
+ */
+static inline void srcu_up_read_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+	__releases(ssp)
+{
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi());
+	srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST);
+	__srcu_read_unlock_fast(ssp, scp);
+}
+
 /**
 * srcu_read_unlock_lite - unregister a old reader from an SRCU-protected structure.
 * @ssp: srcu_struct in which to unregister the old reader.
- * @idx: return value from corresponding srcu_read_lock().
+ * @idx: return value from corresponding srcu_read_lock_lite().
 *
 * Exit a light-weight SRCU read-side critical section.
 */
@@ -379,7 +453,7 @@ static inline void srcu_read_unlock_lite(struct srcu_struct *ssp, int idx)
 /**
 * srcu_read_unlock_nmisafe - unregister a old reader from an SRCU-protected structure.
 * @ssp: srcu_struct in which to unregister the old reader.
- * @idx: return value from corresponding srcu_read_lock().
+ * @idx: return value from corresponding srcu_read_lock_nmisafe().
 *
 * Exit an SRCU read-side critical section, but in an NMI-safe manner.
 */

include/linux/srcutiny.h (+27, -2)

@@ -64,13 +64,38 @@ static inline int __srcu_read_lock(struct srcu_struct *ssp)
 {
 	int idx;
 
-	preempt_disable();  // Needed for PREEMPT_AUTO
+	preempt_disable();  // Needed for PREEMPT_LAZY
 	idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1;
 	WRITE_ONCE(ssp->srcu_lock_nesting[idx], READ_ONCE(ssp->srcu_lock_nesting[idx]) + 1);
 	preempt_enable();
 	return idx;
 }
 
+struct srcu_ctr;
+
+static inline int __srcu_ptr_to_ctr(struct srcu_struct *ssp, struct srcu_ctr __percpu *scpp)
+{
+	return (int)(intptr_t)(struct srcu_ctr __force __kernel *)scpp;
+}
+
+static inline struct srcu_ctr __percpu *__srcu_ctr_to_ptr(struct srcu_struct *ssp, int idx)
+{
+	return (struct srcu_ctr __percpu *)(intptr_t)idx;
+}
+
+static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp)
+{
+	return __srcu_ctr_to_ptr(ssp, __srcu_read_lock(ssp));
+}
+
+static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
+{
+	__srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp));
+}
+
+#define __srcu_read_lock_lite __srcu_read_lock
+#define __srcu_read_unlock_lite __srcu_read_unlock
+
 static inline void synchronize_srcu_expedited(struct srcu_struct *ssp)
 {
 	synchronize_srcu(ssp);
@@ -82,7 +107,7 @@ static inline void srcu_barrier(struct srcu_struct *ssp)
 }
 
 #define srcu_check_read_flavor(ssp, read_flavor) do { } while (0)
-#define srcu_check_read_flavor_lite(ssp) do { } while (0)
+#define srcu_check_read_flavor_force(ssp, read_flavor) do { } while (0)
 
 /* Defined here to avoid size increase for non-torture kernels. */
 static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
