
Commit ec6de20

Tvrtko Ursulin authored and gregkh committed
drm/v3d: Appease lockdep while updating GPU stats
[ Upstream commit 06c3c40 ]

Lockdep thinks our seqcount_t usage is unsafe because the update path
can be both from irq and worker context:

[ ] ================================
[ ] WARNING: inconsistent lock state
[ ] 6.10.3-v8-16k-numa #159 Tainted: G WC
[ ] --------------------------------
[ ] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
[ ] swapper/0/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
[ ] ffff80003d7c08d0 (&v3d_priv->stats[i].lock){?.+.}-{0:0}, at: v3d_irq+0xc8/0x660 [v3d]
[ ] {HARDIRQ-ON-W} state was registered at:
[ ]   lock_acquire+0x1f8/0x328
[ ]   v3d_job_start_stats.isra.0+0xd8/0x218 [v3d]
[ ]   v3d_bin_job_run+0x23c/0x388 [v3d]
[ ]   drm_sched_run_job_work+0x520/0x6d0 [gpu_sched]
[ ]   process_one_work+0x62c/0xb48
[ ]   worker_thread+0x468/0x5b0
[ ]   kthread+0x1c4/0x1e0
[ ]   ret_from_fork+0x10/0x20
[ ] irq event stamp: 337094
[ ] hardirqs last enabled at (337093): [<ffffc0008144ce7c>] default_idle_call+0x11c/0x140
[ ] hardirqs last disabled at (337094): [<ffffc0008144a354>] el1_interrupt+0x24/0x58
[ ] softirqs last enabled at (337082): [<ffffc00080061d90>] handle_softirqs+0x4e0/0x538
[ ] softirqs last disabled at (337073): [<ffffc00080010364>] __do_softirq+0x1c/0x28
[ ] other info that might help us debug this:
[ ]  Possible unsafe locking scenario:
[ ]        CPU0
[ ]        ----
[ ]   lock(&v3d_priv->stats[i].lock);
[ ]   <Interrupt>
[ ]     lock(&v3d_priv->stats[i].lock);
[ ]  *** DEADLOCK ***
[ ] no locks held by swapper/0/0.
[ ] stack backtrace:
[ ] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G WC 6.10.3-v8-16k-numa #159
[ ] Hardware name: Raspberry Pi 5 Model B Rev 1.0 (DT)
[ ] Call trace:
[ ]  dump_backtrace+0x170/0x1b8
[ ]  show_stack+0x20/0x38
[ ]  dump_stack_lvl+0xb4/0xd0
[ ]  dump_stack+0x18/0x28
[ ]  print_usage_bug+0x3cc/0x3f0
[ ]  mark_lock+0x4d0/0x968
[ ]  __lock_acquire+0x784/0x18c8
[ ]  lock_acquire+0x1f8/0x328
[ ]  v3d_job_update_stats+0xec/0x2e0 [v3d]
[ ]  v3d_irq+0xc8/0x660 [v3d]
[ ]  __handle_irq_event_percpu+0x1f8/0x488
[ ]  handle_irq_event+0x88/0x128
[ ]  handle_fasteoi_irq+0x298/0x408
[ ]  generic_handle_domain_irq+0x50/0x78

But it is a false positive because all the queue-stats pairs have their
own lock and jobs also run one at a time. Nevertheless we can appease
lockdep by disabling local interrupts to make it see lock usage is
consistent.

Cc: Maíra Canal <[email protected]>
Fixes: 6abe93b ("drm/v3d: Fix race-condition between sysfs/fdinfo and interrupt handler")
Signed-off-by: Tvrtko Ursulin <[email protected]>
Signed-off-by: Maíra Canal <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
Signed-off-by: Sasha Levin <[email protected]>
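For readers unfamiliar with the seqcount_t pattern the splat is about, here is a minimal sketch of the consumer side. The diff below only touches the writer side; a lockless reader such as the fdinfo/sysfs path retries until it observes a stable snapshot. The field names enabled_ns and jobs_completed are assumptions based on the earlier 6abe93b fix, not taken from this patch:

	/*
	 * Sketch only: a lockless reader of the seqcount-protected stats.
	 * Field names (enabled_ns, jobs_completed) are assumed; they are
	 * not part of this diff.
	 */
	static void v3d_stats_read_sketch(const struct v3d_stats *stats,
					  u64 *enabled_ns, u64 *jobs_completed)
	{
		unsigned int seq;

		do {
			seq = read_seqcount_begin(&stats->lock);
			*enabled_ns = stats->enabled_ns;
			*jobs_completed = stats->jobs_completed;
		} while (read_seqcount_retry(&stats->lock, seq));
	}

Lockdep's complaint is therefore not about reader/writer races at all: it is about the writer side alone being entered both from hard irq context and from a worker without interrupts disabled.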
1 parent fd8488e commit ec6de20

File tree

1 file changed (+41 -5 lines)

drivers/gpu/drm/v3d/v3d_sched.c

Lines changed: 41 additions & 5 deletions
@@ -135,8 +135,31 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
 	struct v3d_stats *global_stats = &v3d->queue[queue].stats;
 	struct v3d_stats *local_stats = &file->stats[queue];
 	u64 now = local_clock();
-
-	preempt_disable();
+	unsigned long flags;
+
+	/*
+	 * We only need to disable local interrupts to appease lockdep who
+	 * otherwise would think v3d_job_start_stats vs v3d_stats_update has an
+	 * unsafe in-irq vs no-irq-off usage problem. This is a false positive
+	 * because all the locks are per queue and stats type, and all jobs are
+	 * completely one at a time serialised. More specifically:
+	 *
+	 * 1. Locks for GPU queues are updated from interrupt handlers under a
+	 *    spin lock and started here with preemption disabled.
+	 *
+	 * 2. Locks for CPU queues are updated from the worker with preemption
+	 *    disabled and equally started here with preemption disabled.
+	 *
+	 * Therefore both are consistent.
+	 *
+	 * 3. Because next job can only be queued after the previous one has
+	 *    been signaled, and locks are per queue, there is also no scope for
+	 *    the start part to race with the update part.
+	 */
+	if (IS_ENABLED(CONFIG_LOCKDEP))
+		local_irq_save(flags);
+	else
+		preempt_disable();
 
 	write_seqcount_begin(&local_stats->lock);
 	local_stats->start_ns = now;
@@ -146,7 +169,10 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
 	global_stats->start_ns = now;
 	write_seqcount_end(&global_stats->lock);
 
-	preempt_enable();
+	if (IS_ENABLED(CONFIG_LOCKDEP))
+		local_irq_restore(flags);
+	else
+		preempt_enable();
 }
 
 static void
@@ -167,11 +193,21 @@ v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue)
 	struct v3d_stats *global_stats = &v3d->queue[queue].stats;
 	struct v3d_stats *local_stats = &file->stats[queue];
 	u64 now = local_clock();
+	unsigned long flags;
+
+	/* See comment in v3d_job_start_stats() */
+	if (IS_ENABLED(CONFIG_LOCKDEP))
+		local_irq_save(flags);
+	else
+		preempt_disable();
 
-	preempt_disable();
 	v3d_stats_update(local_stats, now);
 	v3d_stats_update(global_stats, now);
-	preempt_enable();
+
+	if (IS_ENABLED(CONFIG_LOCKDEP))
+		local_irq_restore(flags);
+	else
+		preempt_enable();
 }
 
 static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job)
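A note on the IS_ENABLED(CONFIG_LOCKDEP) branches above: IS_ENABLED() expands to a compile-time constant 0 or 1, so in production builds without lockdep the compiler folds the irq-save arms away entirely and only the cheap preempt_disable()/preempt_enable() pair remains; the heavier interrupt toggling is paid only in debug builds. A sketch of the same pattern factored into helpers, purely for illustration (the helper names are invented here; the patch deliberately open-codes the branches at each call site):

	/*
	 * Illustration only; not part of the patch. Each branch folds
	 * away at compile time because IS_ENABLED() is a constant.
	 */
	static inline void v3d_stats_enter_sketch(unsigned long *flags)
	{
		if (IS_ENABLED(CONFIG_LOCKDEP))
			local_irq_save(*flags);	/* debug: consistent irq state for lockdep */
		else
			preempt_disable();	/* production: cheaper and sufficient */
	}

	static inline void v3d_stats_exit_sketch(unsigned long *flags)
	{
		if (IS_ENABLED(CONFIG_LOCKDEP))
			local_irq_restore(*flags);
		else
			preempt_enable();
	}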
