Fix race in PeerManager read pausing. #4168
Conversation
TheBlueMatt commented Oct 22, 2025
👋 Thanks for assigning @joostjager as a reviewer!
Force-pushed: aa5f64e to ad1e948
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

    @@            Coverage Diff             @@
    ##             main    #4168      +/-   ##
    ==========================================
    - Coverage   88.78%   88.63%   -0.16%
    ==========================================
      Files         180      179       -1
      Lines      137004   136979      -25
    ==========================================
    - Hits       121642   121409     -233
    - Misses      12538    12838     +300
    + Partials     2824     2732      -92
                us_lock.read_paused = true;
            }
        },
        Ok(()) => {},
Would it be easy to reproduce the problem on Linux by reducing `OUTBOUND_BUFFER_LIMIT_READ_PAUSE` and adding a delay? Just to verify that the bug really is what we think it is.
No need: just adding an extra few-ms sleep after the `handle_message` call (after setting `pause_read`) easily reproduces it (and this PR fixes it).
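For readers following along, here is a self-contained toy (not LDK code; every name is a stand-in) showing the ordering the synthetic sleep exaggerates: the pause decision is computed first, a concurrent flush clears the pause state, and only then is the stale decision applied.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn main() {
    // Stand-in for the connection's read-pause state.
    let read_paused = Arc::new(AtomicBool::new(false));

    // read_event computes its pause decision *before* the last message is
    // fully handled...
    let pause_decision = true;

    // ...while a concurrent writer (think send_data) flushes the buffer and
    // clears the pause state in the meantime.
    let flusher = {
        let read_paused = Arc::clone(&read_paused);
        thread::spawn(move || {
            read_paused.store(false, Ordering::SeqCst);
        })
    };

    // The synthetic few-ms sleep from the comment above: the decision was
    // made earlier but is only applied now, after the flusher has run.
    thread::sleep(Duration::from_millis(5));
    read_paused.store(pause_decision, Ordering::SeqCst);
    flusher.join().unwrap();

    // Reads are now paused even though the buffer is empty, and nothing
    // will ever unpause them.
    assert!(read_paused.load(Ordering::SeqCst));
}
```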
lightning-net-tokio/src/lib.rs (outdated)
        us.read_paused = false;
        let _ = us.read_waker.try_send(());
    } else if !resume_read {
        us.read_paused = true;
Would a comment here be beneficial, or a fn-level doc explaining the `resume_read` semantics?
Better yet, I simplified the code to be a bit clearer so that it's hopefully not needed.
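For context, the simplified logic plausibly reads along these lines, reusing the field names from the hunk above (a sketch, not the actual diff):

```rust
// `resume_read` now drives the paused state directly in both directions,
// rather than only clearing it in one branch.
us.read_paused = !resume_read;
if resume_read {
    // Wake the read task; a full channel just means a wake is already queued.
    let _ = us.read_waker.try_send(());
}
```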
    /// Note that these messages are *not* encrypted/MAC'd, and are only serialized.
    gossip_broadcast_buffer: VecDeque<MessageBuf>,
    awaiting_write_event: bool,
    sent_pause_read: bool,
Is this necessary to avoid always calling into `send_data` with no data, and obtaining the conn lock unnecessarily?
Yea, basically. We don't want to just slam each SocketDescriptor with a call every time we go through the process_events loop (which is very often).
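In other words, something like the following sketch; `sent_pause_read` comes from the hunk above, while `should_pause_read` and the surrounding names are assumptions for illustration:

```rust
// Only bother the descriptor when the pause state we want differs from the
// one we last communicated; otherwise skip the call (and the conn lock).
let want_pause = peer.should_pause_read();
if peer.sent_pause_read != want_pause {
    // An empty write whose only purpose is to carry the resume_read flag.
    descriptor.send_data(&[], !want_pause);
    peer.sent_pause_read = want_pause;
}
```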
👋 The first review has been submitted! Do you think this PR is ready for a second reviewer?
What API change would be required to fix it completely? And is the reason not to do it to avoid breaking external usage of this code?
The API change in this PR should fix it completely. My comment was about backporting to 0.1, where we aren't allowed to remove the returned bool from `read_event`.
Force-pushed: ad1e948 to e4a70b9
We recently ran into a race condition on macOS where `read_event` would return `Ok(true)` (implying reads should be paused) but calls to `send_data` which flushed the buffer completed before the `read_event` caller was able to set the read-pause flag. This should be fairly rare, but not unheard of: the `pause_read` flag in `read_event` is calculated before handling the last message, so there's some time between when it's calculated and when it's returned. However, that has to race with multiple calls to `send_data` to send all the pending messages, which all have to complete before `read_event` returns. We've (as far as I recall) never hit this in prod, but a benchmark HTLC-flood test managed to hit it somewhat reliably within a few minutes on macOS, and when a synthetic few-ms sleep was added to each message-handling call.

Ultimately we can't fix this with the current API (though we could make it more rare). Thus, here, we stick to a single "stream" of pause-read events from `PeerManager` to user code via `send_data` calls, dropping the read-pause flag return from `read_event` entirely.

Technically this adds risk that someone can flood us with enough messages fast enough to bloat our outbound buffer for a peer before `PeerManager::process_events` gets called and can flush the pause flag via `send_data` calls to all descriptors. This isn't ideal, but it should still be relatively hard to do, as `process_events` calls are pretty quick and should be triggered immediately after each `read_event` call completes.
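In signature terms, the described change amounts to roughly the following (a sketch inferred from the description and the `Ok(()) => {}` hunk above; consult the diff for the exact types):

```rust
// Before: the bool told the caller whether to pause reads after this call.
pub fn read_event(&self, descriptor: &mut Descriptor, data: &[u8])
    -> Result<bool, PeerHandleError>;

// After: no pause flag in the return; pausing/unpausing is signalled solely
// through the resume_read argument PeerManager passes to send_data, yielding
// a single ordered stream of pause events to user code.
pub fn read_event(&self, descriptor: &mut Descriptor, data: &[u8])
    -> Result<(), PeerHandleError>;

// SocketDescriptor::send_data (unchanged shape): resume_read == false now
// also means "pause reads".
fn send_data(&mut self, data: &[u8], resume_read: bool) -> usize;
```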
In the previous commit, we moved the `send_data` `resume_read` flag to also indicate that we should pause if it's unset. This should work, as we mostly only set the flag when we're sending, but it may cause us to fail to pause if we are blocked on gossip validation but `awaiting_write_event` wasn't set, as we had previously failed to fully flush a buffer (which no longer implies read-pause). Here we make this logic much more robust by ensuring we always make at least one `send_data` call in `do_attempt_write_data` if we need to pause reads (or unpause them), as sketched below.
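A sketch of the guarantee this describes, with assumed helper names and partial-write bookkeeping deliberately simplified:

```rust
use std::collections::VecDeque;

trait SocketDescriptor {
    fn send_data(&mut self, data: &[u8], resume_read: bool) -> usize;
}

struct Peer {
    pending_outbound_buffer: VecDeque<Vec<u8>>,
    sent_pause_read: bool,
}

impl Peer {
    // Assumed stand-in for the real buffer-fullness check
    // (cf. OUTBOUND_BUFFER_LIMIT_READ_PAUSE).
    fn should_pause_read(&self) -> bool {
        self.pending_outbound_buffer.len() > 10
    }
}

fn do_attempt_write_data<D: SocketDescriptor>(descriptor: &mut D, peer: &mut Peer) {
    let should_pause = peer.should_pause_read();
    while let Some(buf) = peer.pending_outbound_buffer.front() {
        // Every write also carries the current pause state.
        let sent = descriptor.send_data(buf, !should_pause);
        peer.sent_pause_read = should_pause;
        if sent < buf.len() {
            // The descriptor is full; wait for the next write event.
            // (Partial-write bookkeeping is elided in this sketch.)
            return;
        }
        peer.pending_outbound_buffer.pop_front();
    }
    // Nothing (left) to write, but if the desired pause state differs from
    // what we last told the descriptor, make one empty send_data call so
    // the new resume_read value still gets through.
    if peer.sent_pause_read != should_pause {
        descriptor.send_data(&[], !should_pause);
        peer.sent_pause_read = should_pause;
    }
}
```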
Force-pushed: e4a70b9 to bd4356a
Dropped the first commit as it makes it more annoying to remove the spurious …