Async send always-online counterparty side #4045
Conversation
👋 Thanks for assigning @TheBlueMatt as a reviewer!
lightning/src/ln/channel.rs
Outdated
if let Some(per_commitment_secret) = per_commitment_secret {
	if self.holder_commitment_point.can_advance() {
		let mut release_htlc_message_paths = Vec::new();
		for htlc in &self.context.pending_inbound_htlcs {
			// TODO: how to handle the errors here
For reviewers:
Would be good to discuss how to handle failure to create the path_for_release_held_htlc here.
This may be fine for v1 since the failures should basically never happen, but in theory we could end up in a situation where the HTLC gets close to expiring and the sender isn't online for the HTLC to be failed back to them, which would result in FC.
It's a little tricky because it seems nontrivial to add support for returning HTLCs-to-fail from this method.
One option would be to make path_for_release_held_htlc infallible by having it fall back to creating a 1-hop blinded path.
Another more robust but more complex-seeming option would be to attempt to create the blinded path when the corresponding update_add is received, storing the resulting path in the pending HTLC for use when it comes time to generate the RAA. That would allow us to cache the HTLC's pending failure in the InboundHTLCResolution if we fail to create a blinded path for the RAA, though it would reintroduce a potential timing attack that we had eliminated during the refactor that added ChannelManager::decode_update_add_htlcs.
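The infallible-fallback idea could be sketched roughly like this. All types and names below are placeholder stand-ins for illustration; path_for_release_held_htlc's real signature and LDK's actual blinded-path types are not reproduced here:

```rust
// Hypothetical sketch: prefer a multi-hop blinded path built from cached
// peers, but fall back to a 1-hop path (terminating at our own node) so the
// function can never fail. Placeholder types, not LDK's API.

#[derive(Debug, PartialEq)]
enum BlindedPath {
    MultiHop(Vec<u8>), // stand-in for a real multi-hop blinded path
    OneHop(u8),        // stand-in for a path containing only our node
}

fn path_for_release_held_htlc(cached_peers: &[u8], our_node_id: u8) -> BlindedPath {
    if cached_peers.is_empty() {
        // Infallible fallback: a 1-hop path terminating at us always works,
        // at the cost of revealing our node as the path's endpoint.
        BlindedPath::OneHop(our_node_id)
    } else {
        BlindedPath::MultiHop(cached_peers.to_vec())
    }
}
```

The trade-off is privacy: a 1-hop path identifies the counterparty node directly, which is why a multi-hop path is preferred when peers are available.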
end up in a situation where the HTLC gets close to expiring and the sender isn't online for the HTLC to be failed back to them, which would result in FC.
It shouldn't: the LSP (and LDK generally) won't FC a channel because of an inbound HTLC for which we don't have the preimage (i.e. it's our counterparty's money). The client probably will FC here, but that's a long-standing issue (#4048).
One option would be to make path_for_release_held_htlc infallible by having it fall back to creating a 1-hop blinded path.
Seems fine to me.
Advantage of the 1-hop blinded path would probably also be that peers don't need to be cached in flow?
Advantage of the 1-hop blinded path would probably also be that peers don't need to be cached in flow?
The approach discussed above means we'll fall back to using 1-hop if creating a multi-hop path using the cached peers fails, although falling back should basically never happen.
Codecov Report
@@ Coverage Diff @@
## main #4045 +/- ##
==========================================
- Coverage 88.76% 88.35% -0.41%
==========================================
Files 176 177 +1
Lines 129518 131472 +1954
Branches 129518 131472 +1954
==========================================
+ Hits 114968 116167 +1199
- Misses 11945 12655 +710
- Partials 2605 2650 +45
Not a bad change set for what it gets us. Two comments:
- Are existing tests covering some or most of this code, or does that need to be added still?
- I think I would have preferred the single hop blinded path option for simplicity.
lightning/src/offers/flow.rs
Outdated
// for blinded paths.
let peer = MessageForwardNode { node_id: peer_node_id, short_channel_id: None };
match peers_cache.len() {
	num_peers if num_peers >= MAX_CACHED_PEERS => {
Can't this be simplified by keeping all peers in memory? It can't be that many.
Definitely could be simplified that way. My thought process was that this is intended for use by (up to) an acinq-sized node. Acinq has ~1,500 public channel peers and probably thousands(?) of tiny mobile peers. So cloning all those for every mobile client's async payment doesn't seem ideal. It's an extreme example but the code didn't seem too bad as-is, I agree this may be premature though.
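The bounded-cache behavior under discussion can be sketched as follows. The constant's value, the struct, and the evict-oldest policy are illustrative assumptions for this sketch, not LDK's actual implementation:

```rust
// Hypothetical sketch of a bounded peer cache: keep at most MAX_CACHED_PEERS
// entries, skipping duplicates and evicting the oldest entry when full.

const MAX_CACHED_PEERS: usize = 100; // illustrative bound, not LDK's value

struct PeerCache {
    peers: Vec<u64>, // stand-in for cached MessageForwardNode entries
}

impl PeerCache {
    fn new() -> Self {
        PeerCache { peers: Vec::new() }
    }

    fn insert(&mut self, peer_node_id: u64) {
        if self.peers.contains(&peer_node_id) {
            return; // already cached, nothing to do
        }
        if self.peers.len() >= MAX_CACHED_PEERS {
            // Evict the oldest entry so memory stays bounded even on a node
            // with thousands of peers.
            self.peers.remove(0);
        }
        self.peers.push(peer_node_id);
    }
}
```

The alternative being argued for is simply dropping the bound and keeping every connected peer in memory, trading a small amount of memory for simpler code.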
I would just remove it really. For an acinq-sized node, I don't think the memory usage matters.
Alright, I guess you could also argue that optimizing here can wait for a larger refactor that gets rid of a lot more low hanging fruit clones of peer vecs in the flow that were discussed on #3639. Removed in a fixup.
Force-pushed 0a858a9 to c0a89bb
Force-pushed c0a89bb to 1d398ff
lightning/src/ln/msgs.rs
Outdated
@@ -3173,14 +3186,17 @@ impl_writeable_msg!(RevokeAndACK, {
	channel_id,
	per_commitment_secret,
	next_per_commitment_point
}, {});
}, {
	(0, release_htlc_message_paths, optional_vec)
I'm not sure if we should use an experimental TLV for this until we get more spec feedback.
I wouldn't really bother with that. It's just a convention, but does require a switch-over even when everything is good.
FYI I'm switching to an experimental TLV per #4045 (comment)
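For context on the switch-over cost: an experimental TLV uses the same BigSize-type / BigSize-length / value framing as any spec-assigned TLV record; only the type number differs, so moving between the two later is a renumbering, not a re-encoding. A minimal sketch of that framing (helper names are hypothetical, and no particular type number for release_htlc_message_paths is assumed):

```rust
// Sketch of BOLT 1 style TLV framing: BigSize type, BigSize length, raw value.
// BigSize is big-endian: one byte below 0xfd, then 0xfd+u16, 0xfe+u32, 0xff+u64.

fn write_bigsize(n: u64, out: &mut Vec<u8>) {
    match n {
        0..=0xfc => out.push(n as u8),
        0xfd..=0xffff => {
            out.push(0xfd);
            out.extend_from_slice(&(n as u16).to_be_bytes());
        }
        0x10000..=0xffff_ffff => {
            out.push(0xfe);
            out.extend_from_slice(&(n as u32).to_be_bytes());
        }
        _ => {
            out.push(0xff);
            out.extend_from_slice(&n.to_be_bytes());
        }
    }
}

fn write_tlv_record(typ: u64, value: &[u8], out: &mut Vec<u8>) {
    write_bigsize(typ, out);               // record type (spec or experimental)
    write_bigsize(value.len() as u64, out); // value length
    out.extend_from_slice(value);           // value bytes
}
```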
Force-pushed c487147 to b079c36
Force-pushed b079c36 to aeaa42d
This PR looks good to me, but I think it would be good to have a passing unit test in the follow-up and/or an ldk-node test before merging this one.
From the above #4045 (comment), could this also happen if the LSP is holding an htlc from an async sender but the receiver didn't send a Although if so, I guess it should be solved by #4048
Force-pushed aeaa42d to f97e0e0
Definitely, see the testing in #4046. Hopefully we can get an LDK Node test passing as well.
A few minor things, but I'm still kinda meh on the way we're doing connected-peers caching. It's weird that we now have a cache in the flow and then only use it when building one type of blinded path, not the rest. We should keep things consistent, IMO.
lightning/src/ln/channelmanager.rs
Outdated
@@ -13124,6 +13126,11 @@ where
	}),
	ReconnectionMsg::None => {},
}
if chan.context().is_usable()
Shouldn't we use the same full criteria we do for get_peers_for_blinded_path? Maybe it makes more sense to just call get_peers_for_blinded_path on each peer connect/timer tick/channel confirmation/whatever and cache that? We maybe should cache it in ChannelManager anyway given we call that method during message handling and it's pretty expensive.
Fine by me, per #4045 (comment) I would rather save this for follow-up though since it is part of a larger refactor. Opened #4066
If there are doubts about the cache, I'd like to mention again that I believe that at this stage just doing a 1-hop blinded path is OK.
As part of supporting sending payments as an often-offline sender, the sender needs to be able to set a flag in their update_add_htlc message indicating that the HTLC should be held until receipt of a release_held_htlc onion message from the often-offline payment recipient. We don't yet ever set this flag, but lay the groundwork by including the field in the update_add struct. See-also <lightning/bolts#989>
As part of supporting sending payments as an often-offline sender, the often-offline sender's channel counterparty needs to advertise a feature bit indicating that they support holding onto the sender's HTLC until they receive a release_held_htlc onion message from the recipient indicating that they are online and ready to receive the payment. See-also <lightning/bolts#989> We don't yet advertise support of this feature.
As part of supporting sending payments as an often-offline sender, the often-offline sender's channel counterparty needs to advertise a feature bit indicating that they support holding onto the sender's HTLC until they receive a release_held_htlc onion message from the recipient indicating that they are online and ready to receive the payment. Here we add a config flag to turn on advertising this feature, and fail back hold_htlcs if this config flag is not set. See-also <lightning/bolts#989>
Force-pushed e16e4e3 to f8143cf
Force-pushed f8143cf to a3d9c45
As part of supporting sending payments as an often-offline sender, the sender needs to send held_htlc_available onion messages where the reply path terminates at their always-online channel counterparty that is holding the HTLC until the recipient comes online. That way when the recipient sends release_held_htlc, the sender's counterparty will receive that message. To accomplish this, the sender's always-online counterparty includes said reply path in the revoke_and_ack message corresponding to the held HTLC. Here we add support for this field, though we don't set it yet. We also had to tweak the ser macros for this because impl_writeable_msg had never had to write a Vec in a message TLV field before.
We previously had a lock order dependency where the forward_htlcs lock had to be taken before the intercepted_htlcs lock. Here we remove this dependency, which also prepares for a cleaner commit once we start also intercepting HTLCs for often-offline recipients.
Makes the next commit cleaner.
As part of supporting sending payments as an often-offline sender, the sender's always-online channel counterparty needs to hold onto the sender's HTLC until they receive a release_held_htlc onion message from the often-offline recipient. Here we implement storing these held HTLCs in the existing ChannelManager::pending_intercepted_htlcs map. We want to move in the direction of obviating the need to persist the ChannelManager entirely, so it doesn't really make sense to add a whole new map for these HTLCs.
As part of supporting sending payments as an often-offline sender, the sender needs to send held_htlc_available onion messages such that the reply path to the message terminates at their always-online channel counterparty that is holding the HTLC. That way when the recipient responds with release_held_htlc, the sender's counterparty will receive that message. Here we add a method for creating said reply path, which will be used in the next commit.
As part of supporting sending payments as an often-offline sender, the sender needs to send held_htlc_available onion messages such that the reply path to the message terminates at their always-online channel counterparty that is holding the HTLC. That way when the recipient responds with release_held_htlc, the sender's counterparty will receive that message. Here the counterparty starts including said reply paths in the revoke_and_ack message destined for the sender, so the sender can use these paths in subsequent held_htlc_available messages. We put the paths in the RAA to ensure the sender receives the blinded paths, because failure to deliver the paths means the HTLC will timeout/fail.
As part of supporting sending payments as an often-offline sender, the sender's always-online channel counterparty needs to hold onto the sender's HTLC until they receive a release_held_htlc onion message from the often-offline recipient. Here we implement forwarding these held HTLCs upon receipt of the release message from the recipient.
See prior commits.
Force-pushed 9d7a004 to 6aa52e9
	},
}
} else {
	match forward_htlcs.entry(scid) {
Taking the forward lock could be moved here, to make it more clear that both locks are never held at the same time. If you like the suggestion, it can be done in the follow up too.
That's a good point, I'm sorry but I think there was a rebase error because this commit as-is does not remove the lock dep due to the reason you're saying. I'm debating whether it's worth waiting on CI/reacks again...
Chatgpt says the Rust compiler is smart enough to see that the lock can be dropped when entering any of the first two branches. If chatty is wrong, it'll be corrected in the next PR within days.
Alright, I'm going to roll with that since it's not a huge deal. Although I'm curious what our policy is on merging without CI passing in a case like this; it seems we already do it sometimes.
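The lock-scoping suggestion above can be illustrated with plain std mutexes: each guard is acquired and dropped entirely inside its own branch, so the two locks are provably never held at the same time. The struct and field names are stand-ins for this sketch, not ChannelManager's actual fields:

```rust
// Sketch: acquire each Mutex only inside the branch that needs it, so the
// forward and intercepted locks never overlap and no ordering dependency
// between them can exist.

use std::collections::HashMap;
use std::sync::Mutex;

struct Manager {
    forward_htlcs: Mutex<HashMap<u64, Vec<u32>>>, // stand-in for forward_htlcs
    intercepted_htlcs: Mutex<HashMap<u64, u32>>,  // stand-in for pending_intercepted_htlcs
}

impl Manager {
    fn route_htlc(&self, scid: u64, htlc: u32, hold: bool) {
        if hold {
            // Only the intercepted lock is taken in this branch; the guard is
            // dropped at the end of the statement.
            self.intercepted_htlcs.lock().unwrap().insert(scid, htlc);
        } else {
            // The forward lock is taken only after the branch decision, so it
            // can never be held together with the intercepted lock above.
            self.forward_htlcs.lock().unwrap().entry(scid).or_default().push(htlc);
        }
    }
}
```

Moving each `lock()` call into its branch makes the non-overlap obvious to readers (and to lock-order debug tooling) without relying on the compiler to shorten guard lifetimes.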
@@ -10719,16 +10776,43 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
	));
};

if !is_our_scid
	// In the case that we have an HTLC that we're supposed to hold onto until the
Saw that you have a test for this in the follow up. Good!
// | |
// | |__`pending_intercepted_htlcs`
// |
// |__`pending_intercepted_htlcs`
I searched a bit through the code to ensure that these locks are now never held together. Seems good. I did wonder if debug_sync can be of use to check this type of requirement, but don't know how it works exactly.
The nice thing about our mutex debug stuff is that it requires no configuration - it checks that locks are correct by detecting the ordering rather than being told it. Comments here are advisory to help developers fix issues they run into, rather than prescriptive.
Interesting. And that is running automatically in debug builds?
It does, though it runs more aggressively when you build with --features backtrace.
Merged 560150d into lightningdevkit:main
Implements the always-online counterparty side of sending payments as an often-offline sender to an often-offline recipient.
Partially addresses #2298
Based on #4044