simln-lib: add htlc interceptor for simulated nodes #261
Conversation
Drive-by comment: could we split up the logic adding a unique index and the interceptor into separate commits?
Main comments are around how we handle shutdown in a reasonable way. Specifically:
- An interceptor has had a critical failure: how does it tell the simulator to shut down?
- The simulator needs to terminate the interceptors: how can we cleanly do this so that interception code isn't left in a bad state?
simln-lib/src/sim_node.rs
Outdated
#[error("DuplicateCustomRecord: key {0}")] | ||
DuplicateCustomRecord(u64), | ||
#[error("InterceptorError: {0}")] | ||
InterceptorError(Box<dyn Error + Send + Sync + 'static>), |
Let's go ahead and define this as an error type in the top-level library?
I think that simln could use an overhaul in the way we do errors (I did not understand rust errors when we started this project 🙈), and defining a reasonable general error type seems like a good start.
simln-lib/src/sim_node.rs
Outdated
/// The short channel id for the incoming channel that this htlc was delivered on.
pub incoming_htlc: HtlcRef,
nit: note that this is a unique identifier for the htlc
simln-lib/src/sim_node.rs
Outdated
/// Custom records provided by the incoming htlc.
pub incoming_custom_records: CustomRecords,

/// The short channel id for the outgoing channel that this htlc should be forwarded over.
nit: expand doc to note that None indicates that the intercepting node is the receiver.
simln-lib/src/sim_node.rs
Outdated
}
},
Ok(Err(e)) => {
    drop(intercepts);
I think that we may need a more gentle shutdown than dropping the task (which afaik will force-abort the intercept_htlc call).
I can see it being difficult to write interceptor code that works with another system when your code may be run halfway and then killed - it could end up with funny states. My instinct is to provide a triggered pair and instruct interceptors to listen on it for shutdown signals? Open to other approaches as well.
I do think that the ability to shut down all the other interceptors once the first error is reached is a really, really nice feature.
sounds good 👍 provide that trigger in the InterceptRequest and send on it if we need to fail other interceptors?
just to note, although intercept_htlc may be force-aborted, the interceptor should still get notified about htlc resolution through notify_resolution - but I think we'd still like to shut down gently through intercept_htlc?
provide that trigger in the InterceptRequest and send on it if we need to fail other interceptors?
I think that we should handle the triggering?
- Give each interceptor a listener
- On first interceptor error, trigger shutdown
Then interceptors are responsible for listening on that shutdown listener and getting their state in order before they exit. We still wait for each to finish, but that should be relatively quick because we've signaled that it's shutdown time.
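Roughly what that looks like from the interceptor implementor's side - a minimal sketch only, assuming the request carries a triggered::Listener and using illustrative names:

```rust
use tokio::time::{sleep, Duration};

// Hypothetical interceptor body: do slow work, but bail out promptly if the
// shutdown listener handed over in the request fires (names are illustrative).
async fn intercept_htlc(shutdown_listener: triggered::Listener) -> Result<(), String> {
    tokio::select! {
        // Normal path: whatever external call / hold the interceptor performs.
        _ = sleep(Duration::from_secs(30)) => Ok(()),
        // Shutdown path: another interceptor failed or the simulation is exiting,
        // so tidy up state and return instead of blocking HTLC resolution.
        _ = shutdown_listener => Err("interceptor shutting down".to_string()),
    }
}
```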
Interceptor should still get notified about htlc resolution through notify_resolution
As-is I don't think we'd notify if the HTLC is failed by the interceptor? It'll never be "fully" forwarded by the node, so we don't notify it being resolved.
Edit: although, come to think of it, we probably do want to notify the failure even if one of the interceptors has returned a fail outcome - the others may have returned success and aren't aware that it actually never ended up going through.
- Give each interceptor a listener
- On first interceptor error, trigger shutdown
Then interceptors are responsible for listening on that shutdown listener and getting their state in order before they exit. We still wait for each to finish, but that should be relatively quick because we've signaled that it's shutdown time.
got it, will do.
As-is I don't think we'd notify if the HTLC is failed by the interceptor? It'll never be "fully" forwarded by the node, so we don't notify it being resolved.
I thought we would. Because if an interceptor returns an error, we then return it in add_htlcs and then call remove_htlcs here: https://github.com/elnosh/sim-ln/blob/2de405fdcb37c8ffe4bd8d2cc0077ef7099cc3ed/simln-lib/src/sim_node.rs#L1274 which internally calls notify_resolution for each interceptor.
I thought we would.
Ok cool, forgot when that is/isn't called but sgtm!
simln-lib/src/sim_node.rs
Outdated
pub trait Interceptor: Send + Sync {
    /// Implemented by HTLC interceptors that provide input on the resolution of HTLCs forwarded in the simulation.
    async fn intercept_htlc(&self, req: InterceptRequest)
        -> Result<CustomRecords, ForwardingError>;
If we only have one layer of Result here, how does the intercept_htlc call tell the simulator that it's hit a critical error and wants to shut down (rather than a forwarding error for this payment specifically)?
I wasn't sure if that's something we'd want. I didn't like the idea of nested Results. It could instead be communicated through specific variants in ForwardingError? Simulation could check with is_critical if the ForwardingError returned warrants a shutdown.
It could instead be communicated through specific variants in ForwardingError
That seems like a bit of a layering violation to me. An InterceptorError for interceptors that might want to fail forwards with their own ForwardingFailure reason that isn't defined in simln (eg, you don't have enough reputation) makes sense to me, but we're stretching its definition a bit to cover things like unexpected internal state errors.
Nested results are definitely ugly, but I think that it more clearly represents the two types of failures that we could have here? One for "fail this HTLC", one for "something has gone wrong with my interception". If we define a type for the inner result the function signatures won't be too ugly at least.
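For example, naming the inner result could look something like this - a sketch with stand-in types, assuming the async-trait crate, not the final API:

```rust
use std::collections::HashMap;

// Stand-ins for the PR's types, just to show the signature shape.
type CustomRecords = HashMap<u64, Vec<u8>>;
struct ForwardingError; // "fail this HTLC"
struct CriticalError;   // "something has gone wrong with my interception"
struct InterceptRequest;

/// Naming the inner result keeps the signature readable: the outer error asks
/// the simulation to shut down, the inner error only fails this particular HTLC.
type InterceptOutcome = Result<CustomRecords, ForwardingError>;

#[async_trait::async_trait]
trait Interceptor: Send + Sync {
    async fn intercept_htlc(
        &self,
        req: InterceptRequest,
    ) -> Result<InterceptOutcome, CriticalError>;
}
```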
fair! will change 👍
I wasn't sure if letting the interceptor shut down the simulation was something we'd want but it could be done through specific variants in the ForwardingError.
should we include something like a channel or shutdown signal in
WDYT about adapting this latency interceptor and surfacing it on the CI? Nice to have a user of the interceptor that we can run + test in master rather than merging in some unused code and having to test it externally.
sgtm, as I was doing exactly that and testing externally heh.
simln-lib/src/sim_node.rs
Outdated
async fn notify_resolution(
    &self,
    _res: InterceptResolution,
) -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
    Ok(())
}
rethinking this, I'm wondering what type of error this should be, or whether this method should return a Result at all? Leaning towards this should not return a Result since the simulation is just notifying the interceptor of a resolution.
Perhaps the interceptor wants to signal shutdown on the notify_resolution because it got information it wasn't expecting? Or a database became unavailable, etc.
Added changes addressing comments. Using nested results now to differentiate between normal and critical errors that could be returned. Also took your commit carlaKC@2a35bd5 as suggested to add the latency interceptor. If the simulation runs with the added latency, it can report an error on exit because we triggered a shutdown while an intercepted payment (with the added latency) hasn't resolved.
This is looking really good! Only two major comments on shutdown + cli option for latency.
Been a bit nitty about docs because this feature really will only be used when people use simln as a library so it's important to get the docs readable.
#[error("DuplicateCustomRecord: key {0}")] | ||
DuplicateCustomRecord(u64), | ||
#[error("InterceptorError: {0}")] | ||
InterceptorError(String), |
nit: docs on these new variants, here + above
simln-lib/src/sim_node.rs
Outdated
// The listener on which the interceptor will receive shutdown signals.
pub shutdown_listener: Listener,
Let's add a little more instruction here for end users? Just explaining that implementations must listen on this channel, because if they don't they're at risk of blocking quick resolution of HTLCs when other interceptors quickly return a failure outcome.
pub success: bool,
}

pub type CustomRecords = HashMap<u64, Vec<u8>>;
We're aiming to enforce docs on public types in future so let's add docs here and below?
simln-lib/src/sim_node.rs
Outdated
/// Optional set of interceptors that will be called every time a HTLC is added to a simulated channel.
interceptors: Vec<Arc<dyn Interceptor>>,
I think that it would be useful to add some docs explaining how multiple interceptors interact with each other, specifically:
- That custom records will be merged, but conflicts will fail
- That any single interceptor returning a forwarding failure will result in the HTLC being failed, and the other interceptors will receive a shutdown signal if this happens
I think that this is probably the most natural place for this to live? Doesn't really fit on the trait because this is one specific way of using it.
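For the docs, the merge behaviour being described is roughly this - a sketch only, with an illustrative function name, and the exact conflict rule is whatever the implementation settles on:

```rust
use std::collections::HashMap;

type CustomRecords = HashMap<u64, Vec<u8>>;

// Merge the custom records returned by each interceptor, failing on the first
// duplicate key (which the simulation surfaces as DuplicateCustomRecord(key)).
fn merge_custom_records(
    sets: impl IntoIterator<Item = CustomRecords>,
) -> Result<CustomRecords, u64> {
    let mut merged = CustomRecords::new();
    for set in sets {
        for (key, value) in set {
            if merged.insert(key, value).is_some() {
                return Err(key);
            }
        }
    }
    Ok(merged)
}
```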
let channel = node_lock
    .get_mut(&scid)
    .ok_or(CriticalError::ChannelNotFound(scid))?;
nit: pre-existing, but could you update function docs to note that if we run into a critical error we don't bother to fail back the HTLC, with the expectation that the simulation will shut down shortly.
simln-lib/src/sim_node.rs
Outdated
// the HTLC. If any of the interceptors did return an error, we send a shutdown signal
// to the other interceptors that may have not returned yet.
let mut interceptor_failure = None;
while let Some(res) = intercepts.join_next().await {
I think there's one last shutdown case we need to think about here:
- Waiting on long resolving interceptors
- Error elsewhere in the simulation (eg, it hits its total runtime and shuts down)
We'll keep waiting here, because we've created our own triggered pair. I think that we can do this by selecting on intercepts.join_next(), passing in SimGraph's shutdown_trigger, and shutting down the interceptors if we get the high-level signal that it's time to shut down.
We're somewhat running into a flaw with triggered here - that we can't create a child trigger that would shut down with the parent, but maybe that's a feature not a bug. Either way, I think it's outside of the scope of this PR to rework all that so I think a select is our best option (even if ugly).
Another reason to pull this out to a function IMO - always good to have some unit tests checking that all this slippery shutdown stuff works as expected! Sadly we might have to make a manual mock, because mockall doesn't play nice with async, but hopefully it's minimal boilerplate.
I think that we can do this by selecting on intercepts.join_next() and pass in SimGraph's shutdown_trigger and shut down the interceptors if we get the high level signal that it's time to shut down.
I think we'll need to pass in a shutdown listener from the upstream Simulation since currently we don't have a way to listen for a shutdown signal in SimGraph afaik. shutdown_trigger is the one we use to trigger on critical errors.
We're somewhat running into a flaw with triggered here - that we can't create a child trigger that would shut down with the parent, but maybe that's a feature not a bug. Either way, I think it's outside of the scope of this PR to rework all that so I think a select is our best option (even if ugly).
maybe for another PR, but we could look into CancellationToken - looks like it lets you create "child tokens" that can get cancelled along with the parent
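For reference, the child-token behaviour in question (tokio-util's CancellationToken; just a standalone sketch, nothing wired into simln yet):

```rust
use tokio_util::sync::CancellationToken;

#[tokio::main]
async fn main() {
    // Parent token owned by the simulation; child handed to an interceptor task.
    let parent = CancellationToken::new();
    let child = parent.child_token();

    let task = tokio::spawn(async move {
        // Cancelling the parent also completes the child's cancelled() future.
        child.cancelled().await;
        println!("child saw parent cancellation");
    });

    parent.cancel();
    task.await.unwrap();
}
```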
I think we'll need to pass in a shutdown listener from the upstream Simulation since currently we don't have a way to listen for a shutdown signal in SimGraph afaik.
Right, we need shutdown_listener 🤦‍♀️ SGTM, we can add it to SimGraph and then pass it along.
maybe for another PR, but we could look into CancellationToken - looks like it lets you create "child tokens" that can get cancelled along with the parent
Yeah I like the idea of investigating this - took a look when we updated the leaky joinset and liked it.
moved the htlc interception logic to a separate method. I didn't end up needing to use select here since we are passing the shutdown listener to SimGraph now, so it can be passed directly in the InterceptRequest to the interceptors.
/// Notifies the interceptor that a previously intercepted htlc has been resolved. Default implementation is a no-op
/// for cases where the interceptor only cares about interception, not resolution of htlcs.
async fn notify_resolution(&self, _res: InterceptResolution) -> Result<(), CriticalError> {
note in docs that this function should not be blocking
/// Tests intercepted htlc success.
#[tokio::test]
async fn test_intercepted_htlc_success() {
very nice tests 👌
let mut mock_interceptor_1 = MockTestInterceptor::new();
mock_interceptor_1
    .expect_intercept_htlc()
    .returning(|_| Ok(Ok(CustomRecords::default())));
Pity we can't have an async closure here - would be nice to make this select on shutdown + a long sleep so we know we're correctly shutting down a long-waiting interceptor when another errors.
sim-cli/src/parsing.rs
Outdated
@@ -87,6 +88,9 @@ pub struct Cli {
    /// simulated nodes.
    #[clap(long)]
    pub speedup_clock: Option<u16>,
    /// Latency to optionally introduce for simulated nodes.
    #[clap(long)]
    pub latency_ms: Option<f32>,
Somebody would have to try really hard to mess this up, but technically this could be 0 or negative which doesn't make sense here.
Eg:
sim-cli -l debug -s ln_10_simln.json --latency-ms="-2"
Error: Simulated Network Error: Could not create possion: lambda is not positive in Poisson distribution
Also don't think we need the granularity of a fraction of a millisecond, so I think it's okay to make this a u32? Then validate (rough sketch below):
- It's not zero (just don't set it in that case)
- It's not set when we run a simulation with real nodes (like we do for clock speedup)
- nit: note in doc that this is expressed in milliseconds, since the doc is what sim-cli --help will print.
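Something along these lines for the flag itself - a sketch assuming clap 4's value_parser; the real-nodes check would sit alongside the existing clock-speedup validation:

```rust
use clap::Parser;

// Sketch of the suggested flag: whole milliseconds, rejecting zero at parse time.
#[derive(Parser)]
struct Cli {
    /// Latency (in milliseconds) to optionally introduce for simulated nodes.
    #[clap(long, value_parser = clap::value_parser!(u32).range(1..))]
    latency_ms: Option<u32>,
}

fn main() {
    let cli = Cli::parse();
    // The "don't set this alongside real nodes" check would happen where the
    // clock-speedup option is validated, not here.
    println!("latency_ms = {:?}", cli.latency_ms);
}
```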
for #255
Mostly took changes from these 2 commits to add an Interceptor trait. Some of the things I changed from those commits:
- intercept_htlc now returns a Result to communicate the result of the intercepted htlc instead of sending it through a channel.
- Uses a JoinSet for each interceptor's intercept_htlc so that if any of those holds the htlc for a long time it does not block the other interceptors. It then waits for the completion of the tasks in the joinset. If any of them returns a result to fail the htlc, it drops the JoinSet since there is no need to wait for completion of other tasks because the htlc will fail anyway.
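A self-contained sketch of that JoinSet pattern, with stand-in outcomes in place of the real Interceptor trait:

```rust
use tokio::task::JoinSet;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    // Stand-in outcomes for three interceptors: Ok = forward, Err = fail the htlc.
    let outcomes: Vec<Result<(), String>> = vec![
        Ok(()),
        Err("fail this htlc".to_string()),
        Ok(()),
    ];

    // One task per interceptor's intercept_htlc so a slow one doesn't block the others.
    let mut intercepts = JoinSet::new();
    for outcome in outcomes {
        intercepts.spawn(async move {
            sleep(Duration::from_millis(10)).await; // pretend to hold the htlc
            outcome
        });
    }

    // Wait for completion; on the first failure there's no need to keep waiting,
    // because the htlc will be failed anyway (the PR later replaces the plain drop
    // with a shutdown signal so the remaining interceptors can exit cleanly).
    while let Some(res) = intercepts.join_next().await {
        if let Ok(Err(e)) = res {
            println!("failing htlc: {e}");
            drop(intercepts);
            break;
        }
    }
}
```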