Various followups to #3917 #3941
base: main
Conversation
We recently switched to `ReceiveAuthKey`-based blinded path authentication, removing various fields used to authenticate blinded paths from contexts. In doing so, we left a comment arguing that such fields should not have their TLV IDs reused; however, there's no reason for that restriction. If we reuse old blinded paths with those fields set, they'll fail the new authentication checks anyway, so the fields should never end up being seen in blinded paths we'd accept. This simply updates the comments.
We recently switched to `ReceiveAuthKey`-based blinded path authentication, removing various fields used to authenticate blinded paths from contexts. In doing so, we forgot to update the documentation on `verify_inbound_async_payment_context` to note that it no longer does any cryptographic verification but instead only validates the expiry. Here we update that documentation.
We recently switched to `ReceiveAuthKey`-based blinded path authentication, removing various fields used to authenticate blinded paths from contexts. In doing so, we removed no-longer-needed `HMAC_INPUT`s in offer metadata validation and left a comment noting that previously-used values should not be reused. That comment was slightly incorrect, as it suggested some kind of "backward compatibility" concern, but of course we broke backwards compatibility when we stopped accepting the previous authentication scheme. Instead, here, we update the comment to note that what we're protecting against is a type confusion attack.
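To make the type confusion concern concrete, the idea is domain separation: each kind of context mixes a distinct tag into its HMAC, so a MAC computed for one kind of metadata can never verify as a different kind. The sketch below is illustrative only, with hypothetical tag constants and the `hmac`/`sha2` crates standing in for the crate's actual `HMAC_INPUT` handling.

```rust
// Hedged sketch: the tag constants here are made up, not rust-lightning's
// HMAC_INPUT values. Distinct per-type tags make MACs for different context
// types mutually incompatible, which is what rules out type confusion.
use hmac::{Hmac, Mac};
use sha2::Sha256;

const OFFER_PAYMENT_TAG: &[u8; 16] = b"offer payment~~~";
const ASYNC_PAYMENT_TAG: &[u8; 16] = b"async payment~~~";

fn context_hmac(key: &[u8; 32], type_tag: &[u8; 16], metadata: &[u8]) -> Vec<u8> {
	let mut mac = Hmac::<Sha256>::new_from_slice(key).expect("HMAC accepts any key length");
	// Domain separation: MAC the type tag first, then the metadata, so a MAC
	// valid for one context type cannot be replayed as another.
	mac.update(type_tag);
	mac.update(metadata);
	mac.finalize().into_bytes().to_vec()
}
```

Reusing a retired tag value for a new context type would collapse that separation, which is the real reason the old values stay reserved.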
`ChaChaPolyWriteAdapter` is used to wrap `Writeable` objects in an AEAD before serializing them. This is great, except that we sometimes call `encode` on a `ChaChaPolyWriteAdapter`, which first calls `Writeable::serialized_length` to calculate the length to pre-allocate in a `Vec` before doing the actual write. Because `ChaChaPolyWriteAdapter` did not override `Writeable::serialized_length`, this led to doing the full cryptographic AEAD encryption and MAC twice: once to calculate the length and once to actually write the data. Instead, here, we override `<ChaChaPolyWriteAdapter as Writeable>::serialized_length` to simply calculate the length we need.
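As a rough sketch of the fix (simplified stand-ins for `Writeable` and the adapter, assumed for illustration rather than copied from the crate), the length of the AEAD output can be computed arithmetically instead of letting the default `serialized_length` run the encryption a second time:

```rust
// Simplified stand-ins; the real rust-lightning trait and adapter differ.
trait Writeable {
	fn write(&self, out: &mut Vec<u8>);
	// Default: serialize into a scratch buffer just to learn the length. For
	// an AEAD adapter this means encrypting and MACing twice per `encode`.
	fn serialized_length(&self) -> usize {
		let mut scratch = Vec::new();
		self.write(&mut scratch);
		scratch.len()
	}
	fn encode(&self) -> Vec<u8> {
		// Pre-allocates using serialized_length(), then writes for real.
		let mut out = Vec::with_capacity(self.serialized_length());
		self.write(&mut out);
		out
	}
}

struct AeadWriteAdapter<'a, T: Writeable> {
	inner: &'a T,
}

impl<'a, T: Writeable> Writeable for AeadWriteAdapter<'a, T> {
	fn write(&self, out: &mut Vec<u8>) {
		// Stand-in for the real encrypt-and-MAC path.
		let mut plaintext = Vec::new();
		self.inner.write(&mut plaintext);
		out.extend_from_slice(&plaintext); // would be the ciphertext
		out.extend_from_slice(&[0u8; 16]); // would be the Poly1305 tag
	}
	// The override: an AEAD's output length is knowable without running the
	// cipher, since it is always the plaintext length plus the 16-byte tag.
	fn serialized_length(&self) -> usize {
		self.inner.serialized_length() + 16
	}
}
```

With the override in place, `encode` still pre-allocates the right capacity but the encryption and MAC run only once.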
`ChaCha[Dual]PolyReadAdapter` currently read the encrypted object using `Readable` through the ChaCha stream (including the Poly1305 HMAC), but then consume any remaining bytes directly. This results in any extra bytes not consumed by the desired type's `Readable` being ignored and not included in the HMAC check. This is likely not the desired behavior - if we get some data which has extra slack at the end we ignore, it should still be authenticated as the sender likely thinks that data has meaning and included it in their HMAC check. Luckily, I believe this is currently dead code - `ChaCha[Dual]PolyReadAdapter` are only used for TLV stream reads which consume the full underlying stream. However, if either is used for non-TLV-streams in the future, this may be important. Here we simply push any extra bytes read through the ChaCha20Poly1305 reader, ensuring extra data is included in the HMAC check.
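As a minimal sketch of that behavior (a generic `Read`-based stand-in rather than the actual `ChaChaDualPolyReader` API), the trailing bytes are pulled through the decrypting stream so they are fed into the MAC, instead of being consumed off the inner reader where they would escape authentication:

```rust
use std::io::Read;

// Illustrative helper, not rust-lightning code: drain whatever the inner
// `Readable` left behind *through* the decrypting stream, so every byte of
// the ciphertext contributes to the Poly1305 check.
fn drain_through_mac<R: Read>(decrypting_stream: &mut R) -> std::io::Result<()> {
	let mut buf = [0u8; 256];
	loop {
		// Each read decrypts and MACs the bytes; reading from the inner
		// reader directly would skip the MAC and leave the tail unchecked.
		let n = decrypting_stream.read(&mut buf)?;
		if n == 0 {
			break;
		}
	}
	Ok(())
}
```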
👋 Thanks for assigning @valentinewallace as a reviewer!
```diff
@@ -128,7 +128,10 @@ impl<T: Readable> LengthReadableArgs<([u8; 32], [u8; 32])> for ChaChaDualPolyRea
 		ChaChaDualPolyReader { chacha: &mut chacha, poly: &mut mac, read_len: 0, read: s };

 	let readable: T = Readable::read(&mut chacha_stream)?;
-	chacha_stream.read.eat_remaining()?;
+	while chacha_stream.read.bytes_remain() {
```
I'm confused by this change. `eat_remaining` will consume the remaining bytes and error if we didn't read all of the bytes in the initial call to `read`, which seems correct to me. Don't any remaining bytes indicate a buggy node, since they incorrectly encoded the length we were supposed to read in the `LengthLimitedReader`? If we're reading multiple objects, it seems like the underlying `Readable` should be on a tuple and still not have any bytes remaining.
`eat_remaining` will read those bytes directly, though, rather than going through the `chacha_stream`. Note that we call `eat_remaining` on `chacha_stream.read`, not on `chacha_stream`. As a result, any bytes which get read that way will not be included in the HMAC, which seems wrong.
👋 The first review has been submitted! Do you think this PR is ready for a second reviewer? If so, click here to assign a second reviewer.
I have one question, but I don't think the code is harmful and it seems I'm likely missing something. LGTM otherwise!
Some comment/doc corrections, a bug in dead code, and a performance improvement.