[Feature] Batch proposal spend limits #3471
Conversation
Hope it helps
Note to reviewers: I have modified the ready queue to support O(1) operations at the front, giving us an efficient FIFO. This makes the batch-building logic simpler, because we can optimistically remove a transmission from the queue before processing it for inclusion; reinsertions at the front are only required for valid transmissions that would bring the batch above the spend limit. This may also be useful for implementing priority fees in the future.
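To illustrate the batch-building flow described above, here is a minimal sketch using a `VecDeque`-backed queue. All names (`Transmission`, `ReadyQueue`, `build_batch`) are hypothetical and simplified; this is not the snarkOS implementation, just the optimistic-removal-with-front-reinsertion pattern.

```rust
use std::collections::VecDeque;

/// Hypothetical transmission with a precomputed compute cost.
#[derive(Clone)]
struct Transmission {
    id: u64,
    cost: u64,
}

/// A FIFO ready queue with O(1) operations at the front (illustrative only).
struct ReadyQueue {
    queue: VecDeque<Transmission>,
}

impl ReadyQueue {
    fn new() -> Self {
        Self { queue: VecDeque::new() }
    }

    /// Regular insert at the back.
    fn insert(&mut self, t: Transmission) {
        self.queue.push_back(t);
    }

    /// Optimistically remove the next transmission from the front.
    fn pop_front(&mut self) -> Option<Transmission> {
        self.queue.pop_front()
    }

    /// Reinsert at the front, used when a valid transmission would
    /// bring the batch above the spend limit.
    fn push_front(&mut self, t: Transmission) {
        self.queue.push_front(t);
    }
}

/// Build a batch by draining the queue until the spend limit would be exceeded.
fn build_batch(queue: &mut ReadyQueue, spend_limit: u64) -> Vec<Transmission> {
    let mut batch = Vec::new();
    let mut spent = 0u64;
    while let Some(t) = queue.pop_front() {
        if spent + t.cost > spend_limit {
            // Valid but over the limit: put it back at the front for the next batch.
            queue.push_front(t);
            break;
        }
        spent += t.cost;
        batch.push(t);
    }
    batch
}
```

Because the over-limit transmission goes back to the front, FIFO order is preserved across batches without deferring the drain.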
Rebased.
I'm actually curious whether this new limit would reject a large block like this one from the mainnet outage recovery: https://aleoscan.io/block?h=5213428. And if so, does that mean the network could never recover? @vicsn
Thanks for asking! The new limit applies to proposals, and it will limit the maximum compute used in a block. It is likely that the block you refer to still falls within the margin. Even if it doesn't, it is not an issue, as the limit only becomes effective from a certain future block height onwards.
@vicsn Thanks! It's more of a concern for future outages. It may also be helpful to make sure validators can spread transactions across different blocks in case of a massive pending queue, if that isn't already the case.
Thank you for bringing up this topic, as it is good to think about in terms of possible edge cases. I don't believe this will cause an issue, because the enforcement is at the proposal/certificate level (certificates being the building components of a block). This just means that each valid certificate is limited, which implies that a large pending queue will take longer to clear out, but it should not invalidate block production. Blocks can still grow larger if the subdag takes many rounds to commit, but there is now a stronger limiter preventing blocks from ballooning too quickly. Another point to note is that block production itself does not care about the number of transactions; it only cares that a proper subdag is found.
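To make the "longer to clear, but never stuck" point concrete, here is a toy simulation, assuming every individual transmission's cost is at most the limit (the function name and shape are illustrative, not from the PR):

```rust
/// Illustrative only: drain a pending queue of per-transmission compute
/// costs into successive proposals, each capped by `spend_limit`.
/// Assumes each cost is at most `spend_limit`, so every proposal makes progress.
fn proposals_to_clear(mut pending: Vec<u64>, spend_limit: u64) -> usize {
    let mut proposals = 0;
    while !pending.is_empty() {
        let mut spent = 0u64;
        // Greedily fill one proposal from the front of the queue.
        while let Some(&cost) = pending.first() {
            if spent + cost > spend_limit {
                break;
            }
            spent += cost;
            pending.remove(0);
        }
        proposals += 1;
    }
    proposals
}
```

A backlog simply takes more proposals (and thus more time) to drain; the limit never prevents the queue from being cleared.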
I see, thanks for the clarification!
Rebased.
After the recent changes on the VM side, I have moved cost calculation up into the ledger. The two spend limit test cases are expected to fail as no limit is currently enforced. |
You can merge in |
I'll just note here, for the record, the issue we already discussed. Two validators may happen to have different block heights when they check the same batch proposal, and thus have a slightly different idea of whether the batch proposal is valid or not. I don't think that this can cause forking, because a quorum of signatures, if achieved, is still a quorum. But it would be a cleaner story if the notion of whether a batch is valid at a given round only depended on the round number, which we could achieve by using round numbers instead of block heights to define different versions of batch proposal validity (such as cost limit).
LGTM! Two nits. Might want to update the snarkVM rev one more time before merging.
This PR introduces spend limit checks on batch proposals at construction and prior to signing.
This requires some changes to batch construction: the workers are now drained only after the transmissions have been checked for inclusion in a batch, to avoid reinserting transmissions into the memory pool once the spend limit is surpassed. I have modified the `Ready` queue to support `O(1)` operations at the front, as well as inclusion checks and regular inserts, so the batch proposal logic could be simplified. We no longer need to defer draining and can instead optimistically remove the transmission from the queue for processing. There's an occasional `O(n)` cost to reset the internally tracked offset, but this should happen at most once per epoch, when solutions are cleared from the queue. It was also necessary to expose a `compute_cost` function on the `LedgerService` trait; its internals might still be moved into snarkVM.

This changeset will need to be refined and tested, hence the draft. CI is currently expected to fail.
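The offset trick mentioned above can be sketched as follows: `O(1)` front removal by advancing an index into a backing `Vec`, with an occasional `O(n)` compaction. The type and method names here are illustrative, not the `Ready` queue's actual API.

```rust
/// Sketch of an offset-based FIFO: popping the front advances an offset
/// instead of shifting elements; `reset` pays the O(n) cost to compact
/// the backing Vec (e.g. when solutions are cleared from the queue).
struct OffsetQueue<T> {
    items: Vec<T>,
    offset: usize,
}

impl<T: Clone> OffsetQueue<T> {
    fn new() -> Self {
        Self { items: Vec::new(), offset: 0 }
    }

    fn push_back(&mut self, item: T) {
        self.items.push(item);
    }

    /// O(1): advance the offset instead of shifting elements.
    fn pop_front(&mut self) -> Option<T> {
        if self.offset < self.items.len() {
            let item = self.items[self.offset].clone();
            self.offset += 1;
            Some(item)
        } else {
            None
        }
    }

    /// O(n): drop the consumed prefix and reset the offset.
    fn reset(&mut self) {
        self.items.drain(..self.offset);
        self.offset = 0;
    }

    /// Number of elements still pending.
    fn len(&self) -> usize {
        self.items.len() - self.offset
    }
}
```

The amortized cost stays O(1) per operation as long as resets are rare relative to pushes and pops, which matches the "at most once per epoch" expectation above.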
Related PRs:
- `BLOCK_SPEND_LIMIT`
- snarkVM#2565 (previous discussion)