Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

feat(protocol): support delayed forced inclusion of txs #18826

Draft
wants to merge 43 commits into
base: pacaya_fork
Choose a base branch
from

Conversation

dantaik
Copy link
Contributor

@dantaik dantaik commented Jan 23, 2025

Initial implementation of delayed inbox (forced inclusion of transactions)

The original idea is from @cyberhorsey's PR: https://github.com/taikoxyz/taiko-mono/pull/18824/files#diff-bea5dd46ba3d231a238b7c76adc7d09dfa3f8ebd3bf552407f9098492f3ff8f5

Copy link

openzeppelin-code bot commented Jan 23, 2025

feat(protocol): support delayed forced inclusion of txs

Generated at commit: 13113d8e3d9e0215ef323716ea5bd1f01dc3e568

🚨 Report Summary

Severity Level Results
Contracts Critical
High
Medium
Low
Note
Total
3
3
0
10
40
56
Dependencies Critical
High
Medium
Low
Note
Total
0
0
0
0
0
0

For more details view the full report in OpenZeppelin Code Inspector

@dantaik
Copy link
Contributor Author

dantaik commented Jan 23, 2025

@jeff, I had a call with our friends from Nethermind, here are some feedback/suggestions:

  • In the ForcedInclusionStore, we should support using both calldata and a blob to submit their transactions (lets implement the blob support as the first step)
  • A request should be processed with a 12*x seconds delay since 1) the time it is saved or 2) the time the previous request is processed or 3) the time the last batch is proposed. which ever is the biggest.

Base automatically changed from emit_blob_hashes_in_event to pacaya_fork January 24, 2025 04:30
@cyberhorsey cyberhorsey requested a review from Brechtpd January 24, 2025 20:45
uint64 _head = head;
ForcedInclusion storage inclusion = queue[_head];

if (inclusion.createdAt != 0 && block.timestamp >= inclusionDelay + inclusion.createdAt) {
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I'm kind of thinking that making the inclusion of these forced transactions "automatic" (as in the preconfer cannot decide easily when they are included) could be problematic for a preconfer. Because the preconfer will not always know in which L1 block its propose transaction will be included, but depending on when that happens, these forced transactions need to be inserted at the right place to be able to give preconfirmations after these are supposed to happen. It does seem to introduce a level of non-determinism that could be quite tricky for a preconfer to fully control easily.

When the preconfer can decide on his own when these transactions actually get included within a window, that seems to make the preconfers job much easier to make things 100% predictable without any tricky boundaries enforced onchai, resulting with a hard deadline on when a propose tx actually needs to be included onchain to not mess with the expected order of the preconfer.

Copy link
Contributor Author

@dantaik dantaik Jan 27, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Another valid point. To address this, we could shift from using timestamps as the indicator for when an inclusion request is due, and instead rely on numBatches in the ITaikoInbox contract. This approach would mean that every N batch proposals, the oldest inclusion request must be processed.

However, one drawback of this design is that it restricts proposers from proactively processing delayed inbox requests, which is not ideal. To mitigate this, we could modify the proposeBatchWithForcedInclusion implementation to allow for the proactive processing of the head inclusion request, even if it isn’t technically due yet.

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

See an improvement idea: #18842

Comment on lines +44 to +47
// Call the proposeBatchWithForcedInclusion function on the ForcedInclusionInbox
(, meta_) = IForcedInclusionInbox(forcedInclusionInbox).proposeBatchWithForcedInclusion(
_forcedInclusionParams, _batchParams, _batchTxList
);
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

It seems that this is still only callable by the whitelisted preconfers right? If so, would say again that forced inclusion != inclusion. Wrote about that here before: #18815 (comment)

Inclusion can only be guaranteed when block proposing is made permissionless again (at least for the forced blocks, but may as well allow any block to be proposed when that happens because why not).

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I agree with Brecht's point (above) — in the current implementation, forced inclusion can only occur if there is at least one preconfer willing to propose blocks when it becomes the current preconfer.

To address this, one potential solution is to enable fully permissionless block proposing if no blocks have been proposed within the last N seconds (though some details still need to be ironed out). While this approach is related to forced inclusion, I believe it aligns more closely with the core protocol or preconfirmation design. Even without forced inclusion, this feature would be a valuable addition to the preconfirmation mechanism. Without it, if there are only a few preconfers and they all stop proposing blocks due to profitability issues, the chain could come to a halt.

@dantaik dantaik changed the title feat(protocol): show case an different forced-inclusion idea feat(protocol): support delayed forced inclusion of txs Jan 25, 2025
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Projects
None yet
Development

Successfully merging this pull request may close these issues.

4 participants