
Conversation

@drebelsky (Contributor)

Resolves #4732. Currently, this does the naive update of adding the Soroban tx limit to the classic ops limit. The calculation is also repeated a few times to support iteration (in this PR) and potential differences between pull mode and flood mode; this will need to be cleaned up. Putting this up now so the spots to update are apparent and so there is a place for discussion.

Pull mode doesn't know ahead of time whether the transactions are Soroban or classic, but the queue limit is per peer, so we could plausibly lower it. Also, the limit is on advertised hashes that we haven't yet received full transactions for. Similarly, in flood mode, this is just limiting the outstanding messages to a given peer. That is, in both modes, although we do need some limit to cap maximum memory usage, the particular limit is somewhat arbitrary (consider also #3514).

@bboston7 (Contributor) left a comment


Thanks for putting this together. I think the approach of adding Soroban limits to our existing limits is entirely logical, but (as you mention) I wonder whether our existing limits need tweaking, given that they assume every transaction is a single op. At the same time, it seems to be working well, so maybe it's not worth touching? I'd like to rope @marta-lokhova and @SirTyson into the broader discussion about these heuristics.

> The calculation is also repeated a few times to support iteration (in this PR) and potential differences between pull mode and flood mode; this will need to be cleaned up.

I think we'll want these to be the same? Otherwise I worry about one choking the other.

TxAdverts::getTxLimit()
{
    auto& lm = mApp.getLedgerManager();
    size_t classic = lm.getLastMaxTxSetSizeOps();
Contributor

As you mention in your PR description, this assumes that all classic transactions are a single op, but that's probably not true. This has admittedly been working fine, but I wonder if there's a better way (such as dividing this value by the average number of ops per transaction). At the very least it might be worth checking what the average number of ops per classic transaction is these days to better answer the question of whether this needs adjusting.

@drebelsky (Contributor Author) Sep 11, 2025

Quick Hubble query results:

    SELECT AVG(operation_count)
    FROM crypto-stellar.crypto_stellar.history_transactions
    WHERE batch_run_date >= DATE(2025, 05, 01) AND resource_fee = 0
    -- f0_: 2.7467817158433689

    SELECT AVG(operation_count)
    FROM crypto-stellar.crypto_stellar.history_transactions
    WHERE batch_run_date >= DATE(2025, 08, 01) AND resource_fee = 0
    -- f0_: 3.3659880974068379

@marta-lokhova (Contributor)

I think we have a path forward with this one based on the discussions last week:

  • Let's update the limits to roughly 1 second worth of classic + soroban traffic
  • We can run the new limits on a watcher, comparing side by side with the latest stable release. This should give us an idea of whether the updated limits work well in practice (I suspect they will work better than what we have today, because the current 5-second limits are really conservative)



Development

Successfully merging this pull request may close these issues.

Audit core limits that depend on network op limits
