
aristo: fork support via layers/txframes #2960

Merged: 48 commits merged into master from forked-layers on Feb 6, 2025

Conversation

@arnetheduck (Member) commented Dec 19, 2024

This change reorganises how the database is accessed: instead of holding a "current frame" in the database object, a dag of frames is created based on the "base frame" held in AristoDbRef, and all database access happens through this frame, which can be thought of as a consistent point-in-time snapshot of the database based on a particular fork of the chain.

In the code, "frame", "transaction" and "layer" are used to denote more or less the same thing: a dag of stacked changes backed by the on-disk database.

Although this is not a requirement, in practice each frame holds the change set of a single block - as such, a frame and its ancestors leading up to the on-disk state represent the state of the database after that block has been applied.

"Committing" a frame means merging its changes into its parent frame so that the difference between them is lost and only the cumulative changes remain - this facility enables frames to be combined arbitrarily wherever they are in the dag.

In particular, it becomes possible to consolidate a set of changes near the base of the dag and commit those to disk without having to re-do the in-memory frames built on top of them - this is useful for "flattening" a set of changes during a base update and sending those to storage without having to perform a block replay on top.
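
The frame/commit model described above can be sketched in a few lines of Python. This is an illustrative model only, not the actual Nim implementation; the names Frame, put and commit are hypothetical.

```python
# Minimal model of a frame dag: each frame holds a change set stacked
# on a parent frame (parent=None means "directly on the on-disk state").
class Frame:
    def __init__(self, parent=None):
        self.parent = parent
        self.changes = {}  # key -> value written in this frame

    def put(self, key, value):
        self.changes[key] = value

    def commit(self):
        """Merge this frame's changes into its parent: the per-frame
        difference is lost and only the cumulative changes remain."""
        self.parent.changes.update(self.changes)
        return self.parent

# Consolidating a frame near the base does not disturb frames built on
# top of it - no block replay is needed, descendants only need relinking.
base = Frame()
mid = Frame(parent=base)
top = Frame(parent=mid)
mid.put("balance", 42)
mid.commit()                  # folds mid's changes into base
assert base.changes["balance"] == 42
assert top.parent is mid      # stale link: see the follow-up TODO below
```

Note how, after the commit, `top` still points at the abandoned middle frame - this mirrors the follow-up item about keeping frames up to date as they get committed or rolled back.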

Looking at abstractions, a side effect of this change is that the KVT and Aristo are brought closer together by considering them to be part of the "same" atomic transaction set. The way the code gets organised, applying a block and saving it to the KVT happens in the same "logical" frame - therefore, discarding the frame discards both the Aristo and KVT changes at the same time; likewise, they are persisted to disk together. This makes reasoning about the database somewhat easier but has the downside of increased memory usage, something that perhaps will need addressing in the future.

Because the code reasons more strictly about frames and the state of the persisted database, it also becomes more visible where ForkedChain should be used and where it is still missing - in particular, frames represent a single branch of history while ForkedChain manages multiple parallel forks. User-facing services such as the RPC should use the latter, i.e. until a block has been finalized, a getBlock request should consider all forks and not just the blocks in the canonical head branch.

Another advantage of this approach is that AristoDbRef conceptually becomes simpler - removing its tracking of the "current" transaction stack simplifies reasoning about what can go wrong, since this state now has to be passed around in the form of AristoTxRef. As such, many of the tests and facilities in the code that were dealing with "stack inconsistency" are now structurally prevented from happening. The test suite will need significant refactoring after this change.

Once this change has been merged, there are several follow-ups to do:

  • there's no mechanism for keeping frames up to date as they get committed or rolled back - TODO
  • naming is confused - many names for the same thing for legacy reasons
  • forkedchain support is still missing in lots of code
  • clean up redundant logic based on previous designs - in particular the debug and introspection code no longer makes sense
  • the way change sets are stored will probably need revisiting - because it's a stack of changes where each frame must be interrogated to find an on-disk value, with a base distance of 128 we'll at minimum have to perform 128 frame lookups for every database interaction - regardless, the "dag-like" nature will stay
  • dispose and commit are poorly defined and perhaps redundant - in theory, one could simply let the GC collect abandoned frames etc, though it's likely an explicit mechanism will remain useful, so they stay for now
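
The lookup-cost concern in the list above can be made concrete with a small sketch (illustrative names, not the actual implementation): resolving a key means interrogating each frame from head to base before falling through to the on-disk backend, so a base distance of 128 costs up to 128 probes per database interaction.

```python
# Each frame on the branch is probed in turn; only on a complete miss
# does the lookup reach the persisted database.
class Frame:
    def __init__(self, parent=None, changes=None):
        self.parent = parent
        self.changes = changes if changes is not None else {}

def get(head, key, backend):
    frame = head
    while frame is not None:          # one probe per frame on the branch
        if key in frame.changes:
            return frame.changes[key]
        frame = frame.parent
    return backend.get(key)           # finally, the on-disk database

# 128 empty frames over a "disk" dict: a key written only on disk is
# found, but only after probing every in-memory frame first.
disk = {"k": "v"}
head = None
for _ in range(128):
    head = Frame(parent=head)
assert get(head, "k", disk) == "v"
```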

More about the changes:

  • AristoDbRef gains a txRef field (todo: rename) that "more or less" corresponds to the old balancer field
  • AristoDbRef.stack is gone - instead, there's a chain of AristoTxRef objects that hold their respective "layer" which has the actual changes
  • No more reasoning about "top" and "stack" - instead, each AristoTxRef can be a "head" that "more or less" corresponds to the old single-history top notion and its stack
  • level still represents "distance to base" - it's computed from the parent chain instead of being stored
  • one has to be careful not to use frames where forkedchain was intended - layers are only for a single branch of history!
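
The `level` point above can be illustrated with a short sketch (TxRef and the property are hypothetical names): computing "distance to base" by walking the parent chain, rather than storing it, means the value can never go stale when frames are reorganised.

```python
# `level` as a derived property: the number of hops from this frame to
# the base frame (the one whose parent is None).
class TxRef:
    def __init__(self, parent=None):
        self.parent = parent

    @property
    def level(self):
        n, frame = 0, self
        while frame.parent is not None:
            n += 1
            frame = frame.parent
        return n

base = TxRef()
head = TxRef(parent=TxRef(parent=base))
assert base.level == 0 and head.level == 2
```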

TODO items for this PR:

fix #2949
fix #2950

The review thread below concerns this snippet from the PR:

```nim
  index*: int

BranchRef* = ref object
  blocks*: seq[BlockDesc]
```
Member Author:
what's the content of blocks? is it all blocks all the way to the base?

Contributor:
Only one branch directly contains a base block. All other branches only contain blocks up to the fork block, and they can access the ancestor branch through the parent branch.

Member Author:
ok - my gut feeling is that the code would be simplified by removing the additional "BranchRef" layer and simply having a "BlockDescRef". This would be something to consider for future PRs.

In particular, when a fork/pruning happens somewhere in the "middle" of the list of blocks here, the logic to split and recombine the BranchRef into a parent and two mini-branches is susceptible to subtle bugs.

it's certainly not impossible to get this logic right, but even if it "saves" on ref instances by grouping ranges of branch blocks under a single entity, the "range of blocks" doesn't really appear in any first-principle descriptions of how the chain works and therefore, one always has to "reason" about this additional abstraction when relating it to the spec.

@jangko (Contributor) commented Feb 5, 2025

[screenshot: hive test results]
The hive test shows the new FC module is stabilizing. Of the 50 failing test cases, 1 is related to the known issue of ChainId being tested against UInt256 while nimbus still uses a 64-bit ChainId. The remaining 49 cases relate to the BLS precompiles (EIP-2537) not handling the point at infinity correctly.

note: The hive test requires the new mapper.jq for nimbus-el ethereum/hive#1239

@advaita-saha (Contributor)

> The hive test shows the new FC module is stabilizing. From the 50 failing test case, 1 related to known issue of ChainId testing against UInt256 while nimbus still using 64 bit ChainId. The rest of 49 cases related to BLS precompiles(EIP 2537) unable to handle infinity point correctly.
>
> note: The hive test requires the new mapper.jq for nimbus-el ethereum/hive#1239

Doesn't work, as you can see here: https://hivenimbus.advaita.work
Your machine might have cached the master docker image; run with --docker.nocache hive/clients/nimbus-el and select the branch with --client nimbus-el_forked-layers.

For verification you can check the commit hashes of the runs visible in https://hivenimbus.advaita.work - one is master and the other forked-layers.
Also, it's run with hive using your mapper.jq, which is visible in the nimbus-el log's genesis file, as the deposit contract is added.

@arnetheduck (Member Author)

how long does it take for hive to run? could we have a jenkins job for that?

@advaita-saha (Contributor)

> how long does it take for hive to run? could we have a jenkins job for that?

Around 45 minutes on a good machine; slow machines take a lot longer, though the run can be parallelized based on resources. Most of the time is spent compiling nimbus - after that it's around 20 minutes.

I would not recommend having it in CI, because the EF keeps changing test cases during development, but I had a chat with Dustin about having a process for triggering a hive run on our servers.

@jangko (Contributor) commented Feb 5, 2025

Your genesis.json produced by mapper.jq is missing mergeNetsplitBlock, although I cannot correlate the error message with that missing field.
[screenshot]

@advaita-saha (Contributor) commented Feb 5, 2025

Indeed, the missing field is not related to the tests failing because of the crash.
I just used your branch from hive to build the tests, might be a problem there

Will check

@jangko (Contributor) commented Feb 5, 2025

Rerunning the forked-layers branch gives the same result. Only one 7702 failing case, and the rest are 2537.

[screenshot: hive test results]

@jangko (Contributor) commented Feb 5, 2025

@advaita-saha you can try to verify the forked-layers branch using test_blockchain_json and test_generalstate_json. Run them locally on your machine; they should produce results similar to running hive against the master branch. If the problem still persists then it is very weird. If both the bc and gst tests run ok, then something is wrong with your hive setup.

@advaita-saha (Contributor)

> @advaita-saha you can try to verify forked-layers branch using test_blockchain_json and test_generalstate_json. Run it locally on your machine. It should produce similar result to running hive against master branch. If the problem still persist then It is very weird. If both bc and gst test runs ok, then something wrong with your hive setup.

Indeed this is weird, let me try

@jangko jangko merged commit 2961905 into master Feb 6, 2025
17 checks passed
@jangko jangko deleted the forked-layers branch February 6, 2025 07:04
Merging this pull request closed these issues: "Next step after fixing misconception in ForkedChain" and "Need to fix some misconception in ForkedChain".