forked from paritytech/polkadot-sdk
Skalman proof of possession for all cryptos #9
Draft: coax1d wants to merge 179 commits into master from skalman--proof-of-possession-for-all-cryptos
Conversation
- Derive `ProofOfPossession` for all public-key crypto types besides BLS.
- Change the PoP type to be `&[u8]` instead of a signature, so it works for BLS12.
enforcing context
…ech#8504)

## Description

When dry-running a contract deployment through the runtime API, the returned address does not match the actual address that will be used when the transaction is submitted. This inconsistency occurs because the address derivation logic doesn't properly account for the difference between transaction execution and dry-run execution contexts.

The issue stems from the `create1` address derivation logic in `exec.rs`:

```rust
address::create1(
    &deployer,
    // the Nonce from the origin has been incremented pre-dispatch, so we
    // need to subtract 1 to get the nonce at the time of the call.
    if origin_is_caller {
        account_nonce.saturating_sub(1u32.into()).saturated_into()
    } else {
        account_nonce.saturated_into()
    },
)
```

The code correctly subtracts 1 from the account nonce during a transaction execution (because the nonce is incremented pre-dispatch), but doesn't account for the execution context: whether it's a real transaction or a dry run through the RPC.

## Review Notes

This PR adds a new condition to check for the `IncrementOnce` when calculating the nonce for address derivation:

```rust
address::create1(
    &deployer,
    // the Nonce from the origin has been incremented pre-dispatch, so we
    // need to subtract 1 to get the nonce at the time of the call.
    if origin_is_caller && matches!(exec_context, IncrementOnce::AlreadyIncremented) {
        account_nonce.saturating_sub(1u32.into()).saturated_into()
    } else {
        account_nonce.saturated_into()
    },
)
```

## Before Fix

- Dry-run contract deployment returns an address derived with nonce N
- Actual transaction deployment creates the contract at an address derived with nonce N-1
- Result: inconsistent addresses between simulation and actual execution

## After Fix

- Dry-run and actual transaction deployments both create contracts at the same address
- Result: consistent contract addresses regardless of execution context
- Added a test case to verify nonce handling in different execution contexts

This fix ensures that users can rely on the address returned by a dry run to match the actual address that will be used when the transaction is submitted.

Fixes paritytech/contract-issues#37

# Checklist

* [x] My PR includes a detailed description as outlined in the "Description" and its two subsections above.
* [x] My PR follows the [labeling requirements](https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process) of this project (at minimum one label for `T` required)
  * External contributors: ask maintainers to put the right label on your PR.
* [x] I have made corresponding changes to the documentation (if applicable)
* [x] I have added tests that prove my fix is effective or that my feature works (if applicable)

--------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: pgherveou <[email protected]>
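The nonce adjustment described above can be sketched in isolation. This is a hedged, self-contained illustration using plain `u64` nonces rather than the runtime's generic types; `ExecContext` is a simplified stand-in for the pallet's `IncrementOnce` execution context, and `derivation_nonce` is a hypothetical helper name, not the pallet's API.

```rust
// Simplified sketch of the nonce logic used for CREATE1-style address
// derivation. `ExecContext` stands in for the real `IncrementOnce` type.

#[derive(PartialEq)]
pub enum ExecContext {
    /// Real transaction: the origin's nonce was bumped pre-dispatch.
    AlreadyIncremented,
    /// Dry run through the runtime API: no pre-dispatch nonce bump.
    NotIncremented,
}

/// Returns the nonce to feed into address derivation.
pub fn derivation_nonce(account_nonce: u64, origin_is_caller: bool, ctx: ExecContext) -> u64 {
    if origin_is_caller && ctx == ExecContext::AlreadyIncremented {
        // Undo the pre-dispatch increment to recover the nonce at call time.
        account_nonce.saturating_sub(1)
    } else {
        account_nonce
    }
}

fn main() {
    // Transaction path: nonce was already bumped, so subtract one.
    assert_eq!(derivation_nonce(5, true, ExecContext::AlreadyIncremented), 4);
    // Dry-run path: use the nonce as-is, matching the eventual transaction.
    assert_eq!(derivation_nonce(4, true, ExecContext::NotIncremented), 4);
}
```

With the fix, both paths above derive the address from the same nonce (4), which is exactly the consistency the PR is after.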
paritytech#8585) Backports a part of paritytech#8422 to master so it can be included in ongoing releases sooner. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Dónal Murray <[email protected]>
Related to paritytech#8308 Follow-up for paritytech#8021 --------- Co-authored-by: command-bot <> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description Removing a tx subtree means partly removing some txs from the unlocks set of other txs. This logic is buggy and the PR attempts to fix it. Closes paritytech#8498 ## Integration N/A ## Review Notes This doesn't seem to be an important bug. Unit tests for txpool still pass after the fix, so txpool behavior isn't changing much. ### TODOs - [x] test with a heavy load test (5 million txs) - all txs were validated successfully - [x] added a unit test --------- Signed-off-by: Iulian Barbu <[email protected]> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description Proposing a new block on top of an existing parent block considers the best ready transactions provided by the txpool to be included in the new block. Whenever the given parent hash to build on top of is part of a fork up to the finalized block which has no best block notified to the pool, it might be the case that the proposer will rely on `ready_at_light` (due to various reasons not in our control), and when that's the case, the ready transactions set will be empty. This PR adds a fallback for `ready_at_light` where we consider the ready txs of the most recent view processed by the txpool, even if those txs might be invalid. Closes paritytech#8213 Closes paritytech#6056 ## Integration N/A ## Review Notes In terms of testing, I updated an existing test which already exercises `ready_at_light` in the scope of the newly added fallback. --------- Signed-off-by: Iulian Barbu <[email protected]> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Michal Kucharczyk <[email protected]>
Related to paritytech#7575. This change introduces a few metrics (and corresponding logs) to track the state of the produced collation: - time until collation fetched - backing latency (counting from RP) - backing latency (counting from collation fetch) - inclusion latency - expired collations (not backed, not advertised, not fetched) This information should help in understanding the causes of, and possible improvements for, higher parachain block times. --------- Signed-off-by: Andrei Sandu <[email protected]> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Revert to using the changed-files action (pointing to the commit of the latest release). cc: @alvicsam --------- Co-authored-by: Alexander Samusev <[email protected]>
Fixes paritytech#7987 Fixes paritytech#7868 --------- Signed-off-by: Alexandru Gheorghe <[email protected]> Co-authored-by: Alexandru Gheorghe <[email protected]> Co-authored-by: Alexandru Gheorghe <[email protected]> Co-authored-by: Bastian Köcher <[email protected]>
Buckets for a maximum unincluded segment size of 24. --------- Signed-off-by: Andrei Sandu <[email protected]>
Commenting out all flaky tests and tracking them here paritytech#48 Changes: - Disable flaky Rust tests by adding a new disabled feature. `#[ignore]` attribute is not possible since CI runs with `--ignored` - Disable all Zombienet tests - [ ] Waiting for CI what other tests fail. --------- Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Since zombienet [has been disabled](paritytech#8600) to improve stability, it makes no sense to keep the GitLab configuration.
Discovered while profiling paritytech#6131 (comment) with the benchmark from paritytech#8069 that, when running in validation, a big chunk of the time is spent inserting and retrieving data from the BTreeMap/BTreeSet. By switching to hashbrown HashMap/HashSet in the validation TrieCache and TrieRecorder and in memory-db (paritytech/trie#221), read costs improve by around ~40% and write costs by about ~20%. --------- Signed-off-by: Alexandru Gheorghe <[email protected]> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Rename `CreateInherent` to `CreateBare`, add the method `create_bare`, and deprecate `create_inherent`. Both unsigned transactions and inherents use the extrinsic type `Bare`. Before this PR the `CreateInherent` trait was used to generate unsigned transactions; now unsigned transactions can be generated using a proper trait, `CreateBare`. How to upgrade: * Change usages of `CreateInherent` to `CreateBare` and `create_inherent` to `create_bare`. * Implement `CreateBare` for the runtime; the method `create_bare` is usually implemented using `Extrinsic::new_bare`. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
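The upgrade path above can be sketched with a minimal mock. This is a hedged illustration only: the real traits live in the FRAME crates, and `MockExtrinsic`, its `String` call type, and the `Runtime` struct here are hypothetical stand-ins showing the shape of a `CreateBare` implementation.

```rust
// Hypothetical stand-in for the runtime's extrinsic type.
pub struct MockExtrinsic {
    pub call: String,
}

impl MockExtrinsic {
    /// Analogue of `Extrinsic::new_bare`: wrap a call with no signature.
    pub fn new_bare(call: String) -> Self {
        MockExtrinsic { call }
    }
}

/// Shape of the trait replacing the deprecated `CreateInherent`.
pub trait CreateBare {
    fn create_bare(call: String) -> MockExtrinsic;
}

pub struct Runtime;

impl CreateBare for Runtime {
    // Per the upgrade note, `create_bare` is usually just `new_bare`.
    fn create_bare(call: String) -> MockExtrinsic {
        MockExtrinsic::new_bare(call)
    }
}

fn main() {
    let xt = Runtime::create_bare("set_timestamp".to_string());
    assert_eq!(xt.call, "set_timestamp");
}
```

The point of the rename is that the same `Bare` extrinsic type backs both inherents and unsigned transactions, so one trait covers both.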
…h#8615) Fixing paritytech#8215 based on paritytech#8185: improve try-state for pallet-xcm-bridge-hub. It removes `try_as` and uses the `try_into` implementation instead. --------- Co-authored-by: Branislav Kontur <[email protected]> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…Watch` (paritytech#8345) This PR adds metrics for the following RPC subscription: [transactionWatch_v1_submitAndWatch](https://paritytech.github.io/json-rpc-interface-spec/api/transactionWatch_v1_submitAndWatch.html) Metrics are exposed in two ways: - simple counters of how many events we've seen globally - a histogram vector of execution times, which is labeled by `initial event` -> `final event` - This helps us identify how long it takes the transaction pool to advance the state of the events, and further debug issues Part of: paritytech#8336 ### (outdated) PoC Dashboards  ### Next steps - [x] initial dashboards with a live node - [x] adjust testing --------- Signed-off-by: Alexandru Vasile <[email protected]> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
- Subscription tasks are "essential tasks" and the service should go down when they fail. - Upgrade subxt to 0.41 - Update zombienet-sdk to use the subxt re-export so it does not conflict with the workspace version --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
We were only charging storage deposit based on value length, not based on key length. Since we allow variable-length keys, key length has to be charged as well. Needs to be backported, since changing this in an already deployed system will be nasty. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
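The accounting change above amounts to folding the key length into the charged byte count. A minimal sketch, assuming made-up deposit constants (`BASE_DEPOSIT`, `DEPOSIT_PER_BYTE` are illustrative, not the pallet's real parameters):

```rust
// Illustrative constants; the real pallet reads these from its config.
const BASE_DEPOSIT: u128 = 100;
const DEPOSIT_PER_BYTE: u128 = 2;

/// Deposit before the fix: only the value length was charged.
pub fn deposit_value_only(value: &[u8]) -> u128 {
    BASE_DEPOSIT + DEPOSIT_PER_BYTE * value.len() as u128
}

/// Deposit after the fix: variable-length keys are charged too.
pub fn deposit_key_and_value(key: &[u8], value: &[u8]) -> u128 {
    BASE_DEPOSIT + DEPOSIT_PER_BYTE * (key.len() + value.len()) as u128
}

fn main() {
    let key = [0u8; 32];
    let value = [0u8; 8];
    // A long key now costs more than value-only accounting would charge.
    assert!(deposit_key_and_value(&key, &value) > deposit_value_only(&value));
}
```

This also explains why the backport is pressing: items stored before the fix were under-charged, so changing the formula on a live chain retroactively changes what a refund should return.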
…aritytech#8650) This PR rejects non-reserved peers in the reserved-only mode of the litep2p notification peerset. Previously, litep2p completely ignored the reserved-only state while accepting inbound connections, although it handled it properly during the slot allocation phase. - the main changes are in the `report_inbound_substream` function, which now propagates a `Rejected` response to litep2p in the reserved-only state - in response, litep2p should never open an inbound substream after receiving the rejected response - the state of peers is not advanced while in the `Disconnected` or `Backoff` states - the opening state is moved to `Cancelled` - for consistency (and fuzzing) purposes, `report_substream_opened` handles the `Disconnected` state more robustly - while at it, replaced a panic with a `debug_assert` and an instant reject ## Testing Done - started 2 nodes in Kusama and Polkadot with litep2p - added the `reserved_only_rejects_non_reserved_peers` test to ensure litep2p handles peers properly from different states This PR has been extracted from paritytech#8461 to ease the review process. cc @paritytech/networking --------- Signed-off-by: Alexandru Vasile <[email protected]> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Dmitry Markin <[email protected]>
# Description * This PR adds a new extrinsic `poke_deposit` to `pallet-bounties`. This extrinsic will be used to re-adjust the deposits made in the pallet to create a new bounty. * Part of paritytech#5591 ## Review Notes * Added a new extrinsic `poke_deposit` in `pallet-bounties`. * This extrinsic checks and adjusts the deposits made for creating a bounty. * Added a new event `DepositPoked` to be emitted upon a successful call of the extrinsic. * Although the immediate use of the extrinsic will be to give back some of the deposit after the AH-migration, the extrinsic is written such that it can work if the deposit decreases or increases (both). * The call to the extrinsic would be `free` if an actual adjustment is made to the deposit and `paid` otherwise (when no deposit is changed). * Added tests to test all scenarios. * Added benchmarks ## TO-DOs * [x] Run CI cmd bot to benchmark --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…#8473) The `TokenIdOf` [convert](https://github.com/paritytech/polkadot-sdk/blob/4b83d24f4bc96a7b17964be94b178dd7b8f873b5/bridges/snowbridge/primitives/core/src/location.rs#L40) is XCM version-agnostic, meaning we will get the same token ID for both V5 and legacy V4 assets. The extra check is therefore unnecessary, as `ConvertAssetId::convert(&token_id).ok_or(InvalidAsset)?;` alone is sufficient to verify whether the token is registered.
Update rpc-types - Remove unnecessary derive traits - Fix json decoding for `BlockNumberOrTagOrHash` --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Fix different commit sha for google registry and docker hub images
Move pallet-revive runtime api implementation in a macro, so that we don't repeat the code for every runtime. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Historically, the collection of storage deposits ran in an infallible context, meaning we needed to make sure that the caller was able to pay the deposits when the last contract execution returned. To achieve that, we capped the storage deposit limit to the maximum balance of the origin. This made the code more complex: it conflated the deposit **limit** with the amount of balance the origin has. In the meantime, we changed the code to make deposit collection fallible, but never changed this aspect. This PR rectifies that by doing the following: - The root storage meter and all its nested meters' limits are completely independent of the origin's balance. This makes it much easier to reason about the limit that a nested meter has at any point. - Consistently use `StorageDepositNotEnoughFunds` (limit not reached) and `StorageDepositLimitExhausted` (limit reached). - The origin not being able to pay the ed for a new account is now `StorageDepositNotEnoughFunds` and traps the caller, rather than being a `TransferFailed` return code. This is important since we are hiding the ed from contracts, so it should also not be an error code that must be handled. In preparation for: paritytech/contract-issues#38 --------- Co-authored-by: xermicus <[email protected]> Co-authored-by: PG Herveou <[email protected]> Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Instead of just checking for the slot, we also take the block number and the relay parent into account (as we actually allow building multiple blocks per slot). This PR also ensures that we are still able to import blocks from availability recovery, so that a network doesn't get stuck on a storm of equivocations. The next step after this pull request would be to implement on-chain slashing for equivocations and probably disabling of the offending author. --------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
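The stricter equivocation key above can be sketched as a map from (slot, block number, relay parent, author) to the first block hash seen. This is a hedged, self-contained sketch with simplified types (`EquivocationTracker`, `AuthorId`, and `note_block` are illustrative names, not the node's actual API):

```rust
use std::collections::HashMap;
use std::collections::hash_map::Entry;

type AuthorId = u32;
type Hash = [u8; 32];

#[derive(Default)]
pub struct EquivocationTracker {
    // (slot, block_number, relay_parent, author) -> first block hash seen.
    seen: HashMap<(u64, u64, Hash, AuthorId), Hash>,
}

impl EquivocationTracker {
    /// Returns true if this is an equivocation: the same author already
    /// produced a *different* block for the same slot/number/relay-parent.
    pub fn note_block(
        &mut self,
        slot: u64,
        number: u64,
        relay_parent: Hash,
        author: AuthorId,
        block_hash: Hash,
    ) -> bool {
        match self.seen.entry((slot, number, relay_parent, author)) {
            Entry::Occupied(e) => *e.get() != block_hash,
            Entry::Vacant(v) => {
                v.insert(block_hash);
                false
            }
        }
    }
}

fn main() {
    let mut t = EquivocationTracker::default();
    let rp = [0u8; 32];
    assert!(!t.note_block(1, 10, rp, 7, [1u8; 32]));
    // Same author, same (slot, number, relay parent), different block: equivocation.
    assert!(t.note_block(1, 10, rp, 7, [2u8; 32]));
    // A different block number in the same slot is fine (multiple blocks per slot).
    assert!(!t.note_block(1, 11, rp, 7, [3u8; 32]));
}
```

Keying on the triple rather than the slot alone is what lets honest authors build several blocks per slot without being flagged.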
Bump memory-db to pick up paritytech#8606 and paritytech/trie#221. Additionally, polkavm needs to be bumped to get rid of https://github.com/paritytech/polkadot-sdk/actions/runs/15180236627/job/42688141374#step:5:1869 --------- Signed-off-by: Alexandru Gheorghe <[email protected]>
# Description

Fixes paritytech#7533. Part of launching smart contracts on AssetHub. The current task involves warming up the state and performing sanity checks to ensure that memory usage remains within reasonable limits; otherwise, it should abort gracefully. Here is a breakdown of the task by parts:

- **Configuration.** We use a CLI flag to trigger the warm-up. To increase the cache size, we use an existing flag.
- **Warm-up.** We populate the trie cache with values from the database using the best hash as a storage root. For normal use we spawn a background task for it; for testing purposes we warm up in a blocking manner.
- **Sanity checks.** If we find that the configured cache size is too large, we warn.

Variants of the added flag:

| Flag | Meaning |
|---|---|
| [no flag] | no warm-up |
| `warm-up-trie-cache` | non-blocking warm-up |
| `warm-up-trie-cache non-blocking` | non-blocking warm-up (same as just the flag) |
| `--warm-up-trie-cache blocking` | blocking warm-up |

## Integration

Tested on a snapshot from polkadot-asset-hub: 5426064 keys, ~900 MiB state. Populating the trie cache takes 1.5–2 minutes on a MacBook Pro M2.

## Additional Notes

Initially, we thought about putting the entire state into memory using a CLI flag or a runtime parameter, but decided against it. It's up to collators to decide which setup to run on their machines; they can use the current cache size setting to increase throughput.

## Todos

Testing moved to paritytech#8644

--------- Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Alexandru Gheorghe <[email protected]> Co-authored-by: Alexander Theißen <[email protected]>
Modify asset-hub-westend to use the read/write costs in validate-block on validator reference hardware, since these are weights that guarantee the blocks will pass validation and make it onto the relay chain. These were obtained with the benchmark created in paritytech#8069 and contain all the optimisations brought by paritytech#8607. Fixes: paritytech#7537 --------- Signed-off-by: Alexandru Gheorghe <[email protected]>