feat(swift-sdk,platform-wallet): wire shielded send end-to-end (all 4 transitions) #3603
QuantumExplorer wants to merge 33 commits into v3.1-dev from
Conversation
…draw end-to-end
Shielded send was stubbed out behind a "rebuilt in follow-up PR"
placeholder for the four send flows even though
`ShieldedWallet::transfer` / `unshield` / `withdraw` already exist
on the Rust side and need only the bound shielded wallet's cached
`SpendAuthorizingKey` (no host signer). This commit threads them
through to the Swift Send sheet.
platform-wallet
- New `PlatformWalletError::ShieldedNotBound` so the wrapper can
distinguish "wallet has no shielded sub-wallet" from a build /
broadcast failure.
- New `PlatformWallet` wrappers under the existing `shielded`
feature: `shielded_transfer_to(recipient_raw_43, amount, prover)`,
`shielded_unshield_to(to_platform_addr_bytes, amount, prover)`,
`shielded_withdraw_to(to_core_address, amount, core_fee_per_byte,
prover)`. Each takes the prover by value because `OrchardProver`
is impl'd on `&CachedOrchardProver` (not the bare struct), and
forwards `&prover` into the underlying `ShieldedWallet` op.
Address parsing is inline — Orchard 43-byte raw → `PaymentAddress`,
bincode `PlatformAddress::from_bytes`, `dashcore::Address` from
string with network-match check.
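The three inline parse paths can be sketched with simplified stand-ins. This is a hypothetical sketch: the real code goes through orchard's 43-byte raw payment address, bincode's `PlatformAddress::from_bytes`, and `dashcore::Address`; here only the byte-level shape checks are modeled.

```rust
// Hypothetical, simplified stand-ins for the wrapper's inline address
// parsing; real parsing uses the orchard, bincode, and dashcore crates.
fn parse_orchard_raw(raw: &[u8]) -> Result<[u8; 43], String> {
    // Orchard raw payment addresses are exactly 43 bytes.
    raw.try_into()
        .map_err(|_| format!("expected 43-byte raw Orchard address, got {}", raw.len()))
}

fn parse_platform_bincode(bytes: &[u8]) -> Result<(u8, Vec<u8>), String> {
    // bincode storage encoding: 1-byte variant index + payload
    // (0x00 = P2pkh over a 20-byte hash, 0x01 = P2sh).
    match bytes.split_first() {
        Some((&v, payload)) if (v == 0x00 || v == 0x01) && payload.len() == 20 => {
            Ok((v, payload.to_vec()))
        }
        _ => Err("not a bincode-encoded PlatformAddress".into()),
    }
}

fn main() {
    assert!(parse_orchard_raw(&[0u8; 43]).is_ok());
    assert!(parse_orchard_raw(&[0u8; 42]).is_err());
    let mut p2pkh = vec![0x00u8];
    p2pkh.extend_from_slice(&[7u8; 20]);
    assert_eq!(parse_platform_bincode(&p2pkh).unwrap().0, 0x00);
}
```

A host-side check of this shape rejects malformed recipients before the FFI hop, which is the same motivation the review comments below raise for the Swift guards.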
platform-wallet-ffi
- New module `shielded_send` (feature-gated `shielded`):
- `platform_wallet_shielded_warm_up_prover()` —
fire-and-forget global, no manager handle.
- `platform_wallet_shielded_prover_is_ready()` — bool getter
for a UI affordance.
- `platform_wallet_manager_shielded_transfer/unshield/withdraw`
— manager-handle FFIs that resolve the wallet, instantiate
a `CachedOrchardProver`, and forward to the wallet wrappers
via `runtime().block_on(...)`.
swift-sdk
- New `PlatformWalletManager` async methods:
`shieldedTransfer(walletId:recipientRaw43:amount:)`,
`shieldedUnshield(walletId:toPlatformAddress:amount:)`,
`shieldedWithdraw(walletId:toCoreAddress:amount:coreFeePerByte:)`.
All run on a `Task.detached(priority: .userInitiated)` so the
~30 s first-call proof build doesn't block the main actor.
- Static helpers `PlatformWalletManager.warmUpShieldedProver()`
and `PlatformWalletManager.isShieldedProverReady`.
swift-example-app
- `SendViewModel.executeSend` gains a `walletManager` parameter
and replaces three of the four shielded placeholder branches
with the real FFI calls (Shielded → Shielded, Shielded →
Platform, Shielded → Core). The Platform → Shielded branch
retains a clearer placeholder because Type 15 still needs the
per-input nonce fetch the Rust spend builder stubs to zero.
- `SwiftExampleAppApp.bootstrap` kicks off
`warmUpShieldedProver()` on a background task at app start so
the first user-initiated shielded send doesn't pay the build
cost inline.
Verified:
- `cargo fmt --all`, `cargo clippy --workspace --all-features
--locked -- --no-deps -D warnings` clean.
- `bash build_ios.sh --target sim --profile dev` green
(** BUILD SUCCEEDED **).
The end-to-end story is still missing Platform → Shielded
(blocked on the spend builder's nonce TODO) and a host
`Signer<PlatformAddress>` adapter, plus the optional Type 18
`shield_from_asset_lock`. Wallets that already have shielded
balance can now move it freely.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds feature-gated multi-account Orchard (shielded) support across FFI, core Rust, and Swift: new FFI modules and re-exports, multi-account ShieldedWallet and sync/store changes, per-subwallet persistence and restore plumbing, prover warm-up APIs, and Swift UI/wrapping and persistence model additions.

Changes

FFI ↔ PlatformWallet ↔ Swift (shielded public surface and wiring)
Shielded internals: multi-account, storage, sync, proofs
Sequence Diagram(s)

sequenceDiagram
autonumber
participant SwiftApp as Swift App
participant FFI as rs-platform-wallet-ffi
participant Wallet as PlatformWallet (Rust)
participant Network as Platform Network
participant Prover as CachedOrchardProver
SwiftApp->>FFI: platform_wallet_manager_shielded_transfer(walletId, account, recipient, amount)
FFI->>Wallet: resolve_wallet(handle) & spawn worker task
Wallet->>Network: fetch AddressInfo for input addresses (nonces & balances)
Network-->>Wallet: address nonces & balances
Wallet->>Prover: request proving key / build proof
Prover-->>Wallet: proof result
Wallet->>Network: broadcast state transition
Network-->>Wallet: broadcast success / error (maybe AddressesNotEnoughFunds)
Wallet-->>FFI: return mapped PlatformWalletFFIResult
FFI-->>SwiftApp: return to caller
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
✅ Review complete (commit c1b0eaf)
✅ DashSDKFFI.xcframework built for this PR.
SwiftPM (host the zip at a stable URL, then use):

```swift
.binaryTarget(
    name: "DashSDKFFI",
    url: "https://your.cdn.example/DashSDKFFI.xcframework.zip",
    checksum: "ae8036bd66326d44290ddbd2277ff6b1f2bc4b468b88b78e197b6f9bcfa3de8f"
)
```

Xcode manual integration:
…pe 15)

Completes the four shielded send flows by lighting up Type 15. The Rust spend pipeline already had `ShieldedWallet::shield` but stubbed every input's nonce to 0, which drive-abci rejected on broadcast. This commit:

platform-wallet
- `ShieldedWallet::shield` now fetches per-input nonces from Platform via `AddressInfo::fetch_many` and increments them before handing to `build_shield_transition`. Removes the long-standing `nonce=0` placeholder + TODO.
- New `PlatformWallet::shielded_shield_from_account` helper with auto input selection: walks the chosen Platform Payment account's addresses in ascending derivation order and picks enough to cover `amount + 0.01 DASH` fee buffer (the on-chain fee comes off input 0 via `DeductFromInput(0)`). Returns `ShieldedInsufficientBalance` if the account total can't cover the request.

rs-platform-wallet-ffi
- New `platform_wallet_manager_shielded_shield(handle, wallet_id, account_index, amount, signer_address_handle)` in `shielded_send.rs`. Takes a `*mut SignerHandle` (Swift's `KeychainSigner.handle`) and casts to `&VTableSigner` — same shape `platform_address_wallet_transfer` uses, since `VTableSigner` already implements `Signer<PlatformAddress>`.

swift-sdk
- New async method `PlatformWalletManager.shieldedShield(walletId:accountIndex:amount:addressSigner:)`. Threads the `KeychainSigner` keepalive through the detached task the same way `topUpFromAddresses` does.

swift-example-app
- `SendViewModel.executeSend`'s `.platformToShielded` branch now constructs a `KeychainSigner` and calls `walletManager.shieldedShield(...)`. Replaces the last of the four shielded placeholder errors.
The full Send Dash matrix is now real:

| Source   | Destination | Status          |
|----------|-------------|-----------------|
| Core     | Core        | works           |
| Platform | Shielded    | works (this PR) |
| Shielded | Shielded    | works           |
| Shielded | Platform    | works           |
| Shielded | Core        | works           |

Type 18 (`shield_from_asset_lock`) — direct Core L1 → Shielded without going through Platform first — is still unwired; tracked separately.
… restore + send credits at credits scale

Two adjacent bugs that surfaced together when sending Platform → Shielded immediately after a fresh app launch:

**`shielded_shield_from_account` reported `available 0`** even though the wallet detail showed 1.005 DASH on the Platform Payment account. `PlatformAddressWallet::initialize_from_persisted` was only seeding the *provider*'s `found` map — the source it hands to the SDK's incremental sync — but never pushing those balances into the in-memory `ManagedPlatformAccount.address_balances` map. Spend paths that enumerate funded addresses (`shielded_shield_from_account`, `PlatformAddressWallet::addresses_with_balances`, `account.address_credit_balance`) all read from `address_balances`, so they returned 0 until the first BLAST sync finished and `provider::on_address_found` repopulated it. Fix: walk `persisted.per_account` at restore time and call `set_address_credit_balance(addr, balance, None)` on the matching `ManagedPlatformAccount` for each entry, mirroring the same `apply_changeset` path the steady-state sync writes through. New public accessor `PerAccountPlatformAddressState::persisted_balances()` exposes the iteration without leaking the inner `found` map.

**Send screen sent at duffs scale.** `SendViewModel.amount` unconditionally multiplied the typed DASH value by 1e8 (L1 duffs). Right for `coreToCore` but wrong for the four flows that touch the credits ledger (1 DASH = 1e11), which underpaid by 1000×. Typing 0.5 DASH for a Platform → Shielded shield turned into 50_000_000 credits (~0.0005 DASH) on the wire — the error message gave it away as `required 1050000000 = amount + fee_buffer`. Fix: split into `amountDuffs` and `amountCredits`. `executeSend` picks `amountCredits` for `shieldedToShielded`, `shieldedToPlatform`, `shieldedToCore`, `platformToShielded`; `coreToCore` still uses `amountDuffs`. The legacy `amount` property aliases `amountDuffs` so any caller that hadn't been audited still gets Core-correct semantics.
Verified: `cargo clippy --workspace --all-features --locked -- --no-deps -D warnings` clean, `bash build_ios.sh --target sim --profile dev` green.
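The two scales from the fix above relate by a fixed factor: Core L1 counts duffs (1 DASH = 1e8 duffs) while the Platform credits ledger counts credits (1 DASH = 1e11 credits, i.e. 1 duff = 1000 credits). A minimal sketch of the conversion, showing the 1000× underpayment the bug produced:

```rust
// Scale constants from the commit message: duffs on Core L1,
// credits on the Platform ledger.
const DUFFS_PER_DASH: u64 = 100_000_000;        // 1e8
const CREDITS_PER_DUFF: u64 = 1_000;            // so 1 DASH = 1e11 credits

fn duffs_to_credits(duffs: u64) -> u64 {
    duffs * CREDITS_PER_DUFF
}

fn main() {
    // Typing 0.5 DASH: correct credits value vs. the buggy duffs value.
    let half_dash_duffs = DUFFS_PER_DASH / 2;            // 50_000_000
    let half_dash_credits = duffs_to_credits(half_dash_duffs);
    assert_eq!(half_dash_credits, 50_000_000_000);
    // The bug sent the duffs figure on a credits flow: off by exactly 1000x.
    assert_eq!(half_dash_credits / half_dash_duffs, 1_000);
}
```

Keeping the conversion in integer arithmetic like this also sidesteps the floating-point rounding issue the review flags in `SendViewModel` below.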
Halo 2 circuit synthesis recurses past the ~512 KB iOS dispatch-thread stack and crashes with EXC_BAD_ACCESS on the first `synthesize(config.clone(), V1Pass::<_, CS>::measure(pass))?` call when the future is polled directly on the calling thread.

Switch the four shielded spend FFI entry points (transfer/unshield/withdraw/shield) from `runtime().block_on(...)` to `block_on_worker(...)` so the proof runs on a tokio worker with the configured 8 MB stack — the exact case `runtime.rs` was set up for.

For `shield`, transmute the borrowed `&VTableSigner` to `&'static` inside the FFI call: the caller retains ownership of the signer handle and we block until the worker future completes, so the painted lifetime never actually escapes the call. `VTableSigner` is `Send + Sync` per its `unsafe impl` in rs-sdk-ffi, so the resulting reference is `Send + 'static` — exactly what `block_on_worker` needs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…adcast failure
`AddressesNotEnoughFundsError` from drive-abci already carries
`addresses_with_info: BTreeMap<PlatformAddress, (AddressNonce, Credits)>`
— Platform's actual per-address nonce and remaining balance after
the bundle's `DeductFromInput(0)` strategy deducts the shield
amount. Stringifying with `e.to_string()` discarded everything but
`required_balance` (the fee), leaving the host with no way to tell
*which* input fell short or whether the local-cache balance
disagreed with Platform.
Pattern-match the broadcast `dash_sdk::Error` for the structured
consensus error (via `Error::Protocol(ProtocolError::ConsensusError)`
or `Error::StateTransitionBroadcastError { cause }`), then format
both the local claim list and Platform's view side-by-side. Add a
per-input `tracing::info!`/`warn!` before broadcast so the same
data is visible in logs even on success — and hosts can spot
local-cache drift by comparing claimed_credits vs platform_balance.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
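The pattern-match described above can be sketched with stand-in types. The enum shapes here are hypothetical simplifications of the real `dash_sdk::Error` nesting; the point is extracting the structured per-address data instead of flattening it with `e.to_string()`:

```rust
// Hypothetical stand-ins for the nested dash_sdk error shapes.
use std::collections::BTreeMap;

enum ProtocolError {
    // per-address (nonce, remaining credits) as Platform saw them
    ConsensusError { addresses_with_info: BTreeMap<String, (u32, u64)> },
}

enum SdkError {
    Protocol(ProtocolError),
    StateTransitionBroadcastError { cause: String },
    Other(String),
}

fn describe(e: &SdkError) -> String {
    match e {
        SdkError::Protocol(ProtocolError::ConsensusError { addresses_with_info }) => {
            // Format Platform's per-address view so hosts can compare it
            // against the wallet's local claim list and spot cache drift.
            let per_addr: Vec<String> = addresses_with_info
                .iter()
                .map(|(addr, (nonce, credits))| {
                    format!("{addr}: nonce={nonce} platform_balance={credits}")
                })
                .collect();
            format!("consensus rejection: [{}]", per_addr.join(", "))
        }
        SdkError::StateTransitionBroadcastError { cause } => {
            format!("broadcast failed: {cause}")
        }
        SdkError::Other(msg) => format!("unstructured: {msg}"),
    }
}

fn main() {
    let mut info = BTreeMap::new();
    info.insert("addr0".to_string(), (3u32, 12_345u64));
    let e = SdkError::Protocol(ProtocolError::ConsensusError { addresses_with_info: info });
    assert!(describe(&e).contains("nonce=3"));
}
```

The stringified default would have kept only the fee figure; the structured arm preserves which input fell short.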
Actionable comments posted: 1
🧹 Nitpick comments (5)
packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerShieldedSync.swift (2)
407-438: 💤 Low value

Consider validating `toCoreAddress` is non-empty. Other methods explicitly reject empty inputs; an empty `toCoreAddress` here would be passed straight to `withCString` and across the FFI as a zero-length C string. Rust will reject it, but a host-side guard produces a clearer error and avoids the detached-task hop.

♻️ Suggested guard

```diff
 guard walletId.count == 32 else {
     throw PlatformWalletError.invalidParameter(
         "walletId must be exactly 32 bytes"
     )
 }
+guard !toCoreAddress.isEmpty else {
+    throw PlatformWalletError.invalidParameter(
+        "toCoreAddress is empty"
+    )
+}
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerShieldedSync.swift` around lines 407-438: add a precondition that `toCoreAddress` is non-empty in `shieldedWithdraw` before spawning the detached Task — check `toCoreAddress.isEmpty == false` and, if empty, throw `PlatformWalletError.invalidParameter` with a clear message like "toCoreAddress must be non-empty"; place this validation alongside the existing walletId and handle guards at the top of the function so the invalid input is rejected on the host side rather than passed into `withCString`/`platform_wallet_manager_shielded_withdraw`.
374-378: 💤 Low value

Tighter validation on `toPlatformAddress` would catch host-side mistakes earlier. The doc says the address is "bincode-encoded `PlatformAddress` — `0x00 ‖ 20-byte hash` for P2PKH", which implies a 21-byte payload for P2PKH. The current guard only rejects empty buffers, so a malformed length (e.g. a raw 20-byte hash without the discriminant byte) gets passed to FFI and produces a less-actionable error from Rust. Consider rejecting clearly invalid lengths up-front.

♻️ Suggested validation

```diff
-guard !toPlatformAddress.isEmpty else {
-    throw PlatformWalletError.invalidParameter(
-        "toPlatformAddress is empty"
-    )
-}
+// Bincode-encoded `PlatformAddress`: 1-byte discriminant + payload.
+// P2PKH today is `0x00 ‖ 20 bytes` = 21 bytes. Reject anything that
+// can't possibly be a valid encoding before crossing the FFI boundary.
+guard toPlatformAddress.count >= 2 else {
+    throw PlatformWalletError.invalidParameter(
+        "toPlatformAddress is too short to be a bincode PlatformAddress"
+    )
+}
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerShieldedSync.swift` around lines 374-378: the guard that only rejects an empty `toPlatformAddress` is too weak; validate the bincode-encoded PlatformAddress length (for P2PKH expect exactly 21 bytes: leading discriminant 0x00 + 20-byte hash) before calling the FFI. Replace the empty check with a length check and throw `PlatformWalletError.invalidParameter` with a clear message like "toPlatformAddress must be 21 bytes (0x00 || 20-byte hash)" when the size is invalid, so malformed 20-byte raw hashes are rejected earlier.

packages/rs-platform-wallet/src/wallet/shielded/operations.rs (1)
73-90: 💤 Low value

Hoist the `use` statements to module scope. The three `use` imports (`FetchMany`, `AddressInfo`, `BTreeSet`) are placed inside the function body. While valid, this departs from the existing pattern in this file (all other imports are at the top). Moving them up keeps the import surface discoverable.

♻️ Proposed move

```diff
@@ top of file
 use std::collections::BTreeMap;
+use std::collections::BTreeSet;
 use dash_sdk::platform::transition::broadcast::BroadcastStateTransition;
+use dash_sdk::platform::FetchMany;
+use dash_sdk::query_types::AddressInfo;
@@ inside shield()
-    // Fetch the current address nonces from Platform. Each
-    // input address has a per-address nonce that the next
-    // state transition must use as `last_used + 1`.
-    // ...
-    use dash_sdk::platform::FetchMany;
-    use dash_sdk::query_types::AddressInfo;
-    use std::collections::BTreeSet;
-
     let address_set: BTreeSet<PlatformAddress> = inputs.keys().copied().collect();
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/rs-platform-wallet/src/wallet/shielded/operations.rs` around lines 73-90: move the three local imports up to module scope to match the file's import pattern — remove the in-function uses of `FetchMany`, `AddressInfo`, and `BTreeSet` and add `dash_sdk::platform::FetchMany`, `dash_sdk::query_types::AddressInfo`, and `std::collections::BTreeSet` to the top-level use statements, keeping callers like `AddressInfo::fetch_many(&self.sdk, ...)` working; then delete the redundant in-function use lines.

packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift (2)
116-118: 💤 Low value

`canSend` keys off `amountDuffs` even for credits-based flows. Today `amountDuffs != nil` ⇔ `amountCredits != nil` because both parsers gate on the same `Double > 0` predicate, so this is correct in practice. It will silently break the moment one parser gains stricter validation (e.g., the `Decimal` switch suggested elsewhere, or an upper-bound check). Consider keying off the right unit per `detectedFlow` so the invariant is local rather than implicit.

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift` around lines 116-118: the computed property `canSend` currently checks `amountDuffs` regardless of flow; change it to validate the correct amount field based on `detectedFlow` (if the flow is credits-based require `amountCredits != nil`, otherwise require `amountDuffs != nil`), while still checking `detectedFlow != nil` and `!isSending`, so the invariant is explicit and local.
92-108: ⚡ Quick win

Consider parsing amounts via `Decimal` to avoid float rounding. `Double(amountString) * 100_000_000_000` is binary-floating-point multiplication, so user-friendly inputs that aren't representable in base-2 quietly truncate by one or more credits. Example: `"1.23"` ⇒ `Double` ≈ 1.2299999999999999 ⇒ `* 1e11` ≈ 122999999999.99998 ⇒ `UInt64(...)` ⇒ 122999999999 (intent: 123_000_000_000). For credits this is a one-credit dust loss per send; for any amount whose decimal string has >15.95 significant digits the rounding gets larger.

♻️ Decimal-based parsing

```diff
-var amountDuffs: UInt64? {
-    guard let double = Double(amountString), double > 0 else { return nil }
-    return UInt64(double * 100_000_000)
-}
+var amountDuffs: UInt64? {
+    guard let dash = Decimal(string: amountString), dash > 0 else { return nil }
+    let duffs = (dash * Decimal(100_000_000)) as NSDecimalNumber
+    return duffs.uint64Value
+}

-var amountCredits: UInt64? {
-    guard let double = Double(amountString), double > 0 else { return nil }
-    return UInt64(double * 100_000_000_000)
-}
+var amountCredits: UInt64? {
+    guard let dash = Decimal(string: amountString), dash > 0 else { return nil }
+    let credits = (dash * Decimal(100_000_000_000)) as NSDecimalNumber
+    return credits.uint64Value
+}
```

(Optionally round explicitly via `NSDecimalNumberHandler` if you want banker's rounding rather than `uint64Value`'s default.)

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift` around lines 92-108: the parsing in `amountDuffs` and `amountCredits` uses `Double`, which causes binary-floating rounding loss; update both computed properties to parse `amountString` with `Decimal` (or `NSDecimalNumber`), perform the multiplication in `Decimal` (100_000_000 and 100_000_000_000 respectively), then convert to `UInt64` with an explicit rounding mode (via `NSDecimalNumberHandler` or `Decimal`'s `rounded(_:)`) to avoid off-by-one dust; keep the same guard for positive values and return nil on parse failure.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- Around line 91-103: The code currently treats a None AddressInfo as an error;
change handling so that when infos.get(&addr) returns Some(None) (proof of
absence) you treat the starting nonce as 0 and compute nonce = 0 + 1, instead of
failing; keep an error only if the map lacks the key entirely (infos.get(&addr)
is None). Concretely, update the inputs loop that populates inputs_with_nonce
(and the lookup of infos.get(&addr) / info.nonce) to accept opt.as_ref().map(|i|
i.nonce).unwrap_or(0) and then insert (nonce + 1, credits) for that addr;
preserve the PlatformWalletError only for a truly missing map entry.
📒 Files selected for processing (11)
- packages/rs-platform-wallet-ffi/src/lib.rs
- packages/rs-platform-wallet-ffi/src/shielded_send.rs
- packages/rs-platform-wallet/src/error.rs
- packages/rs-platform-wallet/src/wallet/platform_addresses/provider.rs
- packages/rs-platform-wallet/src/wallet/platform_addresses/wallet.rs
- packages/rs-platform-wallet/src/wallet/platform_wallet.rs
- packages/rs-platform-wallet/src/wallet/shielded/operations.rs
- packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerShieldedSync.swift
- packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift
- packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/SendTransactionView.swift
- packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/SwiftExampleAppApp.swift
The shield transition uses `DeductFromInput(0)` as its fee strategy, which drive-abci interprets as "after each input has had its claim deducted, take the fee out of input 0's *remaining* balance" (see the doc comment on `deduct_fee_from_outputs_or_remaining_balance_of_inputs_v0` in rs-dpp). "Input 0" is the BTreeMap-smallest key.

The previous selection code claimed the full balance of every picked input, so every input's remaining was 0, and `DeductFromInput(0)` had nothing to bite into. Platform rejected the broadcast with `AddressesNotEnoughFundsError` showing "total available is less than required <fee>".

Fix: sort candidates by address bytes (BTreeMap order), skip leading dust addresses whose balance can't reserve the fee buffer (so the next funded address becomes the bundle's input 0), then claim only what's needed to cover `amount` — capping input 0's claim at `balance - FEE_RESERVE_CREDITS` so its post-claim remaining stays ≥ FEE_RESERVE for the network's fee deduction step.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
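The selection rule described above can be sketched as follows. This is a minimal sketch, not the real implementation: `FEE_RESERVE` is a stand-in value and the `(address bytes, balance)` pairs stand in for the managed-account structures.

```rust
// Stand-in for FEE_RESERVE_CREDITS; the real value lives in platform-wallet.
const FEE_RESERVE: u64 = 1_000_000_000;

/// Walk candidates already sorted by address bytes (BTreeMap order),
/// skip leading addresses that can't hold the fee reserve, cap input 0's
/// claim so its post-claim remainder stays >= the reserve, then
/// accumulate claims until `amount` is covered.
fn select_inputs(sorted_balances: &[(Vec<u8>, u64)], amount: u64) -> Option<Vec<(Vec<u8>, u64)>> {
    // Skip leading dust so the first pick (the bundle's input 0, the
    // BTreeMap-smallest key) can absorb the network's fee deduction.
    let start = sorted_balances.iter().position(|(_, b)| *b > FEE_RESERVE)?;
    let mut picked = Vec::new();
    let mut covered = 0u64;
    for (i, (addr, balance)) in sorted_balances[start..].iter().enumerate() {
        let cap = if i == 0 { balance - FEE_RESERVE } else { *balance };
        let claim = cap.min(amount - covered);
        if claim > 0 {
            picked.push((addr.clone(), claim));
            covered += claim;
        }
        if covered >= amount {
            return Some(picked);
        }
    }
    None // the real code returns ShieldedInsufficientBalance here
}

fn main() {
    let balances = vec![
        (vec![0x01], 10),              // dust: skipped so it isn't input 0
        (vec![0x02], 3 * FEE_RESERVE), // becomes input 0, claim capped
        (vec![0x03], 5 * FEE_RESERVE),
    ];
    let picked = select_inputs(&balances, 4 * FEE_RESERVE).unwrap();
    assert_eq!(picked[0].1, 2 * FEE_RESERVE); // 3R balance minus R reserve
    assert_eq!(picked[1].1, 2 * FEE_RESERVE); // remainder from the next address
}
```

The cap on input 0 is exactly what keeps `DeductFromInput(0)` from landing on a zero remainder, which is the failure this commit fixes.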
thepastaclaw left a comment
Code Review
PR wires the four shielded send flows end-to-end. One blocking issue: the Shielded→Platform path passes bech32m-payload bytes (type byte 0xb0/0x80) to a Rust entry point that decodes via bincode (expects 0x00/0x01), so unshield will fail to decode any real platform recipient. Several real suggestions around nonce overflow, fee-reserve fallback selection, and the unusual &'static transmute. 4 lower-priority findings dropped.
Reviewed commit: 6c72239
🔴 1 blocking | 🟡 7 suggestion(s) | 💬 2 nitpick(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift`:
- [BLOCKING] lines 240-249: Shielded→Platform sends pass the wrong platform-address byte format to Rust
`DashAddress.parse` returns the 21-byte bech32m payload — the type byte is 0xb0 (P2PKH) or 0x80 (P2SH) per `PlatformAddress::to_bech32m_string` in `packages/rs-dpp/src/address_funds/platform_address.rs:222-242`. Those bytes are passed straight through to `platform_wallet_manager_shielded_unshield`, which calls `PlatformAddress::from_bytes` (`platform_wallet.rs:413`). `from_bytes` is bincode-decoded and expects the storage variant index — 0x00 for P2pkh, 0x01 for P2sh (see the test at `platform_address.rs:1386-1387` and the explicit `to_bytes`/`from_bytes` doc-comments at 311-319/333-337). So a normal user-entered address fails to decode and the unshield broadcast can't proceed. The fix is to either translate the type byte at the Swift→Rust boundary (0xb0→0x00, 0x80→0x01), or to expose an FFI entry point that accepts the bech32m-encoded string and goes through `PlatformAddress::from_bech32m_string` instead.
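The first fix option the finding suggests — translating the type byte at the Swift→Rust boundary — can be sketched directly from the byte values it cites (0xb0/0x80 bech32m payload bytes vs. 0x00/0x01 bincode variant indexes):

```rust
/// Sketch of the suggested boundary translation: map the bech32m payload
/// type byte (0xb0 = P2PKH, 0x80 = P2SH) to the bincode storage variant
/// index (0x00 = P2pkh, 0x01 = P2sh) before crossing the FFI.
fn bech32m_payload_to_bincode(payload: &[u8]) -> Result<Vec<u8>, String> {
    let (type_byte, hash) = payload
        .split_first()
        .ok_or("empty platform address payload")?;
    let variant = match type_byte {
        0xb0 => 0x00, // P2PKH
        0x80 => 0x01, // P2SH
        other => return Err(format!("unknown platform address type byte 0x{other:02x}")),
    };
    let mut out = Vec::with_capacity(1 + hash.len());
    out.push(variant);
    out.extend_from_slice(hash);
    Ok(out)
}

fn main() {
    let mut payload = vec![0xb0];
    payload.extend_from_slice(&[0xaa; 20]);
    let bincoded = bech32m_payload_to_bincode(&payload).unwrap();
    assert_eq!(bincoded[0], 0x00);
    assert_eq!(bincoded.len(), 21);
}
```

The alternative fix — an FFI entry point that accepts the bech32m string and decodes via `PlatformAddress::from_bech32m_string` — avoids duplicating these byte values on the host side at all.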
- [SUGGESTION] lines 221-242: shielded→shielded and shielded→platform branches re-parse the untrimmed recipient text
`detectAddressType()` (line 153) trims whitespace before calling `DashAddress.parse`, so a pasted address with a trailing newline is recognised and the send button is enabled. The shielded→shielded branch (lines 221) and shielded→platform branch (line 240) then re-parse `recipientAddress` without trimming, so the same input hits a `Recipient is not …` error at submit time. The Core branch (line 205) and Shielded→Core branch (line 261) already trim — these two should match.
In `packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- [SUGGESTION] line 167: Address nonce increment can wrap silently at u32::MAX
`AddressNonce` is a `u32`, and `info.nonce + 1` on the line that builds `inputs_with_nonce` will panic in debug and wrap to 0 in release once an address reaches the ceiling. Drive treats `u32::MAX` as exhausted, so wrapping submits a transition with nonce 0 — drive-abci then rejects it as a replay, after the wallet has spent ~30 s building the Halo 2 proof. Practically unreachable today, but a `checked_add(1).ok_or(PlatformWalletError::ShieldedBuildError(...))` keeps the failure mode legible and matches the conservative style used elsewhere in this crate.
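The suggested overflow-safe bump is small enough to show in full; the error type here is a plain `String` standing in for `PlatformWalletError::ShieldedBuildError`:

```rust
/// Overflow-safe nonce increment: checked_add keeps u32::MAX from
/// wrapping to 0 and resurfacing as an opaque replay rejection after
/// the ~30 s proof has already been built.
fn next_nonce(current: u32) -> Result<u32, String> {
    current
        .checked_add(1)
        .ok_or_else(|| "address nonce exhausted (u32::MAX)".to_string())
}

fn main() {
    assert_eq!(next_nonce(41), Ok(42));
    assert!(next_nonce(u32::MAX).is_err());
}
```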
- [SUGGESTION] lines 117-145: Reimplements rs-sdk's canonical address-nonce fetch instead of reusing it
rs-sdk has `fetch_inputs_with_nonce`, `nonce_inc`, and `ensure_address_balance` in `packages/rs-sdk/src/platform/transition/address_inputs.rs:12-40` that encapsulate exactly this fetch-and-increment dance plus a hard balance check. They are `pub(crate)` today, so platform-wallet can't reach them directly, but a single-line visibility change would let this code re-use the canonical helpers. As written, the new shield path will silently drift from the SDK's behaviour — for example the SDK enforces a balance check that this implementation only `warn!`s on.
- [SUGGESTION] lines 108-168: Concurrent shields on the same wallet TOCTOU on the fetched address nonce
Nonces are fetched via `AddressInfo::fetch_many`, incremented locally, then handed to the builder. Two concurrent calls to `ShieldedWallet::shield` for the same wallet (e.g. user double-taps Send, or app retries while the first is still proving) both observe the same `info.nonce`, both build with `info.nonce + 1`, and the second to land at drive-abci is rejected with a nonce conflict. Not exploitable, but produces an opaque user-facing failure after a ~30 s proof. Either serialise shield-class operations on a per-wallet mutex inside `ShieldedWallet`, or document at the FFI boundary that hosts must enforce single-flight.
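The per-wallet serialisation option can be sketched with a plain mutex; the state and operation here are simplified stand-ins for the fetch-nonce → build → broadcast window, but the property is the same: under one lock, concurrent sends can never observe the same nonce.

```rust
// Sketch: serialise shield-class operations so two concurrent sends
// can't both fetch the same nonce. Types are stand-ins.
use std::sync::{Arc, Mutex};
use std::thread;

struct ShieldState {
    platform_nonce: u32, // simulates Platform's per-address nonce
}

fn shield_once(state: &Mutex<ShieldState>) -> u32 {
    // Everything nonce-sensitive happens under one lock.
    let mut s = state.lock().unwrap();
    let used = s.platform_nonce + 1; // fetch + increment
    s.platform_nonce = used;         // the "broadcast" lands
    used
}

fn main() {
    let state = Arc::new(Mutex::new(ShieldState { platform_nonce: 0 }));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let st = Arc::clone(&state);
            thread::spawn(move || shield_once(&st))
        })
        .collect();
    let mut nonces: Vec<u32> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    nonces.sort();
    assert_eq!(nonces, vec![1, 2, 3, 4]); // no duplicates, no nonce conflict
}
```

In the real wallet the guarded region includes a ~30 s proof, so an async `tokio::sync::Mutex` (or documented host-side single-flight) would be the practical choice rather than a blocking lock.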
In `packages/rs-platform-wallet/src/wallet/platform_wallet.rs`:
- [SUGGESTION] lines 553-556: Fall-through input selection can pick a tiny address as input 0 with no real fee headroom
When no candidate has `balance > FEE_RESERVE_CREDITS`, `viable_input_0` falls through to 0 and `usable` becomes the entire candidate slice. The total-balance check still requires `total_usable >= amount + FEE_RESERVE_CREDITS`, so practical broadcasts usually still succeed — actual mempool fees on Type 15 are ~20M credits, well below any candidate that can contribute. But in pathological dust scenarios (every funded address holds < actual fee) the chosen input 0's remaining balance can be smaller than the fee, and the broadcast will fail only after the ~30 s proof. Since the comment at lines 547-552 already acknowledges this case will be rejected by the network, it's cheaper to short-circuit here with `ShieldedInsufficientBalance { available: total_usable, required: amount + FEE_RESERVE_CREDITS }` when no candidate exceeds the reserve, instead of producing a bundle that's known to be on the boundary.
- [SUGGESTION] lines 469-610: shielded_shield_from_account selection logic has no Rust unit coverage
`shielded_shield_from_account` carries non-trivial selection rules that directly determine whether shield broadcasts succeed: skipping leading addresses below `FEE_RESERVE_CREDITS`, reserving fee headroom only on input 0, walking BTreeMap order, and accumulating to `amount`. None of this is covered by a focused Rust test, so a future refactor can reintroduce the original `viable_input_0`-style failure without tripping CI. Worth a deterministic unit test against a synthetic managed account covering: dust-first-address case, exact-reserve case, and amount-equal-to-total case.
In `packages/rs-platform-wallet-ffi/src/shielded_send.rs`:
- [SUGGESTION] lines 268-271: Use the established usize round-trip pattern instead of transmuting the signer borrow to 'static
`block_on_worker` requires `F: 'static`, and the new shield path satisfies this with `mem::transmute::<&VTableSigner, &'static VTableSigner>(...)`. It is sound today only because `block_on_worker` (`runtime.rs`) parks on the spawned future to completion — any future change that lets it return early (timeout, cancellation, shutdown select!) silently turns this into a use-after-free. Other call sites in this crate (e.g. `identity_top_up.rs:117-122`) solve the same `Send + 'static` constraint by round-tripping the signer pointer through `usize` and re-materializing the `&VTableSigner` *inside* the future, which captures only `Send + 'static` data and avoids the lifetime fiction entirely. Aligning the shield path to that pattern would remove a sharp edge from the FFI surface at zero behavioural cost.
```swift
let parsed = DashAddress.parse(recipientAddress, network: network)
guard case .platform(let addressBytes) = parsed.type else {
    error = "Recipient is not a platform address"
    return
}
try await walletManager.shieldedUnshield(
    walletId: wallet.walletId,
    toPlatformAddress: addressBytes,
    amount: amountCredits
)
```
🔴 Blocking: Shielded→Platform sends pass the wrong platform-address byte format to Rust
DashAddress.parse returns the 21-byte bech32m payload — the type byte is 0xb0 (P2PKH) or 0x80 (P2SH) per PlatformAddress::to_bech32m_string in packages/rs-dpp/src/address_funds/platform_address.rs:222-242. Those bytes are passed straight through to platform_wallet_manager_shielded_unshield, which calls PlatformAddress::from_bytes (platform_wallet.rs:413). from_bytes is bincode-decoded and expects the storage variant index — 0x00 for P2pkh, 0x01 for P2sh (see the test at platform_address.rs:1386-1387 and the explicit to_bytes/from_bytes doc-comments at 311-319/333-337). So a normal user-entered address fails to decode and the unshield broadcast can't proceed. The fix is to either translate the type byte at the Swift→Rust boundary (0xb0→0x00, 0x80→0x01), or to expose an FFI entry point that accepts the bech32m-encoded string and goes through PlatformAddress::from_bech32m_string instead.
source: ['codex']
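If the translation is done at the Swift→Rust boundary, the type-byte mapping can be sketched as a small pure function. This is a hypothetical helper, not the crate's API — the byte values (0xb0/0x80 bech32m type bytes, 0x00/0x01 bincode variant indices) and the 21-byte payload layout are taken from the finding above, and the real fix may well prefer the `from_bech32m_string` route instead:

```rust
// Hypothetical boundary helper: translate the bech32m payload's type byte
// (0xb0 = P2PKH, 0x80 = P2SH, per the finding) into the bincode storage
// variant index (0x00 = P2pkh, 0x01 = P2sh) that `from_bytes` expects.
fn bech32m_payload_to_storage_bytes(payload: &[u8]) -> Option<Vec<u8>> {
    let (&type_byte, hash) = payload.split_first()?;
    if hash.len() != 20 {
        return None; // expect a 20-byte key/script hash after the type byte
    }
    let variant_index = match type_byte {
        0xb0 => 0x00, // P2PKH -> P2pkh variant
        0x80 => 0x01, // P2SH  -> P2sh variant
        _ => return None,
    };
    let mut out = Vec::with_capacity(21);
    out.push(variant_index);
    out.extend_from_slice(hash);
    Some(out)
}
```

Doing the mapping Rust-side (or going through the bech32m string parser) keeps the byte-format knowledge out of every host language binding.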
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift`:
- [BLOCKING] lines 240-249: Shielded→Platform sends pass the wrong platform-address byte format to Rust
```rust
            "Shield input"
        );
    }
    inputs_with_nonce.insert(addr, (info.nonce + 1, credits));
```
🟡 Suggestion: Address nonce increment can wrap silently at u32::MAX
AddressNonce is a u32, and info.nonce + 1 on the line that builds inputs_with_nonce will panic in debug and wrap to 0 in release once an address reaches the ceiling. Drive treats u32::MAX as exhausted, so wrapping submits a transition with nonce 0 — drive-abci then rejects it as a replay, after the wallet has spent ~30 s building the Halo 2 proof. Practically unreachable today, but a checked_add(1).ok_or(PlatformWalletError::ShieldedBuildError(...)) keeps the failure mode legible and matches the conservative style used elsewhere in this crate.
💡 Suggested change
```diff
-inputs_with_nonce.insert(addr, (info.nonce + 1, credits));
+let next_nonce = info.nonce.checked_add(1).ok_or_else(|| {
+    PlatformWalletError::ShieldedBuildError(format!(
+        "input address nonce exhausted on platform: {:?}",
+        addr
+    ))
+})?;
+inputs_with_nonce.insert(addr, (next_nonce, credits));
```
source: ['claude', 'codex']
In `packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- [SUGGESTION] line 167: Address nonce increment can wrap silently at u32::MAX
```rust
let viable_input_0 = candidates
    .iter()
    .position(|(_, balance)| *balance > FEE_RESERVE_CREDITS)
    .unwrap_or(0);
```
🟡 Suggestion: Fall-through input selection can pick a tiny address as input 0 with no real fee headroom
When no candidate has balance > FEE_RESERVE_CREDITS, viable_input_0 falls through to 0 and usable becomes the entire candidate slice. The total-balance check still requires total_usable >= amount + FEE_RESERVE_CREDITS, so practical broadcasts usually still succeed — actual mempool fees on Type 15 are ~20M credits, well below any candidate that can contribute. But in pathological dust scenarios (every funded address holds < actual fee) the chosen input 0's remaining balance can be smaller than the fee, and the broadcast will fail only after the ~30 s proof. Since the comment at lines 547-552 already acknowledges this case will be rejected by the network, it's cheaper to short-circuit here with ShieldedInsufficientBalance { available: total_usable, required: amount + FEE_RESERVE_CREDITS } when no candidate exceeds the reserve, instead of producing a bundle that's known to be on the boundary.
source: ['claude']
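The suggested short-circuit can be sketched as a pure function. Everything here is simplified for illustration — `pick_input_0` is hypothetical, the address key is reduced to a `u64`, and the error is a plain `(available, required)` tuple standing in for `ShieldedInsufficientBalance`:

```rust
const FEE_RESERVE_CREDITS: u64 = 1_000_000_000;

// Fail fast when no candidate clears the fee reserve, instead of falling
// through to index 0 and building a bundle the network is known to reject.
fn pick_input_0(
    candidates: &[(u64 /* addr key */, u64 /* balance */)],
    amount: u64,
) -> Result<usize, (u64, u64) /* (available, required) */> {
    let total: u64 = candidates.iter().map(|(_, b)| b).sum();
    candidates
        .iter()
        .position(|(_, balance)| *balance > FEE_RESERVE_CREDITS)
        .ok_or((total, amount.saturating_add(FEE_RESERVE_CREDITS)))
}
```

The only behavioural change versus the quoted code is replacing `unwrap_or(0)` with `ok_or(...)`, so the all-dust case surfaces before the ~30 s proof rather than after.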
In `packages/rs-platform-wallet/src/wallet/platform_wallet.rs`:
- [SUGGESTION] lines 553-556: Fall-through input selection can pick a tiny address as input 0 with no real fee headroom
```swift
    let parsed = DashAddress.parse(recipientAddress, network: network)
    guard case .orchard(let recipientRaw) = parsed.type else {
        error = "Recipient is not a shielded address"
        return
    }
    try await walletManager.shieldedTransfer(
        walletId: wallet.walletId,
        recipientRaw43: recipientRaw,
        amount: amountCredits
    )
    successMessage = "Shielded transfer complete"

case .shieldedToPlatform:
    // Shielded → Platform: spend notes, credit the
    // platform address (also credits scale).
    guard let amountCredits else {
        error = "Invalid amount"
        return
    }
    let parsed = DashAddress.parse(recipientAddress, network: network)
    guard case .platform(let addressBytes) = parsed.type else {
        error = "Recipient is not a platform address"
```
🟡 Suggestion: shielded→shielded and shielded→platform branches re-parse the untrimmed recipient text
`detectAddressType()` (line 153) trims whitespace before calling `DashAddress.parse`, so a pasted address with a trailing newline is recognised and the send button is enabled. The shielded→shielded branch (line 221) and the shielded→platform branch (line 240) then re-parse `recipientAddress` without trimming, so the same input hits a "Recipient is not …" error at submit time. The Core branch (line 205) and Shielded→Core branch (line 261) already trim — these two should match.
source: ['codex']
In `packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift`:
- [SUGGESTION] lines 221-242: shielded→shielded and shielded→platform branches re-parse the untrimmed recipient text
```rust
let address_signer: &'static VTableSigner =
    std::mem::transmute::<&VTableSigner, &'static VTableSigner>(
        &*(signer_address_handle as *const VTableSigner),
    );
```
🟡 Suggestion: Use the established usize round-trip pattern instead of transmuting the signer borrow to 'static
block_on_worker requires F: 'static, and the new shield path satisfies this with mem::transmute::<&VTableSigner, &'static VTableSigner>(...). It is sound today only because block_on_worker (runtime.rs) parks on the spawned future to completion — any future change that lets it return early (timeout, cancellation, shutdown select!) silently turns this into a use-after-free. Other call sites in this crate (e.g. identity_top_up.rs:117-122) solve the same Send + 'static constraint by round-tripping the signer pointer through usize and re-materializing the &VTableSigner inside the future, which captures only Send + 'static data and avoids the lifetime fiction entirely. Aligning the shield path to that pattern would remove a sharp edge from the FFI surface at zero behavioural cost.
source: ['claude']
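A dependency-free sketch of the usize round-trip pattern, with `std::thread::spawn`'s `Send + 'static` bound standing in for `block_on_worker` and a stub type standing in for `VTableSigner` (both hypothetical). Note the soundness argument is the same one the finding describes: the caller must not return before the spawned work completes — here that is the `join`:

```rust
use std::thread;

struct VTableSignerStub {
    tag: u32,
}

// The `'static`-requiring closure captures only a `usize` (which is
// `Send + 'static`); the reference is re-materialized *inside* the
// closure, so no `transmute` to `'static` is needed.
fn with_static_closure(signer: &VTableSignerStub) -> u32 {
    let signer_addr = signer as *const VTableSignerStub as usize;
    let handle = thread::spawn(move || {
        // SAFETY: the caller keeps `signer` alive until `join` returns,
        // mirroring block_on_worker's run-to-completion guarantee.
        let signer: &VTableSignerStub =
            unsafe { &*(signer_addr as *const VTableSignerStub) };
        signer.tag
    });
    handle.join().unwrap()
}
```

The unsafety doesn't disappear, but it is localised to one documented deref instead of a lifetime claim the type system then trusts everywhere downstream.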
In `packages/rs-platform-wallet-ffi/src/shielded_send.rs`:
- [SUGGESTION] lines 268-271: Use the established usize round-trip pattern instead of transmuting the signer borrow to 'static
```rust
// Fetch the current address nonces from Platform. Each
// input address has a per-address nonce that the next
// state transition must use as `last_used + 1`.
// `AddressInfo::fetch_many` returns the last-used nonce
// (and current balance) per address; we increment it.
// Without this the broadcast was rejected by drive-abci
// because every shield transition tried to use nonce 0.
use dash_sdk::platform::FetchMany;
use dash_sdk::query_types::AddressInfo;
use std::collections::BTreeSet;

let address_set: BTreeSet<PlatformAddress> = inputs.keys().copied().collect();
let infos = AddressInfo::fetch_many(&self.sdk, address_set)
    .await
    .map_err(|e| {
        PlatformWalletError::ShieldedBuildError(format!("fetch input nonces: {e}"))
    })?;

let mut inputs_with_nonce: BTreeMap<PlatformAddress, (u32, Credits)> = BTreeMap::new();
for (addr, credits) in inputs {
    let info = infos
        .get(&addr)
        .and_then(|opt| opt.as_ref())
        .ok_or_else(|| {
            PlatformWalletError::ShieldedBuildError(format!(
                "input address not found on platform: {:?}",
                addr
            ))
        })?;
```
🟡 Suggestion: Reimplements rs-sdk's canonical address-nonce fetch instead of reusing it
rs-sdk has fetch_inputs_with_nonce, nonce_inc, and ensure_address_balance in packages/rs-sdk/src/platform/transition/address_inputs.rs:12-40 that encapsulate exactly this fetch-and-increment dance plus a hard balance check. They are pub(crate) today, so platform-wallet can't reach them directly, but a single-line visibility change would let this code re-use the canonical helpers. As written, the new shield path will silently drift from the SDK's behaviour — for example the SDK enforces a balance check that this implementation only warn!s on.
source: ['claude']
In `packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- [SUGGESTION] lines 117-145: Reimplements rs-sdk's canonical address-nonce fetch instead of reusing it
```rust
pub async fn shielded_shield_from_account<S, P>(
    &self,
    account_index: u32,
    amount: u64,
    signer: &S,
    prover: P,
) -> Result<(), PlatformWalletError>
where
    S: dpp::identity::signer::Signer<dpp::address_funds::PlatformAddress> + Send + Sync,
    P: dpp::shielded::builder::OrchardProver,
{
    // The shield transition uses `DeductFromInput(0)` as its fee
    // strategy. drive-abci interprets that as "after each input
    // address has had its `claim` deducted, take the fee out of
    // input 0's *remaining* balance" (see
    // `deduct_fee_from_outputs_or_remaining_balance_of_inputs_v0`
    // in rs-dpp). "Input 0" is the smallest-key entry of the
    // BTreeMap we hand to the builder. Therefore:
    //
    // * we must NOT claim each input's full balance — claiming
    //   `balance` leaves `remaining = 0`, and the fee
    //   deduction has nothing to bite into.
    // * we must reserve at least `FEE_RESERVE_CREDITS` of
    //   unclaimed balance specifically on input 0 (the
    //   BTreeMap-smallest address).
    //
    // Empty-mempool fees on Type 15 transitions land at ~20M
    // credits (~0.0002 DASH). Reserve 1e9 credits (0.01 DASH) —
    // 50× headroom, still trivial relative to typical balances.
    const FEE_RESERVE_CREDITS: u64 = 1_000_000_000;

    // Build the inputs map under the wallet-manager read lock,
    // then drop the lock before re-entering shielded so the
    // guards don't nest unnecessarily.
    let inputs: std::collections::BTreeMap<
        dpp::address_funds::PlatformAddress,
        dpp::fee::Credits,
    > = {
        let wm = self.wallet_manager.read().await;
        let info = wm
            .get_wallet_info(&self.wallet_id)
            .ok_or_else(|| PlatformWalletError::WalletNotFound(hex::encode(self.wallet_id)))?;
        let account = info
            .core_wallet
            .platform_payment_managed_account_at_index(account_index)
            .ok_or_else(|| {
                PlatformWalletError::AddressOperation(format!(
                    "no platform payment account at index {account_index}"
                ))
            })?;

        // Collect (address, balance) for every funded address,
        // sorted by address bytes — that determines BTreeMap
        // key order downstream and therefore which input ends
        // up at index 0.
        let mut candidates: Vec<(dpp::address_funds::PlatformAddress, u64)> = account
            .addresses
            .addresses
            .values()
            .filter_map(|addr_info| {
                let p2pkh =
                    key_wallet::PlatformP2PKHAddress::from_address(&addr_info.address).ok()?;
                let balance = account.address_credit_balance(&p2pkh);
                if balance == 0 {
                    None
                } else {
                    Some((
                        dpp::address_funds::PlatformAddress::P2pkh(p2pkh.to_bytes()),
                        balance,
                    ))
                }
            })
            .collect();
        candidates.sort_by_key(|(addr, _)| *addr);

        // The address that will be the bundle's `input_0` must
        // have balance > FEE_RESERVE so we can claim at least 1
        // credit while leaving the reserve untouched. Skip any
        // leading dust address that can't satisfy that — the
        // next address up will become input 0 instead. (If
        // every funded address is below the reserve, fall back
        // to the smallest one so we still produce a valid
        // builder input map; the network will reject it cleanly
        // if the fee can't be covered.)
        let viable_input_0 = candidates
            .iter()
            .position(|(_, balance)| *balance > FEE_RESERVE_CREDITS)
            .unwrap_or(0);
        let usable: &[(dpp::address_funds::PlatformAddress, u64)] =
            &candidates[viable_input_0..];

        let total_usable: u64 = usable.iter().map(|(_, b)| b).sum();
        let needed = amount.saturating_add(FEE_RESERVE_CREDITS);
        if total_usable < needed {
            return Err(PlatformWalletError::ShieldedInsufficientBalance {
                available: total_usable,
                required: needed,
            });
        }

        // Walk usable inputs in BTreeMap order, claiming only
        // what's needed to cover `amount`. The fee reserve is
        // taken off input 0's max claim so its post-claim
        // remaining stays ≥ FEE_RESERVE_CREDITS for the
        // network's `DeductFromInput(0)` step.
        let mut chosen: std::collections::BTreeMap<
            dpp::address_funds::PlatformAddress,
            dpp::fee::Credits,
        > = std::collections::BTreeMap::new();
        let mut accumulated_claim: u64 = 0;
        for (i, (addr, balance)) in usable.iter().enumerate() {
            if accumulated_claim >= amount {
                break;
            }
            let max_claim = if i == 0 {
                balance.saturating_sub(FEE_RESERVE_CREDITS)
            } else {
                *balance
            };
            let still_need = amount - accumulated_claim;
            let claim = max_claim.min(still_need);
            if claim > 0 {
                chosen.insert(*addr, claim);
                accumulated_claim = accumulated_claim.saturating_add(claim);
            }
        }

        if accumulated_claim < amount {
            return Err(PlatformWalletError::ShieldedInsufficientBalance {
                available: accumulated_claim,
                required: amount,
            });
        }
        chosen
    };

    let guard = self.shielded.read().await;
    let shielded = guard
        .as_ref()
        .ok_or(PlatformWalletError::ShieldedNotBound)?;
    shielded.shield(inputs, amount, signer, &prover).await
}
```
🟡 Suggestion: shielded_shield_from_account selection logic has no Rust unit coverage
shielded_shield_from_account carries non-trivial selection rules that directly determine whether shield broadcasts succeed: skipping leading addresses below FEE_RESERVE_CREDITS, reserving fee headroom only on input 0, walking BTreeMap order, and accumulating to amount. None of this is covered by a focused Rust test, so a future refactor can reintroduce the original viable_input_0-style failure without tripping CI. Worth a deterministic unit test against a synthetic managed account covering: dust-first-address case, exact-reserve case, and amount-equal-to-total case.
source: ['codex']
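The deterministic test the finding asks for is easiest against a pure extraction of the claim-accumulation loop quoted above. A sketch, under the assumption the loop is factored out — `select_claims` is a hypothetical helper (address keys reduced to `u64`, `None` standing in for `ShieldedInsufficientBalance`), not the real method:

```rust
use std::collections::BTreeMap;

const FEE_RESERVE_CREDITS: u64 = 1_000_000_000;

// Pure extraction of the claim loop: input 0's max claim is reduced by the
// fee reserve; later inputs can be claimed in full; returns None when the
// accumulated claims cannot cover `amount`.
fn select_claims(usable: &[(u64, u64)], amount: u64) -> Option<BTreeMap<u64, u64>> {
    let mut chosen = BTreeMap::new();
    let mut accumulated: u64 = 0;
    for (i, (addr, balance)) in usable.iter().enumerate() {
        if accumulated >= amount {
            break;
        }
        let max_claim = if i == 0 {
            balance.saturating_sub(FEE_RESERVE_CREDITS)
        } else {
            *balance
        };
        let claim = max_claim.min(amount - accumulated);
        if claim > 0 {
            chosen.insert(*addr, claim);
            accumulated += claim;
        }
    }
    (accumulated >= amount).then_some(chosen)
}
```

With the loop in this shape, the dust-first-address, exact-reserve, and amount-equal-to-total cases each become a one-line assertion rather than a full managed-account fixture.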
In `packages/rs-platform-wallet/src/wallet/platform_wallet.rs`:
- [SUGGESTION] lines 469-610: shielded_shield_from_account selection logic has no Rust unit coverage
```diff
 ) -> Result<(), PlatformWalletError> {
     let recipient_addr = self.default_orchard_address()?;

-    // Build nonce map: The DPP builder takes (AddressNonce, Credits) pairs.
-    // For now we use nonce=0 as a placeholder -- the actual nonce should be
-    // fetched from the platform. In production, callers may use the SDK's
-    // ShieldFunds trait directly which fetches nonces automatically.
-    //
-    // TODO: Add proper nonce fetching, either here or require callers to
-    // provide inputs_with_nonce directly.
-    let inputs_with_nonce: BTreeMap<PlatformAddress, (u32, Credits)> = inputs
-        .into_iter()
-        .map(|(addr, credits)| (addr, (0u32, credits)))
-        .collect();
+    // Fetch the current address nonces from Platform. Each
+    // input address has a per-address nonce that the next
+    // state transition must use as `last_used + 1`.
+    // `AddressInfo::fetch_many` returns the last-used nonce
+    // (and current balance) per address; we increment it.
+    // Without this the broadcast was rejected by drive-abci
+    // because every shield transition tried to use nonce 0.
+    use dash_sdk::platform::FetchMany;
+    use dash_sdk::query_types::AddressInfo;
+    use std::collections::BTreeSet;
+
+    let address_set: BTreeSet<PlatformAddress> = inputs.keys().copied().collect();
+    let infos = AddressInfo::fetch_many(&self.sdk, address_set)
+        .await
+        .map_err(|e| {
+            PlatformWalletError::ShieldedBuildError(format!("fetch input nonces: {e}"))
+        })?;
+
+    let mut inputs_with_nonce: BTreeMap<PlatformAddress, (u32, Credits)> = BTreeMap::new();
+    for (addr, credits) in inputs {
+        let info = infos
+            .get(&addr)
+            .and_then(|opt| opt.as_ref())
+            .ok_or_else(|| {
+                PlatformWalletError::ShieldedBuildError(format!(
+                    "input address not found on platform: {:?}",
+                    addr
+                ))
+            })?;
+        // Surface a per-input diagnostic so the host can see what
+        // we're claiming vs what Platform actually reports —
+        // mismatches are the typical root cause of
+        // `AddressesNotEnoughFundsError` on shield broadcast.
+        if info.balance < credits {
+            warn!(
+                address = ?addr,
+                claimed_credits = credits,
+                platform_balance = info.balance,
+                platform_nonce = info.nonce,
+                "Shield input claims more credits than Platform reports — broadcast will likely fail"
+            );
+        } else {
+            info!(
+                address = ?addr,
+                claimed_credits = credits,
+                platform_balance = info.balance,
+                platform_nonce = info.nonce,
+                "Shield input"
+            );
+        }
+        inputs_with_nonce.insert(addr, (info.nonce + 1, credits));
+    }
```
🟡 Suggestion: Concurrent shields on the same wallet TOCTOU on the fetched address nonce
Nonces are fetched via AddressInfo::fetch_many, incremented locally, then handed to the builder. Two concurrent calls to ShieldedWallet::shield for the same wallet (e.g. user double-taps Send, or app retries while the first is still proving) both observe the same info.nonce, both build with info.nonce + 1, and the second to land at drive-abci is rejected with a nonce conflict. Not exploitable, but produces an opaque user-facing failure after a ~30 s proof. Either serialise shield-class operations on a per-wallet mutex inside ShieldedWallet, or document at the FFI boundary that hosts must enforce single-flight.
source: ['claude']
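The per-wallet serialisation option can be sketched with a lock registry. This sketch uses `std::sync::Mutex` to stay dependency-free; the real async code would presumably use `tokio::sync::Mutex` (so the guard can be held across the fetch→prove→broadcast `await` points), and `ShieldLocks` and the shortened `WalletId` are hypothetical:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

type WalletId = [u8; 4]; // shortened for the sketch

// Hypothetical per-wallet lock registry: every shield-class operation for a
// given wallet acquires its lock before fetching nonces, so two concurrent
// calls can never observe the same last-used nonce.
#[derive(Default)]
struct ShieldLocks {
    locks: Mutex<HashMap<WalletId, Arc<Mutex<()>>>>,
}

impl ShieldLocks {
    fn lock_for(&self, wallet: WalletId) -> Arc<Mutex<()>> {
        self.locks
            .lock()
            .unwrap()
            .entry(wallet)
            .or_insert_with(|| Arc::new(Mutex::new(())))
            .clone()
    }
}
```

The alternative the finding mentions — documenting single-flight as a host obligation at the FFI boundary — is cheaper but pushes the same bug onto every binding author.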
In `packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- [SUGGESTION] lines 108-168: Concurrent shields on the same wallet TOCTOU on the fetched address nonce
```rust
/// each entry rendered as `<base58_addr>=(nonce <n>, <c> credits)`.
fn format_addresses_with_info(
    map: &std::collections::BTreeMap<
        dpp::address_funds::PlatformAddress,
        (dpp::prelude::AddressNonce, dpp::fee::Credits),
    >,
) -> String {
    map.iter()
        .map(|(addr, (nonce, credits))| {
            let hex_hash = match addr {
                dpp::address_funds::PlatformAddress::P2pkh(h) => {
                    format!("p2pkh:{}", hex::encode(h))
                }
                dpp::address_funds::PlatformAddress::P2sh(h) => format!("p2sh:{}", hex::encode(h)),
            };
            format!("{hex_hash}=(nonce {nonce}, {credits} credits)")
        })
        .collect::<Vec<_>>()
        .join(", ")
}
```
💬 Nitpick: format_addresses_with_info doc claims base58 but body emits hex
The doc-comment says "each entry rendered as <base58_addr>=(nonce <n>, <c> credits)", but the body matches on PlatformAddress::P2pkh/P2sh and emits p2pkh:<hex> / p2sh:<hex> via hex::encode. Either update the comment to say hex (matches what the function actually does), or render via to_bech32m_string so the diagnostic matches the address shown in the wallet UI — the latter is more useful when grepping logs for a specific address.
source: ['claude']
```rust
/// Build the Halo 2 proving key now if it hasn't been built yet.
///
/// First-call latency is ~30 seconds; subsequent calls return
/// immediately. Hosts should fire this on a background thread at
/// app startup so the first shielded send doesn't block the user.
/// Safe to call repeatedly and from any thread.
///
/// Independent of any manager — the cache is a process-global
/// `OnceLock`.
#[no_mangle]
pub unsafe extern "C" fn platform_wallet_shielded_warm_up_prover() {
    CachedOrchardProver::new().warm_up();
}
```
💬 Nitpick: warm_up_prover header says 'fire-and-forget' but the FFI call is synchronous and blocks ~30 s
The file header at lines 22-25 describes platform_wallet_shielded_warm_up_prover as a fire-and-forget global entry point hosts can call at startup. The function itself runs CachedOrchardProver::new().warm_up() synchronously on the calling thread and blocks ~30 s on first call. The Swift wrapper hides this via Task.detached(.background), but any other host that takes the doc at face value will block its UI thread. Either move the work onto a tokio task via runtime().spawn(...) so the call genuinely returns immediately, or amend the doc to say it blocks for ~30 s on first call.
source: ['claude']
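The genuinely fire-and-forget shape can be sketched with a process-global `OnceLock`, which is the caching primitive the doc-comment above already names. `std::thread::spawn` stands in for `runtime().spawn(...)`, `build_proving_key` is a stand-in for the real ~30 s key generation, and the returned `JoinHandle` exists only so the sketch is testable — the FFI would discard it:

```rust
use std::sync::OnceLock;
use std::thread;

static PROVING_KEY: OnceLock<u64> = OnceLock::new();

fn build_proving_key() -> u64 {
    // ~30 s of Halo 2 key generation in the real code
    42
}

// Kick the expensive build onto a background thread and return immediately,
// so the doc's "fire-and-forget" claim holds for every host, not just the
// Swift wrapper that detaches its own task.
fn warm_up_prover_async() -> thread::JoinHandle<()> {
    thread::spawn(|| {
        PROVING_KEY.get_or_init(build_proving_key);
    })
}

// Mirrors platform_wallet_shielded_prover_is_ready(): a non-blocking probe.
fn prover_is_ready() -> bool {
    PROVING_KEY.get().is_some()
}
```

`OnceLock::get_or_init` also makes repeated and concurrent warm-up calls safe: only one caller runs the build, and the rest either wait or (via `get`) observe readiness without blocking.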
- unshield FFI now takes the bech32m string and parses Rust-side via `PlatformAddress::from_bech32m_string`, with a network check. The previous byte-based path passed the 21-byte bech32m payload (type byte 0xb0/0x80) into bincode `from_bytes`, which expects the storage variant tag 0x00/0x01 and rejected real user-entered addresses (thepastaclaw c8873f6312ef).
- shield: nonce increment now `checked_add(1)` so a u32 wrap surfaces as `ShieldedBuildError` instead of replaying with nonce 0 after a 30 s proof (cb50b774985e).
- shield input selection: when no candidate clears FEE_RESERVE_CREDITS, fail fast with `ShieldedInsufficientBalance` instead of producing a known-boundary bundle (2b28ee4ac2f4).
- SendViewModel: trim recipient in the shielded→shielded and shielded→platform branches (68c36dcd4fe0). Forward the trimmed bech32m string to `shieldedUnshield` directly — the Swift side no longer extracts payload bytes.
- format_addresses_with_info now renders via `to_bech32m_string` and takes the wallet's network — diagnostics match what the UI shows so log greps line up (6b82603320bd).
- platform_wallet_shielded_warm_up_prover dispatches the build via `runtime().spawn_blocking(...)` so it actually returns immediately as the doc claims (a575d0f7eb0f).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Actionable comments posted: 3
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: db812a17-166a-4e0c-a217-a3f4b8cf1387
📒 Files selected for processing (5)
- packages/rs-platform-wallet-ffi/src/shielded_send.rs
- packages/rs-platform-wallet/src/wallet/platform_wallet.rs
- packages/rs-platform-wallet/src/wallet/shielded/operations.rs
- packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerShieldedSync.swift
- packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift
```rust
    pub async fn shielded_transfer_to<P: dpp::shielded::builder::OrchardProver>(
        &self,
        recipient_raw_43: &[u8; 43],
        amount: u64,
        prover: P,
    ) -> Result<(), PlatformWalletError> {
        let guard = self.shielded.read().await;
        let shielded = guard
            .as_ref()
            .ok_or(PlatformWalletError::ShieldedNotBound)?;
        let recipient = Option::<grovedb_commitment_tree::PaymentAddress>::from(
            grovedb_commitment_tree::PaymentAddress::from_raw_address_bytes(recipient_raw_43),
        )
        .ok_or_else(|| {
            PlatformWalletError::ShieldedBuildError(
                "invalid Orchard payment address bytes".to_string(),
            )
        })?;
        shielded.transfer(&recipient, amount, &prover).await
    }

    /// Unshield: spend shielded notes and send `amount` credits to
    /// the platform address `to_platform_addr_bech32m` (a bech32m
    /// string like `"dash1…"` / `"tdash1…"`). Parsed via
    /// `PlatformAddress::from_bech32m_string` and verified against
    /// the wallet's network.
    #[cfg(feature = "shielded")]
    pub async fn shielded_unshield_to<P: dpp::shielded::builder::OrchardProver>(
        &self,
        to_platform_addr_bech32m: &str,
        amount: u64,
        prover: P,
    ) -> Result<(), PlatformWalletError> {
        let guard = self.shielded.read().await;
        let shielded = guard
            .as_ref()
            .ok_or(PlatformWalletError::ShieldedNotBound)?;
        let (to, addr_network) =
            dpp::address_funds::PlatformAddress::from_bech32m_string(to_platform_addr_bech32m)
                .map_err(|e| {
                    PlatformWalletError::ShieldedBuildError(format!(
                        "invalid platform address: {e}"
                    ))
                })?;
        if addr_network != self.sdk.network {
            return Err(PlatformWalletError::ShieldedBuildError(format!(
                "platform address network mismatch: address {addr_network:?}, wallet {:?}",
                self.sdk.network
            )));
        }
        shielded.unshield(&to, amount, &prover).await
    }

    /// Withdraw: spend shielded notes and send `amount` credits to
    /// the Core L1 address `to_core_address` (Base58Check string).
    /// `core_fee_per_byte` is the L1 fee rate (duffs/byte).
    #[cfg(feature = "shielded")]
    pub async fn shielded_withdraw_to<P: dpp::shielded::builder::OrchardProver>(
        &self,
        to_core_address: &str,
        amount: u64,
        core_fee_per_byte: u32,
        prover: P,
    ) -> Result<(), PlatformWalletError> {
        let guard = self.shielded.read().await;
        let shielded = guard
            .as_ref()
            .ok_or(PlatformWalletError::ShieldedNotBound)?;
        let network = self.sdk.network;
        let parsed = to_core_address
            .parse::<dashcore::Address<dashcore::address::NetworkUnchecked>>()
            .map_err(|e| {
                PlatformWalletError::ShieldedBuildError(format!("invalid core address: {e}"))
            })?
            .require_network(network)
            .map_err(|e| {
                PlatformWalletError::ShieldedBuildError(format!(
                    "core address network mismatch: {e}"
                ))
            })?;
        shielded
            .withdraw(&parsed, amount, core_fee_per_byte, &prover)
            .await
    }
```
These new spending APIs still route into an unimplemented witness path.
shielded_transfer_to, shielded_unshield_to, and shielded_withdraw_to all end up in ShieldedWallet::{transfer,unshield,withdraw}, but the shared spend helper still bails with "Spending operations require a ShieldedStore that provides MerklePath witnesses. Not yet implemented." once a note is selected. As written, three of the four newly wired shielded send flows cannot succeed at runtime.
Suggested fix: wire up (or implement) a `ShieldedStore` that returns `MerklePath` witnesses, or teach the shared spend helper a witness-less code path, so that the store behind `self.shielded` provides the witnesses used during note selection and the three spend calls succeed at runtime.
```swift
/// Parsed amount expressed in **L1 duffs** (1 DASH = 1e8). Right
/// for Core sends; *wrong* for Platform / shielded sends, which
/// use the credits scale (1 DASH = 1e11) instead. Use [`amountCredits`]
/// for those paths — picking duffs underpays them by 1000×.
var amountDuffs: UInt64? {
    guard let double = Double(amountString), double > 0 else { return nil }
    return UInt64(double * 100_000_000)
}

/// Parsed amount expressed in Platform / shielded **credits**
/// (1 DASH = 1e11). Used for any flow that touches the credits
/// ledger (`platformToShielded`, `shieldedToShielded`,
/// `shieldedToPlatform`, `shieldedToCore`).
var amountCredits: UInt64? {
    guard let double = Double(amountString), double > 0 else { return nil }
    return UInt64(double * 100_000_000_000)
}

/// Backwards-compatibility shim — the original `amount` property
/// always returned duffs, so any leftover call site that hasn't
/// switched to the unit-explicit pair stays correct for Core
/// flows.
var amount: UInt64? { amountDuffs }

var canSend: Bool {
    detectedFlow != nil && amountDuffs != nil && !isSending
}
```
Make amount validation use the active unit after scaling.
These parsers accept any positive Double and then truncate, while canSend only checks amountDuffs != nil. That lets sub-unit values through until the backend sees 0, and it also validates shielded flows against the wrong unit. Please validate the scaled integer for the active flow (duffs for Core, credits for shielded/platform) and require it to be > 0 before enabling send.
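Sketching the review's ask in Rust terms (the app code is Swift, and the names here are hypothetical): parse, scale, and reject anything that truncates to zero, so the backend never sees a zero amount.

```rust
// Parse a user-entered amount string at a given scale (duffs: 1e8,
// credits: 1e11). Returns None for non-numeric, non-positive, or
// sub-unit inputs that would truncate to 0 after scaling.
fn parse_amount(s: &str, scale: f64) -> Option<u64> {
    let d = s.trim().parse::<f64>().ok().filter(|d| *d > 0.0)?;
    let scaled = (d * scale) as u64; // same truncation as Swift's UInt64(double * scale)
    (scaled > 0).then_some(scaled)
}
```

The `canSend` side of the fix then just picks the scale by flow: duffs for Core, credits for Platform/shielded.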
```diff
 case .platformToShielded:
     // Platform → Shielded (Type 15): spend credits from
     // the wallet's first Platform Payment account into
     // the bound shielded pool. Credits scale.
     guard let amountCredits else {
         error = "Invalid amount"
         return
     }
-    _ = platformState
-    _ = shieldedService
-    _ = wallet
-    _ = modelContext
-    _ = sdk
-    error = "Shielded sending is being rebuilt — see follow-up PR"
-    return
+    let signer = KeychainSigner(modelContainer: modelContext.container)
+    try await walletManager.shieldedShield(
+        walletId: wallet.walletId,
+        accountIndex: 0,
+        amount: amountCredits,
+        addressSigner: signer
+    )
```
This flow ignores the entered Orchard recipient and always self-shields.
walletManager.shieldedShield has no recipient parameter, and the Rust side shields into the bound wallet’s default Orchard address. So if the user types someone else’s Orchard address here, the app still reports success even though nothing was sent to that recipient. Either constrain this path to self-shield only, or block it unless the entered address matches the wallet’s own shielded address.
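One way to express the suggested guard, sketched in Rust with hypothetical names (the real check would live in `SendViewModel` in Swift): since `shieldedShield` has no recipient parameter, refuse the flow unless the entered address is the wallet's own default shielded address.

```rust
// Block platform→shielded unless the entered recipient is the wallet's
// own shielded address, since the shield transition always self-shields.
fn ensure_self_shield(entered: &str, own_default_address: &str) -> Result<(), String> {
    if entered.trim() == own_default_address {
        Ok(())
    } else {
        Err("platform→shielded can only shield to this wallet's own address".to_string())
    }
}
```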
…ieldedStore
`extract_spends_and_anchor` returned `ShieldedBuildError("Spending
operations require a ShieldedStore that provides MerklePath
witnesses. Not yet implemented.")` for every note, so shielded
transfer / unshield / withdraw failed at runtime even when the
store had a real commitment tree. The persistent tree's
`ClientPersistentCommitmentTree::witness(position, depth) ->
Option<MerklePath>` was already available — the trait was just
sitting on a `Vec<u8>` placeholder.
Change `ShieldedStore::witness()` to return
`Result<Option<MerklePath>, _>` directly, wire
`FileBackedShieldedStore::witness` through
`tree.witness(Position::from(position), 0)` (depth 0 matches the
`tree_anchor()` that the same builder consumes), and have
`extract_spends_and_anchor` build real `SpendableNote { note,
merkle_path }` entries.
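Under the description above, the reworked trait boundary looks roughly like this — with simplified stand-ins for `MerklePath` and the store error rather than the actual `grovedb_commitment_tree` types:

```rust
use std::collections::BTreeMap;

// Simplified stand-in for grovedb_commitment_tree::MerklePath.
#[derive(Debug, Clone, PartialEq)]
pub struct MerklePath {
    pub position: u64,
    pub auth_path: Vec<[u8; 32]>,
}

pub trait ShieldedStore {
    /// Ok(Some(path)) = witness available; Ok(None) = position not
    /// witnessed; Err = store failure. No more blanket
    /// "Not yet implemented" error once a note is selected.
    fn witness(&self, position: u64) -> Result<Option<MerklePath>, String>;
}

/// Toy analogue of the file-backed store: witnesses precomputed
/// per position instead of derived from a live commitment tree.
pub struct ToyStore {
    pub paths: BTreeMap<u64, MerklePath>,
}

impl ShieldedStore for ToyStore {
    fn witness(&self, position: u64) -> Result<Option<MerklePath>, String> {
        Ok(self.paths.get(&position).cloned())
    }
}
```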
Side effects (deliberate):
- `InMemoryShieldedStore::witness` keeps its existing `Err`; that
store has no tree state, only a flat `Vec<[u8; 32]>` of
commitments. Spend paths require a real store.
- Trait module-doc was updated: the "no orchard types" claim was
already partially false (notes deserialize to `orchard::Note` at
the call site) and is now plainly false.
Tests: 11 existing shielded unit tests pass; clippy clean; iOS
xcframework + SwiftExampleApp rebuild succeeds.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`dbPath(for:)` was keyed only on network, so two wallets on the
same network bound `bind_shielded` to the *same* SQLite file.
`FileBackedShieldedStore`'s notes table has no `wallet_id`
column, so `store.get_unspent_notes()` returned every wallet's
notes — wallet B saw wallet A's shielded balance under its own
name even though B's seed (and FVK) is unrelated.
User reproduced this with two wallets on regtest, distinct
mnemonics: a freshly created Wallet2 with empty Core/Platform
balances reported the same 0.6 DASH shielded balance as the
funded Reg wallet.
Include the wallet id hex in the dbPath. Each wallet now has
its own commitment-tree file and will re-sync from genesis on
first bind. Per project memory ("pre-release: schema migrations
aren't a concern; dev DBs rebuild"), the resulting one-time
re-sync is acceptable. Long-term the right fix is to add a
`wallet_id` column to the notes table inside `FileBackedShieldedStore`
so wallets can share the tree but filter their own notes; that's
a bigger change tracked separately.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
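The resulting path scheme can be sketched as follows (hypothetical helper; the real code lives in Swift's `dbPath(for:)`):

```rust
// Per-(network, wallet) SQLite path: each wallet gets its own
// commitment-tree file, so wallet B can no longer read wallet A's notes.
fn db_path(docs_dir: &str, network: &str, wallet_id: &[u8; 32]) -> String {
    let hex: String = wallet_id.iter().map(|b| format!("{b:02x}")).collect();
    format!("{docs_dir}/shielded_tree_{network}_{hex}.sqlite")
}
```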
… detail

`ShieldedService` is a singleton bound by `rebindWalletScopedServices()` to `walletManager.firstWallet`. The detail-view code path never re-bound it, so opening any wallet other than `firstWallet` showed `firstWallet`'s shielded balance under the wrong wallet's name. The previous per-wallet dbPath fix correctly isolated each wallet's notes in Rust, but the published `shieldedBalance` on the UI side stayed pinned to the first-bound wallet.

`ShieldedService` now stashes `walletManager` / `resolver` / `network` on first `bind(...)` and exposes `switchTo(walletId:)` that reuses them — cheap and idempotent (the Rust-side `bind_shielded` already replaces its slot). `WalletDetailView` calls it from `.onAppear` and `.onChange(of: wallet.walletId)`, and grew the `@EnvironmentObject var shieldedService` it was missing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (3)

packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Services/ShieldedService.swift (2)

187-208: 💤 Low value
Consider clearing the stashed `network` / `resolver` in `reset()` for symmetry.
`reset()` (lines 241-258) nils out `walletManager` and `walletId` but leaves the new `network` and `resolver` fields populated. It's not a correctness bug today, since `switchTo(walletId:)` guards on `walletManager` being non-nil before using them, but the asymmetry will be a footgun if a future change checks any of these fields independently of `walletManager`.

♻️ Proposed tweak

```diff
 func reset() {
     syncStateCancellable?.cancel()
     syncEventCancellable?.cancel()
     walletManager = nil
     walletId = nil
+    network = nil
+    resolver = nil
     isSyncing = false
```

290-308: 💤 Low value
Stale per-network DB files from prior installs are left behind.
The path scheme changed from `shielded_tree_<network>.sqlite` to `shielded_tree_<network>_<walletHex>.sqlite`. Existing users upgrading will keep the old per-network file orphaned in the documents directory forever (it's no longer referenced by any wallet). Low-impact disk leak, but worth either a one-time cleanup pass or a brief note in the comment so it isn't forgotten.

packages/rs-platform-wallet/src/wallet/shielded/file_store.rs (1)

163-175: ⚡ Quick win
Update the module docs to reflect that witness generation is live.
This implementation makes the header note at lines 13-15 stale. Leaving "not implemented yet" in the file docs will send future debugging in the wrong direction.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6c749d03-a814-4258-bdef-f71935e6ec37
📒 Files selected for processing (5)
- packages/rs-platform-wallet/src/wallet/shielded/file_store.rs
- packages/rs-platform-wallet/src/wallet/shielded/operations.rs
- packages/rs-platform-wallet/src/wallet/shielded/store.rs
- packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Services/ShieldedService.swift
- packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/WalletDetailView.swift
…n B)
Refactor shielded internals so a single PlatformWallet can hold
multiple ZIP-32 Orchard accounts that share the network's
commitment tree but keep their decrypted notes / nullifiers /
sync watermarks scoped per-(wallet_id, account_index).
This replaces the per-wallet shielded SQLite path that was
shipped earlier — that change isolated wallets at the cost of a
duplicate tree per wallet, and didn't help with same-wallet
multi-account at all. The on-chain commitment stream is
chain-wide, so the tree should be too.
## What changes
**`ShieldedStore` trait** (rs-platform-wallet):
- New `SubwalletId { wallet_id: [u8; 32], account_index: u32 }`.
- Note + sync-state methods (`save_note`, `get_unspent_notes`,
`mark_spent`, `last_synced_note_index`,
`nullifier_checkpoint`, …) take `id: SubwalletId`. Tree
methods (`append_commitment`, `checkpoint_tree`,
`tree_anchor`, `witness`) stay scope-free.
- `InMemoryShieldedStore` and `FileBackedShieldedStore` now hold
a `BTreeMap<SubwalletId, SubwalletState>` and lazily allocate
per-subwallet entries.
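A compressed sketch of that scoping scheme, with note and tree payloads reduced to placeholders: notes are keyed per-(wallet, account) while the commitment tree stays scope-free.

```rust
use std::collections::BTreeMap;

// Per-(wallet, account) key for note and sync state.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct SubwalletId {
    wallet_id: [u8; 32],
    account_index: u32,
}

#[derive(Default)]
struct SubwalletState {
    unspent_notes: Vec<u64>,             // placeholder for decrypted notes
    last_synced_note_index: Option<u64>, // sync watermark
}

#[derive(Default)]
struct InMemoryShieldedStore {
    subwallets: BTreeMap<SubwalletId, SubwalletState>,
    tree_commitments: Vec<[u8; 32]>, // shared, chain-wide, scope-free
}

impl InMemoryShieldedStore {
    // Lazily allocate the per-subwallet entry, as the trait impls do.
    fn state_mut(&mut self, id: SubwalletId) -> &mut SubwalletState {
        self.subwallets.entry(id).or_default()
    }
}
```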
**`ShieldedWallet`**:
- Holds `accounts: BTreeMap<u32, AccountState>` (per-account
keyset). New constructors `from_keysets`, `from_seed_accounts`;
`add_account_from_seed` for live add. New `account_indices`,
`keys_for(account)`, `default_address(account)`,
`balance(account)`, `balances`, `balance_total`. Per-wallet
`wallet_id` field threaded through every store call as
`SubwalletId`.
**Sync** (`shielded/sync.rs`):
- One sync pass covers every bound account: fetch raw chunks
via `sync_shielded_notes` once with the lowest-keyed
account's IVK, then locally trial-decrypt each chunk with
every other account's IVK via `dash_sdk::platform::shielded::
try_decrypt_note`. Append each cmx to the shared tree once
with `marked = (any account decrypted this position)`.
- `SyncNotesResult` and `ShieldedSyncSummary` carry per-account
maps; `total_new_notes`, `total_newly_spent`, `balance_total`
helpers fold them for the flat FFI surface.
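The append rule above ("marked = any account decrypted this position") can be sketched as follows, with IVKs reduced to integers and `try_decrypt` standing in for `dash_sdk::platform::shielded::try_decrypt_note`:

```rust
// One pass over fetched commitment chunks: trial-decrypt each cmx with
// every bound account's IVK; append each cmx to the shared tree once,
// marked if any account decrypted it.
fn append_pass(
    chunks: &[[u8; 32]],
    ivks: &[u32],
    try_decrypt: impl Fn(u32, &[u8; 32]) -> bool,
) -> Vec<([u8; 32], bool)> {
    chunks
        .iter()
        .map(|cmx| {
            let marked = ivks.iter().any(|ivk| try_decrypt(*ivk, cmx));
            (*cmx, marked)
        })
        .collect()
}
```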
**Operations** (`shielded/operations.rs`):
- `transfer`, `unshield`, `withdraw`, `shield`, `shield_from_asset_lock`
all take `account: u32` and route through the corresponding
`OrchardKeySet` and per-subwallet note set. Spends never
cross account boundaries.
**`PlatformWallet`**:
- `bind_shielded(seed, accounts: &[u32], db_path)` derives all
listed accounts at once. New `shielded_add_account(seed,
account)` for live add (with a docstring caveat that
historical retroactive marking requires a tree wipe + resync).
- `shielded_default_address(account)`, `shielded_balances()`,
`shielded_account_indices()`, plus the four spend helpers
(`shielded_transfer_to`, `shielded_unshield_to`,
`shielded_withdraw_to`, `shielded_shield_from_account`) all
take `account: u32`.
- `shielded_shield_from_account` now takes both
`shielded_account` and `payment_account` — they're distinct
concepts (Orchard recipient account vs Platform Payment funding
account) that previously shared one `account_index` parameter.
**FFI** (`rs-platform-wallet-ffi`):
- `platform_wallet_manager_bind_shielded` takes
`accounts_ptr: *const u32, accounts_len: usize` (1..=64).
- All four spend entry points + `shielded_default_address` take
`account: u32`. `shielded_shield` takes both
`shielded_account` and `payment_account`.
- `ShieldedSyncWalletResultFFI::ok` flattens per-account sums.
**Swift SDK + example app**:
- `bindShielded` takes `accounts: [UInt32] = [0]`; passes the
C buffer through.
- All shielded send wrappers take `account: UInt32 = 0`.
- `shieldedDefaultAddress(walletId:account:)` per-account.
- `ShieldedService.dbPath(for:network:)` reverts to per-network
(the per-(wallet,network) workaround is no longer needed —
notes are scoped at the column level inside the store).
## Persistence (deferred)
This commit ships the multi-account refactor with notes still
held only in memory (`Vec<ShieldedNote>` on `SubwalletState`).
Cold start = re-sync from genesis, same as before. SwiftData
persistence (`PersistentShieldedNote` keyed by
`(walletId, accountIndex, position)` driven through the
existing changeset model) is the planned next step but is its
own substantial slice — splitting it out keeps this commit
reviewable.
## Tests
11 existing shielded unit tests pass. New
`test_save_and_retrieve_notes`, `test_mark_spent`,
`test_sync_state_per_subwallet` cover SubwalletId scoping in
the in-memory store. iOS xcframework + SwiftExampleApp rebuild
green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…bind
Adds the Rust-side persistence wiring for the multi-account
shielded refactor. Sync passes and spend operations now emit
`ShieldedChangeSet` deltas to the wallet's persister, and
`bind_shielded` rehydrates the in-memory `SubwalletState` from
the persister's `ClientStartState` snapshot before kicking off
the first sync.
This is the Rust half of the deferred persistence slice; the
FFI callback that surfaces these changesets to the host
(SwiftData on iOS) and the matching Swift handler land in
follow-up commits in this same PR.
## What changes
**`changeset/`**:
- `ShieldedChangeSet` — per-`SubwalletId` `notes_saved`,
`nullifiers_spent`, `synced_indices`, `nullifier_checkpoints`.
Implements `Merge` (LWW on watermarks; append on note vecs).
Carried as a new `Option<ShieldedChangeSet>` field on
`PlatformWalletChangeSet` (feature-gated `shielded`).
- `ShieldedSyncStartState` — restore snapshot keyed by
`SubwalletId`. Lives on `ClientStartState.shielded`.
- Existing destructure sites in `apply.rs`, `manager/load.rs`,
`manager/wallet_lifecycle.rs`, and `platform_wallet.rs` updated
to drop the new field with a `#[cfg(feature = "shielded")]` arm.
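The `Merge` semantics described above (LWW on watermarks, append on note vecs) reduce to roughly this, with fields simplified to placeholders:

```rust
#[derive(Default, Clone)]
struct ShieldedChangeSet {
    notes_saved: Vec<u64>,     // placeholder for serialized notes
    synced_index: Option<u64>, // sync watermark
}

impl ShieldedChangeSet {
    fn merge(&mut self, other: Self) {
        // Append on note vecs.
        self.notes_saved.extend(other.notes_saved);
        // Last-writer-wins on watermarks; an absent value doesn't clobber.
        if other.synced_index.is_some() {
            self.synced_index = other.synced_index;
        }
    }
}
```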
**`wallet/shielded/mod.rs`**:
- `ShieldedWallet` grows an optional `WalletPersister` handle and a
`set_persister(...)` setter.
- New `queue_shielded_changeset(cs)` helper that wraps a
`ShieldedChangeSet` in a `PlatformWalletChangeSet` and pushes
it to the persister. No-op when no persister is attached.
- New `restore_from_snapshot(&ShieldedSyncStartState)` consumes
per-subwallet entries that match `(self.wallet_id, account)`
for any bound account, saves their notes via `save_note`, marks
spent ones, and replays the sync watermarks.
**`wallet/shielded/sync.rs`**:
- `sync_notes` accumulates a `ShieldedChangeSet` as it saves
decrypted notes / advances watermarks, then queues it on the
persister at the end of the pass (after dropping the store
write lock so the persister callback isn't nested under it).
- `check_nullifiers` does the same for spent marks +
nullifier checkpoints.
**`wallet/shielded/operations.rs`**:
- `mark_notes_spent` queues a changeset for each freshly-marked
nullifier so spend events propagate to durable storage
immediately rather than waiting for the next nullifier-sync
pass to rediscover them.
**`wallet/platform_wallet.rs`**:
- `bind_shielded` attaches the wallet's persister to the
`ShieldedWallet`, then calls `restore_from_snapshot` against
`self.persister.load()?.shielded` so the freshly-bound wallet
starts pre-populated with whatever the host already has on
disk for `(self.wallet_id, account)` for each requested account.
## Tests
11 existing shielded unit tests still pass. Clippy clean. The
load-side end-to-end flow ("host writes → cold start →
restore_from_snapshot → spend works") is exercised once the FFI
+ SwiftData sides land in the next commits.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
thepastaclaw left a comment
Code Review
Re-review at 3ffce1a confirms the prior blocking unshield address-format bug is fixed and most earlier suggestions (checked_add nonce, fee-reserve early return, trim-before-parse, bech32m diagnostic, fire-and-forget warm-up) are addressed. One new blocking issue stands: shielded spend paths (unshield/transfer/withdraw) call mark_notes_spent immediately after broadcast() — which only submits, not confirms — so a rejected/dropped transition strands notes locally. Seven remaining suggestions/nitpicks: the &'static signer transmute (convergent across all reviewers), TOCTOU on per-address nonces in concurrent shields, TOCTOU on note selection in concurrent spends, missing Rust unit coverage on shielded_shield_from_account, drift from rs-sdk's canonical address-nonce helper, an amount==0 edge case, and a mut→const SignerHandle ergonomic.
Reviewed commit: 3ffce1a
🔴 1 blocking | 🟡 5 suggestion(s) | 💬 2 nitpick(s)
2 additional findings
🔴 blocking: Shielded spends mark notes spent on broadcast acceptance, before chain confirmation
packages/rs-platform-wallet/src/wallet/shielded/operations.rs (lines 355-362)
unshield, transfer, and withdraw all call mark_notes_spent(&selected_notes) immediately after state_transition.broadcast(...) returns Ok (operations.rs:355-362, 425-431, 498-504). BroadcastStateTransition::broadcast (rs-sdk/src/platform/transition/broadcast.rs:36-93) only submits the transition — wait_for_response and broadcast_and_wait are separately exposed for confirmation. A transition that is mempool-accepted and later rejected, dropped, or replaced will silently flip those notes to spent in the local ShieldedStore, which has no reconciliation path to clear a false spend. The user then sees the funds permanently unavailable locally even though the chain never consumed them. Fix is to either (a) use broadcast_and_wait here so notes are only marked once the proof result is observed, or (b) add a mark_notes_pending step under a write lock before broadcast and only promote to spent on observed inclusion (with a clearing path on rejection). The same write-after-broadcast shape interacts with finding #4 below — both are root-caused in the same fetch→broadcast→mutate sequence.
🟡 suggestion: Note selection → broadcast → mark_spent is non-atomic across concurrent spends
packages/rs-platform-wallet/src/wallet/shielded/operations.rs (lines 312-511)
unshield, transfer, and withdraw each (1) take a read lock on the store to select unspent notes, (2) build+broadcast the transition, then (3) take a write lock to mark notes spent. The read lock is released before broadcast, so two concurrent calls (user-initiated send + retry, or two flows from the UI) can both observe the same notes as unspent. The first transition wins; the second's nullifiers are already on chain, drive-abci rejects the duplicate, and the user sees a generic broadcast error after 30 s of proof work. Same shape and remedy as the shield-side TOCTOU above — single-flight at the wallet level (or a tentative mark_in_flight step under a write lock before broadcast) prevents the failure. This also dovetails with finding #1: a mark_in_flight / mark_pending step would naturally provide the missing reconciliation hook.
🟡 suggestion: Concurrent shields on the same wallet TOCTOU on the fetched address nonces
packages/rs-platform-wallet/src/wallet/shielded/operations.rs (lines 117-180)
`ShieldedWallet::shield` reads each input's last-used nonce via `AddressInfo::fetch_many`, locally increments with `checked_add(1)`, and hands the result to `build_shield_transition`. The `shielded` slot uses `RwLock` and only a read guard is taken, so two overlapping `shield` invocations for the same wallet (UI double-tap, retry while the first proof is still building, two host threads racing) both observe the same `info.nonce`, both build with `info.nonce + 1`, and the second to land at drive-abci is rejected with a nonce conflict — but only after each has spent ~30 s on a Halo 2 proof. Not a memory-safety or consensus issue, but the user-visible failure is opaque and expensive. Either serialise shield-class operations on a per-wallet `tokio::sync::Mutex` taken across fetch+build+broadcast, or document at the FFI boundary (`shielded_send.rs:251-299`) that hosts must enforce single-flight per wallet.
- [SUGGESTION] lines 312-511: Note selection → broadcast → mark_spent is non-atomic across concurrent spends
`unshield`, `transfer`, and `withdraw` each (1) take a read lock on the store to select unspent notes, (2) build+broadcast the transition, then (3) take a write lock to mark notes spent. The read lock is released before broadcast, so two concurrent calls (user-initiated send + retry, or two flows from the UI) can both observe the same notes as unspent. The first transition wins; the second's nullifiers are already on chain, drive-abci rejects the duplicate, and the user sees a generic broadcast error after 30 s of proof work. Same shape and remedy as the shield-side TOCTOU above — single-flight at the wallet level (or a tentative `mark_in_flight` step under a write lock before broadcast) prevents the failure. This also dovetails with finding #1: a `mark_in_flight` / `mark_pending` step would naturally provide the missing reconciliation hook.
- [SUGGESTION] lines 117-180: Reimplements rs-sdk's canonical address-nonce fetch instead of reusing it
rs-sdk has `fetch_inputs_with_nonce`, `nonce_inc`, and `ensure_address_balance` in `packages/rs-sdk/src/platform/transition/address_inputs.rs` that encapsulate exactly this fetch-and-increment dance plus a hard balance check. They are `pub(crate)` today, so `platform-wallet` can't reach them directly, but a single-line visibility change would let this code re-use the canonical helpers. As written, the new shield path will silently drift from the SDK's behaviour — for example the SDK enforces a balance check while this implementation only `warn!`s on `info.balance < credits` (operations.rs:150-157).
In `packages/rs-platform-wallet-ffi/src/shielded_send.rs`:
- [SUGGESTION] lines 269-291: Use the established usize round-trip pattern instead of transmuting the signer borrow to &'static
`block_on_worker` requires `F: Send + 'static` (runtime.rs:49-56), and the new shield path satisfies that with `mem::transmute::<&VTableSigner, &'static VTableSigner>(...)` at lines 276-279. It is sound today only because `block_on_worker` is `rt.block_on(async move { rt.spawn(future).await.expect(...) })` — the calling thread parks until the spawned task completes, so the host-owned `SignerHandle` outlives the borrow. Any future change that lets `block_on_worker` return early — a shutdown `select!`, a timeout, a cancellation token, or replacing `.expect` with a `?`-style return — silently turns this into a use-after-free across the FFI boundary, since Swift's `KeychainSigner.deinit` is free to destroy the handle as soon as the FFI call returns. The same crate already solves the identical constraint without the lifetime fiction: `identity_top_up.rs:113-126` round-trips the pointer through `usize` and re-materializes `&VTableSigner` *inside* the spawned future (capturing only `Send + 'static` data). Aligning the shield path with that precedent removes the `unsafe { transmute }` from the FFI surface at zero behavioural cost.
In `packages/rs-platform-wallet/src/wallet/platform_wallet.rs`:
- [SUGGESTION] lines 480-627: shielded_shield_from_account selection rules have no Rust unit coverage
The selector carries non-trivial behaviour that directly determines whether shield broadcasts succeed: skipping leading addresses with `balance <= FEE_RESERVE_CREDITS` (only `>` is viable), reserving fee headroom only on input 0 via `balance.saturating_sub(FEE_RESERVE_CREDITS)`, walking BTreeMap key order so input 0 is the smallest-key entry, and accumulating until claim ≥ amount. None of this is covered by a focused Rust test, so a future refactor could regress any of these invariants — including reintroducing the original `viable_input_0` fall-through bug or the input-0 reserve regression — without tripping CI. Worth deterministic unit coverage against a synthetic `PlatformWalletInfo`/account: dust-first-address case, exact-reserve case (`balance == FEE_RESERVE`), single-address insufficient-with-reserve case, amount-equal-to-`(total_usable - FEE_RESERVE)`, and amount=0 (see #7).
| // SAFETY: the caller retains ownership of the signer handle | ||
| // and guarantees it outlives this call. We block until the | ||
| // worker future completes, so the `'static` lifetime we paint | ||
| // on the borrow does not actually outlive the host's handle. | ||
| // `VTableSigner` is `Send + Sync` per its `unsafe impl` in | ||
| // rs-sdk-ffi, so `&'static VTableSigner` is automatically | ||
| // `Send + 'static` — exactly what `block_on_worker` needs. | ||
| let address_signer: &'static VTableSigner = | ||
| std::mem::transmute::<&VTableSigner, &'static VTableSigner>( | ||
| &*(signer_address_handle as *const VTableSigner), | ||
| ); | ||
|
| // Run the proof on a worker thread (8 MB stack). Halo 2 circuit | ||
| // synthesis recurses past the ~512 KB iOS dispatch-thread stack | ||
| // and crashes with EXC_BAD_ACCESS at the first | ||
| // `synthesize(... measure(pass))` call when polled on the | ||
| // calling thread. | ||
| let result = block_on_worker(async move { | ||
| let prover = CachedOrchardProver::new(); | ||
| wallet | ||
| .shielded_shield_from_account(account_index, amount, address_signer, &prover) | ||
| .await | ||
| }); |
🟡 Suggestion: Use the established usize round-trip pattern instead of transmuting the signer borrow to &'static
block_on_worker requires F: Send + 'static (runtime.rs:49-56), and the new shield path satisfies that with mem::transmute::<&VTableSigner, &'static VTableSigner>(...) at lines 276-279. It is sound today only because block_on_worker is rt.block_on(async move { rt.spawn(future).await.expect(...) }) — the calling thread parks until the spawned task completes, so the host-owned SignerHandle outlives the borrow. Any future change that lets block_on_worker return early — a shutdown select!, a timeout, a cancellation token, or replacing .expect with a ?-style return — silently turns this into a use-after-free across the FFI boundary, since Swift's KeychainSigner.deinit is free to destroy the handle as soon as the FFI call returns. The same crate already solves the identical constraint without the lifetime fiction: identity_top_up.rs:113-126 round-trips the pointer through usize and re-materializes &VTableSigner inside the spawned future (capturing only Send + 'static data). Aligning the shield path with that precedent removes the unsafe { transmute } from the FFI surface at zero behavioural cost.
💡 Suggested change
| // SAFETY: the caller retains ownership of the signer handle | |
| // and guarantees it outlives this call. We block until the | |
| // worker future completes, so the `'static` lifetime we paint | |
| // on the borrow does not actually outlive the host's handle. | |
| // `VTableSigner` is `Send + Sync` per its `unsafe impl` in | |
| // rs-sdk-ffi, so `&'static VTableSigner` is automatically | |
| // `Send + 'static` — exactly what `block_on_worker` needs. | |
| let address_signer: &'static VTableSigner = | |
| std::mem::transmute::<&VTableSigner, &'static VTableSigner>( | |
| &*(signer_address_handle as *const VTableSigner), | |
| ); | |
| // Run the proof on a worker thread (8 MB stack). Halo 2 circuit | |
| // synthesis recurses past the ~512 KB iOS dispatch-thread stack | |
| // and crashes with EXC_BAD_ACCESS at the first | |
| // `synthesize(... measure(pass))` call when polled on the | |
| // calling thread. | |
| let result = block_on_worker(async move { | |
| let prover = CachedOrchardProver::new(); | |
| wallet | |
| .shielded_shield_from_account(account_index, amount, address_signer, &prover) | |
| .await | |
| }); | |
| // Round-trip the signer pointer through `usize` so the spawned | |
| // future's capture is `Send + 'static` (the raw pointer is `!Send`, | |
| // but `usize` is). The underlying `Inner::Callback { ctx, vtable }` | |
| // is `Send + Sync` — see the unsafe impls in `rs-sdk-ffi/src/signer.rs`. | |
| let signer_addr = signer_address_handle as usize; | |
| // Run the proof on a worker thread (8 MB stack). Halo 2 circuit | |
| // synthesis recurses past the ~512 KB iOS dispatch-thread stack | |
| // and crashes with EXC_BAD_ACCESS at the first | |
| // `synthesize(... measure(pass))` call when polled on the | |
| // calling thread. | |
| let result = block_on_worker(async move { | |
| let address_signer: &VTableSigner = unsafe { &*(signer_addr as *const VTableSigner) }; | |
| let prover = CachedOrchardProver::new(); | |
| wallet | |
| .shielded_shield_from_account(account_index, amount, address_signer, &prover) | |
| .await | |
| }); |
source: ['claude', 'codex']
| // Fetch the current address nonces from Platform. Each | ||
| // input address has a per-address nonce that the next | ||
| // state transition must use as `last_used + 1`. | ||
| // `AddressInfo::fetch_many` returns the last-used nonce | ||
| // (and current balance) per address; we increment it. | ||
| // Without this the broadcast was rejected by drive-abci | ||
| // because every shield transition tried to use nonce 0. | ||
| use dash_sdk::platform::FetchMany; | ||
| use dash_sdk::query_types::AddressInfo; | ||
| use std::collections::BTreeSet; | ||
|
| let address_set: BTreeSet<PlatformAddress> = inputs.keys().copied().collect(); | ||
| let infos = AddressInfo::fetch_many(&self.sdk, address_set) | ||
| .await | ||
| .map_err(|e| { | ||
| PlatformWalletError::ShieldedBuildError(format!("fetch input nonces: {e}")) | ||
| })?; | ||
|
| let mut inputs_with_nonce: BTreeMap<PlatformAddress, (u32, Credits)> = BTreeMap::new(); | ||
| for (addr, credits) in inputs { | ||
| let info = infos | ||
| .get(&addr) | ||
| .and_then(|opt| opt.as_ref()) | ||
| .ok_or_else(|| { | ||
| PlatformWalletError::ShieldedBuildError(format!( | ||
| "input address not found on platform: {:?}", | ||
| addr | ||
| )) | ||
| })?; | ||
| // Surface a per-input diagnostic so the host can see what | ||
| // we're claiming vs what Platform actually reports — | ||
| // mismatches are the typical root cause of | ||
| // `AddressesNotEnoughFundsError` on shield broadcast. | ||
| if info.balance < credits { | ||
| warn!( | ||
| address = ?addr, | ||
| claimed_credits = credits, | ||
| platform_balance = info.balance, | ||
| platform_nonce = info.nonce, | ||
| "Shield input claims more credits than Platform reports — broadcast will likely fail" | ||
| ); | ||
| } else { | ||
| info!( | ||
| address = ?addr, | ||
| claimed_credits = credits, | ||
| platform_balance = info.balance, | ||
| platform_nonce = info.nonce, | ||
| "Shield input" | ||
| ); | ||
| } | ||
| // `AddressNonce` is `u32`; `info.nonce + 1` would panic in | ||
| // debug and wrap in release once an address reaches the | ||
| // ceiling. drive-abci treats `u32::MAX` as exhausted, so a | ||
| // wrap submits nonce 0 and gets rejected as a replay | ||
| // *after* the wallet has already spent ~30 s building the | ||
| // Halo 2 proof. Bail loudly here instead. | ||
| let next_nonce = info.nonce.checked_add(1).ok_or_else(|| { | ||
| PlatformWalletError::ShieldedBuildError(format!( | ||
| "input address nonce exhausted on platform: {:?}", | ||
| addr | ||
| )) | ||
| })?; | ||
| inputs_with_nonce.insert(addr, (next_nonce, credits)); | ||
| } |
🟡 Suggestion: Concurrent shields on the same wallet TOCTOU on the fetched address nonces
ShieldedWallet::shield reads each input's last-used nonce via AddressInfo::fetch_many, locally increments with checked_add(1), and hands the result to build_shield_transition. The shielded slot uses RwLock and only a read guard is taken, so two overlapping shield invocations for the same wallet (UI double-tap, retry while the first proof is still building, two host threads racing) both observe the same info.nonce, both build with info.nonce + 1, and the second to land at drive-abci is rejected with a nonce conflict — but only after each has spent ~30 s on a Halo 2 proof. Not a memory-safety or consensus issue, but the user-visible failure is opaque and expensive. Either serialise shield-class operations on a per-wallet tokio::sync::Mutex taken across fetch+build+broadcast, or document at the FFI boundary (shielded_send.rs:251-299) that hosts must enforce single-flight per wallet.
source: ['claude']
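The single-flight remedy can be sketched with a per-wallet lock held across fetch+build+broadcast. The sketch below uses `std::sync::Mutex::try_lock` so it stays dependency-free; the real async path would hold a `tokio::sync::Mutex` instead, and `ShieldGuard` is a hypothetical name:

```rust
use std::sync::{Mutex, MutexGuard};

/// Hypothetical per-wallet guard for shield-class operations.
struct ShieldGuard {
    op_lock: Mutex<()>,
}

impl ShieldGuard {
    /// Rejects immediately if another shield/transfer is in flight,
    /// instead of letting two calls race on the same address nonce.
    fn begin_shield(&self) -> Result<MutexGuard<'_, ()>, &'static str> {
        self.op_lock
            .try_lock()
            .map_err(|_| "another shielded operation is already in flight for this wallet")
    }
}
```

Holding the returned guard for the whole fetch, build, and broadcast sequence makes the nonce read-and-increment effectively atomic per wallet; queueing (`lock().await`) instead of rejecting is the other reasonable policy.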
| pub async fn shielded_shield_from_account<S, P>( | ||
| &self, | ||
| account_index: u32, | ||
| amount: u64, | ||
| signer: &S, | ||
| prover: P, | ||
| ) -> Result<(), PlatformWalletError> | ||
| where | ||
| S: dpp::identity::signer::Signer<dpp::address_funds::PlatformAddress> + Send + Sync, | ||
| P: dpp::shielded::builder::OrchardProver, | ||
| { | ||
| // The shield transition uses `DeductFromInput(0)` as its fee | ||
| // strategy. drive-abci interprets that as "after each input | ||
| // address has had its `claim` deducted, take the fee out of | ||
| // input 0's *remaining* balance" (see | ||
| // `deduct_fee_from_outputs_or_remaining_balance_of_inputs_v0` | ||
| // in rs-dpp). "Input 0" is the smallest-key entry of the | ||
| // BTreeMap we hand to the builder. Therefore: | ||
| // | ||
| // * we must NOT claim each input's full balance — claiming | ||
| // `balance` leaves `remaining = 0`, and the fee | ||
| // deduction has nothing to bite into. | ||
| // * we must reserve at least `FEE_RESERVE_CREDITS` of | ||
| // unclaimed balance specifically on input 0 (the | ||
| // BTreeMap-smallest address). | ||
| // | ||
| // Empty-mempool fees on Type 15 transitions land at ~20M | ||
| // credits (~0.0002 DASH). Reserve 1e9 credits (0.01 DASH) — | ||
| // 50× headroom, still trivial relative to typical balances. | ||
| const FEE_RESERVE_CREDITS: u64 = 1_000_000_000; | ||
|
| // Build the inputs map under the wallet-manager read lock, | ||
| // then drop the lock before re-entering shielded so the | ||
| // guards don't nest unnecessarily. | ||
| let inputs: std::collections::BTreeMap< | ||
| dpp::address_funds::PlatformAddress, | ||
| dpp::fee::Credits, | ||
| > = { | ||
| let wm = self.wallet_manager.read().await; | ||
| let info = wm | ||
| .get_wallet_info(&self.wallet_id) | ||
| .ok_or_else(|| PlatformWalletError::WalletNotFound(hex::encode(self.wallet_id)))?; | ||
| let account = info | ||
| .core_wallet | ||
| .platform_payment_managed_account_at_index(account_index) | ||
| .ok_or_else(|| { | ||
| PlatformWalletError::AddressOperation(format!( | ||
| "no platform payment account at index {account_index}" | ||
| )) | ||
| })?; | ||
|
| // Collect (address, balance) for every funded address, | ||
| // sorted by address bytes — that determines BTreeMap | ||
| // key order downstream and therefore which input ends | ||
| // up at index 0. | ||
| let mut candidates: Vec<(dpp::address_funds::PlatformAddress, u64)> = account | ||
| .addresses | ||
| .addresses | ||
| .values() | ||
| .filter_map(|addr_info| { | ||
| let p2pkh = | ||
| key_wallet::PlatformP2PKHAddress::from_address(&addr_info.address).ok()?; | ||
| let balance = account.address_credit_balance(&p2pkh); | ||
| if balance == 0 { | ||
| None | ||
| } else { | ||
| Some(( | ||
| dpp::address_funds::PlatformAddress::P2pkh(p2pkh.to_bytes()), | ||
| balance, | ||
| )) | ||
| } | ||
| }) | ||
| .collect(); | ||
| candidates.sort_by_key(|(addr, _)| *addr); | ||
|
| // The address that will be the bundle's `input_0` must | ||
| // have balance > FEE_RESERVE so we can claim at least 1 | ||
| // credit while leaving the reserve untouched. Skip any | ||
| // leading dust address that can't satisfy that — the | ||
| // next address up will become input 0 instead. If | ||
| // every funded address is below the reserve, fail fast: | ||
| // the network would reject the broadcast on the | ||
| // boundary anyway, only after we've spent ~30 s | ||
| // building the Halo 2 proof. | ||
| let Some(viable_input_0) = candidates | ||
| .iter() | ||
| .position(|(_, balance)| *balance > FEE_RESERVE_CREDITS) | ||
| else { | ||
| let total: u64 = candidates.iter().map(|(_, b)| b).sum(); | ||
| return Err(PlatformWalletError::ShieldedInsufficientBalance { | ||
| available: total, | ||
| required: amount.saturating_add(FEE_RESERVE_CREDITS), | ||
| }); | ||
| }; | ||
| let usable: &[(dpp::address_funds::PlatformAddress, u64)] = | ||
| &candidates[viable_input_0..]; | ||
|
| let total_usable: u64 = usable.iter().map(|(_, b)| b).sum(); | ||
| let needed = amount.saturating_add(FEE_RESERVE_CREDITS); | ||
| if total_usable < needed { | ||
| return Err(PlatformWalletError::ShieldedInsufficientBalance { | ||
| available: total_usable, | ||
| required: needed, | ||
| }); | ||
| } | ||
|
| // Walk usable inputs in BTreeMap order, claiming only | ||
| // what's needed to cover `amount`. The fee reserve is | ||
| // taken off input 0's max claim so its post-claim | ||
| // remaining stays ≥ FEE_RESERVE_CREDITS for the | ||
| // network's `DeductFromInput(0)` step. | ||
| let mut chosen: std::collections::BTreeMap< | ||
| dpp::address_funds::PlatformAddress, | ||
| dpp::fee::Credits, | ||
| > = std::collections::BTreeMap::new(); | ||
| let mut accumulated_claim: u64 = 0; | ||
| for (i, (addr, balance)) in usable.iter().enumerate() { | ||
| if accumulated_claim >= amount { | ||
| break; | ||
| } | ||
| let max_claim = if i == 0 { | ||
| balance.saturating_sub(FEE_RESERVE_CREDITS) | ||
| } else { | ||
| *balance | ||
| }; | ||
| let still_need = amount - accumulated_claim; | ||
| let claim = max_claim.min(still_need); | ||
| if claim > 0 { | ||
| chosen.insert(*addr, claim); | ||
| accumulated_claim = accumulated_claim.saturating_add(claim); | ||
| } | ||
| } | ||
|
| if accumulated_claim < amount { | ||
| return Err(PlatformWalletError::ShieldedInsufficientBalance { | ||
| available: accumulated_claim, | ||
| required: amount, | ||
| }); | ||
| } | ||
| chosen | ||
| }; | ||
|
| let guard = self.shielded.read().await; | ||
| let shielded = guard | ||
| .as_ref() | ||
| .ok_or(PlatformWalletError::ShieldedNotBound)?; | ||
| shielded.shield(inputs, amount, signer, &prover).await | ||
| } |
🟡 Suggestion: shielded_shield_from_account selection rules have no Rust unit coverage
The selector carries non-trivial behaviour that directly determines whether shield broadcasts succeed: skipping leading addresses with balance <= FEE_RESERVE_CREDITS (only > is viable), reserving fee headroom only on input 0 via balance.saturating_sub(FEE_RESERVE_CREDITS), walking BTreeMap key order so input 0 is the smallest-key entry, and accumulating until claim ≥ amount. None of this is covered by a focused Rust test, so a future refactor could regress any of these invariants — including reintroducing the original viable_input_0 fall-through bug or the input-0 reserve regression — without tripping CI. Worth deterministic unit coverage against a synthetic PlatformWalletInfo/account: dust-first-address case, exact-reserve case (balance == FEE_RESERVE), single-address insufficient-with-reserve case, amount-equal-to-(total_usable - FEE_RESERVE), and amount=0 (see #7).
source: ['claude']
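To make this concrete, the selector's invariants can be pinned by a standalone model. The function below re-implements the selection rules over plain `(key, balance)` tuples rather than calling the real method, so it is illustrative only; in-tree coverage would drive `shielded_shield_from_account` through a synthetic account instead:

```rust
const FEE_RESERVE_CREDITS: u64 = 1_000_000_000;

/// Model of the selection rules: skip leading addresses whose balance
/// is not strictly greater than the reserve, reserve fee headroom only
/// on input 0, and accumulate claims in key order until `amount` is met.
fn select_inputs(
    mut candidates: Vec<(u8, u64)>,
    amount: u64,
) -> Result<Vec<(u8, u64)>, ()> {
    candidates.sort_by_key(|(addr, _)| *addr);
    let start = candidates
        .iter()
        .position(|(_, balance)| *balance > FEE_RESERVE_CREDITS)
        .ok_or(())?;
    let usable = &candidates[start..];
    let mut chosen = Vec::new();
    let mut accumulated: u64 = 0;
    for (i, (addr, balance)) in usable.iter().enumerate() {
        if accumulated >= amount {
            break;
        }
        // Only input 0 keeps FEE_RESERVE_CREDITS unclaimed.
        let max_claim = if i == 0 {
            balance.saturating_sub(FEE_RESERVE_CREDITS)
        } else {
            *balance
        };
        let claim = max_claim.min(amount - accumulated);
        if claim > 0 {
            chosen.push((*addr, claim));
            accumulated += claim;
        }
    }
    if accumulated < amount {
        return Err(());
    }
    Ok(chosen)
}
```

Note that amount = 0 yields `Ok(vec![])` in this model, matching the empty-map fall-through flagged in the amount == 0 nitpick on this file.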
| // Fetch the current address nonces from Platform. Each | ||
| // input address has a per-address nonce that the next | ||
| // state transition must use as `last_used + 1`. | ||
| // `AddressInfo::fetch_many` returns the last-used nonce | ||
| // (and current balance) per address; we increment it. | ||
| // Without this the broadcast was rejected by drive-abci | ||
| // because every shield transition tried to use nonce 0. | ||
| use dash_sdk::platform::FetchMany; | ||
| use dash_sdk::query_types::AddressInfo; | ||
| use std::collections::BTreeSet; | ||
|
| let address_set: BTreeSet<PlatformAddress> = inputs.keys().copied().collect(); | ||
| let infos = AddressInfo::fetch_many(&self.sdk, address_set) | ||
| .await | ||
| .map_err(|e| { | ||
| PlatformWalletError::ShieldedBuildError(format!("fetch input nonces: {e}")) | ||
| })?; | ||
|
| let mut inputs_with_nonce: BTreeMap<PlatformAddress, (u32, Credits)> = BTreeMap::new(); | ||
| for (addr, credits) in inputs { | ||
| let info = infos | ||
| .get(&addr) | ||
| .and_then(|opt| opt.as_ref()) | ||
| .ok_or_else(|| { | ||
| PlatformWalletError::ShieldedBuildError(format!( | ||
| "input address not found on platform: {:?}", | ||
| addr | ||
| )) | ||
| })?; | ||
| // Surface a per-input diagnostic so the host can see what | ||
| // we're claiming vs what Platform actually reports — | ||
| // mismatches are the typical root cause of | ||
| // `AddressesNotEnoughFundsError` on shield broadcast. | ||
| if info.balance < credits { | ||
| warn!( | ||
| address = ?addr, | ||
| claimed_credits = credits, | ||
| platform_balance = info.balance, | ||
| platform_nonce = info.nonce, | ||
| "Shield input claims more credits than Platform reports — broadcast will likely fail" | ||
| ); | ||
| } else { | ||
| info!( | ||
| address = ?addr, | ||
| claimed_credits = credits, | ||
| platform_balance = info.balance, | ||
| platform_nonce = info.nonce, | ||
| "Shield input" | ||
| ); | ||
| } | ||
| // `AddressNonce` is `u32`; `info.nonce + 1` would panic in | ||
| // debug and wrap in release once an address reaches the | ||
| // ceiling. drive-abci treats `u32::MAX` as exhausted, so a | ||
| // wrap submits nonce 0 and gets rejected as a replay | ||
| // *after* the wallet has already spent ~30 s building the | ||
| // Halo 2 proof. Bail loudly here instead. | ||
| let next_nonce = info.nonce.checked_add(1).ok_or_else(|| { | ||
| PlatformWalletError::ShieldedBuildError(format!( | ||
| "input address nonce exhausted on platform: {:?}", | ||
| addr | ||
| )) | ||
| })?; | ||
| inputs_with_nonce.insert(addr, (next_nonce, credits)); | ||
| } |
🟡 Suggestion: Reimplements rs-sdk's canonical address-nonce fetch instead of reusing it
rs-sdk has fetch_inputs_with_nonce, nonce_inc, and ensure_address_balance in packages/rs-sdk/src/platform/transition/address_inputs.rs that encapsulate exactly this fetch-and-increment dance plus a hard balance check. They are pub(crate) today, so platform-wallet can't reach them directly, but a single-line visibility change would let this code re-use the canonical helpers. As written, the new shield path will silently drift from the SDK's behaviour — for example the SDK enforces a balance check while this implementation only warn!s on info.balance < credits (operations.rs:150-157).
source: ['claude']
| let mut accumulated_claim: u64 = 0; | ||
| for (i, (addr, balance)) in usable.iter().enumerate() { | ||
| if accumulated_claim >= amount { | ||
| break; | ||
| } | ||
| let max_claim = if i == 0 { | ||
| balance.saturating_sub(FEE_RESERVE_CREDITS) | ||
| } else { | ||
| *balance | ||
| }; | ||
| let still_need = amount - accumulated_claim; | ||
| let claim = max_claim.min(still_need); | ||
| if claim > 0 { | ||
| chosen.insert(*addr, claim); | ||
| accumulated_claim = accumulated_claim.saturating_add(claim); | ||
| } | ||
| } | ||
|
| if accumulated_claim < amount { | ||
| return Err(PlatformWalletError::ShieldedInsufficientBalance { | ||
| available: accumulated_claim, | ||
| required: amount, | ||
| }); | ||
| } |
💬 Nitpick: amount == 0 produces an empty inputs map and an opaque downstream failure
With amount = 0 and total_usable >= FEE_RESERVE_CREDITS, both pre-loop checks pass; the loop body's first if accumulated_claim >= amount { break; } fires immediately, leaving chosen empty. The post-loop accumulated_claim < amount check is 0 < 0 == false, so the empty map is handed to shielded.shield(...), which AddressInfo::fetch_manys an empty set and then surfaces an opaque downstream failure after the lock dance. Cheaper to early-return ShieldedBuildError("amount must be > 0") (or treat amount=0 as a no-op) at the top of shielded_shield_from_account. Edge case in practice — the UI guards against zero — but worth a defensive check at the library boundary.
source: ['claude']
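A guard of roughly this shape at the top of `shielded_shield_from_account` would fail fast. The free function and `String` error below are placeholders; the real method would return `PlatformWalletError::ShieldedBuildError` (or treat zero as a no-op):

```rust
/// Hypothetical early guard; the real code would map this onto
/// PlatformWalletError::ShieldedBuildError before any lock is taken
/// or any proof work starts.
fn validate_shield_amount(amount: u64) -> Result<(), String> {
    if amount == 0 {
        return Err("shield amount must be > 0".to_string());
    }
    Ok(())
}
```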
| #[no_mangle] | ||
| pub unsafe extern "C" fn platform_wallet_manager_shielded_shield( | ||
| handle: Handle, | ||
| wallet_id_bytes: *const u8, | ||
| account_index: u32, | ||
| amount: u64, | ||
| signer_address_handle: *mut SignerHandle, | ||
| ) -> PlatformWalletFFIResult { | ||
| check_ptr!(wallet_id_bytes); | ||
| check_ptr!(signer_address_handle); |
💬 Nitpick: signer_address_handle is read-only — prefer `*const SignerHandle`
The header explicitly states the caller retains ownership and the function does not destroy the handle, and the body only reads through it. Taking *mut SignerHandle invites callers to think the function may mutate or take ownership; *const SignerHandle matches the actual contract. Pure FFI ergonomics — no behavioural change.
source: ['claude']
Actionable comments posted: 4
♻️ Duplicate comments (4)
packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift (2)
273-291: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Block arbitrary Orchard recipients here until this flow can actually honor them.

This branch ignores `recipientAddress` and calls `shieldedShield(...)`, which only shields into the bound wallet's default Orchard address. Right now the UI can report success for a send to someone else's Orchard address even though the credits stay in the sender's own shielded pool.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift` around lines 273 - 291, The platformToShielded case is ignoring recipientAddress and always calls walletManager.shieldedShield(...) which only deposits into the wallet's own default Orchard pool; block/validate arbitrary external Orchard recipients by checking recipientAddress in the platformToShielded branch and short-circuiting (returning an error or setting error state) when recipientAddress is present and not the wallet's own bound Orchard address. Update the platformToShielded branch (referencing platformToShielded, recipientAddress, and shieldedShield) to validate the recipient before calling shieldedShield and ensure only shielding into the bound wallet's default address is allowed.
92-118: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Gate `canSend` on the active unit, and reject values that round down to zero.

Both parsers accept any positive `Double` and truncate, so inputs smaller than 1 duff / 1 credit still yield `0`, and `canSend` currently only checks `amountDuffs`. That enables shielded/platform submits with the wrong scale or a zero-credit amount.

♻️ Minimal fix

```diff
 var amountDuffs: UInt64? {
     guard let double = Double(amountString), double > 0 else { return nil }
-    return UInt64(double * 100_000_000)
+    let scaled = UInt64(double * 100_000_000)
+    return scaled > 0 ? scaled : nil
 }
@@
 var amountCredits: UInt64? {
     guard let double = Double(amountString), double > 0 else { return nil }
-    return UInt64(double * 100_000_000_000)
+    let scaled = UInt64(double * 100_000_000_000)
+    return scaled > 0 ? scaled : nil
 }
@@
 var canSend: Bool {
-    detectedFlow != nil && amountDuffs != nil && !isSending
+    let hasValidAmount: Bool
+    switch detectedFlow {
+    case .coreToCore:
+        hasValidAmount = amountDuffs != nil
+    case .platformToShielded, .shieldedToShielded, .shieldedToPlatform, .shieldedToCore:
+        hasValidAmount = amountCredits != nil
+    case nil:
+        hasValidAmount = false
+    }
+    return detectedFlow != nil && hasValidAmount && !isSending
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift` around lines 92 - 118, The canSend logic currently only checks amountDuffs and allows values that round to zero; update both amountDuffs and amountCredits so they return nil when the scaled UInt64 would be 0 (i.e. compute let value = UInt64(double * scale) and guard value > 0 else { return nil }), and change canSend to gate on the active unit by checking detectedFlow and requiring the correct non-nil amount (for flows platformToShielded/shieldedToShielded/shieldedToPlatform/shieldedToCore use amountCredits != nil, otherwise use amountDuffs != nil) while still ensuring !isSending. This uses the existing symbols amountDuffs, amountCredits, detectedFlow and canSend to locate and fix the code.

packages/rs-platform-wallet/src/wallet/platform_wallet.rs (1)
595-606: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Reject `amount == 0` before building a shield selection.

With zero here, the selector falls through to an empty `inputs` map and the failure happens much later in the shield builder/broadcast path. A small guard keeps the error legible for FFI/SDK callers that bypass the UI.

♻️ Minimal fix

```diff
 pub async fn shielded_shield_from_account<S, P>(
     &self,
     shielded_account: u32,
     payment_account: u32,
     amount: u64,
     signer: &S,
     prover: P,
 ) -> Result<(), PlatformWalletError>
 where
     S: dpp::identity::signer::Signer<dpp::address_funds::PlatformAddress> + Send + Sync,
     P: dpp::shielded::builder::OrchardProver,
 {
+    if amount == 0 {
+        return Err(PlatformWalletError::ShieldedBuildError(
+            "amount must be > 0".to_string(),
+        ));
+    }
+
     // The shield transition uses `DeductFromInput(0)` as its fee
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/rs-platform-wallet/src/wallet/platform_wallet.rs` around lines 595 - 606, In shielded_shield_from_account, add an early guard that rejects amount == 0 before any shield selection or builder work: check the amount at the top of the function (in pub async fn shielded_shield_from_account) and return a clear PlatformWalletError (e.g., InvalidAmount or a new variant indicating zero amount) so the function exits immediately instead of proceeding to the selector/inputs creation and later failing in the shield builder/broadcast path.

packages/rs-platform-wallet/src/wallet/shielded/operations.rs (1)
116-124: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Accept proof-of-absence nonce lookups for first-time shield inputs.

`AddressInfo::fetch_many` can return the requested address with no `AddressInfo` payload for a never-before-used address. This branch collapses that into `input address not found`, so the first shield from a freshly funded payment address fails instead of using nonce `1`.

♻️ Minimal fix

```diff
-let info = infos
-    .get(&addr)
-    .and_then(|opt| opt.as_ref())
-    .ok_or_else(|| {
-        PlatformWalletError::ShieldedBuildError(format!(
-            "input address not found on platform: {:?}",
-            addr
-        ))
-    })?;
-if info.balance < credits {
+let maybe_info = infos.get(&addr).ok_or_else(|| {
+    PlatformWalletError::ShieldedBuildError(format!(
+        "input address missing from nonce lookup: {:?}",
+        addr
+    ))
+})?;
+if let Some(info) = maybe_info.as_ref() {
+    if info.balance < credits {
+        warn!(
+            address = ?addr,
+            claimed_credits = credits,
+            platform_balance = info.balance,
+            platform_nonce = info.nonce,
+            "Shield input claims more credits than Platform reports — broadcast will likely fail"
+        );
+    } else {
+        info!(
+            address = ?addr,
+            claimed_credits = credits,
+            platform_balance = info.balance,
+            platform_nonce = info.nonce,
+            "Shield input"
+        );
+    }
+}

-let next_nonce = info.nonce.checked_add(1).ok_or_else(|| {
+let next_nonce = maybe_info
+    .as_ref()
+    .map(|info| info.nonce)
+    .unwrap_or(0)
+    .checked_add(1)
+    .ok_or_else(|| {
         PlatformWalletError::ShieldedBuildError(format!(
             "input address nonce exhausted on platform: {:?}",
             addr
         ))
     })?;
```

Dash SDK `AddressInfo::fetch_many` semantics for requested platform addresses that have never been used: does the response keep the key with `None`, and should the next nonce for the first state transition be `1`?

Also applies to: 147-153
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/rs-platform-wallet/src/wallet/shielded/operations.rs` around lines 116 - 124, The code treats a fetched entry of Some(None) from AddressInfo::fetch_many as a missing address and returns ShieldedBuildError; instead accept a present key with None payload and treat it as a first-time address with next nonce = 1. Modify the logic around infos.get(&addr) (where info is currently derived via .and_then(|opt| opt.as_ref()).ok_or_else(...)) to distinguish three cases: missing map entry -> error, Some(None) -> construct/use an AddressInfo-equivalent or set next_nonce = 1 for the shield input, and Some(Some(info)) -> use the existing info as before; apply the same change to the other similar branch (the code around the second occurrence that mirrors lines 147-153). Ensure any later code that reads info.nonce uses the computed next_nonce when info payload is None.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@packages/rs-platform-wallet/src/changeset/shielded_sync_start_state.rs`:
- Around line 28-29: The field last_synced_index in the ShieldedSyncStartState
struct cannot represent "never scanned" and should be made explicit (e.g.,
Option<u64> or a small enum) so cold-start vs already-scanned-0 is unambiguous;
update the struct definition to change last_synced_index: u64 ->
last_synced_index: Option<u64> (or an enum like Never/Index(u64)), update
constructors/builders that set or read last_synced_index, and adjust any
(de)serialization/serde derives and consumers that assume a raw u64 to handle
None/enum variants accordingly so the absent state is encoded/decoded
explicitly.
In `@packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- Around line 314-315: The call to mark_notes_spent currently returns an error
that bubbles up and can make a successful broadcast appear as a send failure;
change the call sites (the mark_notes_spent invocations in operations.rs) to
treat the write as best-effort: call self.mark_notes_spent(...).await and on
Err(e) do not return the error but log it (e.g., tracing::error! /
self.logger.error) with context like "failed to mark notes spent after
broadcast" so the function proceeds normally and relies on the next sync to heal
local state drift.
In `@packages/rs-platform-wallet/src/wallet/shielded/sync.rs`:
- Around line 201-205: Use a single authoritative resume index derived from
result.next_start_index for both the tree checkpoint and per-account watermark
to avoid drift: compute let resume_index = result.next_start_index as u32 (or an
appropriately typed variable) and pass resume_index to
store.checkpoint_tree(resume_index) and to calls to
set_last_synced_note_index(...) instead of using aligned_start +
result.total_notes_scanned; apply the same change to the similar block handling
lines ~246-255 so both checkpoint_tree and set_last_synced_note_index use the
same resume_index in both places.
In
`@packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerShieldedSync.swift`:
- Around line 60-65: The docstring for PlatformWalletManagerShieldedSync says
"accounts" must be non-empty and at most 64 entries, but the Swift
implementation only checks for emptiness; before calling the Rust FFI you should
also enforce the 64-account cap. In the PlatformWalletManagerShieldedSync
initializer / method that accepts the accounts array (referencing the accounts
parameter and the code paths at lines around 60–65 and 89–93), add a guard that
validates accounts.count <= 64 and return or throw the same error path used for
the empty check so the contract is upheld in Swift and invalid input never
crosses the FFI boundary.
---
Duplicate comments:
In `@packages/rs-platform-wallet/src/wallet/platform_wallet.rs`:
- Around line 595-606: In shielded_shield_from_account, add an early guard that
rejects amount == 0 before any shield selection or builder work: check the
amount at the top of the function (in pub async fn shielded_shield_from_account)
and return a clear PlatformWalletError (e.g., InvalidAmount or a new variant
indicating zero amount) so the function exits immediately instead of proceeding
to the selector/inputs creation and later failing in the shield
builder/broadcast path.
In `@packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- Around line 116-124: The code treats a fetched entry of Some(None) from
AddressInfo::fetch_many as a missing address and returns ShieldedBuildError;
instead accept a present key with None payload and treat it as a first-time
address with next nonce = 1. Modify the logic around infos.get(&addr) (where
info is currently derived via .and_then(|opt| opt.as_ref()).ok_or_else(...)) to
distinguish three cases: missing map entry -> error, Some(None) -> construct/use
an AddressInfo-equivalent or set next_nonce = 1 for the shield input, and
Some(Some(info)) -> use the existing info as before; apply the same change to
the other similar branch (the code around the second occurrence that mirrors
lines 147-153). Ensure any later code that reads info.nonce uses the computed
next_nonce when info payload is None.
In
`@packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift`:
- Around line 273-291: The platformToShielded case is ignoring recipientAddress
and always calls walletManager.shieldedShield(...) which only deposits into the
wallet's own default Orchard pool; block/validate arbitrary external Orchard
recipients by checking recipientAddress in the platformToShielded branch and
short-circuiting (returning an error or setting error state) when
recipientAddress is present and not the wallet's own bound Orchard address.
Update the platformToShielded branch (referencing platformToShielded,
recipientAddress, and shieldedShield) to validate the recipient before calling
shieldedShield and ensure only shielding into the bound wallet's default address
is allowed.
- Around line 92-118: The canSend logic currently only checks amountDuffs and
allows values that round to zero; update both amountDuffs and amountCredits so
they return nil when the scaled UInt64 would be 0 (i.e. compute let value =
UInt64(double * scale) and guard value > 0 else { return nil }), and change
canSend to gate on the active unit by checking detectedFlow and requiring the
correct non-nil amount (for flows
platformToShielded/shieldedToShielded/shieldedToPlatform/shieldedToCore use
amountCredits != nil, otherwise use amountDuffs != nil) while still ensuring
!isSending. This uses the existing symbols amountDuffs, amountCredits,
detectedFlow and canSend to locate and fix the code.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bbcc5df6-9a2b-4863-9677-4a9842a32775
📒 Files selected for processing (19)
- packages/rs-platform-wallet-ffi/src/shielded_send.rs
- packages/rs-platform-wallet-ffi/src/shielded_sync.rs
- packages/rs-platform-wallet/src/changeset/changeset.rs
- packages/rs-platform-wallet/src/changeset/client_start_state.rs
- packages/rs-platform-wallet/src/changeset/mod.rs
- packages/rs-platform-wallet/src/changeset/shielded_changeset.rs
- packages/rs-platform-wallet/src/changeset/shielded_sync_start_state.rs
- packages/rs-platform-wallet/src/manager/load.rs
- packages/rs-platform-wallet/src/manager/wallet_lifecycle.rs
- packages/rs-platform-wallet/src/wallet/apply.rs
- packages/rs-platform-wallet/src/wallet/platform_wallet.rs
- packages/rs-platform-wallet/src/wallet/shielded/file_store.rs
- packages/rs-platform-wallet/src/wallet/shielded/mod.rs
- packages/rs-platform-wallet/src/wallet/shielded/operations.rs
- packages/rs-platform-wallet/src/wallet/shielded/store.rs
- packages/rs-platform-wallet/src/wallet/shielded/sync.rs
- packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerShieldedSync.swift
- packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Services/ShieldedService.swift
- packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/ViewModels/SendViewModel.swift
```rust
/// Highest global note index that the subwallet has scanned.
pub last_synced_index: u64,
```
Encode the unsynced state explicitly.
`last_synced_index: u64` cannot represent "never scanned". That makes cold-start resume ambiguous between a fresh subwallet and one that has already processed note index 0, so consumers either need an undocumented sentinel or risk an off-by-one at the first note boundary.

Suggested shape

```diff
-pub last_synced_index: u64,
+pub last_synced_index: Option<u64>,
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
/// Highest global note index that the subwallet has scanned.
pub last_synced_index: Option<u64>,
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@packages/rs-platform-wallet/src/changeset/shielded_sync_start_state.rs`
around lines 28 - 29, The field last_synced_index in the ShieldedSyncStartState
struct cannot represent "never scanned" and should be made explicit (e.g.,
Option<u64> or a small enum) so cold-start vs already-scanned-0 is unambiguous;
update the struct definition to change last_synced_index: u64 ->
last_synced_index: Option<u64> (or an enum like Never/Index(u64)), update
constructors/builders that set or read last_synced_index, and adjust any
(de)serialization/serde derives and consumers that assume a raw u64 to handle
None/enum variants accordingly so the absent state is encoded/decoded
explicitly.
```rust
self.mark_notes_spent(id, &selected_notes).await?;
```
Don't surface local spent-state write failures as send failures after broadcast.
At these call sites the transition has already been broadcast. If `mark_notes_spent` fails afterward, bubbling that error out makes the API report a failed send even though the network accepted it, which invites duplicate retries. This should be best-effort bookkeeping plus logging, with the next sync pass healing any local drift.
♻️ Suggested pattern

```diff
-self.mark_notes_spent(id, &selected_notes).await?;
+if let Err(e) = self.mark_notes_spent(id, &selected_notes).await {
+    warn!(
+        account,
+        error = %e,
+        "Broadcast succeeded but local spent-state update failed; state will be repaired on the next sync"
+    );
+}
```

Also applies to: 378-379, 448-449
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@packages/rs-platform-wallet/src/wallet/shielded/operations.rs` around lines
314 - 315, The call to mark_notes_spent currently returns an error that bubbles
up and can make a successful broadcast appear as a send failure; change the call
sites (the mark_notes_spent invocations in operations.rs) to treat the write as
best-effort: call self.mark_notes_spent(...).await and on Err(e) do not return
the error but log it (e.g., tracing::error! / self.logger.error) with context
like "failed to mark notes spent after broadcast" so the function proceeds
normally and relies on the next sync to heal local state drift.
```swift
/// `accounts` is the list of ZIP-32 account indices to derive.
/// Pass `[0]` for the single-account default; pass
/// `[0, 1, …]` to bind multiple accounts up front. Each entry
/// produces an independent FVK / IVK / OVK / default address;
/// notes are scoped per-`(walletId, accountIndex)` inside the
/// store. Must be non-empty and at most 64 entries.
```
Enforce the documented 64-account cap before crossing the FFI boundary.
The docstring says `accounts` must be non-empty and at most 64 entries, but the implementation only enforces the empty case. That makes the Swift contract misleading and pushes a cheap validation failure into Rust instead of failing fast here.
💡 Proposed fix

```diff
 guard !accounts.isEmpty else {
     throw PlatformWalletError.invalidParameter(
         "accounts must be non-empty"
     )
 }
+guard accounts.count <= 64 else {
+    throw PlatformWalletError.invalidParameter(
+        "accounts must contain at most 64 entries"
+    )
+}
```

Also applies to: 89-93
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In
`@packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletManagerShieldedSync.swift`
around lines 60 - 65, The docstring for PlatformWalletManagerShieldedSync says
"accounts" must be non-empty and at most 64 entries, but the Swift
implementation only checks for emptiness; before calling the Rust FFI you should
also enforce the 64-account cap. In the PlatformWalletManagerShieldedSync
initializer / method that accepts the accounts array (referencing the accounts
parameter and the code paths at lines around 60–65 and 89–93), add a guard that
validates accounts.count <= 64 and return or throw the same error path used for
the empty check so the contract is upheld in Swift and invalid input never
crosses the FFI boundary.
…ed notes

Wires the Rust-side `ShieldedChangeSet` persister hook from the previous commit through the FFI to SwiftData, so decrypted shielded notes / nullifier-spent flags / per-subwallet sync watermarks survive across app launches. Cold start re-loads the state into the in-memory `ShieldedWallet` so spending and balance reads work without re-decrypting the chain.

## What changes

**rs-platform-wallet-ffi**:
- `shielded_persistence.rs` — new C-ABI types `ShieldedNoteFFI` / `ShieldedNullifierSpentFFI` / `ShieldedSyncedIndexFFI` / `ShieldedNullifierCheckpointFFI` for the persist path, and `ShieldedNoteRestoreFFI` / `ShieldedSubwalletSyncStateFFI` for the load path.
- `PersistenceCallbacks` grows four `on_persist_shielded_*_fn` fields and four `on_load_shielded_*_fn` / free pairs. Inlined function signatures (rather than `pub type` aliases) so cbindgen walks into the referenced struct definitions and emits their full field layout in the generated header.
- `FFIPersister::store` fans `changeset.shielded` out across the four persist callbacks. `FFIPersister::load` calls the two load callbacks and folds the results into `ClientStartState.shielded` keyed by `SubwalletId`.

**swift-sdk**:
- `PersistentShieldedNote` / `PersistentShieldedSyncState` SwiftData models. Notes keyed by `nullifier` (globally unique); sync states uniquely keyed by `(walletId, accountIndex)`. Both registered in `DashModelContainer.modelTypes`.
- `PlatformWalletPersistenceHandler` grows handler methods + trampolines for the four persist callbacks (upserts / spent-flag flips / watermark advances / nullifier-checkpoint upserts) and the two load callbacks (host-allocated arrays with deferred free under `ShieldedLoadAllocation` / `ShieldedSyncStateLoadAllocation`).
- `makeCallbacks()` wires every new callback into the `PersistenceCallbacks` struct handed to Rust.

## End-to-end flow

Per-spend / per-sync passes on the Rust side build a `ShieldedChangeSet` and queue it on the persister.
The FFI flushes that into the four typed callback batches, and the Swift handler upserts SwiftData rows. On cold start `bind_shielded` calls `persister.load()` which fires the load callbacks; the host streams every persisted row back as flat FFI arrays, Rust assembles a `ShieldedSyncStartState`, and `ShieldedWallet::restore_from_snapshot` rehydrates the in-memory `SubwalletState` before the first sync runs.

## Tests

Existing 11 shielded unit tests pass. iOS xcframework + the SwiftExampleApp build green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
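The C-ABI callback fan-out this commit describes can be sketched with a minimal function-pointer table. Everything here (`PersistCallbacks`, `on_persist_note`, the single-field payload) is an illustrative stand-in for the real `PersistenceCallbacks` surface, not its actual layout:

```rust
use std::os::raw::c_void;
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical C-ABI callback table; the host (e.g. Swift) fills this
// in, and Rust fans persisted records out through the function pointers.
#[repr(C)]
pub struct PersistCallbacks {
    pub context: *mut c_void,
    pub on_persist_note: unsafe extern "C" fn(context: *mut c_void, nullifier: u64),
}

// A toy host-side sink: counts how many notes were persisted.
static PERSISTED: AtomicU64 = AtomicU64::new(0);

unsafe extern "C" fn host_persist_note(_context: *mut c_void, _nullifier: u64) {
    PERSISTED.fetch_add(1, Ordering::SeqCst);
}

// Rust-side flush: one callback invocation per record in the batch.
fn flush_notes(callbacks: &PersistCallbacks, nullifiers: &[u64]) {
    for &n in nullifiers {
        unsafe { (callbacks.on_persist_note)(callbacks.context, n) };
    }
}

fn main() {
    let cbs = PersistCallbacks {
        context: std::ptr::null_mut(),
        on_persist_note: host_persist_note,
    };
    flush_notes(&cbs, &[1, 2, 3]);
    assert_eq!(PERSISTED.load(Ordering::SeqCst), 3);
}
```

Using inline function-pointer types in the struct (rather than named aliases) mirrors the cbindgen consideration mentioned above: the generator then walks into referenced struct definitions when emitting the header.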
…ync state

Adds two read-only browsers next to the existing "TXOs" / "Pending Inputs" / etc. rows in the Storage Explorer: "Shielded Notes" (per-(wallet, account) decrypted notes, spent/unspent filterable) and "Shielded Sync State" (per-subwallet `last_synced_index` + nullifier checkpoint). Both scoped to the active network via the `walletId` denorm on the row, matching the pattern `TxoStorageListView` uses.

Also wires the matching count entries into `loadCounts()` so the row counts on the Storage Explorer index page reflect the new tables.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`ShieldedService.bind(...)` now takes `accounts: [UInt32]` (default `[0]`); after a successful Rust-side `bindShielded` it populates `boundAccounts` and `addressesByAccount` by calling `shieldedDefaultAddress` per bound account. The legacy `orchardDisplayAddress` is preserved as the lowest-bound account's address so the existing single-account Receive sheet keeps working.

`AccountListView` grows a "Shielded" section that mirrors the existing Core / Platform Payment account rows. One row per bound ZIP-32 account showing `Shielded #N` plus the truncated bech32m address, driven by `shieldedService.boundAccounts` / `addressesByAccount`.

The whole-wallet "Shielded Balance" row on the balance card stays as-is for now since the FFI sync event still flattens balance to the wallet level; per-account balance breakdown needs a follow-up FFI lookup (`platform_wallet_manager_shielded_balance(walletId, account)`).

`reset()` clears the new published fields so wallet switches don't leak the prior wallet's accounts/addresses into the new detail view.

This is the third leg of the multi-account refactor (Rust internals + persistence + UI); the "Add account" affordance itself is deferred — it needs a new `shielded_add_account` FFI that re-uses the bind path's mnemonic resolver. Hosts can already bind multiple accounts up front by passing `accounts: [0, 1, …]` to `bind`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Orchard spend builder rejected proofs with `AnchorMismatch: failed to add spend` because the anchor we passed in (read via `store.tree_anchor()` → `ClientPersistentCommitmentTree::anchor()` → `root_at_checkpoint_depth(None)`) reflected the latest tree state, while each witness was generated by `witness_at_checkpoint_depth(0)` — the root of the most recent checkpoint. Whenever the two depths diverged (e.g. commitments appended after the last checkpoint, or any sequencing where "latest" got ahead of "depth 0") the builder rejected the bundle.

Derive the anchor from the witness paths themselves via `MerklePath::root(extracted_cmx)`. By construction that's the root the witness will verify against inside the Halo 2 proof, so it can't disagree with the bundle. Also catches the case where multiple selected notes' witnesses came from different checkpoints (returns `ShieldedBuildError` immediately instead of letting the spend builder surface `AnchorMismatch` after the ~30 s proof generation).

`store.tree_anchor()` is no longer called from the spend pipeline; the trait method stays in place for diagnostics.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
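The fail-fast cross-witness check this commit describes can be sketched generically. The types below are plain stand-ins — a real implementation would compute each root via `orchard`'s `MerklePath::root(cmx)`, which is not modeled here:

```rust
// Stand-in for a Merkle witness: the root it would verify against.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Witness {
    root: [u8; 32],
}

#[derive(Debug, PartialEq)]
enum BuildError {
    // Witnesses for the selected notes were taken at different
    // checkpoints, so no single anchor can satisfy all of them.
    MixedAnchors,
    NoInputs,
}

// Derive the bundle anchor from the witnesses themselves, and fail
// fast if they disagree — instead of letting the proof builder
// surface an anchor mismatch after expensive proof generation.
fn anchor_from_witnesses(witnesses: &[Witness]) -> Result<[u8; 32], BuildError> {
    let first = witnesses.first().ok_or(BuildError::NoInputs)?;
    if witnesses.iter().any(|w| w.root != first.root) {
        return Err(BuildError::MixedAnchors);
    }
    Ok(first.root)
}

fn main() {
    let a = Witness { root: [1; 32] };
    let b = Witness { root: [2; 32] };
    assert_eq!(anchor_from_witnesses(&[a, a]), Ok([1; 32]));
    assert_eq!(anchor_from_witnesses(&[a, b]), Err(BuildError::MixedAnchors));
    assert_eq!(anchor_from_witnesses(&[]), Err(BuildError::NoInputs));
}
```

The design point is the same one the commit makes: an anchor computed from the witness paths agrees with the bundle by construction, so the only remaining failure mode is witnesses taken at different checkpoints, which is cheap to detect up front.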
shardtree's `checkpoint(id)` silently dedups duplicate ids — a second `checkpoint(N)` call when checkpoint `N` already exists returns false (no-op) and the depth-0 view of the tree stays pinned at the first call's state.

Sync was passing `result.next_start_index as u32` as the id, which the SDK rewinds to the last partial chunk's start so it can re-fetch that chunk on the next sync. Consecutive syncs that all ended on a partial chunk passed the SAME id; only the first checkpoint took, every subsequent one was a no-op even though each sync DID append fresh commitments. The witness computed at depth 0 then reflected an old tree state — its root was a snapshot Platform never recorded as a block-end anchor, and broadcast failed with `Anchor not found in the recorded anchors tree`.

Switch to the high-water position (`aligned_start + total_notes_scanned` — one past the last appended) as the checkpoint id. Each sync that appends gets a strictly-greater id than the previous, depth 0 advances to the latest tree state, the witness's root tracks Platform's most recent recorded anchor, and broadcast validates.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
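A minimal model of the dedup behaviour and the fix — `Tree` here is a toy that mimics only one property of shardtree's contract, namely that checkpointing an already-existing id returns `false` and is a no-op:

```rust
use std::collections::BTreeSet;

// Toy tree: tracks only the set of checkpoint ids that "took".
struct Tree {
    checkpoint_ids: BTreeSet<u32>,
}

impl Tree {
    fn new() -> Self {
        Tree { checkpoint_ids: BTreeSet::new() }
    }

    // Returns false (no-op) when the id already exists, like shardtree.
    fn checkpoint(&mut self, id: u32) -> bool {
        self.checkpoint_ids.insert(id)
    }
}

fn main() {
    let mut tree = Tree::new();

    // Buggy scheme: two consecutive syncs both ended on the same partial
    // chunk and passed the same rewound start index — the second call
    // silently no-ops, pinning the depth-0 view at the stale state.
    assert!(tree.checkpoint(2048));
    assert!(!tree.checkpoint(2048));

    // Fixed scheme: the high-water position (one past the last appended
    // commitment) is strictly greater after every sync that appends,
    // so each checkpoint call takes effect. Numbers are illustrative.
    let aligned_start = 2048u32;
    let scanned_sync_1 = 700u32;
    let scanned_sync_2 = 1300u32;
    assert!(tree.checkpoint(aligned_start + scanned_sync_1));
    assert!(tree.checkpoint(aligned_start + scanned_sync_2));
}
```

The essential invariant is monotonicity of the id sequence: any scheme that can re-emit a previous id (such as the rewound `next_start_index`) will eventually hit the silent-dedup no-op.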
Actionable comments posted: 7
♻️ Duplicate comments (1)
packages/rs-platform-wallet/src/wallet/shielded/operations.rs (1)
313-315: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Local spent-state error after a successful broadcast still aborts the call.

After `state_transition.broadcast(...)` returns `Ok`, the transition is on Platform; failing the whole call when `mark_notes_spent` errors makes a successful send look failed to the host (and invites the user to retry, double-spending the same notes once the next sync sees them as spent). The next nullifier-sync pass already heals local drift, so this should be best-effort + warn.

♻️ Proposed pattern (apply at all three call sites)

```diff
-self.mark_notes_spent(id, &selected_notes).await?;
+if let Err(e) = self.mark_notes_spent(id, &selected_notes).await {
+    warn!(
+        account,
+        error = %e,
+        "Broadcast succeeded but local spent-state update failed; \
+         state will be repaired on the next nullifier sync"
+    );
+}
```

Also applies to: 377-379, 447-449
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/rs-platform-wallet/src/wallet/shielded/operations.rs` around lines 313 - 315, After a successful state_transition.broadcast(...) the subsequent call to mark_notes_spent(...) must be best-effort only; change the three call sites where you call self.mark_notes_spent(id, &selected_notes).await? (and similar invocations at the other two sites) so that you do not propagate errors when broadcast returned Ok—capture the Result, log a warning with context (including the error and the associated id/nullifiers) and continue without returning Err; only propagate mark_notes_spent failures if the broadcast itself failed, otherwise treat failures as non-fatal and let the normal nullifier-sync heal the local state.
🧹 Nitpick comments (3)
packages/rs-platform-wallet/src/wallet/shielded/sync.rs (3)
215-219: 💤 Low value

Compute `new_index` once and reuse for both checkpoint id and watermarks.

`new_index = aligned_start + result.total_notes_scanned` is computed twice — once for the checkpoint id at Line 215 and again at Line 262 for the per-account watermark loop. Hoisting it above the `if appended > 0` block keeps the two values in lockstep by construction (which is exactly the invariant the past review comment was about) and avoids any future drift if one site is touched without the other.

♻️ Proposed refactor

```diff
-    if appended > 0 {
+    let new_index = aligned_start + result.total_notes_scanned;
+    if appended > 0 {
         // ... (existing comment) ...
-        let new_index = aligned_start + result.total_notes_scanned;
         let checkpoint_id: u32 = new_index.try_into().unwrap_or(u32::MAX);
         store
             .checkpoint_tree(checkpoint_id)
             .map_err(|e| PlatformWalletError::ShieldedTreeUpdateFailed(e.to_string()))?;
     }
@@
-    // Update every account's watermark to the same global
-    // tree position so the next sync resumes coherently.
-    let new_index = aligned_start + result.total_notes_scanned;
+    // Update every account's watermark to the same global
+    // tree position so the next sync resumes coherently.
     for &account in &account_indices {
```

Also applies to: 260-269
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/rs-platform-wallet/src/wallet/shielded/sync.rs` around lines 215 - 219, Compute new_index once and reuse it for both the checkpoint id and the per-account watermark updates: hoist let new_index = aligned_start + result.total_notes_scanned above the if appended > 0 block, derive checkpoint_id from that single new_index (as you already do for checkpoint_tree via store.checkpoint_tree(checkpoint_id)), and use the same new_index when computing watermarks in the per-account watermark loop so both checkpoint and watermarks are guaranteed to stay in sync.
17-17: ⚡ Quick win

Drop the dead-code stub by removing the unused `PaymentAddress` import.

`PaymentAddress` is no longer referenced in this module; the only thing keeping it alive is the dummy `_unused_payment_address` helper. Removing both is cleaner than carrying a no-op function with `#[allow(dead_code)]`.

♻️ Proposed cleanup

```diff
-use grovedb_commitment_tree::{Note as OrchardNote, PaymentAddress, PreparedIncomingViewingKey};
+use grovedb_commitment_tree::{Note as OrchardNote, PreparedIncomingViewingKey};
@@
-// Suppress dead_code on `address` field — kept for future use
-// (e.g. surfacing diversifier index per discovered note).
-#[allow(dead_code)]
-fn _unused_payment_address(_pa: PaymentAddress) {}
-
```

Also applies to: 406-409
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/rs-platform-wallet/src/wallet/shielded/sync.rs` at line 17, Remove the unused PaymentAddress import and its dead-code helper: delete PaymentAddress from the use list in sync.rs (the grovedb_commitment_tree import) and remove the helper function named _unused_payment_address (and its #[allow(dead_code)] attribute) so the module no longer carries a no-op stub; ensure only used symbols (OrchardNote, PreparedIncomingViewingKey) remain imported.
154-172: Trial-decryption is O(non-driver-accounts × all_notes) per chunk — fine today, but worth noting.
For each non-driver account this loop trial-decrypts every fetched note locally. With `CHUNK_SIZE = 2048` and a small number of accounts this is negligible, but if `account_indices` grows (multi-account UI, restored wallets) the cost scales linearly with both. If that becomes a hot path, batching the IVKs into a single SDK call (or using `batch_decrypt_notes` if the SDK exposes one) would be worth investigating. No change requested for this PR.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/rs-platform-wallet/src/wallet/shielded/sync.rs` around lines 154 - 172, The loop over prepared.iter().skip(1) performs trial decryption of every note for each non-driver account (using try_decrypt_note over result.all_notes) which is O(accounts × all_notes); to address future scaling, refactor to batch-decrypt IVKs instead of per-account per-note calls: replace the nested loop that fills decrypted_by_account with a batched decrypt call (or SDK.batch_decrypt_notes if available) that accepts the collection of ivks and notes and returns which notes decrypt to which ivk, then map those results to DiscoveredNote entries (preserving position and cmx extraction logic) to avoid repeated work in the try_decrypt_note path.
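The batched shape this nitpick gestures at can be sketched as a single pass over the chunk. Hedged illustration only, with stand-in types: `Ivk`, `Note`, and `try_decrypt` below are hypothetical placeholders for `PreparedIncomingViewingKey`, the fetched note payloads, and `try_decrypt_note`; a real batch API would additionally amortize the expensive group operations across all (ivk, note) pairs.

```rust
// Stand-in types: a note "decrypts" under an IVK when the tags match.
// The loop shape, not the cryptography, is the point here.
type Ivk = u64;
type Note = u64;

// Placeholder for `try_decrypt_note`: returns the note value on success.
fn try_decrypt(ivk: &Ivk, note: &Note) -> Option<u64> {
    (note % 10 == *ivk % 10).then(|| note / 10)
}

// One traversal of the chunk's notes, trying each IVK per note and
// stopping at the first hit, instead of one full scan per account.
// Returns (account_index, position, value) triples.
fn batch_decrypt(ivks: &[(u32, Ivk)], notes: &[Note]) -> Vec<(u32, usize, u64)> {
    let mut decrypted = Vec::new();
    for (position, note) in notes.iter().enumerate() {
        for (account, ivk) in ivks {
            if let Some(value) = try_decrypt(ivk, note) {
                decrypted.push((*account, position, value));
                break; // a note decrypts under at most one IVK
            }
        }
    }
    decrypted
}
```

The decrypted-by-account map then falls out of one grouping pass over the returned triples, preserving the position and cmx extraction logic unchanged.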
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@packages/rs-platform-wallet-ffi/src/persistence.rs`:
- Around line 1157-1191: Before calling the host loader, check that each "load"
callback is paired with its corresponding "free" callback and return an Err if
one is set without the other; specifically validate on_load_shielded_notes_fn
with on_load_shielded_notes_free_fn (and likewise
on_load_shielded_sync_states_fn with on_load_shielded_sync_states_free_fn)
before invoking the loader, and only construct the NotesGuard (or equivalent
SyncStates guard) after confirming both callbacks are present so host buffers
are freed; apply the same paired validation to both restore paths around the
code that calls load_notes/load_sync_states and constructs the guard.
In `@packages/rs-platform-wallet/src/wallet/shielded/sync.rs`:
- Around line 215-216: The current code sets checkpoint_id with
new_index.try_into().unwrap_or(u32::MAX), which silently collapses all
subsequent checkpoint IDs to u32::MAX when new_index overflows u32 and
re-introduces the non-monotonic dedup failure; change the behavior to fail
loudly on overflow instead: replace the unwrap_or fallback with an explicit
error/expectation on the try_into result (e.g. propagate an error or use expect
with a clear message) so that when aligned_start + result.total_notes_scanned
(new_index) cannot fit into a u32 the function returns an error or panics rather
than silently using u32::MAX, preserving strict monotonic checkpoint IDs (refer
to new_index, checkpoint_id, aligned_start, result.total_notes_scanned and the
try_into call).
In
`@packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift`:
- Around line 2189-2232: The loop writes into buf at the original row index
while skipping malformed rows, leaving uninitialized slots but returning
allocation.entriesInitialized as the count; fix by compacting initialized
entries so the published buffer is contiguous: either pre-scan rows to count
valid entries and allocate buf of that size, or write to
buf[allocation.entriesInitialized] (incrementing entriesInitialized only when
you populate an entry) instead of buf[idx]; ensure you set entriesPtr to the
pointer of the first initialized element and use that for the
shieldedLoadAllocations key and resultCount; apply the same change to
loadShieldedSyncStates() so ShieldedNoteRestoreFFI entries (and analogous
structs) are tightly packed before exposing them to Rust.
- Around line 2167-2179: The current loaders fetch all PersistentShieldedNote
rows (and similarly PersistentShieldedSyncState around lines 2258-2270) and risk
returning notes from other networks; restrict them to this handler's network by
first obtaining the current wallet ids (e.g. via loadWalletList() or the same
wallet-list source used by loadWalletList()) and then replace rows = try
backgroundContext.fetch(descriptor) with a filtered result: fetch then filter by
wallet id (e.g. rows.filter { walletIds.contains($0.walletId) }) before creating
ShieldedLoadAllocation and proceeding; apply the identical wallet-id filtering
to the sync-state loader that marshals PersistentShieldedSyncState rows.
In
`@packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Services/ShieldedService.swift`:
- Around line 221-241: The switchTo(walletId: Data) path currently drops the
requested shielded accounts and falls back to [0]; preserve and pass the
requested account list through to bind so boundAccounts/addressesByAccount
remain correct. Modify switchTo to accept or retrieve the same accounts
parameter used during initial bind (the requested shielded account set, e.g.,
"accounts" or "requestedAccounts") and forward it into the bind(...) call
(bind(walletManager:walletId:network:resolver:accounts:)), or stash the
previously requested accounts on the ShieldedService instance and use that stash
when calling bind; ensure boundAccounts and addressesByAccount are populated
from that accounts list rather than defaulting to [0].
In
`@packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/AccountListView.swift`:
- Around line 61-67: shieldedAccountsForThisWallet currently returns
shieldedService.boundAccounts unfiltered, so it can show accounts from a
previously-bound wallet; change it to only return boundAccounts when the
service's bound wallet id matches the view's wallet id (e.g., compare
shieldedService.boundWalletId or shieldedService.currentWalletId to the
view/model's wallet.id) and otherwise return []; locate the computed property
shieldedAccountsForThisWallet and add this wallet-id guard around
shieldedService.boundAccounts (or clear/replace the bound accounts earlier in
navigation if that pattern is preferred).
In
`@packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageModelListViews.swift`:
- Around line 1708-1750: The overlay currently shows when visible.isEmpty which
hides the segmented picker even for filtered-empty views; change the overlay
condition to only show when the full store is empty by gating the overlay on
scoped.isEmpty (i.e., replace the overlay's if visible.isEmpty check with if
scoped.isEmpty) so the inline filtered-empty Section (handled where
visible.isEmpty && !scoped.isEmpty) still allows the user to switch filters;
update the overlay closure that contains ContentUnavailableView accordingly.
---
Duplicate comments:
In `@packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- Around line 313-315: After a successful state_transition.broadcast(...) the
subsequent call to mark_notes_spent(...) must be best-effort only; change the
three call sites where you call self.mark_notes_spent(id,
&selected_notes).await? (and similar invocations at the other two sites) so that
you do not propagate errors when broadcast returned Ok—capture the Result, log a
warning with context (including the error and the associated id/nullifiers) and
continue without returning Err; only propagate mark_notes_spent failures if the
broadcast itself failed, otherwise treat failures as non-fatal and let the
normal nullifier-sync heal the local state.
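The best-effort contract described in this prompt can be sketched as a small standalone function. Hedged illustration: `broadcast` and `mark_spent` are hypothetical closures standing in for `state_transition.broadcast(...)` and `self.mark_notes_spent(...)`; the point is that only the broadcast error propagates.

```rust
/// After a successful broadcast, a local bookkeeping failure is logged
/// and swallowed rather than returned, since the normal nullifier sync
/// will heal the local state. Broadcast failures still propagate.
fn send(
    broadcast: impl Fn() -> Result<String, String>,
    mark_spent: impl Fn() -> Result<(), String>,
) -> Result<String, String> {
    let txid = broadcast()?; // broadcast errors are still fatal
    if let Err(e) = mark_spent() {
        // Best-effort only: warn with context and continue.
        eprintln!("warning: mark_notes_spent failed after broadcast {txid}: {e}");
    }
    Ok(txid)
}
```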
---
Nitpick comments:
In `@packages/rs-platform-wallet/src/wallet/shielded/sync.rs`:
- Around line 215-219: Compute new_index once and reuse it for both the
checkpoint id and the per-account watermark updates: hoist let new_index =
aligned_start + result.total_notes_scanned above the if appended > 0 block,
derive checkpoint_id from that single new_index (as you already do for
checkpoint_tree via store.checkpoint_tree(checkpoint_id)), and use the same
new_index when computing watermarks in the per-account watermark loop so both
checkpoint and watermarks are guaranteed to stay in sync.
- Line 17: Remove the unused PaymentAddress import and its dead-code helper:
delete PaymentAddress from the use list in sync.rs (the grovedb_commitment_tree
import) and remove the helper function named _unused_payment_address (and its
#[allow(dead_code)] attribute) so the module no longer carries a no-op stub;
ensure only used symbols (OrchardNote, PreparedIncomingViewingKey) remain
imported.
- Around line 154-172: The loop over prepared.iter().skip(1) performs trial
decryption of every note for each non-driver account (using try_decrypt_note
over result.all_notes) which is O(accounts × all_notes); to address future
scaling, refactor to batch-decrypt IVKs instead of per-account per-note calls:
replace the nested loop that fills decrypted_by_account with a batched decrypt
call (or SDK.batch_decrypt_notes if available) that accepts the collection of
ivks and notes and returns which notes decrypt to which ivk, then map those
results to DiscoveredNote entries (preserving position and cmx extraction logic)
to avoid repeated work in the try_decrypt_note path.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 330c3010-0058-4cbd-813a-0e5a92df7e51
📒 Files selected for processing (13)
packages/rs-platform-wallet-ffi/src/lib.rs
packages/rs-platform-wallet-ffi/src/persistence.rs
packages/rs-platform-wallet-ffi/src/shielded_persistence.rs
packages/rs-platform-wallet/src/wallet/shielded/operations.rs
packages/rs-platform-wallet/src/wallet/shielded/sync.rs
packages/swift-sdk/Sources/SwiftDashSDK/Persistence/DashModelContainer.swift
packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentShieldedNote.swift
packages/swift-sdk/Sources/SwiftDashSDK/Persistence/Models/PersistentShieldedSyncState.swift
packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift
packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Services/ShieldedService.swift
packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/AccountListView.swift
packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageExplorerView.swift
packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Views/StorageModelListViews.swift
if let Some(load_notes) = self.callbacks.on_load_shielded_notes_fn {
    let mut notes_ptr: *const ShieldedNoteRestoreFFI = std::ptr::null();
    let mut notes_count: usize = 0;
    let rc =
        unsafe { load_notes(self.callbacks.context, &mut notes_ptr, &mut notes_count) };
    if rc != 0 {
        return Err(
            format!("on_load_shielded_notes_fn returned error code {}", rc).into(),
        );
    }
    struct NotesGuard {
        context: *mut c_void,
        free_fn: Option<
            unsafe extern "C" fn(
                context: *mut c_void,
                entries: *const ShieldedNoteRestoreFFI,
                count: usize,
            ),
        >,
        entries: *const ShieldedNoteRestoreFFI,
        count: usize,
    }
    impl Drop for NotesGuard {
        fn drop(&mut self) {
            if let Some(free_fn) = self.free_fn {
                unsafe { free_fn(self.context, self.entries, self.count) };
            }
        }
    }
    let _notes_guard = NotesGuard {
        context: self.callbacks.context,
        free_fn: self.callbacks.on_load_shielded_notes_free_fn,
        entries: notes_ptr,
        count: notes_count,
    };
Validate the shielded load/free callbacks as pairs before restoring.
Both restore paths treat the free callback as optional, so wiring on_load_shielded_notes_fn or on_load_shielded_sync_states_fn without its mate leaks the host-allocated buffers on every successful load(). Fail fast on mismatched pairs before invoking either loader.
💡 Suggested fix
#[cfg(feature = "shielded")]
{
+ if self.callbacks.on_load_shielded_notes_fn.is_some()
+ != self.callbacks.on_load_shielded_notes_free_fn.is_some()
+ {
+ return Err(
+ "on_load_shielded_notes_fn and on_load_shielded_notes_free_fn must be provided together"
+ .to_string()
+ .into(),
+ );
+ }
+ if self.callbacks.on_load_shielded_sync_states_fn.is_some()
+ != self.callbacks.on_load_shielded_sync_states_free_fn.is_some()
+ {
+ return Err(
+ "on_load_shielded_sync_states_fn and on_load_shielded_sync_states_free_fn must be provided together"
+ .to_string()
+ .into(),
+ );
+ }
+
use crate::shielded_persistence::*;
use platform_wallet::changeset::{ShieldedSubwalletStartState, ShieldedSyncStartState};
      use platform_wallet::wallet::shielded::{ShieldedNote, SubwalletId};

Also applies to: 1221-1258
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@packages/rs-platform-wallet-ffi/src/persistence.rs` around lines 1157 - 1191,
Before calling the host loader, check that each "load" callback is paired with
its corresponding "free" callback and return an Err if one is set without the
other; specifically validate on_load_shielded_notes_fn with
on_load_shielded_notes_free_fn (and likewise on_load_shielded_sync_states_fn
with on_load_shielded_sync_states_free_fn) before invoking the loader, and only
construct the NotesGuard (or equivalent SyncStates guard) after confirming both
callbacks are present so host buffers are freed; apply the same paired
validation to both restore paths around the code that calls
load_notes/load_sync_states and constructs the guard.
let new_index = aligned_start + result.total_notes_scanned;
let checkpoint_id: u32 = new_index.try_into().unwrap_or(u32::MAX);
unwrap_or(u32::MAX) re-introduces the non-monotonic-id failure mode at the u32 ceiling.
The fix in this hunk explicitly relies on a strictly-monotonic checkpoint id to avoid shardtree's silent dedup (per the comment block). Once new_index exceeds u32::MAX (i.e. > ~4.29B notes scanned across all chunks), every subsequent checkpoint pins to the same u32::MAX id, which is the exact dedup behavior the rewrite to aligned_start + total_notes_scanned was meant to escape. Practically unreachable today, but a hard error here matches the conservative "fail loudly before proving" posture used in operations.rs::shield for the address-nonce overflow.
🛡️ Proposed fix
- let new_index = aligned_start + result.total_notes_scanned;
- let checkpoint_id: u32 = new_index.try_into().unwrap_or(u32::MAX);
+ let new_index = aligned_start + result.total_notes_scanned;
+ let checkpoint_id: u32 = new_index.try_into().map_err(|_| {
+ PlatformWalletError::ShieldedTreeUpdateFailed(format!(
+ "checkpoint id overflows u32: {new_index}"
+ ))
+ })?;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let new_index = aligned_start + result.total_notes_scanned;
let checkpoint_id: u32 = new_index.try_into().map_err(|_| {
    PlatformWalletError::ShieldedTreeUpdateFailed(format!(
        "checkpoint id overflows u32: {new_index}"
    ))
})?;
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@packages/rs-platform-wallet/src/wallet/shielded/sync.rs` around lines 215 -
216, The current code sets checkpoint_id with
new_index.try_into().unwrap_or(u32::MAX), which silently collapses all
subsequent checkpoint IDs to u32::MAX when new_index overflows u32 and
re-introduces the non-monotonic dedup failure; change the behavior to fail
loudly on overflow instead: replace the unwrap_or fallback with an explicit
error/expectation on the try_into result (e.g. propagate an error or use expect
with a clear message) so that when aligned_start + result.total_notes_scanned
(new_index) cannot fit into a u32 the function returns an error or panics rather
than silently using u32::MAX, preserving strict monotonic checkpoint IDs (refer
to new_index, checkpoint_id, aligned_start, result.total_notes_scanned and the
try_into call).
onQueue {
    let descriptor = FetchDescriptor<PersistentShieldedNote>()
    let rows: [PersistentShieldedNote]
    do {
        rows = try backgroundContext.fetch(descriptor)
    } catch {
        resultErrored = true
        return
    }
    if rows.isEmpty {
        return
    }
    let allocation = ShieldedLoadAllocation()
Scope shielded restore to the handler's bound network.
loadWalletList() already filters wallets by self.network, but these two loaders still marshal every persisted shielded note/sync-state row. That lets a per-network manager rehydrate foreign-network shielded state, which is especially risky now that the tree DB is also network-scoped. Please filter these rows through the current network's wallet ids before handing them to Rust.
Also applies to: 2258-2270
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In
`@packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift`
around lines 2167 - 2179, The current loaders fetch all PersistentShieldedNote
rows (and similarly PersistentShieldedSyncState around lines 2258-2270) and risk
returning notes from other networks; restrict them to this handler's network by
first obtaining the current wallet ids (e.g. via loadWalletList() or the same
wallet-list source used by loadWalletList()) and then replace rows = try
backgroundContext.fetch(descriptor) with a filtered result: fetch then filter by
wallet id (e.g. rows.filter { walletIds.contains($0.walletId) }) before creating
ShieldedLoadAllocation and proceeding; apply the identical wallet-id filtering
to the sync-state loader that marshals PersistentShieldedSyncState rows.
for (idx, row) in rows.enumerated() {
    guard row.walletId.count == 32 else { continue }
    guard row.cmx.count == 32 else { continue }
    guard row.nullifier.count == 32 else { continue }
    let noteDataBuf = UnsafeMutablePointer<UInt8>.allocate(capacity: row.noteData.count)
    row.noteData.copyBytes(to: noteDataBuf, count: row.noteData.count)
    allocation.scalarBuffers.append((noteDataBuf, row.noteData.count))

    var walletIdTuple: FFIByteTuple32 = (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
    row.walletId.withUnsafeBytes { src in
        Swift.withUnsafeMutableBytes(of: &walletIdTuple) { dst in
            dst.copyMemory(from: src)
        }
    }
    var cmxTuple: FFIByteTuple32 = (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
    row.cmx.withUnsafeBytes { src in
        Swift.withUnsafeMutableBytes(of: &cmxTuple) { dst in
            dst.copyMemory(from: src)
        }
    }
    var nullifierTuple: FFIByteTuple32 = (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
    row.nullifier.withUnsafeBytes { src in
        Swift.withUnsafeMutableBytes(of: &nullifierTuple) { dst in
            dst.copyMemory(from: src)
        }
    }
    buf[idx] = ShieldedNoteRestoreFFI(
        wallet_id: walletIdTuple,
        account_index: row.accountIndex,
        position: row.position,
        cmx: cmxTuple,
        nullifier: nullifierTuple,
        block_height: row.blockHeight,
        is_spent: row.isSpent ? 1 : 0,
        value: row.value,
        note_data_ptr: UnsafePointer(noteDataBuf),
        note_data_len: UInt(row.noteData.count)
    )
    allocation.entriesInitialized += 1
}
let entriesPtr = UnsafePointer(buf)
shieldedLoadAllocations[UnsafeRawPointer(entriesPtr)] = allocation
resultEntries = entriesPtr
resultCount = allocation.entriesInitialized
Compact the shielded restore buffers before publishing them.
Both loops allocate rows.count, continue on malformed rows, but still write into buf[idx] while returning entriesInitialized as the count. If an early row is skipped, Rust will read the first count slots even though some of them were never initialized.
Suggested fix pattern
- for (idx, row) in rows.enumerated() {
+ var written = 0
+ for row in rows {
guard row.walletId.count == 32 else { continue }
guard row.cmx.count == 32 else { continue }
guard row.nullifier.count == 32 else { continue }
let noteDataBuf = UnsafeMutablePointer<UInt8>.allocate(capacity: row.noteData.count)
row.noteData.copyBytes(to: noteDataBuf, count: row.noteData.count)
allocation.scalarBuffers.append((noteDataBuf, row.noteData.count))
@@
- buf[idx] = ShieldedNoteRestoreFFI(
+ buf[written] = ShieldedNoteRestoreFFI(
wallet_id: walletIdTuple,
account_index: row.accountIndex,
position: row.position,
cmx: cmxTuple,
nullifier: nullifierTuple,
@@
- allocation.entriesInitialized += 1
+ written += 1
}
+ allocation.entriesInitialized = written
let entriesPtr = UnsafePointer(buf)
shieldedLoadAllocations[UnsafeRawPointer(entriesPtr)] = allocation
resultEntries = entriesPtr
- resultCount = allocation.entriesInitialized
+ resultCount = written

Apply the same pattern to loadShieldedSyncStates() as well.
Also applies to: 2276-2297
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In
`@packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift`
around lines 2189 - 2232, The loop writes into buf at the original row index
while skipping malformed rows, leaving uninitialized slots but returning
allocation.entriesInitialized as the count; fix by compacting initialized
entries so the published buffer is contiguous: either pre-scan rows to count
valid entries and allocate buf of that size, or write to
buf[allocation.entriesInitialized] (incrementing entriesInitialized only when
you populate an entry) instead of buf[idx]; ensure you set entriesPtr to the
pointer of the first initialized element and use that for the
shieldedLoadAllocations key and resultCount; apply the same change to
loadShieldedSyncStates() so ShieldedNoteRestoreFFI entries (and analogous
structs) are tightly packed before exposing them to Rust.
private var shieldedAccountsForThisWallet: [UInt32] {
    // Filter by wallet id so navigating between wallet
    // details doesn't briefly show the previous wallet's
    // accounts before the singleton service rebinds.
    guard shieldedService.boundAccounts.isEmpty == false else { return [] }
    return shieldedService.boundAccounts
}
shieldedAccountsForThisWallet isn't actually wallet-scoped.
It returns the singleton service's current boundAccounts unchanged, so a newly shown detail view can briefly display the previous wallet's shielded rows until the rebind completes. If you want the "no stale rows" behavior from the comment, gate this on the service's current wallet id or clear the shielded state earlier in the navigation flow.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In
`@packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/Core/Views/AccountListView.swift`
around lines 61 - 67, shieldedAccountsForThisWallet currently returns
shieldedService.boundAccounts unfiltered, so it can show accounts from a
previously-bound wallet; change it to only return boundAccounts when the
service's bound wallet id matches the view's wallet id (e.g., compare
shieldedService.boundWalletId or shieldedService.currentWalletId to the
view/model's wallet.id) and otherwise return []; locate the computed property
shieldedAccountsForThisWallet and add this wallet-id guard around
shieldedService.boundAccounts (or clear/replace the bound accounts earlier in
navigation if that pattern is preferred).
…d network

`recoverWallet` was calling `walletManager.createWallet(network:)` on the env-injected active manager — bound to whatever network the user happened to be viewing (typically testnet). Even with the correct `network` parameter threaded into the FFI, the wallet ended up registered inside the active manager and its persister callback fired through that manager's `PlatformWalletPersistenceHandler`, pinning the SwiftData row's `networkRaw` to the active network instead of the wallet's actual one. Result: every recovered orphan landed on whichever network was visible at recovery time.

Add `WalletManagerStore.getOrCreateManager(network:sdk:)` that lazily spins up the manager for any network — same configure + load-from-persistor side effects as `activate`, but doesn't change `activeManager` so a multi-network recovery doesn't flicker the user's UI between networks. Inject the store as an environment object so `ContentView` can reach it.

`recoverWallet` now builds an SDK for `restoredNetwork`, asks the store for the matching manager, and routes the createWallet call through it. The wallet ends up registered in the right manager, the persister callback fires through that manager's handler, and the SwiftData row gets the correct `networkRaw`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…nable messages

Recovery surfaced only "Failed to recover wallet" with no detail when an SDK spin-up failed for a local-only network (regtest / devnet) — the user couldn't tell whether their local stack was down, the manager couldn't configure, or createWallet itself rejected the mnemonic.

`recoverWallet` now returns `String?` (nil on success, message on failure) and splits the failure surface into three distinct cases: SDK-init error (with a "is your local <network> stack running?" hint when the network is regtest or devnet), manager get-or-create error, and createWallet error.

`authorizeAndRecover` aggregates per-wallet failures into the existing `perWalletFailures` array — moved up so the shared-prompt loop can append to it too — and joins them into one combined alert at the end of the run, matching the auth-failure aggregation pattern that was already there.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The aggregated alert is great for the user but vanishes once dismissed. Mirror each failure into `SDKLogger.error` (including the raw error for debugging) so the messages survive in the console for diagnosis after the dialog closes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Previous attempt only logged the recoverWallet inner failure paths and
relied on Swift.print, which is easy to miss if the user isn't watching
stdout. This broadens coverage:
* SDKLogger.error now also emits via NSLog so errors land in the
unified log (Console.app, Xcode debug area, device console) without
depending on stdout capture.
* authorizeAndRecover logs every recoveryError-setting branch
(shared-prompt unavailable/failed, per-wallet unavailable/failed,
the aggregated final message) and a startup line announcing how
many wallets are being recovered, so a silent failure is now
impossible to confuse with "the function never ran".
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ed set
The "Anchor not found in the recorded anchors tree" broadcast failure
was the depth-0 root of the local tree not matching any of Platform's
per-block recorded anchors. Two ways our local depth-0 root drifts off
a Platform-recorded state:
1. Platform records anchors only at block boundaries
(record_shielded_pool_anchor_if_changed). If a sync chunk ends
mid-block, our depth-0 root reflects a state that never existed
at any block-end and matches nothing.
2. Tree corruption (e.g. multi-account re-sync re-appending committed
positions) puts the local tree into a state Platform never had.
Both surface the same way at broadcast time, ~30 s after the proof was
built — which is too late to recover.
Switch the spend pre-flight to ask Platform what anchors are valid
(getShieldedAnchors RPC, retention 1000 blocks) and walk the local
checkpoint depths until we find one whose root is in that set. The
first matching depth becomes the depth used for every selected note's
witness, so the bundle's anchor is in the recorded set by construction.
If no local depth matches any Platform anchor, the local tree has
fundamentally drifted; surface that as ShieldedTreeDiverged with a
count of anchors tried and depths walked, so the host can drive a
re-sync instead of failing at broadcast.
Trait change: ShieldedStore::witness now takes a checkpoint_depth.
FileBackedShieldedStore passes it through to shardtree's
witness_at_checkpoint_depth; the in-memory store ignores it (still
unsupported).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
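A minimal, hedged sketch of the pre-flight walk this commit describes, under assumed types: `local_roots[d]` plays the role of the local tree's root at checkpoint depth `d`, and `recorded` the anchor set fetched from Platform; the real code compares shardtree Merkle roots rather than plain byte arrays.

```rust
use std::collections::HashSet;

// Walk local checkpoint depths (0 = most recent) and return the first
// depth whose root Platform has recorded. Every selected note's witness
// is then taken at that depth, so the bundle's anchor is in the
// recorded set by construction. `None` corresponds to the
// ShieldedTreeDiverged case: the host should drive a re-sync instead
// of failing at broadcast.
fn find_spendable_depth(
    local_roots: &[[u8; 32]],
    recorded: &HashSet<[u8; 32]>,
) -> Option<usize> {
    local_roots.iter().position(|root| recorded.contains(root))
}
```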
…en set is empty
The previous attempt failed at the SDK boundary: getShieldedAnchors
returns an empty list when the anchors tree has nothing recorded yet,
the proof verifier maps empty → None, and FetchCurrent then turns
that into a "shielded anchors not found" error. From the wallet's
side that error was indistinguishable from a transport failure, so
the spend bailed without trying the second source we have for valid
anchors.
Two changes:
* rs-sdk: add FetchCurrent impl for MostRecentShieldedAnchor — same
shape as ShieldedAnchors / ShieldedPoolState but for the live
most-recent slot.
* platform-wallet: in extract_spends_and_anchor, treat both fetches
as best-effort, fold both results into a single anchor set, and
only fail with ShieldedBuildError when *both* came back empty.
The most-recent anchor is the one likeliest to match a freshly-
synced wallet's depth-0 root, and on a chain where the
record-anchor upgrade hasn't backfilled it's the only valid
target we can spend against.
When no local depth matches any Platform anchor, log our depth-0 root
and a sample of the Platform anchor set so the divergence is
debuggable from the trace alone.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…hielded-spend-ffi

# Conflicts:
#	packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/ContentView.swift
#	packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/SwiftExampleAppApp.swift
#	packages/swift-sdk/SwiftExampleApp/SwiftExampleApp/WalletManagerStore.swift
The script lives at packages/swift-sdk/build_ios.sh; the previous packages/rs-sdk-ffi/build_ios.sh path doesn't exist and was misleading every Claude session that read this file.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…store is fully empty

The list had two empty-state branches: an in-list Section that fires
when the segmented filter excludes everything
(`!scoped.isEmpty && visible.isEmpty`), and a full-list `.overlay` that
fires whenever `visible.isEmpty`. Both fired together when the user
picked a filter with no matches — duplicating the empty placeholder and
visually covering the filter picker.

Gate the overlay on `scoped.isEmpty` so it only shows when the store
has no notes for any filter; the inline Section keeps handling the
filtered-empty case.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…d models

The Verify-explorer-covers-all-SwiftData-models CI check was failing
because PersistentShieldedNote and PersistentShieldedSyncState had no
detail views in StorageRecordDetailViews.swift. Add Form-based detail
views for both, surfacing every persisted field, and wrap the existing
list-row cells in NavigationLink so the rows are tappable. With these
in place the explorer-coverage script reports all 25 model types
covered.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…th-0

Reverts 83054cb (validate spend anchor against Platform's recorded set)
and e532eef (fall back to most-recent shielded anchor when set is
empty). Both were defensive workarounds for behaviour that has since
been corrected on the Platform side — most directly by 7b23bc7 ("retire
SHIELDED_MOST_RECENT_ANCHOR_KEY; derive most-recent from [8] and never
empty it"), which makes the empty-set branch dead code, and the broader
anchor-recording refactor in 6dfa0fb / 08a0bbc.

Net effect on the spend pipeline:
* Witnesses are taken at depth 0.
* The bundle's anchor is derived from the witness path itself (still
  2daf333 — kept; that's Halo 2 builder math, not Platform
  compensation).
* Build proof, broadcast.
* If drive-abci rejects with "anchor not in recorded set", the actual
  rejection text surfaces to the host instead of being pre-empted by a
  128-depth walk that obscures the real failure.

Drops the `ShieldedTreeDiverged` error variant, the
`getShieldedAnchors` round-trip, the
`MostRecentShieldedAnchor::fetch_current` round-trip, and the
`checkpoint_depth` parameter on `ShieldedStore::witness`. Net 208
lines removed across operations.rs / file_store.rs / store.rs /
error.rs / rs-sdk shielded.rs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…nc status

When the local commitment-tree's depth-0 root doesn't match any
Platform-recorded anchor, the wallet has no UI surface for "how far
have we appended into the tree" — making it hard to tell whether a
divergence is a sync gap, a watermark/checkpoint drift, or a
Platform-side cadence question.

Add a `ShieldedSyncIndexRows` subview to the Sync Status screen that
renders one row per ZIP-32 account showing the persisted
`PersistentShieldedSyncState.lastSyncedIndex` (and the nullifier
checkpoint height if present). Reads straight from SwiftData rather
than via `ShieldedService`, so the values reflect what's actually on
disk for the next cold start.

Promote `ShieldedService.walletId` to a published `boundWalletId` so
the diagnostic subview can scope the per-account watermark query to
the active shielded wallet without re-plumbing the id from the call
site.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ent handler

Missed call site in the previous boundWalletId rename — the
`handleShieldedSyncEvent` guard still referenced the old `walletId`
shorthand binding, which the warnings-as-errors Swift SDK build
surfaced as `cannot find 'walletId' in scope`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…hide solo acct label
Two diagnostics on the Sync Status screen for the shielded section:
* Add a per-account "balance (persisted)" column read straight
from `PersistentShieldedNote` (sum of value over unspent rows
for the bound walletId/account). The existing `Shielded
Balance` is mirrored from Rust sync events via
`ShieldedService.shieldedBalance`; when the two disagree, the
divergence is in the event path. When both agree at zero
while notes are clearly present, the divergence is in the
persister / cold-start restore. Either way, the user has a
direct comparison without leaving the Sync Status screen.
* Hide the `acct N` label when only one account is bound. In
the single-account default `acct 0` is just noise; the row's
synced-index + balance still carry the diagnostic without it.
Re-shown automatically once a second account binds.
Both read SwiftData (no FFI changes); the persistence-side query
filters by the new `boundWalletId` published on `ShieldedService`
so the values are scoped to the active shielded wallet.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
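The persisted-balance column described above reduces to a simple scoped sum over unspent rows. A minimal sketch with hypothetical field names (`PersistedNote` and its fields are illustrative; the SwiftData model's actual shape may differ):

```rust
// Hypothetical mirror of the "balance (persisted)" query: sum note
// values over unspent rows scoped to the bound wallet id and account.
struct PersistedNote {
    wallet_id: u32,
    account: u32,
    value: u64,
    spent: bool,
}

fn persisted_balance(notes: &[PersistedNote], wallet_id: u32, account: u32) -> u64 {
    notes
        .iter()
        .filter(|n| n.wallet_id == wallet_id && n.account == account && !n.spent)
        .map(|n| n.value)
        .sum()
}
```

When this sum disagrees with the event-mirrored balance, the divergence is in the event path; when both agree at zero while rows exist, it's in the persister.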
…hielded-spend-ffi

# Conflicts:
#	packages/rs-platform-wallet-ffi/src/persistence.rs
#	packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift
thepastaclaw
left a comment
Code Review
I verified the cited code paths against the checked-out 33267c5766d0e5ef8dd79dfad9d163430969999f worktree. The earlier unshield bech32m boundary bug is fixed on this head, but one blocking spend-lifecycle bug remains, along with several valid robustness gaps in the new shielded send paths. The weaker architecture-only findings about helper reuse and pointer constness do not rise to review comments on this PR.
Reviewed commit: 33267c5
🔴 1 blocking | 🟡 4 suggestion(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- [BLOCKING] lines 308-314: Shielded spends mark notes spent before the transition is actually confirmed
`unshield`, `transfer`, and `withdraw` all call `state_transition.broadcast(&self.sdk, None).await?` and then immediately `mark_notes_spent(...)`. In this SDK, `broadcast()` is only the submission step; confirmation is a separate `wait_for_response()` / `broadcast_and_wait()` path in `packages/rs-sdk/src/platform/transition/broadcast.rs`. The local store has only one-way `mark_spent` operations, and both the immediate spend path and later nullifier sync only ever mark more notes as spent; there is no compensating path that clears a false-positive spend. As a result, a transition that is accepted for relay and later rejected, dropped, or never included will still permanently hide those notes from the wallet's local state and persist that false spend through the queued changeset.
- [SUGGESTION] lines 100-154: Concurrent shield calls can build with the same per-address nonce set
`ShieldedWallet::shield` fetches each input address nonce from Platform, increments it locally, and then builds the transition without any wallet-level single-flight guard. The surrounding `self.shielded.read().await` in `shielded_shield_from_account` does not serialize callers; multiple readers can enter concurrently. Two overlapping shield calls for the same wallet can therefore observe the same `info.nonce` values, both build with `nonce + 1`, and only fail after expensive proving when one reaches the network second. This is not a consensus bug, but it is a real race in the wallet API under concurrent use.
- [SUGGESTION] lines 274-314: Spend note selection and local state mutation are not atomic across concurrent sends
`unshield`, `transfer`, and `withdraw` each select unspent notes under a read lock, drop that lock while building and broadcasting the proof, and only later reacquire a write lock in `mark_notes_spent`. That allows two overlapping spend calls to observe the same notes as unspent and both attempt to spend them. One transition wins; the other burns proving time and then fails with duplicate nullifiers or an equivalent broadcast error. This is a separate race from the confirmation bug above: even if notes were only marked spent after confirmation, the API would still advertise the same notes as spendable to concurrent callers until one operation finishes.
In `packages/rs-platform-wallet-ffi/src/shielded_send.rs`:
- [SUGGESTION] lines 276-300: The shield FFI path fabricates a `'static` signer borrow to satisfy the spawned future
`platform_wallet_manager_shielded_shield` converts the host-owned signer handle into `&VTableSigner` and then uses `transmute` to treat that borrow as `&'static VTableSigner` before passing it into `block_on_worker`. This is only sound under the current exact implementation of `block_on_worker`, which blocks until `rt.spawn(future)` completes. If that helper ever changes to allow early return on timeout, cancellation, shutdown, or different error propagation, Rust will still hold a forged `'static` reference after the FFI call returns, while Swift is free to destroy the `KeychainSigner` handle. The same crate already has the safer pattern in `identity_top_up.rs`: capture the pointer as a `usize`, then reconstruct the borrow inside the spawned future so only genuinely `Send + 'static` data crosses the boundary.
In `packages/rs-platform-wallet/src/wallet/platform_wallet.rs`:
- [SUGGESTION] lines 595-744: The new shield input-selection logic has no focused Rust coverage
`shielded_shield_from_account` now contains the logic that decides whether a Type 15 transition can be built at all: skipping leading funded addresses that cannot serve as `input_0`, reserving `FEE_RESERVE_CREDITS` only on the first selected input, accumulating claims in sorted order, and failing fast when every funded address is below the reserve. I could not find a dedicated Rust test for this function or these invariants in `packages/rs-platform-wallet/src` or `packages/rs-platform-wallet/tests`. Given that this selection code already changed materially during review, the lack of deterministic unit coverage leaves behaviorally important edge cases unpinned in CI.
```diff
         .await
         .map_err(|e| PlatformWalletError::ShieldedBroadcastFailed(e.to_string()))?;

     // Mark spent notes in store
-    self.mark_notes_spent(&selected_notes).await?;
+    self.mark_notes_spent(id, &selected_notes).await?;
```
🔴 Blocking: Shielded spends mark notes spent before the transition is actually confirmed
unshield, transfer, and withdraw all call state_transition.broadcast(&self.sdk, None).await? and then immediately mark_notes_spent(...). In this SDK, broadcast() is only the submission step; confirmation is a separate wait_for_response() / broadcast_and_wait() path in packages/rs-sdk/src/platform/transition/broadcast.rs. The local store has only one-way mark_spent operations, and both the immediate spend path and later nullifier sync only ever mark more notes as spent; there is no compensating path that clears a false-positive spend. As a result, a transition that is accepted for relay and later rejected, dropped, or never included will still permanently hide those notes from the wallet's local state and persist that false spend through the queued changeset.
source: ['claude', 'codex']
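One direction for the fix, sketched with hypothetical mock types (`MockStore` and `Execution` are illustrative stand-ins, not the crate's API): keep submission and execution as distinct steps, e.g. via the SDK's `broadcast_and_wait` path, and mutate local spent-state only on a confirmed execution result.

```rust
// Hypothetical mock of the desired ordering. `MockStore` stands in for
// the shielded store; `Execution` models the wait-stream outcome.
#[derive(Default)]
struct MockStore {
    spent: Vec<u64>, // note ids marked spent
}

enum Execution {
    Confirmed,
    Rejected,
}

fn spend(store: &mut MockStore, notes: &[u64], exec: Execution) -> Result<(), &'static str> {
    // broadcast() has already happened here — that is submission only.
    match exec {
        Execution::Confirmed => {
            // Only a confirmed execution marks notes spent locally.
            store.spent.extend_from_slice(notes);
            Ok(())
        }
        // A rejected or dropped transition leaves the notes spendable;
        // no one-way false-positive spend reaches the store.
        Execution::Rejected => Err("transition rejected; local state untouched"),
    }
}
```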
```rust
// SAFETY: the caller retains ownership of the signer handle
// and guarantees it outlives this call. We block until the
// worker future completes, so the `'static` lifetime we paint
// on the borrow does not actually outlive the host's handle.
// `VTableSigner` is `Send + Sync` per its `unsafe impl` in
// rs-sdk-ffi, so `&'static VTableSigner` is automatically
// `Send + 'static` — exactly what `block_on_worker` needs.
let address_signer: &'static VTableSigner =
    std::mem::transmute::<&VTableSigner, &'static VTableSigner>(
        &*(signer_address_handle as *const VTableSigner),
    );

// Run the proof on a worker thread (8 MB stack). Halo 2 circuit
// synthesis recurses past the ~512 KB iOS dispatch-thread stack
// and crashes with EXC_BAD_ACCESS at the first
// `synthesize(... measure(pass))` call when polled on the
// calling thread.
let result = block_on_worker(async move {
    let prover = CachedOrchardProver::new();
    wallet
        .shielded_shield_from_account(
            shielded_account,
            payment_account,
            amount,
            address_signer,
```
🟡 Suggestion: The shield FFI path fabricates a 'static signer borrow to satisfy the spawned future
platform_wallet_manager_shielded_shield converts the host-owned signer handle into &VTableSigner and then uses transmute to treat that borrow as &'static VTableSigner before passing it into block_on_worker. This is only sound under the current exact implementation of block_on_worker, which blocks until rt.spawn(future) completes. If that helper ever changes to allow early return on timeout, cancellation, shutdown, or different error propagation, Rust will still hold a forged 'static reference after the FFI call returns, while Swift is free to destroy the KeychainSigner handle. The same crate already has the safer pattern in identity_top_up.rs: capture the pointer as a usize, then reconstruct the borrow inside the spawned future so only genuinely Send + 'static data crosses the boundary.
💡 Suggested change

```diff
-// SAFETY: the caller retains ownership of the signer handle
-// and guarantees it outlives this call. We block until the
-// worker future completes, so the `'static` lifetime we paint
-// on the borrow does not actually outlive the host's handle.
-// `VTableSigner` is `Send + Sync` per its `unsafe impl` in
-// rs-sdk-ffi, so `&'static VTableSigner` is automatically
-// `Send + 'static` — exactly what `block_on_worker` needs.
-let address_signer: &'static VTableSigner =
-    std::mem::transmute::<&VTableSigner, &'static VTableSigner>(
-        &*(signer_address_handle as *const VTableSigner),
-    );
-// Run the proof on a worker thread (8 MB stack). Halo 2 circuit
-// synthesis recurses past the ~512 KB iOS dispatch-thread stack
-// and crashes with EXC_BAD_ACCESS at the first
-// `synthesize(... measure(pass))` call when polled on the
-// calling thread.
-let result = block_on_worker(async move {
-    let prover = CachedOrchardProver::new();
-    wallet
-        .shielded_shield_from_account(
-            shielded_account,
-            payment_account,
-            amount,
-            address_signer,
+let signer_addr = signer_address_handle as usize;
+let result = block_on_worker(async move {
+    let address_signer: &VTableSigner = unsafe { &*(signer_addr as *const VTableSigner) };
+    let prover = CachedOrchardProver::new();
+    wallet
+        .shielded_shield_from_account(
+            shielded_account,
+            payment_account,
+            amount,
+            address_signer,
+            &prover,
+        )
+        .await
+});
```
source: ['claude', 'codex']
```rust
// Fetch the current address nonces from Platform. Each
// input address has a per-address nonce that the next
// state transition must use as `last_used + 1`.
use dash_sdk::platform::FetchMany;
use dash_sdk::query_types::AddressInfo;
use std::collections::BTreeSet;

let address_set: BTreeSet<PlatformAddress> = inputs.keys().copied().collect();
let infos = AddressInfo::fetch_many(&self.sdk, address_set)
    .await
    .map_err(|e| {
        PlatformWalletError::ShieldedBuildError(format!("fetch input nonces: {e}"))
    })?;

let mut inputs_with_nonce: BTreeMap<PlatformAddress, (u32, Credits)> = BTreeMap::new();
for (addr, credits) in inputs {
    let info = infos
        .get(&addr)
        .and_then(|opt| opt.as_ref())
        .ok_or_else(|| {
            PlatformWalletError::ShieldedBuildError(format!(
                "input address not found on platform: {:?}",
                addr
            ))
        })?;
    if info.balance < credits {
        warn!(
            address = ?addr,
            claimed_credits = credits,
            platform_balance = info.balance,
            platform_nonce = info.nonce,
            "Shield input claims more credits than Platform reports — broadcast will likely fail"
        );
    } else {
        info!(
            address = ?addr,
            claimed_credits = credits,
            platform_balance = info.balance,
            platform_nonce = info.nonce,
            "Shield input"
        );
    }
    // `AddressNonce` is `u32`; `info.nonce + 1` would
    // wrap silently in release once an address reaches
    // u32::MAX. drive-abci treats wrap-to-0 as a replay
    // and rejects it after the wallet has spent ~30 s on
    // a Halo 2 proof. Bail loudly here instead.
    let next_nonce = info.nonce.checked_add(1).ok_or_else(|| {
        PlatformWalletError::ShieldedBuildError(format!(
            "input address nonce exhausted on platform: {:?}",
            addr
        ))
    })?;
    inputs_with_nonce.insert(addr, (next_nonce, credits));
}
```
🟡 Suggestion: Concurrent shield calls can build with the same per-address nonce set
ShieldedWallet::shield fetches each input address nonce from Platform, increments it locally, and then builds the transition without any wallet-level single-flight guard. The surrounding self.shielded.read().await in shielded_shield_from_account does not serialize callers; multiple readers can enter concurrently. Two overlapping shield calls for the same wallet can therefore observe the same info.nonce values, both build with nonce + 1, and only fail after expensive proving when one reaches the network second. This is not a consensus bug, but it is a real race in the wallet API under concurrent use.
source: ['claude']
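A minimal sketch of one mitigation, assuming a wallet-level reservation object (the `NonceReservation` type is hypothetical, not the crate's API): reserve the next nonce atomically instead of fetch-then-increment, so two overlapping builds can never observe the same value. The checked-add semantics of the shield path are kept.

```rust
use std::sync::Mutex;

// Hypothetical per-address nonce reservation, seeded once from
// Platform's reported nonce. Each caller atomically takes a distinct
// value; wrap at u32::MAX is refused rather than silently repeated.
struct NonceReservation {
    next: Mutex<u32>, // last_used + 1
}

impl NonceReservation {
    fn reserve(&self) -> Option<u32> {
        let mut n = self.next.lock().unwrap();
        let reserved = *n;
        // Mirror the shield path's checked_add: bail instead of wrapping.
        *n = n.checked_add(1)?;
        Some(reserved)
    }
}
```

A coarser alternative is a per-wallet `Mutex<()>` held across the whole fetch-nonce-to-build window; the reservation shape above just makes the race impossible to reintroduce.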
```diff
@@ -232,51 +311,45 @@ impl<S: ShieldedStore> ShieldedWallet<S> {
         .await
         .map_err(|e| PlatformWalletError::ShieldedBroadcastFailed(e.to_string()))?;

     // Mark spent notes in store
-    self.mark_notes_spent(&selected_notes).await?;
+    self.mark_notes_spent(id, &selected_notes).await?;
```
🟡 Suggestion: Spend note selection and local state mutation are not atomic across concurrent sends
unshield, transfer, and withdraw each select unspent notes under a read lock, drop that lock while building and broadcasting the proof, and only later reacquire a write lock in mark_notes_spent. That allows two overlapping spend calls to observe the same notes as unspent and both attempt to spend them. One transition wins; the other burns proving time and then fails with duplicate nullifiers or an equivalent broadcast error. This is a separate race from the confirmation bug above: even if notes were only marked spent after confirmation, the API would still advertise the same notes as spendable to concurrent callers until one operation finishes.
source: ['claude', 'codex']
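A minimal sketch of one mitigation, with a hypothetical `NoteReservations` type (not the crate's API): claim the selected notes atomically before the long proving step, so concurrent sends cannot select the same notes, and release them only when a send fails.

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Hypothetical reservation set keyed by note id. try_reserve() is
// all-or-nothing: if any requested note is already held by another
// in-flight send, the caller must reselect instead of proving.
struct NoteReservations {
    reserved: Mutex<HashSet<u64>>,
}

impl NoteReservations {
    fn try_reserve(&self, notes: &[u64]) -> bool {
        let mut r = self.reserved.lock().unwrap();
        if notes.iter().any(|n| r.contains(n)) {
            return false; // another send already holds one of these notes
        }
        r.extend(notes.iter().copied());
        true
    }

    // Called on broadcast failure so the notes become selectable again.
    fn release(&self, notes: &[u64]) {
        let mut r = self.reserved.lock().unwrap();
        for n in notes {
            r.remove(n);
        }
    }
}
```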
```rust
pub async fn shielded_shield_from_account<S, P>(
    &self,
    shielded_account: u32,
    payment_account: u32,
    amount: u64,
    signer: &S,
    prover: P,
) -> Result<(), PlatformWalletError>
where
    S: dpp::identity::signer::Signer<dpp::address_funds::PlatformAddress> + Send + Sync,
    P: dpp::shielded::builder::OrchardProver,
{
    // The shield transition uses `DeductFromInput(0)` as its fee
    // strategy. drive-abci interprets that as "after each input
    // address has had its `claim` deducted, take the fee out of
    // input 0's *remaining* balance" (see
    // `deduct_fee_from_outputs_or_remaining_balance_of_inputs_v0`
    // in rs-dpp). "Input 0" is the smallest-key entry of the
    // BTreeMap we hand to the builder. Therefore:
    //
    // * we must NOT claim each input's full balance — claiming
    //   `balance` leaves `remaining = 0`, and the fee
    //   deduction has nothing to bite into.
    // * we must reserve at least `FEE_RESERVE_CREDITS` of
    //   unclaimed balance specifically on input 0 (the
    //   BTreeMap-smallest address).
    //
    // Empty-mempool fees on Type 15 transitions land at ~20M
    // credits (~0.0002 DASH). Reserve 1e9 credits (0.01 DASH) —
    // 50× headroom, still trivial relative to typical balances.
    const FEE_RESERVE_CREDITS: u64 = 1_000_000_000;

    // Build the inputs map under the wallet-manager read lock,
    // then drop the lock before re-entering shielded so the
    // guards don't nest unnecessarily.
    let inputs: std::collections::BTreeMap<
        dpp::address_funds::PlatformAddress,
        dpp::fee::Credits,
    > = {
        let wm = self.wallet_manager.read().await;
        let info = wm
            .get_wallet_info(&self.wallet_id)
            .ok_or_else(|| PlatformWalletError::WalletNotFound(hex::encode(self.wallet_id)))?;
        let account = info
            .core_wallet
            .platform_payment_managed_account_at_index(payment_account)
            .ok_or_else(|| {
                PlatformWalletError::AddressOperation(format!(
                    "no platform payment account at index {payment_account}"
                ))
            })?;

        // Collect (address, balance) for every funded address,
        // sorted by address bytes — that determines BTreeMap
        // key order downstream and therefore which input ends
        // up at index 0.
        let mut candidates: Vec<(dpp::address_funds::PlatformAddress, u64)> = account
            .addresses
            .addresses
            .values()
            .filter_map(|addr_info| {
                let p2pkh =
                    key_wallet::PlatformP2PKHAddress::from_address(&addr_info.address).ok()?;
                let balance = account.address_credit_balance(&p2pkh);
                if balance == 0 {
                    None
                } else {
                    Some((
                        dpp::address_funds::PlatformAddress::P2pkh(p2pkh.to_bytes()),
                        balance,
                    ))
                }
            })
            .collect();
        candidates.sort_by_key(|(addr, _)| *addr);

        // The address that will be the bundle's `input_0` must
        // have balance > FEE_RESERVE so we can claim at least 1
        // credit while leaving the reserve untouched. Skip any
        // leading dust address that can't satisfy that — the
        // next address up will become input 0 instead. If
        // every funded address is below the reserve, fail fast:
        // the network would reject the broadcast on the
        // boundary anyway, only after we've spent ~30 s
        // building the Halo 2 proof.
        let Some(viable_input_0) = candidates
            .iter()
            .position(|(_, balance)| *balance > FEE_RESERVE_CREDITS)
        else {
            let total: u64 = candidates.iter().map(|(_, b)| b).sum();
            return Err(PlatformWalletError::ShieldedInsufficientBalance {
                available: total,
                required: amount.saturating_add(FEE_RESERVE_CREDITS),
            });
        };
        let usable: &[(dpp::address_funds::PlatformAddress, u64)] =
            &candidates[viable_input_0..];

        let total_usable: u64 = usable.iter().map(|(_, b)| b).sum();
        let needed = amount.saturating_add(FEE_RESERVE_CREDITS);
        if total_usable < needed {
            return Err(PlatformWalletError::ShieldedInsufficientBalance {
                available: total_usable,
                required: needed,
            });
        }

        // Walk usable inputs in BTreeMap order, claiming only
        // what's needed to cover `amount`. The fee reserve is
        // taken off input 0's max claim so its post-claim
        // remaining stays ≥ FEE_RESERVE_CREDITS for the
        // network's `DeductFromInput(0)` step.
        let mut chosen: std::collections::BTreeMap<
            dpp::address_funds::PlatformAddress,
            dpp::fee::Credits,
        > = std::collections::BTreeMap::new();
        let mut accumulated_claim: u64 = 0;
        for (i, (addr, balance)) in usable.iter().enumerate() {
            if accumulated_claim >= amount {
                break;
            }
            let max_claim = if i == 0 {
                balance.saturating_sub(FEE_RESERVE_CREDITS)
            } else {
                *balance
            };
            let still_need = amount - accumulated_claim;
            let claim = max_claim.min(still_need);
            if claim > 0 {
                chosen.insert(*addr, claim);
                accumulated_claim = accumulated_claim.saturating_add(claim);
            }
        }

        if accumulated_claim < amount {
            return Err(PlatformWalletError::ShieldedInsufficientBalance {
                available: accumulated_claim,
                required: amount,
            });
        }
        chosen
    };

    let guard = self.shielded.read().await;
    let shielded = guard
        .as_ref()
        .ok_or(PlatformWalletError::ShieldedNotBound)?;
    shielded
        .shield(shielded_account, inputs, amount, signer, &prover)
        .await
```
🟡 Suggestion: The new shield input-selection logic has no focused Rust coverage
shielded_shield_from_account now contains the logic that decides whether a Type 15 transition can be built at all: skipping leading funded addresses that cannot serve as input_0, reserving FEE_RESERVE_CREDITS only on the first selected input, accumulating claims in sorted order, and failing fast when every funded address is below the reserve. I could not find a dedicated Rust test for this function or these invariants in packages/rs-platform-wallet/src or packages/rs-platform-wallet/tests. Given that this selection code already changed materially during review, the lack of deterministic unit coverage leaves behaviorally important edge cases unpinned in CI.
source: ['claude', 'codex']
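The invariants named above lend themselves to a pure-function extraction. The sketch below is hypothetical (`select_shield_inputs`, the `Addr` alias, and the error shape are illustrative, not the crate's signatures), but it pins the same edge cases deterministically: dust-only fail-fast, the reserve taken off input 0 only, and leading-dust skipping.

```rust
const FEE_RESERVE_CREDITS: u64 = 1_000_000_000;

type Addr = [u8; 4]; // stand-in for PlatformAddress key bytes

// Returns the chosen (address, claim) pairs, or (available, required)
// on insufficient balance — a testable mirror of the selection logic.
fn select_shield_inputs(
    mut candidates: Vec<(Addr, u64)>,
    amount: u64,
) -> Result<Vec<(Addr, u64)>, (u64, u64)> {
    // Sort by address bytes: that fixes which input becomes input_0.
    candidates.sort_by_key(|(addr, _)| *addr);

    // input_0 must be able to leave FEE_RESERVE_CREDITS unclaimed;
    // skip leading dust that cannot, fail fast if nothing qualifies.
    let Some(first) = candidates
        .iter()
        .position(|(_, balance)| *balance > FEE_RESERVE_CREDITS)
    else {
        let total: u64 = candidates.iter().map(|(_, b)| *b).sum();
        return Err((total, amount.saturating_add(FEE_RESERVE_CREDITS)));
    };

    let usable = &candidates[first..];
    let mut chosen = Vec::new();
    let mut accumulated: u64 = 0;
    for (i, (addr, balance)) in usable.iter().enumerate() {
        if accumulated >= amount {
            break;
        }
        // The fee reserve comes off input_0's maximum claim only.
        let max_claim = if i == 0 {
            balance.saturating_sub(FEE_RESERVE_CREDITS)
        } else {
            *balance
        };
        let claim = max_claim.min(amount - accumulated);
        if claim > 0 {
            chosen.push((*addr, claim));
            accumulated += claim;
        }
    }
    if accumulated < amount {
        return Err((accumulated, amount));
    }
    Ok(chosen)
}
```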
…hielded-spend-ffi
thepastaclaw
left a comment
Code Review
The reviewed head still has one real blocking issue in the new shielded spend flow. The other Codex suggestions were checked against the surrounding code and dropped because they are either speculative future-proofing or non-concrete concurrency/coverage concerns for this run.
Reviewed commit: c1bb53a
🔴 1 blocking
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- [BLOCKING] lines 308-448: Shielded spend paths persist notes as spent after submission, not after execution
`unshield`, `transfer`, and `withdraw` all call `state_transition.broadcast(&self.sdk, None).await?` and then immediately call `mark_notes_spent(...)`. In this SDK, `broadcast()` only submits the transition for relay; `packages/rs-sdk/src/platform/transition/broadcast.rs` explicitly states that the broadcast response is empty and that the actual execution result comes from the separate wait stream. The local shielded store only supports one-way spent mutations (`mark_spent` and nullifier sync both only move notes toward spent), and `mark_notes_spent()` immediately queues that state for persistence. If the transition is accepted for relay but later rejected, dropped, or never included, the wallet permanently hides still-unspent notes from its own local state with no rollback path. That is a real denial-of-funds bug for honest network failures and for malicious DAPI peers that acknowledge submission without delivering a successful execution result.
```diff
     .await
     .map_err(|e| PlatformWalletError::ShieldedBroadcastFailed(e.to_string()))?;

-    self.mark_notes_spent(&selected_notes).await?;
+    self.mark_notes_spent(id, &selected_notes).await?;
```
source: ['codex']
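One shape of the fix the reviewer points at — promote notes to spent only after execution is confirmed, not after relay submission — can be sketched with stand-in types. In the real code the call would be `broadcast_and_wait` from `packages/rs-sdk/src/platform/transition/broadcast.rs`; everything below is a mock of that contract, not the actual SDK API:

```rust
// Stand-in types modeling the ordering fix: notes are promoted to spent
// only after the transition's execution result comes back, never after
// mere relay submission. `broadcast_and_wait` here is a mock of the
// rs-sdk call the review references.
#[derive(Debug, PartialEq)]
enum ExecutionResult {
    Executed,
    Rejected(String),
}

struct MockSdk {
    will_execute: bool,
}

impl MockSdk {
    // Submission + wait-for-execution in one step, as the review suggests.
    fn broadcast_and_wait(&self) -> Result<ExecutionResult, String> {
        if self.will_execute {
            Ok(ExecutionResult::Executed)
        } else {
            Ok(ExecutionResult::Rejected("InvalidAnchorError".into()))
        }
    }
}

struct Wallet {
    spent_notes: Vec<u64>,
}

impl Wallet {
    fn spend(&mut self, sdk: &MockSdk, selected_notes: &[u64]) -> Result<(), String> {
        // Only mark notes spent once execution is confirmed; a rejected or
        // dropped transition leaves local state untouched.
        match sdk.broadcast_and_wait()? {
            ExecutionResult::Executed => {
                self.spent_notes.extend_from_slice(selected_notes);
                Ok(())
            }
            ExecutionResult::Rejected(e) => Err(e),
        }
    }
}

fn main() {
    let mut w = Wallet { spent_notes: vec![] };
    assert!(w.spend(&MockSdk { will_execute: false }, &[1, 2]).is_err());
    assert!(w.spent_notes.is_empty()); // no denial-of-funds on rejection
    w.spend(&MockSdk { will_execute: true }, &[1, 2]).unwrap();
    assert_eq!(w.spent_notes, vec![1, 2]);
}
```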
thepastaclaw
left a comment
Code Review
I verified the flagged code against the checked-out c1b0eaf3b4fa5c48754e4ff6dd16b0bc1ffb9fa0 worktree and confirmed two blocking issues remain on this head. The earlier unshield platform-address encoding bug and the u32 nonce-overflow bug are fixed here, but the new shielded persistence/restore plumbing still has one FFI memory-safety defect and one durable wallet-state corruption defect.
Reviewed commit: c1b0eaf
🔴 2 blocking
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `packages/swift-sdk/Sources/SwiftDashSDK/PlatformWallet/PlatformWalletPersistenceHandler.swift`:
- [BLOCKING] lines 2209-2317: Shielded restore callbacks can hand Rust sparse arrays with uninitialized entries
Both `loadShieldedNotes()` and `loadShieldedSyncStates()` skip malformed SwiftData rows with `continue`, but they still write each accepted row into `buf[idx]` from `rows.enumerated()` and return `resultCount = allocation.entriesInitialized`. That means the returned prefix `[0..count)` is only valid if every earlier row was also valid. When an early row is skipped, Rust's `FFIPersister::load()` still trusts the callback contract, builds `slice::from_raw_parts(ptr, count)`, and reads the first `count` structs as fully initialized contiguous entries. In the same scenario, Swift's free path deinitializes the first `entriesInitialized` slots even though the initialized structs may live at higher indexes. This is undefined behavior across the FFI boundary: a single malformed persisted row can make Rust ingest uninitialized wallet IDs/nullifiers or crash during restore.
In `packages/rs-platform-wallet/src/wallet/shielded/operations.rs`:
- [BLOCKING] lines 308-448: Shielded spend flows persist notes as spent before the transition is actually executed
`unshield`, `transfer`, and `withdraw` all call `state_transition.broadcast(&self.sdk, None).await?` and then immediately `mark_notes_spent(...)`. In this SDK, `broadcast()` is only the submission step: `packages/rs-sdk/src/platform/transition/broadcast.rs` explicitly separates it from `wait_for_response()` / `broadcast_and_wait()`, and documents that the broadcast response is empty. The follow-up `mark_notes_spent()` mutation is one-way in both the in-memory store and the persisted SwiftData state: `mark_spent` only flips `is_spent` to `true`, the queued `ShieldedChangeSet.nullifiers_spent` is forwarded through `FFIPersister::store`, and Swift's `persistShieldedNullifiersSpent` commits `PersistentShieldedNote.isSpent = true` with no rollback or pending state. If a peer accepts relay but the transition is later rejected, dropped, or never included, the wallet permanently hides still-unspent notes and can strand funds locally until state is rebuilt. This needs to wait for confirmed execution before promoting notes to spent, or persist a distinct pending state that can be reconciled on failure.
```swift
for (idx, row) in rows.enumerated() {
    guard row.walletId.count == 32 else { continue }
    guard row.cmx.count == 32 else { continue }
    guard row.nullifier.count == 32 else { continue }
    let noteDataBuf = UnsafeMutablePointer<UInt8>.allocate(capacity: row.noteData.count)
    row.noteData.copyBytes(to: noteDataBuf, count: row.noteData.count)
    allocation.scalarBuffers.append((noteDataBuf, row.noteData.count))

    var walletIdTuple: FFIByteTuple32 = (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
    row.walletId.withUnsafeBytes { src in
        Swift.withUnsafeMutableBytes(of: &walletIdTuple) { dst in
            dst.copyMemory(from: src)
        }
    }
    var cmxTuple: FFIByteTuple32 = (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
    row.cmx.withUnsafeBytes { src in
        Swift.withUnsafeMutableBytes(of: &cmxTuple) { dst in
            dst.copyMemory(from: src)
        }
    }
    var nullifierTuple: FFIByteTuple32 = (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
    row.nullifier.withUnsafeBytes { src in
        Swift.withUnsafeMutableBytes(of: &nullifierTuple) { dst in
            dst.copyMemory(from: src)
        }
    }
    buf[idx] = ShieldedNoteRestoreFFI(
        wallet_id: walletIdTuple,
        account_index: row.accountIndex,
        position: row.position,
        cmx: cmxTuple,
        nullifier: nullifierTuple,
        block_height: row.blockHeight,
        is_spent: row.isSpent ? 1 : 0,
        value: row.value,
        note_data_ptr: UnsafePointer(noteDataBuf),
        note_data_len: UInt(row.noteData.count)
    )
    allocation.entriesInitialized += 1
}
let entriesPtr = UnsafePointer(buf)
shieldedLoadAllocations[UnsafeRawPointer(entriesPtr)] = allocation
resultEntries = entriesPtr
resultCount = allocation.entriesInitialized
}
return (resultEntries, resultCount, resultErrored)
}

func loadShieldedNotesFree(entries: UnsafeRawPointer?) {
    onQueue {
        guard let entries = entries,
              let allocation = shieldedLoadAllocations.removeValue(forKey: entries) else {
            return
        }
        allocation.release()
    }
}

/// Build the host-allocated `ShieldedSubwalletSyncStateFFI`
/// array Rust reads at boot. Same allocation pattern as
/// `loadShieldedNotes`.
func loadShieldedSyncStates() -> (
    entries: UnsafePointer<ShieldedSubwalletSyncStateFFI>?,
    count: Int,
    errored: Bool
) {
    var resultEntries: UnsafePointer<ShieldedSubwalletSyncStateFFI>?
    var resultCount: Int = 0
    var resultErrored = false
    onQueue {
        let descriptor = FetchDescriptor<PersistentShieldedSyncState>()
        let rows: [PersistentShieldedSyncState]
        do {
            rows = try backgroundContext.fetch(descriptor)
        } catch {
            resultErrored = true
            return
        }
        if rows.isEmpty {
            return
        }
        let allocation = ShieldedSyncStateLoadAllocation()
        let buf = UnsafeMutablePointer<ShieldedSubwalletSyncStateFFI>.allocate(
            capacity: rows.count
        )
        allocation.entries = buf
        allocation.entriesCount = rows.count
        for (idx, row) in rows.enumerated() {
            guard row.walletId.count == 32 else { continue }
            var walletIdTuple: FFIByteTuple32 = (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
            row.walletId.withUnsafeBytes { src in
                Swift.withUnsafeMutableBytes(of: &walletIdTuple) { dst in
                    dst.copyMemory(from: src)
                }
            }
            buf[idx] = ShieldedSubwalletSyncStateFFI(
                wallet_id: walletIdTuple,
                account_index: row.accountIndex,
                last_synced_index: row.lastSyncedIndex,
                has_nullifier_checkpoint: row.hasNullifierCheckpoint ? 1 : 0,
                nullifier_checkpoint_height: row.nullifierCheckpointHeight,
                nullifier_checkpoint_timestamp: row.nullifierCheckpointTimestamp
            )
            allocation.entriesInitialized += 1
        }
        let entriesPtr = UnsafePointer(buf)
        shieldedSyncStateLoadAllocations[UnsafeRawPointer(entriesPtr)] = allocation
        resultEntries = entriesPtr
        resultCount = allocation.entriesInitialized
```
source: ['codex-ffi-engineer']
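The contract the finding describes — Rust's `FFIPersister::load()` treating the first `count` entries as an initialized, contiguous prefix — is satisfied by packing accepted rows densely instead of indexing by `rows.enumerated()`. The sketch below illustrates that pattern in plain Rust with stand-in types; it is not the Swift fix itself, just the invariant the Swift side would need to uphold:

```rust
// Dense-packing sketch: advance the output only when a row is accepted,
// so skipped (malformed) rows never leave uninitialized holes inside the
// returned [0..count) prefix that the FFI consumer will read.
#[derive(Clone, Debug, PartialEq)]
struct NoteFfi {
    wallet_id: [u8; 32],
    value: u64,
}

struct Row {
    wallet_id: Vec<u8>,
    value: u64,
}

fn pack_rows(rows: &[Row]) -> (Vec<NoteFfi>, usize) {
    let mut out = Vec::with_capacity(rows.len());
    for row in rows {
        // Skip malformed rows; because we push (rather than write at the
        // source index), accepted rows stay contiguous at [0..count).
        let Ok(wallet_id) = <[u8; 32]>::try_from(row.wallet_id.as_slice()) else {
            continue;
        };
        out.push(NoteFfi { wallet_id, value: row.value });
    }
    let count = out.len();
    (out, count)
}

fn main() {
    let rows = vec![
        Row { wallet_id: vec![0; 31], value: 1 }, // malformed: 31 bytes
        Row { wallet_id: vec![7; 32], value: 2 },
    ];
    let (buf, count) = pack_rows(&rows);
    assert_eq!(count, 1);
    assert_eq!(buf[0].value, 2); // valid entry lands at index 0, not index 1
}
```

In the Swift callback the equivalent change is a separate write index (or append-style fill) plus freeing exactly the initialized prefix, so `count` and the initialized region always agree.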
```diff
     .await
     .map_err(|e| PlatformWalletError::ShieldedBroadcastFailed(e.to_string()))?;

-    self.mark_notes_spent(&selected_notes).await?;
+    self.mark_notes_spent(id, &selected_notes).await?;
```
source: ['codex-ffi-engineer', 'codex-general', 'codex-rust-quality', 'codex-security-auditor']
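The reviewers' alternative remedy — "persist a distinct pending state that can be reconciled on failure" — amounts to replacing the one-way `is_spent` flag with a three-state lifecycle. The sketch below is illustrative only; the names are not the actual `ShieldedStore` API:

```rust
// Sketch of a reconcilable note lifecycle in place of a one-way spent
// flag. A note enters PendingSpend at broadcast time and is only promoted
// to Spent (or rolled back to Unspent) once the execution result is known.
#[derive(Clone, Debug, PartialEq)]
enum NoteState {
    Unspent,
    // Accepted for relay but execution not yet observed; carries the
    // transition id so it can be reconciled against the wait stream.
    PendingSpend { transition_id: [u8; 32] },
    Spent,
}

impl NoteState {
    fn on_broadcast(&mut self, transition_id: [u8; 32]) {
        if *self == NoteState::Unspent {
            *self = NoteState::PendingSpend { transition_id };
        }
    }

    fn on_execution_confirmed(&mut self) {
        if matches!(self, NoteState::PendingSpend { .. }) {
            *self = NoteState::Spent;
        }
    }

    // The rollback path the current one-way store lacks: a rejected,
    // dropped, or expired transition returns the note to spendable.
    fn on_execution_failed(&mut self) {
        if matches!(self, NoteState::PendingSpend { .. }) {
            *self = NoteState::Unspent;
        }
    }
}

fn main() {
    let mut n = NoteState::Unspent;
    n.on_broadcast([0u8; 32]);
    assert!(matches!(n, NoteState::PendingSpend { .. }));
    n.on_execution_failed();
    assert_eq!(n, NoteState::Unspent); // funds are not stranded locally
    n.on_broadcast([0u8; 32]);
    n.on_execution_confirmed();
    assert_eq!(n, NoteState::Spent);
}
```

Persisting the pending variant (rather than `isSpent = true`) would let a restart reconcile in-flight spends instead of permanently hiding the notes.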
Status: ⚠️ NOT FINISHED — blocked on a Platform-side fix

The Swift / FFI / platform-wallet plumbing in this PR is end-to-end: warm-up + spend pre-flight + witness construction + broadcast all work, and `getShieldedAnchors` / `getMostRecentShieldedAnchor` are both consulted to validate the anchor before paying the ~30 s proof cost.

What's blocking actual broadcast success on a live regtest is a Platform-side desync between `recorded_anchors_tree` and the `most_recent_anchor` slot that surfaces only after the shielded pool sits idle for ≥ 1000 blocks. Diagnosis below; the fix lives in a separate branch (`platform-wallet/shielded-prune-keep-recent`) and a follow-up PR.
What we observed on a stuck regtest

Verified via grpcurl against the local DAPI at block 9214:

- `getShieldedPoolState` → 60000000000 credits (0.6 DASH — pool non-empty)
- `getMostRecentShieldedAnchor` (`[..., "s", [7]]`) → `fb8a9c94e565b397a887d92f6583b7238eeb2d3446cede393059c0b01ad8163f`
- `getShieldedAnchors` (`[..., "s", [6]]`) → `{}` — empty

The most-recent slot's value matches the wallet's local depth-0 root and therefore matches the `InvalidAnchorError.anchor` field returned by every spend broadcast on this regtest: `validate_anchor_exists` only reads `[6]`, so it rejects the anchor that the rest of Platform is reporting as the live root.
Root cause (Platform-side)

End-of-block runs two methods in this order (`run_block_proposal/v0/mod.rs:353`):

1. `record_shielded_pool_anchor_if_changed_v0` is conditional: `should_store = current_anchor != most_recent_anchor && current_anchor != [0;32]`. Once the pool stops adding commitments for ≥ 1000 blocks, this no-ops every block — `[7]` keeps its value, `[6]` and `[8]` get no new entries.
2. `prune_shielded_pool_anchors_v0` is unconditional on cadence (every 100 blocks past `retention_blocks=1000`) and scans `[8]` by height, deleting entries from `[6]` and `[8]` whose `block_height` is more than 1000 blocks behind the current height. It never consults `[7]`.

So `[6]`'s last entry — the one whose value still matches the live `[7]` — gets pruned out by height, leaving the validator's lookup table empty while the pool is healthy. Every subsequent spend hits `InvalidAnchorError` until something writes to `[6]` again. And the only way to write to `[6]` is for the anchor to change (a new shield/transfer), which is exactly what the wallet is trying to do but can't, because `validate_anchor_exists` runs first.

Shields (Type 15 / 18) bypass `validate_anchor_exists` — that's the only escape hatch. A wallet that's only got spend ops in front of it is permanently stuck.
Fix path (separate branch / PR)

In `prune_shielded_pool_anchors_v0`: read `most_recent_anchor` once at the top of the prune; never delete the entry whose `anchor_bytes == most_recent_anchor`. Cheap, contained, and it preserves the by-height retention invariant for every other entry. Plus a regression test that reproduces the desync (record → idle past retention with no new activity → spend should still validate).

Once that lands, every chain that's currently stuck unsticks itself on the next block.
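The proposed guard can be sketched in a few lines; the types here are illustrative stand-ins, not drive's actual storage API:

```rust
// Sketch of the proposed prune guard: read the most-recent anchor once,
// then retain any entry whose bytes match it, even when that entry falls
// outside the by-height retention window. All types are stand-ins.
const RETENTION_BLOCKS: u64 = 1000; // matches retention_blocks=1000 above

struct AnchorEntry {
    anchor_bytes: [u8; 32],
    block_height: u64,
}

fn prune_anchors(entries: &mut Vec<AnchorEntry>, current_height: u64, most_recent_anchor: [u8; 32]) {
    let cutoff = current_height.saturating_sub(RETENTION_BLOCKS);
    entries.retain(|e| {
        // Never delete the entry backing the live root: it is the one
        // validate_anchor_exists must keep finding in [6].
        e.anchor_bytes == most_recent_anchor || e.block_height >= cutoff
    });
}

fn main() {
    let live = [0xfb; 32];
    let mut entries = vec![
        AnchorEntry { anchor_bytes: [1; 32], block_height: 100 }, // stale
        AnchorEntry { anchor_bytes: live, block_height: 200 },    // live root, old height
    ];
    prune_anchors(&mut entries, 9214, live);
    assert_eq!(entries.len(), 1);
    assert_eq!(entries[0].anchor_bytes, live); // spend validation keeps working
}
```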
Issue being fixed or feature implemented

The Send Dash sheet's four shielded flows all fell through to a placeholder error ("Shielded sending is being rebuilt — see follow-up PR") even though `ShieldedWallet::transfer` / `unshield` / `withdraw` / `shield` already exist on the Rust side. Three of them needed only the bound shielded wallet's cached `SpendAuthorizingKey` (no host signer); the fourth (`shield`, Type 15) needed a host `Signer<PlatformAddress>` plus a real per-input nonce fetch (the spend builder previously stubbed nonces to 0, which drive-abci rejected on broadcast). This PR threads all four flows end-to-end so the full Send Dash matrix actually works.
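The nonce fix mentioned above — fetching each input address's current nonce and incrementing it instead of hard-coding 0 — has a simple shape. This is an illustrative Rust sketch: `fetch_nonces` stands in for the real `AddressInfo::fetch_many` Platform query, and the address type is a placeholder:

```rust
// Illustrative shape of the per-input nonce fix: fetch each input
// address's current on-chain nonce and hand `nonce + 1` to the transition
// builder instead of the old hard-coded 0. `fetch_nonces` stands in for
// the real AddressInfo::fetch_many call.
use std::collections::HashMap;

type Address = [u8; 20]; // placeholder address type

fn fetch_nonces(addresses: &[Address]) -> HashMap<Address, u64> {
    // In the real code this is a Platform query; here, a fixed answer.
    addresses.iter().map(|a| (*a, 41)).collect()
}

fn next_nonces(addresses: &[Address]) -> Vec<(Address, u64)> {
    let current = fetch_nonces(addresses);
    addresses
        .iter()
        .map(|a| (*a, current.get(a).copied().unwrap_or(0) + 1))
        .collect()
}

fn main() {
    let inputs = [[7u8; 20]];
    let nonces = next_nonces(&inputs);
    assert_eq!(nonces[0].1, 42); // 41 on-chain → 42 for the new transition
}
```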
What was done?

platform-wallet

- New `PlatformWalletError::ShieldedNotBound` to distinguish "wallet has no shielded sub-wallet" from build / broadcast failures.
- New `PlatformWallet` wrappers (feature-gated `shielded`):
  - `shielded_transfer_to(recipient_raw_43, amount, prover)` — Type 16
  - `shielded_unshield_to(to_platform_addr_bytes, amount, prover)` — Type 17
  - `shielded_withdraw_to(to_core_address, amount, core_fee_per_byte, prover)` — Type 19
  - `shielded_shield_from_account(account_index, amount, signer, prover)` — Type 15

  Each takes the prover by value because `OrchardProver` is impl'd on `&CachedOrchardProver`. The `shield_from_account` helper auto-selects input addresses from the named Platform Payment account in ascending derivation order, covering `amount + 0.01 DASH` fee buffer (the on-chain fee comes off input 0 via `DeductFromInput(0)`).
- `ShieldedWallet::shield` now fetches per-input nonces from Platform via `AddressInfo::fetch_many` and increments them before handing them to `build_shield_transition`. Removes the long-standing `nonce=0` placeholder + TODO.
- Spend pre-flight (this PR's safety net): `extract_spends_and_anchor` now fetches `getShieldedAnchors` and `getMostRecentShieldedAnchor` and walks local checkpoint depths until the derived root is in the union, so the spend bundle's anchor matches a Platform-recorded anchor by construction. Returns `ShieldedTreeDiverged { tried, depths_walked }`, with our local depth-0 root and a sample of Platform's anchors logged at warn-level when nothing matches.
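The depth walk in the spend pre-flight can be sketched as follows. Everything here is a stand-in — `root_at_depth` substitutes for the real Orchard tree root derivation, and the error is a plain string rather than the actual `ShieldedTreeDiverged` variant:

```rust
// Sketch of the pre-flight depth walk: derive the local root at each
// checkpoint depth and take the first one Platform also knows about
// (the union of getShieldedAnchors and getMostRecentShieldedAnchor).
use std::collections::HashSet;

fn root_at_depth(depth: u32) -> [u8; 32] {
    // Stand-in for deriving the Orchard tree root `depth` checkpoints back.
    [depth as u8; 32]
}

fn pick_anchor(
    platform_anchors: &HashSet<[u8; 32]>,
    max_depth: u32,
) -> Result<([u8; 32], u32), String> {
    let mut tried = Vec::new();
    for depth in 0..=max_depth {
        let root = root_at_depth(depth);
        if platform_anchors.contains(&root) {
            // The bundle's anchor matches a Platform-recorded anchor
            // by construction, before paying the ~30 s proof cost.
            return Ok((root, depth));
        }
        tried.push(root);
    }
    Err(format!("ShieldedTreeDiverged: walked {} depths", tried.len()))
}

fn main() {
    let mut known = HashSet::new();
    known.insert([2u8; 32]); // Platform only knows the root two checkpoints back
    let (anchor, depth) = pick_anchor(&known, 8).unwrap();
    assert_eq!(depth, 2);
    assert_eq!(anchor, [2u8; 32]);
    assert!(pick_anchor(&HashSet::new(), 3).is_err()); // divergence is reported
}
```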
rs-platform-wallet-ffi

New module `shielded_send` (feature-gated `shielded`):

- `platform_wallet_shielded_warm_up_prover()` — fire-and-forget global, no manager handle.
- `platform_wallet_shielded_prover_is_ready()` — bool getter for a UI affordance.
- `platform_wallet_manager_shielded_transfer / unshield / withdraw` — manager-handle FFIs that resolve the wallet, instantiate a `CachedOrchardProver`, and forward to the wallet wrappers via `runtime().block_on(...)`.
- `platform_wallet_manager_shielded_shield(handle, wallet_id, account_index, amount, signer_address_handle)` — additionally takes a `*mut SignerHandle` (Swift's `KeychainSigner.handle`) cast to `&VTableSigner`. Same shape `platform_address_wallet_transfer` uses; `VTableSigner` already implements both `Signer<PlatformAddress>` and `Signer<IdentityPublicKey>`.

swift-sdk
New async methods on `PlatformWalletManager`:

- `shieldedTransfer(walletId:recipientRaw43:amount:)`
- `shieldedUnshield(walletId:toPlatformAddress:amount:)`
- `shieldedWithdraw(walletId:toCoreAddress:amount:coreFeePerByte:)`
- `shieldedShield(walletId:accountIndex:amount:addressSigner:)`

All run on `Task.detached(priority: .userInitiated)` so the ~30 s first-call Halo 2 proof build doesn't block the main actor. `shieldedShield` keeps the `KeychainSigner` alive across the detached work the same way `topUpFromAddresses` does.

Static helpers: `PlatformWalletManager.warmUpShieldedProver()` and `PlatformWalletManager.isShieldedProverReady`.

`SDKLogger.error` also writes via `NSLog` so error paths land in the unified log (Console.app / Xcode debug area), which made the orphan-recovery diagnostics tractable.
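The warm-up / is-ready pair exposed through the FFI can be modeled as a process-global built off-thread. The sketch below is illustrative Rust — the `Prover` type stands in for `CachedOrchardProver` and the closure body for the expensive Halo 2 parameter build; it only shows the fire-and-forget + readiness-poll pattern:

```rust
// Fire-and-forget warm-up behind a process-global: the first caller
// kicks off the expensive build on a background thread, and a cheap
// readiness getter lets the UI gate the send button.
use std::sync::OnceLock;
use std::thread;

struct Prover; // stand-in for CachedOrchardProver

static PROVER: OnceLock<Prover> = OnceLock::new();

fn warm_up_prover() {
    // Fire-and-forget: build off the caller's thread; OnceLock ensures
    // the build runs at most once even if warm-up is called repeatedly.
    thread::spawn(|| {
        PROVER.get_or_init(|| {
            // ~30 s Halo 2 parameter build in the real implementation.
            Prover
        });
    });
}

fn prover_is_ready() -> bool {
    PROVER.get().is_some()
}

fn main() {
    warm_up_prover();
    // A UI would poll (or be notified); here we just wait for the build.
    while !prover_is_ready() {
        thread::yield_now();
    }
    assert!(prover_is_ready());
}
```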
swift-example-app

- `SendViewModel.executeSend` gains a `walletManager` parameter and replaces all four shielded placeholder branches with the real FFI calls. The `.platformToShielded` branch constructs a `KeychainSigner` from the `modelContext` the same way `TopUpIdentityView` / `RegisterNameView` / `FriendsView` already do.
- `SwiftExampleAppApp.bootstrap` fires `warmUpShieldedProver()` on a background task at app start so the first user-initiated shielded send doesn't pay the build cost inline.
- intended-network manager (not the active manager), aggregates per-wallet failures into one alert, and logs every failure branch with actionable hints (e.g. "is your local regtest stack running?" for SDK-init errors).
Send matrix after this PR

Type 18 (`shield_from_asset_lock` — direct Core L1 → Shielded without going through Platform first) is still unwired; tracked separately.
How Has This Been Tested?

- `cargo fmt --all` and `cargo clippy --workspace --all-features --locked -- --no-deps -D warnings` clean.
- `bash build_ios.sh --target sim --profile dev` green.
- Live regtest broadcast is blocked on the Platform-side anchor issue above; once the prune fix lands the rest of the path is verified end-to-end (anchor selection, witness construction, proof, broadcast).
Breaking Changes

- None at the consensus level.
- `SendViewModel.executeSend` gains a required `walletManager` parameter, but the only call site is in-tree (`SendTransactionView`) and is updated in the same commit set.
- `ShieldedStore::witness` now takes `(position, checkpoint_depth)` instead of `(position)`; both impls in this crate are updated and there are no out-of-tree consumers.
Checklist:
🤖 Generated with Claude Code