Conversation
force-pushed from aab69bb to 6b456bb
cmd/cli/eth_oracle_e2e.go (outdated)
@@ -0,0 +1,340 @@
package main
Initial Review
Good
- Code looks really high quality
- Dynamically extracts the ERC20 info by using eth_call
- Testing is robust and well done
- Good abstraction
Concerns
- Inspecting every ERC20 transfer before checking whether it includes the lock order / close order bytes is expensive and dangerous, and could lead to the oracle falling behind. Why not skip any ERC20 transfer that doesn't have 'extra data'?
- "remove lock order from store if it was not found in the order book" is wrong - it should remove lock from the store if the order was 'locked'
- Safe block logic and potential re-orgs (non-actionable - just noting)
- Use of a JSON file is non-atomic and very susceptible to corruption. Should use a BadgerDB instance instead.
- Does `err := <-sub.Err()` mean that the connection dropped? If not, `p.connectWithRetry()` could potentially cause multiple simultaneous subscribers.
- I don't think a separate package at 'root' level for testing is necessary. Please move to the oracle package if possible
- Potentially too much abstraction at the ETH transaction level. Would like to see code footprint reduced a bit if possible
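The extra-data filter suggested above could be sketched roughly like this (a minimal illustration, not the PR's code; the selector check and length constant assume a standard `transfer(address,uint256)` call of 4 + 32 + 32 bytes, with any order payload appended after the arguments):

```go
package main

import (
	"bytes"
	"fmt"
)

// transferBaseLen is the length of a plain ERC20 transfer(address,uint256)
// call: 4-byte selector plus two 32-byte ABI-encoded arguments.
const transferBaseLen = 4 + 32 + 32

// Selector for transfer(address,uint256): first 4 bytes of its keccak hash.
var transferSelector = []byte{0xa9, 0x05, 0x9c, 0xbb}

// hasExtraData reports whether calldata is an ERC20 transfer carrying
// trailing bytes beyond the standard arguments (a candidate order payload).
// Anything else can be skipped without an expensive eth_call lookup.
func hasExtraData(calldata []byte) bool {
	return len(calldata) > transferBaseLen &&
		bytes.Equal(calldata[:4], transferSelector)
}

func main() {
	plain := make([]byte, transferBaseLen) // selector + to + amount, no payload
	copy(plain, transferSelector)
	fmt.Println(hasExtraData(plain))                     // false
	fmt.Println(hasExtraData(append(plain, 0x01, 0x02))) // true
}
```

This keeps the eth_call-based ERC20 inspection, but only reaches it for transfers that actually carry a trailing payload.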
Missing logic:
- Marking orders included in a 'proposal' as 'pending' to not duplicate orders
- Smart 'resubmit' logic
- Smart 'waiting period' for preventing brand new locks and closes from being submitted to allow slow nodes to process orders on Ethereum
- Removal of native 'lock' and 'close' order logic
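The 'pending' marking mentioned in the missing-logic list might look something like this minimal sketch (the `pendingSet` type and its methods are invented for illustration, not the PR's data structures):

```go
package main

import "fmt"

// pendingSet tracks order IDs already included in a proposal so they
// are not proposed twice; purely illustrative.
type pendingSet map[string]bool

// markPending records that an order was included in a proposal.
func (p pendingSet) markPending(id string) { p[id] = true }

// eligible filters out orders already marked pending, leaving only
// orders that may still be included in the next proposal.
func (p pendingSet) eligible(ids []string) []string {
	var out []string
	for _, id := range ids {
		if !p[id] {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	p := pendingSet{}
	p.markPending("order-1")
	fmt.Println(p.eligible([]string{"order-1", "order-2"})) // [order-2]
}
```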
Files are written atomically by writing to a temporary file then moving that file into place. This ensures only a successfully written file is moved into place.
This should be sufficient for safety guarantees and also lends itself to easier administration and debugging.
Yes, but speed is a concern - as we can't have the oracle falling behind. Do you foresee this being a bottleneck?
> Yes, but speed is a concern - as we can't have the oracle falling behind. Do you foresee this being a bottleneck?
In general I do not. On my system a benchmark wrote, read and removed 80k orders per second with sync.
How many orders should we be prepared to handle on a node that is very far behind?
One advantage to using BadgerDB though is adding commands to the canopy binary rather than using standard shell tools to query & troubleshoot.
force-pushed from 26efc88 to 1a1ccaf
force-pushed from a54880f to 8f492a1
Change lock order safe height check from Debug to Info for better visibility when orders are being held back due to confirmation requirements.
- Update shouldSubmit() to log specific hold reasons at Info level
- Log propose delay with blocks needed until eligible
- Log resubmit delay with exact eligibility root height
- Log lock order cooldown with blocks remaining
- Log already submitted cases for both lock and close orders
- Remove redundant logs from WitnessedOrders that assumed wrong reason
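As a rough illustration of the Info-level hold-reason logging this commit describes (the states, fields, and height parameters here are all hypothetical, not the PR's actual `shouldSubmit()` signature):

```go
package main

import (
	"fmt"
	"log/slog"
)

// holdReason names why an order is being held back; illustrative only.
type holdReason string

const (
	holdProposeDelay holdReason = "propose delay"
	holdCooldown     holdReason = "lock order cooldown"
	holdSubmitted    holdReason = "already submitted"
)

// shouldSubmit decides eligibility and, when holding, logs the exact
// reason at Info level so operators can see why an order is waiting.
func shouldSubmit(height, safeHeight, cooldownUntil uint64, submitted bool) bool {
	switch {
	case submitted:
		slog.Info("holding order", "reason", holdSubmitted)
	case height < safeHeight:
		slog.Info("holding order", "reason", holdProposeDelay,
			"blocksNeeded", safeHeight-height)
	case height < cooldownUntil:
		slog.Info("holding order", "reason", holdCooldown,
			"blocksRemaining", cooldownUntil-height)
	default:
		return true
	}
	return false
}

func main() {
	fmt.Println(shouldSubmit(90, 100, 0, false))  // false: propose delay
	fmt.Println(shouldSubmit(120, 100, 0, false)) // true: eligible
}
```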
Rename the CanopyOrders RPC endpoint and related code to OracleOrders for better clarity about the endpoint's purpose.
- Route path: /v1/query/canopy-orders -> /v1/query/oracle-orders
- CLI command: canopy-orders -> oracle-orders
- Handler, client method, and response type renamed accordingly
Eth oracle dev
The eth-oracle branches had the explorer/wallet build steps commented out, causing the embedded web UI to be empty and returning 404 on port 50001.
force-pushed from 4aa1863 to 742c9c1
This PR implements an Ethereum transaction Oracle allowing for cross-chain transfers to Canopy.