chore(protocol): update multihop_bridging_deployment.md #15772

Merged 4 commits on Feb 13, 2024
Binary files removed:

- packages/protocol/docs/multihop/L1_to_L2.png
- packages/protocol/docs/multihop/L2A_to_L3.png
- packages/protocol/docs/multihop/L2_to_L1.png
- packages/protocol/docs/multihop/L2_to_L2.png

Binary files added:

- packages/protocol/docs/multihop/bridge_1hop.png
- packages/protocol/docs/multihop/bridge_2hop.png
- packages/protocol/docs/multihop/cache_1.png
- packages/protocol/docs/multihop/cache_1_done.png
- packages/protocol/docs/multihop/l1_l2_sync.png
- packages/protocol/docs/multihop/merkle_proof.png
- packages/protocol/docs/multihop/state.png
- packages/protocol/docs/multihop/three_chains.png
- two additional binary files (names not shown)
121 changes: 56 additions & 65 deletions packages/protocol/docs/multihop_bridging_deployment.md
@@ -1,99 +1,90 @@
# Multi-hop cross-chain bridging

This document explains how multi-hop cross-chain bridging works in Taiko.

## L1<->L2 data synchronization
We'll use this diagram to illustrate a blockchain's state. The large triangle represents the world state, while the smaller triangle represents the storage tree of a special contract named the "Signal Service," deployed on both L1 and L2.

![State Diagram](./multihop/state.png)

When a signal is sent by the Signal Service, a unique slot in its storage is updated with a value of `1`, as shown in the Solidity code below:

```solidity
function _sendSignal(address sender, bytes32 signal) internal returns (bytes32 slot) {
    if (signal == 0) revert SS_INVALID_SIGNAL();
    slot = getSignalSlot(uint64(block.chainid), sender, signal);
    assembly {
        sstore(slot, 1)
    }
}

function getSignalSlot(uint64 chainId, address app, bytes32 signal) public pure returns (bytes32) {
    return keccak256(abi.encodePacked("SIGNAL", chainId, app, signal));
}
```
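For intuition, the slot derivation above can be mirrored off-chain. The sketch below reproduces the `abi.encodePacked` layout by hand; it is illustrative only, and uses `hashlib.sha3_256` as a stand-in hash because Keccak-256 (what Solidity's `keccak256` actually computes) is not in the Python standard library, so the digests differ from on-chain values.

```python
import hashlib

def get_signal_slot(chain_id: int, app: str, signal: bytes) -> bytes:
    # abi.encodePacked("SIGNAL", uint64 chainId, address app, bytes32 signal):
    # the string contributes its raw bytes, a uint64 packs to 8 bytes,
    # an address to 20 bytes, and a bytes32 to 32 bytes.
    packed = (
        b"SIGNAL"
        + chain_id.to_bytes(8, "big")
        + bytes.fromhex(app.removeprefix("0x"))
        + signal
    )
    # Stand-in for Solidity's keccak256 (NOT the same digest).
    return hashlib.sha3_256(packed).digest()

# A signal sent by a hypothetical app 0x0a...0a on chain ID 1:
slot = get_signal_slot(1, "0x" + "0a" * 20, b"\x01" * 32)
```

Writing `1` to this slot is what `_sendSignal` does; proving the slot's value later is what cross-chain verification amounts to.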

Merkle proofs can verify signals sent by specific senders when the signal service's state root is known on another chain. A full merkle proof comprises an *account proof* and a *storage proof*. However, if the signal service's storage root (or the *signal root*) is known on another chain, only a storage proof is necessary to verify the signal's source.

![Merkle Proof](./multihop/merkle_proof.png)
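Ethereum's actual proofs are Merkle-Patricia trie proofs, which are more involved than can be shown here. Purely to illustrate the two-step structure (an account proof ties the signal service's storage root to the state root; a storage proof ties the slot's value to that storage root), here is a simplified binary-Merkle sketch with hypothetical helper names:

```python
import hashlib

def parent(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def verify_branch(leaf: bytes, branch: list, root: bytes) -> bool:
    """Hash from a leaf up to the root; each branch step is
    (sibling_hash, sibling_is_left)."""
    node = leaf
    for sibling, sibling_is_left in branch:
        node = parent(sibling, node) if sibling_is_left else parent(node, sibling)
    return node == root

def verify_full_proof(state_root: bytes,
                      account_leaf: bytes, account_branch: list,
                      storage_root: bytes, slot_leaf: bytes,
                      storage_branch: list) -> bool:
    # 1) account proof: the signal service's account (which commits to
    #    storage_root) must sit under the known state root;
    # 2) storage proof: the signal slot's value (1) must sit under storage_root.
    return (verify_branch(account_leaf, account_branch, state_root)
            and verify_branch(slot_leaf, storage_branch, storage_root))
```

When only the signal root (the storage root) is known on the other chain, step 1 is skipped and a storage proof alone suffices, as described above.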

Taiko's core protocol code (TaikoL1.sol and TaikoL2.sol) automatically synchronizes or relays the state roots between L1 and L2.

When chainA's state root is relayed to chainB, a special signal is written in chainB's signal service. The signal is derived from data that includes chainA's block ID. These special signals are always sent by the target chain's signal service.

![L1-L2 Sync](./multihop/l1_l2_sync.png)

If you deploy more chains using Taiko protocol, you can create a chain of relayed state roots between them.

![Three Chains](./multihop/three_chains.png)

## Verifying bridged messages

### One-hop bridging
Consider the 1-hop example below.

To verify that "some app" has sent a custom message, we verify if the corresponding signal (associated with the message sender, "some app") has been set by the signal service (0x1000A) on L1. After L1's state root is relayed to L2, we need the following info on L2 to verify the message on L1:

1. The message's signal and its sender ("some app"), used to compute the storage slot whose value should now be `1`.
2. A full merkle proof generated by an L1 node for the above slot.
3. L1 signal service's address associated with the merkle proof.
4. L2 signal service's address to verify that L1's state root has been relayed to L2 already.

![1-Hop Bridging](./multihop/bridge_1hop.png)
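The four inputs can be pictured as a single proof payload handed to the destination chain (a hypothetical structure for illustration only, not the actual contract ABI):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignalProofInput:
    app: str                 # the sender ("some app") on L1
    signal: bytes            # the message's signal; with `app`, fixes the slot
    merkle_proof: bytes      # full merkle proof from an L1 node for that slot
    l1_signal_service: str   # address the account proof is checked against
    l2_signal_service: str   # local service that knows L1's relayed state root
```

Verification then amounts to recomputing the slot from `app` and `signal`, checking `merkle_proof` against the relayed state root, and confirming via the local signal service that the root was in fact relayed.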

### Multi-hop bridging
In the 2-hop example below, two merkle proofs are needed, and the signal service addresses for L1 and L2 must be verified. L3's signal service address needs no verification, because the bridging verification runs inside L3's signal service contract itself, where the address is simply `address(this)`.

![2-Hop Bridging](./multihop/bridge_2hop.png)

## Caching

Caching is optional and is activated per hop when the transaction intends to reuse some state root or signal root for future bridging verification.

In the diagram below with 2 hops, L1's state root and L2's signal root can be cached to L3 if specified.

![Cache Example 1](./multihop/cache_1.png)

If both are cached, two more signals will be sent in L3's signal service.

![Cache Example 1 Done](./multihop/cache_1_done.png)

Depending on the type of data (state root or signal root), the signal is generated differently.

```solidity
function signalForChainData(uint64 chainId, bytes32 kind, bytes32 data) public pure returns (bytes32) {
    return keccak256(abi.encode(chainId, kind, data));
}
```
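Mirroring this derivation off-chain: `abi.encode` (unlike `encodePacked`) pads every argument to a full 32-byte word. As before, `hashlib.sha3_256` stands in for Keccak-256, and the `kind` constants below are hypothetical placeholders rather than the protocol's actual values.

```python
import hashlib

def signal_for_chain_data(chain_id: int, kind: bytes, data: bytes) -> bytes:
    # abi.encode(uint64, bytes32, bytes32): three 32-byte words.
    packed = (
        chain_id.to_bytes(32, "big")
        + kind.ljust(32, b"\x00")
        + data.ljust(32, b"\x00")
    )
    return hashlib.sha3_256(packed).digest()  # stand-in for keccak256

# Hypothetical kind tags for the two cacheable data types:
KIND_STATE_ROOT = b"STATE_ROOT".ljust(32, b"\x00")
KIND_SIGNAL_ROOT = b"SIGNAL_ROOT".ljust(32, b"\x00")
```

Because the chain ID and data kind are both hashed in, caching a state root and caching a signal root for the same chain produce two distinct signals.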

Once cached on L3, a single full merkle proof is sufficient to verify anything that happened on L1 at or before the point where L1's state root became 0x1111. This allows the middle hop to be skipped.

![Cache Use 1](./multihop/cache_1_use_1.png)

If L1's state root is not cached on L3 but only L2's signal root is, then one full merkle proof for L1 and a storage proof for L2 are used to verify a bridged message.

![Cache Use 2](./multihop/cache_1_use_2.png)

Note that the last hop's (L2's) state root has already been auto-relayed to L3, so it cannot be cached again; only the last hop's signal root can be cached.

For all other (non-last) hops, the state root can be cached if a full proof is used, and the signal root can be cached if a storage proof is used. By default, all caching is disabled.