Merge branch 'dev' into recursion/public-values

tcoratger authored Jan 28, 2025
2 parents 47a0183 + 73d08ca commit 60a4b45
Showing 105 changed files with 548 additions and 235 deletions.
6 changes: 4 additions & 2 deletions .github/workflows/release.yml
@@ -33,7 +33,7 @@ jobs:
pull_token: ${{ secrets.PULL_TOKEN }}

# If it's a nightly release, tag with the release time. If the tag is `main`, we want to use
# `latest` as the tag name. Else, use the tag name as is.
# `latest` as the tag name. else, use the tag name as is.
- name: Compute release name and tag
id: release_info
run: |
@@ -220,6 +220,8 @@ jobs:
fi
# Creates the release for this specific version
# If this is for latest, this will override the files there, but not change the commit to the current main.
# todo(n): change this to override the commit as well somehow.
- name: Create release
uses: softprops/action-gh-release@v2
with:
@@ -272,7 +274,7 @@ jobs:

- name: "Install SP1"
env:
SP1UP_VERSION: ${{ github.ref_name }}
SP1UP_VERSION: ${{ github.ref_name == 'main' && 'latest' || github.ref_name }}
run: |
cd sp1up
chmod +x sp1up
2 changes: 1 addition & 1 deletion .github/workflows/toolchain-ec2.yml
@@ -96,7 +96,7 @@ jobs:
- name: "Install SP1"
env:
SP1UP_VERSION: ${{ github.ref_name }}
SP1UP_VERSION: ${{ github.ref_name == 'main' && 'latest' || github.ref_name }}
run: |
cd sp1up
chmod +x sp1up
3 changes: 3 additions & 0 deletions .gitignore
@@ -59,3 +59,6 @@ examples/fibonacci/fibonacci-plonk.bin
book/.pnp.cjs
book/.pnp.loader.mjs
book/.yarnrc.yml

# Generated by Intellij-based IDEs.
.idea
24 changes: 0 additions & 24 deletions Cargo.lock


3 changes: 3 additions & 0 deletions Cargo.toml
@@ -157,3 +157,6 @@ default.extend-ignore-re = [
"CommitCommitedValuesDigest",
]
default.extend-ignore-words-re = ["(?i)groth", "TRE"]

[workspace.lints.clippy]
print_stdout = "deny"
199 changes: 199 additions & 0 deletions audits/sp1-v4.md
@@ -0,0 +1,199 @@
# SP1 V4 Audit Report

This audit was done by [rkm0959](https://github.com/rkm0959), who also audited SP1's v1.0.0 and v3.0.0 releases.

The audited commit is `4a1dcea0749021ce6e2596bce5bb45f2def7a95c`, the SP1 v4.0.0 release.

The audit was conducted from November 25th to December 13th, totaling 3 engineer-weeks, prior to the release of SP1 v4.0.0.

The first two bugs in this report were present in previous versions and were fixed before the release of SP1 v4.0.0. For more information, we refer readers to the security advisory [here](https://github.com/succinctlabs/sp1/security/advisories/GHSA-c873-wfhp-wx5m). The advisory is also linked in the relevant sections below.

## 1. [V3] Malicious `chip_ordering` in Rust verifier is not checked

**This bug does not affect usage of SP1 when using on-chain verifiers**.

This issue was in V3, and is explained in the first section of the security advisory [here](https://github.com/succinctlabs/sp1/security/advisories/GHSA-c873-wfhp-wx5m).

## 2. [V3] `is_complete` bypass

This issue was in V3, and is explained in the second section of the security advisory [here](https://github.com/succinctlabs/sp1/security/advisories/GHSA-c873-wfhp-wx5m).

This issue was also found by a combined effort from Aligned, LambdaClass and 3MI Labs.

## 3. [Low] `assume_init_mut` used on uninitialized entry

In the recursion executor, the memory write was implemented as follows.

Here, on the first write to a memory address, `entry` is still uninitialized, yet `assume_init_mut` is called to write to it. This is not memory-safe.

```rust
pub unsafe fn mw_unchecked(&self, addr: Address<F>, val: Block<F>) {
    match self.0.get(addr.as_usize()).map(|c| unsafe { &mut *c.0.get() }) {
        Some(entry) => *entry.assume_init_mut() = MemoryEntry { val },
        None => panic!(
            "expected address {} to be less than length {}",
            addr.as_usize(),
            self.0.len()
        ),
    }
}
```

This was fixed by writing `entry.write()` instead of using `assume_init_mut`.
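
For illustration, a minimal sketch of the corrected write (not the exact repository code; `MaybeUninit::write` initializes the slot without assuming it was already initialized):

```rust
pub unsafe fn mw_unchecked(&self, addr: Address<F>, val: Block<F>) {
    match self.0.get(addr.as_usize()).map(|c| unsafe { &mut *c.0.get() }) {
        // `write` initializes the possibly-uninitialized entry in place, avoiding the
        // undefined behavior of calling `assume_init_mut` on a fresh slot.
        Some(entry) => {
            entry.write(MemoryEntry { val });
        }
        None => panic!(
            "expected address {} to be less than length {}",
            addr.as_usize(),
            self.0.len()
        ),
    }
}
```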

## 4. [High] `send_to_table` may be nonzero in padding

In the ECALL-specific chip, `send_syscall` is sent with multiplicity `send_to_table`, which is stored in the syscall information. Previously, this value was constrained to be zero when the row does not handle an ECALL instruction. That check was mistakenly removed during the implementation of the ECALL chip, but was added back during the audit.

```rust=
builder.send_syscall(
    local.shard,
    local.clk,
    syscall_id,
    local.op_b_value.reduce::<AB>(),
    local.op_c_value.reduce::<AB>(),
    send_to_table,
    InteractionScope::Local,
);
```

We added this check to enforce `send_to_table = 0` when the row is a padding row.
```rust=
builder.when(AB::Expr::one() - local.is_real).assert_zero(send_to_table);
```

## 5. [High] `is_memory` underconstrained

The new CPU chip has a column `is_memory`, which is used to send shard and timestamp information to the opcode-specific chips. The idea is that this information is sent only for memory and syscall instructions. The sent values were computed as follows.

```rust=
let expected_shard_to_send =
    builder.if_else(local.is_memory + local.is_syscall, local.shard, AB::Expr::zero());
let expected_clk_to_send =
    builder.if_else(local.is_memory + local.is_syscall, clk.clone(), AB::Expr::zero());
```

However, `is_memory` itself was not sent to the opcode-specific chips, so it was underconstrained. This allowed an arbitrary `is_memory` value, which could be used to modify the shard and clock information sent to the opcode-specific chips, leading to incorrect behavior.

We fixed this by also sending the `is_memory` value in the interaction, and by checking `is_memory = 1` in the memory chip and `is_memory = 0` in all other chips.
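
As an illustrative sketch (not the exact repository code), the receiving chips can pin the flag down as follows, assuming the received value is exposed as `local.is_memory` and real rows are flagged by `local.is_real`:

```rust
// In the memory-instruction chip: real rows must carry `is_memory = 1`.
builder.when(local.is_real).assert_one(local.is_memory);

// In every other opcode-specific chip: real rows must carry `is_memory = 0`.
builder.when(local.is_real).assert_zero(local.is_memory);
```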

## 6. [High] `next_pc` underconstrained on ECALL

In the opcode-specific chip design, each chip handles a certain opcode and is responsible for constraining key values the CPU uses to keep track of the execution. One of these values is `next_pc`, the next program counter.

In the ECALL chip, `next_pc` was constrained to be `0` when the instruction was determined to be a `HALT`. However, the constraint that the `pc` increases by `4`, i.e. `next_pc == pc + 4`, was missing in the case where the instruction is not a `HALT`.

This was fixed by adding the following constraint.

```rust=
// If the syscall is not halt, then next_pc should be pc + 4.
// `next_pc` is constrained to be `pc + 4` in the case where `is_halt` is false.
builder
    .when(local.is_real)
    .when(AB::Expr::one() - local.is_halt)
    .assert_eq(local.next_pc, local.pc + AB::Expr::from_canonical_u32(4));
```

## 7. [High] Global interactions with different `InteractionKind` could lead to the same digest

The global interaction system works as follows. Each chip that needs to send a global interaction first sends an interaction with `InteractionKind::Global` locally. The `GlobalChip` then receives these local interactions with `InteractionKind::Global`, converts the messages into digests, and accumulates them, making the results global.

However, while these messages are sent locally with `InteractionKind::Global`, there are actually two different underlying `InteractionKind`s: `Memory` and `Syscall`.

The vulnerability was that the actual underlying `InteractionKind` was not sent as part of the local interaction between the chips and the `GlobalChip`. Therefore, a "memory" interaction could be regarded as a "syscall" interaction, and vice versa.

We fixed this by adding the underlying `InteractionKind` to the interaction message, then incorporating this `InteractionKind` into the message when hashing it into the digest.

```rust=
// GlobalChip
builder.receive(
    AirInteraction::new(
        vec![
            local.message[0].into(),
            local.message[1].into(),
            local.message[2].into(),
            local.message[3].into(),
            local.message[4].into(),
            local.message[5].into(),
            local.message[6].into(),
            local.is_send.into(),
            local.is_receive.into(),
            local.kind.into(), // `kind` is added
        ],
        local.is_real.into(),
        InteractionKind::Global,
    ),
    InteractionScope::Local,
);

// GlobalInteractionOperation
let m_trial = [
    // note that `kind` is incorporated with `values[0]`, a 16 bit range checked value
    values[0].clone() + AB::Expr::from_canonical_u32(1 << 16) * kind,
    values[1].clone(),
    values[2].clone(),
    values[3].clone(),
    values[4].clone(),
    values[5].clone(),
    values[6].clone(),
    offset.clone(),
    AB::Expr::zero(),
    AB::Expr::zero(),
    AB::Expr::zero(),
    AB::Expr::zero(),
    AB::Expr::zero(),
    AB::Expr::zero(),
    AB::Expr::zero(),
    AB::Expr::zero(),
];
```

## 8. [High] `vk`'s hash misses initial global cumulative sum

The `vk` now includes `initial_global_cumulative_sum`, which is the preprocessed set of global interactions in digest form. However, this new field was not incorporated when hashing the `vk`, so the hash did not include this value. This allowed a different `initial_global_cumulative_sum` to be used, which could lead to an incorrect memory state.

We fixed this by adding the `initial_global_cumulative_sum` to the hash.

```rust=
pub fn observe_into<Challenger>(&self, builder: &mut Builder<C>, challenger: &mut Challenger)
where
    Challenger: CanObserveVariable<C, Felt<C::F>> + CanObserveVariable<C, SC::DigestVariable>,
{
    // Observe the commitment.
    challenger.observe(builder, self.commitment);
    // Observe the pc_start.
    challenger.observe(builder, self.pc_start);
    // Observe the initial global cumulative sum.
    challenger.observe_slice(builder, self.initial_global_cumulative_sum.0.x.0);
    challenger.observe_slice(builder, self.initial_global_cumulative_sum.0.y.0);
    // Observe the padding.
    let zero: Felt<_> = builder.eval(C::F::zero());
    challenger.observe(builder, zero);
}

/// Hash the verifying key + prep domains into a single digest.
/// poseidon2( commit[0..8] || pc_start || initial_global_cumulative_sum || prep_domains[N].{log_n, .size, .shift, .g})
pub fn hash(&self, builder: &mut Builder<C>) -> SC::DigestVariable
where
    C::F: TwoAdicField,
    SC::DigestVariable: IntoIterator<Item = Felt<C::F>>,
{
    let prep_domains = self.chip_information.iter().map(|(_, domain, _)| domain);
    let num_inputs = DIGEST_SIZE + 1 + 14 + (4 * prep_domains.len());
    let mut inputs = Vec::with_capacity(num_inputs);
    inputs.extend(self.commitment);
    inputs.push(self.pc_start);
    inputs.extend(self.initial_global_cumulative_sum.0.x.0);
    inputs.extend(self.initial_global_cumulative_sum.0.y.0);
    for domain in prep_domains {
        inputs.push(builder.eval(C::F::from_canonical_usize(domain.log_n)));
        let size = 1 << domain.log_n;
        inputs.push(builder.eval(C::F::from_canonical_usize(size)));
        let g = C::F::two_adic_generator(domain.log_n);
        inputs.push(builder.eval(domain.shift));
        inputs.push(builder.eval(g));
    }
    SC::hash(builder, &inputs)
}
```
4 changes: 3 additions & 1 deletion book/docs/getting-started/install.md
@@ -55,7 +55,7 @@ If this works, go to the [next section](./quickstart.md) to compile and prove a

If you experience [rate-limiting](https://docs.github.com/en/rest/using-the-rest-api/getting-started-with-the-rest-api?apiVersion=2022-11-28#rate-limiting) when using the `sp1up` command, you can resolve this by using the `--token` flag and providing your GitHub token. To create a Github token, follow the instructions [here](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic).

<!-- TODO: We should add an example command here -->
```bash
sp1up --token ghp_YOUR_GITHUB_TOKEN_HERE
```

#### Unsupported OS Architectures

6 changes: 6 additions & 0 deletions book/docs/getting-started/quickstart.md
@@ -15,6 +15,12 @@ cargo prove new --evm fibonacci
cd fibonacci
```

:::note
If you use the `--evm` option, you will need to install `foundry` to compile the Solidity contracts. Please follow the instructions [on the official Foundry docs](https://book.getfoundry.sh/getting-started/installation).

Then, you'll have to set up contracts development by running `forge install` in the `contracts` directory, as shown below.
:::
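
For reference, a typical setup might look like the following sketch (assuming the standard Foundry installer; check the Foundry docs for the current instructions):

```bash
# Install Foundry (provides `forge`), then load it into the current shell.
curl -L https://foundry.paradigm.xyz | bash
foundryup

# From the project root, install the contract dependencies.
cd contracts
forge install
```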

### Option 2: Project Template (Solidity Contracts for Onchain Verification)

If you want to use SP1 to generate proofs that will eventually be verified on an EVM chain, you should use the [SP1 project template](https://github.com/succinctlabs/sp1-project-template/tree/main). This Github template is scaffolded with a SP1 program, a script to generate proofs, and also a contracts folder that contains a Solidity contract that can verify SP1 proofs on any EVM chain.
5 changes: 3 additions & 2 deletions book/docs/security/safe-precompile-usage.md
@@ -1,8 +1,9 @@
# Safe Usage of SP1 Precompiles

This section outlines the key assumptions and properties of each precompile. Advanced users interacting directly with the precompiles are expected to ensure these assumptions are met.
This section outlines the key assumptions and properties of each precompile. As explained in [Precompiles](../writing-programs/precompiles.mdx), we recommend interacting with precompiles through [patches](../writing-programs/patched-crates.md). Advanced users interacting directly with the precompiles are expected to ensure these assumptions are met.

If you need to interact with the precompiles directly, we strongly recommend using the API described in [Precompiles](../writing-programs/precompiles.mdx) rather than making an `ecall` directly using unsafe Rust.
## Do not use direct ECALL
If you need to interact with the precompiles directly, you must use the API described in [Precompiles](../writing-programs/precompiles.mdx) instead of making the `ecall` directly using unsafe Rust. Since some of our syscalls have critical functionality and complex security properties around them, **we highly recommend not calling the syscalls directly with `ecall`**. For example, directly calling `HALT` to stop the program execution can lead to security vulnerabilities. As described in our [security model](./security-model.md), it is critical for safe usage that the program compiled into SP1 is correct.

## Alignment of Pointers

Binary file modified book/static/SP1_Turbo_Memory_Argument.pdf
Binary file not shown.
@@ -1,8 +1,9 @@
# Safe Usage of SP1 Precompiles

This section outlines the key assumptions and properties of each precompile. Advanced users interacting directly with the precompiles are expected to ensure these assumptions are met.
This section outlines the key assumptions and properties of each precompile. As explained in [Precompiles](../writing-programs/precompiles.mdx), we recommend interacting with precompiles through [patches](../writing-programs/patched-crates.md). Advanced users interacting directly with the precompiles are expected to ensure these assumptions are met.

If you need to interact with the precompiles directly, we strongly recommend using the API described in [Precompiles](../writing-programs/precompiles.mdx) rather than making an `ecall` directly using unsafe Rust.
## Do not use direct ECALL
If you need to interact with the precompiles directly, you must use the API described in [Precompiles](../writing-programs/precompiles.mdx) instead of making the `ecall` directly using unsafe Rust. Since some of our syscalls have critical functionality and complex security properties around them, **we highly recommend not calling the syscalls directly with `ecall`**. For example, directly calling `HALT` to stop the program execution can lead to security vulnerabilities. As described in our [security model](./security-model.md), it is critical for safe usage that the program compiled into SP1 is correct.

## Alignment of Pointers

23 changes: 22 additions & 1 deletion crates/build/src/command/docker.rs
@@ -28,7 +28,28 @@ pub(crate) fn create_docker_command(
.expect("Failed to canonicalize program directory")
.try_into()
.unwrap();
let workspace_root = &program_metadata.workspace_root;

    let workspace_root: &Utf8PathBuf = &args
        .workspace_directory
        .as_deref()
        .map(|workspace_path| {
            std::path::Path::new(workspace_path)
                .to_path_buf()
                .canonicalize()
                .expect("Failed to canonicalize workspace directory")
                .try_into()
                .unwrap()
        })
        .unwrap_or_else(|| program_metadata.workspace_root.clone());

    // Ensure the workspace directory is a parent of the program directory.
    if !program_metadata.workspace_root.starts_with(workspace_root) {
        eprintln!(
            "Workspace root ({}) must be a parent of the program directory ({}).",
            workspace_root, program_metadata.workspace_root
        );
        exit(1);
    }

    // Check if docker is installed and running.
    let docker_check = Command::new("docker")
9 changes: 9 additions & 0 deletions crates/build/src/lib.rs
@@ -74,6 +74,14 @@ pub struct BuildArgs {
    pub elf_name: Option<String>,
    #[clap(alias = "out-dir", long, action, help = "Copy the compiled ELF to this directory")]
    pub output_directory: Option<String>,

    #[clap(
        alias = "workspace-dir",
        long,
        action,
        help = "The top level directory to be used in the docker invocation."
    )]
    pub workspace_directory: Option<String>,
}

// Implement default args to match clap defaults.
@@ -91,6 +99,7 @@ impl Default for BuildArgs {
            output_directory: None,
            locked: false,
            no_default_features: false,
            workspace_directory: None,
        }
    }
}
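
For context, a hypothetical invocation of the new flag (flag names follow the clap attributes above; the `--docker` flag is assumed from the existing build CLI):

```bash
# Build inside Docker, using a parent directory as the workspace/build context
# so sibling workspace crates are visible in the container.
cargo prove build --docker --workspace-dir ../..
```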