Replace 'sync' with 'cancellable' in wait/poll/yield #548

Merged · 1 commit · Jul 30, 2025

16 changes: 16 additions & 0 deletions design/mvp/Async.md
@@ -58,6 +58,9 @@ additional goals and requirements for native async support:
* Maintain meaningful cross-language call stacks (for the benefit of debugging,
logging and tracing).
* Provide mechanisms for applying and observing backpressure.
* Allow non-reentrant synchronous and event-loop-driven core wasm code (that,
e.g., assumes a single global linear memory stack) to not have to worry about
additional reentrancy.


## High-level Approach
@@ -474,6 +477,16 @@ wants to accept new export calls while waiting or not.
See the [`canon_backpressure_set`] function and [`Task.enter`] method in the
Canonical ABI explainer for the setting and implementation of backpressure.

In addition to *explicit* backpressure set by wasm code, there is also an
*implicit* source of backpressure used to protect non-reentrant core wasm
code. In particular, when an export is lifted synchronously or using an
`async callback`, a component-instance-wide lock is implicitly acquired every
time core wasm is executed. By returning to the event loop after every event
(instead of once at the end of the task), `async callback` exports release
the lock between every event, allowing a higher degree of concurrency than
synchronous exports. `async` (stackful) exports ignore the lock entirely and
thus achieve the highest degree of (cooperative) concurrency.
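
A rough way to picture the three lifting options is the Python-style sketch
below (the helper names are hypothetical and this is not the actual Canonical
ABI code): a synchronous export holds the instance-wide lock for the whole
task, an `async callback` export reacquires it per event, and a stackful
`async` export never takes it.

```python
import asyncio

# Illustrative sketch only (hypothetical names, not the Canonical ABI):
# models how the component-instance-wide lock serializes core wasm execution.

class ComponentInstance:
    def __init__(self):
        self.exclusive = asyncio.Lock()  # the implicit "non-reentrancy" lock

async def run_core_wasm(event):
    await asyncio.sleep(0)  # stand-in for executing core wasm code

async def sync_export(inst, events):
    # Synchronously-lifted export: the lock is held for the entire task, so no
    # other lock-respecting core wasm in this instance runs until it finishes.
    async with inst.exclusive:
        for event in events:
            await run_core_wasm(event)

async def async_callback_export(inst, events):
    # `async callback` export: control returns to the event loop after each
    # event, releasing and reacquiring the lock, so other tasks can interleave.
    for event in events:
        async with inst.exclusive:
            await run_core_wasm(event)

async def async_stackful_export(inst, events):
    # Stackful `async` export: the lock is ignored entirely, giving the
    # highest degree of cooperative concurrency.
    for event in events:
        await run_core_wasm(event)

async def main():
    inst = ComponentInstance()
    await asyncio.gather(
        sync_export(inst, range(3)),
        async_callback_export(inst, range(3)),
        async_stackful_export(inst, range(3)),
    )

asyncio.run(main())
```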

Once a task is allowed to start according to these backpressure rules, its
arguments are lowered into the callee's linear memory and the task is in
the "started" state.
@@ -607,6 +620,9 @@ defined by the Component Model:
* Whenever a task yields or waits on (or polls) a waitable set with an already
pending event, whether the task "blocks" and transfers execution to its async
caller is nondeterministic.
* If multiple tasks are waiting on [backpressure](#backpressure) and
backpressure is then disabled, the order in which these pending tasks (and
any new tasks started while pending tasks remain) start is nondeterministic,
as sketched below.
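
For example (an illustrative host-side sketch with hypothetical names, not
normative code), a host that queued pending tasks behind backpressure is free
to drain them in any order once backpressure is cleared:

```python
import random

class InstanceState:
    def __init__(self):
        self.backpressure = True
        self.pending = []   # tasks blocked by backpressure, waiting to start

    def request_start(self, start_task):
        if self.backpressure:
            self.pending.append(start_task)
        else:
            start_task()

    def disable_backpressure(self):
        self.backpressure = False
        # Any permutation is a valid start order; shuffling just makes the
        # nondeterminism visible in this toy model.
        random.shuffle(self.pending)
        while self.pending:
            self.pending.pop(0)()
```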

Despite the above, the following scenarios do behave deterministically:
* If a component `a` asynchronously calls the export of another component `b`,
10 changes: 7 additions & 3 deletions design/mvp/Binary.md
@@ -295,7 +295,7 @@ canon ::= 0x00 0x00 f:<core:funcidx> opts:<opts> ft:<typeidx> => (canon lift
| 0x05 => (canon task.cancel (core func)) 🔀
| 0x0a 0x7f i:<u32> => (canon context.get i32 i (core func)) 🔀
| 0x0b 0x7f i:<u32> => (canon context.set i32 i (core func)) 🔀
| 0x0c async?:<async>? => (canon yield async? (core func)) 🔀
| 0x0c cancel?:<cancel?> => (canon yield cancel? (core func)) 🔀
| 0x06 async?:<async?> => (canon subtask.cancel async? (core func)) 🔀
| 0x0d => (canon subtask.drop (core func)) 🔀
| 0x0e t:<typeidx> => (canon stream.new t (core func)) 🔀
@@ -316,15 +316,17 @@ canon ::= 0x00 0x00 f:<core:funcidx> opts:<opts> ft:<typeidx> => (canon lift
| 0x1d opts:<opts> => (canon error-context.debug-message opts (core func)) 📝
| 0x1e => (canon error-context.drop (core func)) 📝
| 0x1f => (canon waitable-set.new (core func)) 🔀
| 0x20 async?:<async>? m:<core:memidx> => (canon waitable-set.wait async? (memory m) (core func)) 🔀
| 0x21 async?:<async>? m:<core:memidx> => (canon waitable-set.poll async? (memory m) (core func)) 🔀
| 0x20 cancel?:<cancel?> m:<core:memidx> => (canon waitable-set.wait cancel? (memory m) (core func)) 🔀
| 0x21 cancel?:<cancel?> m:<core:memidx> => (canon waitable-set.poll cancel? (memory m) (core func)) 🔀
| 0x22 => (canon waitable-set.drop (core func)) 🔀
| 0x23 => (canon waitable.join (core func)) 🔀
| 0x40 ft:<typeidx> => (canon thread.spawn_ref ft (core func)) 🧵
| 0x41 ft:<typeidx> tbl:<core:tableidx> => (canon thread.spawn_indirect ft tbl (core func)) 🧵
| 0x42 => (canon thread.available_parallelism (core func)) 🧵
async? ::= 0x00 =>
| 0x01 => async
cancel? ::= 0x00 =>
| 0x01 => cancellable 🚟
opts ::= opt*:vec(<canonopt>) => opt*
canonopt ::= 0x00 => string-encoding=utf8
| 0x01 => string-encoding=utf16
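
As an unofficial illustration of the immediates above, the sketch below
encodes the body of a `canon` definition for `yield` and `waitable-set.wait`,
with and without the new `cancellable` flag. The helper names are
hypothetical; only the opcode and immediate bytes follow the grammar.

```python
# Illustrative encoder sketch (hypothetical helpers), following the grammar:
#   0x0c cancel?          => (canon yield cancel? (core func))
#   0x20 cancel? memidx   => (canon waitable-set.wait cancel? (memory m) (core func))

def leb128_u32(n: int) -> bytes:
    # Standard unsigned LEB128, as used for core:memidx.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0x00))
        if not n:
            return bytes(out)

def encode_yield(cancellable: bool) -> bytes:
    return bytes([0x0C, 0x01 if cancellable else 0x00])

def encode_waitable_set_wait(cancellable: bool, memidx: int) -> bytes:
    return bytes([0x20, 0x01 if cancellable else 0x00]) + leb128_u32(memidx)

assert encode_yield(True) == b"\x0c\x01"                      # (canon yield cancellable ...)
assert encode_waitable_set_wait(False, 0) == b"\x20\x00\x00"  # (canon waitable-set.wait (memory 0) ...)
```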
@@ -505,6 +507,8 @@ named once.
* The `0x00` variant of `importname'` and `exportname'` will be removed. Any
remaining variant(s) will be renumbered or the prefix byte will be removed or
repurposed.
* Most built-ins should have a `<canonopt>*` immediate instead of an ad hoc
subset of `canonopt`s.


[`core:byte`]: https://webassembly.github.io/spec/core/binary/values.html#binary-byte