
Add synchronous blocking channel receive #23

Merged
benoitc merged 9 commits into main from
feature/sync-blocking-channel-receive
Mar 10, 2026

Conversation

benoitc (Owner) commented Mar 10, 2026

Summary

  • Implement blocking receive for channels that suspends Python while waiting for data
  • Releases the dirty scheduler worker during the wait, allowing other Python handlers to run
  • Uses Erlang message passing for sync waiter notification

Changes

  • Add sync_waiter_pid and has_sync_waiter fields to channel struct
  • Add channel_register_sync_waiter NIF to register calling process as waiter
  • Modify channel_send to notify sync waiter via channel_data_ready message
  • Modify channel_close to notify sync waiter via channel_closed message
  • Implement blocking handle_receive using Erlang receive to wait for data
  • Add tests for immediate, delayed, closed channel, and multiple waiter cases
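The mechanism described above can be sketched with an in-memory analogy. This is a minimal Python model, not the PR's C/Erlang implementation: a condition variable stands in for the Erlang `receive` that parks the caller (and, like releasing the dirty scheduler worker, `wait_for` drops the lock while parked), and `notify()` stands in for the `channel_data_ready` / `channel_closed` messages. The `Channel` class and its method names are illustrative.

```python
import threading
from collections import deque

class Channel:
    """Illustrative in-memory channel: send() wakes a blocked receive()."""

    def __init__(self):
        self._items = deque()
        self._closed = False
        self._cond = threading.Condition()

    def send(self, item):
        with self._cond:
            if self._closed:
                raise RuntimeError("channel closed")
            self._items.append(item)
            self._cond.notify()        # analogous to the channel_data_ready message

    def close(self):
        with self._cond:
            self._closed = True
            self._cond.notify_all()    # analogous to the channel_closed message

    def try_receive(self):
        """Non-blocking: return an item or None."""
        with self._cond:
            return self._items.popleft() if self._items else None

    def receive(self, timeout=None):
        """Block until data arrives, the channel closes, or the timeout expires."""
        with self._cond:
            got = self._cond.wait_for(lambda: self._items or self._closed, timeout)
            if not got:
                raise TimeoutError("no data within timeout")
            if self._items:
                return self._items.popleft()
            raise EOFError("channel closed")
```

A delayed send from another thread wakes the blocked receiver, mirroring the "blocking receive with delayed send" test case in this PR.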

benoitc added 4 commits March 10, 2026 08:26
Implement blocking receive for channels that suspends Python while
waiting for data and releases the dirty scheduler worker.

- Add sync_waiter_pid and has_sync_waiter fields to py_channel_t
- Add channel_register_sync_waiter NIF to register calling process
- Modify channel_send to notify sync waiter via Erlang message
- Modify channel_close to notify sync waiter of channel closure
- Implement blocking handle_receive using Erlang receive to wait
- Add tests for immediate, delayed, and closed channel cases
Test verifies blocking channel receive works with Python 3.12+
subinterpreter contexts. Skips gracefully on older Python versions.
- Use py_context:start_link/2 with unique integer ID
- Use py_context:stop/1 instead of gen_server:stop
- Test immediate receive, blocking receive with delayed send,
  try_receive on empty, and closed channel detection
benoitc force-pushed the feature/sync-blocking-channel-receive branch from cc2241c to f15fd78 on March 10, 2026 at 09:55
benoitc added 4 commits March 10, 2026 11:09
- Check if data is available when registering sync waiter to handle race
  between try_receive returning empty and register_sync_waiter being called
- Return 'has_data' atom when data arrived in the window, caller retries
- Notify sync waiter in channel destructor when channel is GC'd
- Do not notify async waiter in destructor to avoid use-after-free when
  event loop is destroyed concurrently
- Update test to consume data before re-registering waiter
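The race window described above can be shown with a small Python sketch, assuming a single lock guards both the buffer and the waiter slot (as the channel mutex does in the NIF). `RacyChannel` and its method names are illustrative, not the PR's API; the key point is that registration re-checks the buffer under the lock and returns `has_data` so the caller loops back to a non-blocking receive.

```python
import threading
from collections import deque

class RacyChannel:
    """Illustrative sketch of the try_receive / register_sync_waiter race fix."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = deque()
        self._waiter = None            # callback invoked when data arrives

    def try_receive(self):
        with self._lock:
            return self._items.popleft() if self._items else None

    def register_sync_waiter(self, notify):
        """Return 'has_data' if data arrived between try_receive and this call."""
        with self._lock:
            if self._items:            # data slipped in during the race window
                return "has_data"      # caller retries try_receive instead of waiting
            self._waiter = notify
            return "ok"

    def send(self, item):
        with self._lock:
            self._items.append(item)
            waiter, self._waiter = self._waiter, None
        if waiter:
            waiter()                   # notify outside the lock
```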
The ChannelBuffer type was defined but never used. Removing dead code.
- Reject duplicate/mixed waiters: both async and sync waiter registration
  now return {error, waiter_exists} if any waiter already exists
- Fix lost wakeups: event_loop_add_pending now returns bool; waiter state
  is only cleared after successful dispatch
- Add null checks for enif_alloc_env in sync waiter notifications
- Add tests for mixed waiter rejection scenarios
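The two fixes above (rejecting duplicate/mixed waiters, and clearing waiter state only after a successful dispatch) can be sketched in Python. `EventLoop` and `WaiterSlot` are illustrative names standing in for the NIF's event-loop queue and the channel's waiter fields; the essential invariant is that a failed `add_pending` leaves the waiter registered so a later notification can retry, rather than silently losing the wakeup.

```python
class EventLoop:
    """Stand-in for the NIF event loop; add_pending can fail during shutdown."""

    def __init__(self, accepting=True):
        self._accepting = accepting
        self.pending = []

    def add_pending(self, job):
        """Return False when the job could not be queued (loop shutting down)."""
        if not self._accepting:
            return False
        self.pending.append(job)
        return True

class WaiterSlot:
    """Single waiter slot shared by async and sync registration paths."""

    def __init__(self):
        self._waiter = None

    def register(self, waiter):
        if self._waiter is not None:
            return ("error", "waiter_exists")   # reject duplicate/mixed waiters
        self._waiter = waiter
        return ("ok",)

    def notify(self, loop):
        # Clear waiter state only after a successful dispatch; on failure the
        # waiter stays registered so the wakeup is not lost.
        if self._waiter is None:
            return False
        if loop.add_pending(self._waiter):
            self._waiter = None
            return True
        return False
```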
The asgi_run and wsgi_run NIF functions are deprecated. Removed tests
that call these functions, keeping only the deprecation attribute tests.
benoitc force-pushed the feature/sync-blocking-channel-receive branch 2 times, most recently from f9ab7e1 to 748d933 on March 10, 2026 at 21:56
Clear tl_pending_args to NULL whenever tl_pending_callback is set to
false. Previously, the thread-local pointer was left dangling after
callback completion. When a dirty scheduler thread later handled a
different subinterpreter's code, Py_XDECREF on the stale pointer
would attempt to free memory from the wrong allocator.
benoitc force-pushed the feature/sync-blocking-channel-receive branch from 748d933 to d760e05 on March 10, 2026 at 22:07
benoitc merged commit 5ce4df0 into main on Mar 10, 2026
11 checks passed