
🔄 Sync with upstream changes #17

Open

h0lybyte wants to merge 188 commits into main from upstream-main

Conversation

@h0lybyte
Member

Upstream Sync

This PR contains the latest changes from the upstream repository.

Changes included:

  • Synced from upstream/main
  • Auto-generated by upstream sync workflow

Review checklist:

  • Review the changes for any breaking changes
  • Check for conflicts with local modifications
  • Verify tests pass (if applicable)

This PR was automatically created by the upstream sync workflow.

filipecabaco and others added 30 commits September 2, 2025 15:45
Currently the user would need to have presence enabled from the beginning of the channel. This change lets users enable presence later in the flow by sending a track message, which enables presence messages for them.
Cowboy 2.13.0 set the default to active_n=1.
Currently all text frames are handled only as JSON, which already requires UTF-8.
This change reduces the impact of a slow DB setup on other tenants that tried to connect at the same time and landed on the same partition.
Verify that the replication connection is able to reconnect when faced with WAL bloat issues.
A new index was created on (inserted_at DESC, topic) WHERE private IS TRUE AND extension = 'broadcast'.

The hardcoded limit is 25 for now.
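A minimal sketch of what such a partial-index migration could look like as an Ecto migration; the table, module, and index names here are assumptions for illustration, not taken from the PR:

```elixir
defmodule Realtime.Repo.Migrations.AddBroadcastMessagesIndex do
  use Ecto.Migration

  def change do
    # Partial index matching the description above. Only private
    # broadcast rows are indexed, keeping the index small.
    execute(
      """
      CREATE INDEX messages_inserted_at_topic_index
      ON messages (inserted_at DESC, topic)
      WHERE private IS TRUE AND extension = 'broadcast'
      """,
      "DROP INDEX messages_inserted_at_topic_index"
    )
  end
end
```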
Add a PubSub adapter that uses gen_rpc to send messages to other nodes.

It uses :gen_rpc.abcast/3 instead of :erlang.send/2

The adapter works very similarly to the PG2 adapter. It consists of
multiple workers that forward messages to the local node using
PubSub.local_broadcast.

The worker is chosen based on the sending process, just like the PG2
adapter does.

The number of workers is controlled by `:pool_size` or `:broadcast_pool_size`.
This distinction exists because Phoenix.PubSub uses `:pool_size` to
define how many partitions the PubSub registry will use; the broadcast
pool can be controlled separately with `:broadcast_pool_size`.
---------

Co-authored-by: Eduardo Gurgel Pinho <eduardo.gurgel@supabase.io>
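A minimal sketch of that worker-selection idea (hashing the sending process over the pool size, as the PG2 adapter does); the module and naming scheme are illustrative, not the adapter's actual code:

```elixir
defmodule GenRpcPubSub.WorkerChooser do
  # Pick a worker deterministically from the sending process, so
  # messages from the same process always flow through the same
  # worker and per-sender ordering is preserved.
  def worker_name(pubsub_name, pool_size) do
    index = :erlang.phash2(self(), pool_size)
    Module.concat(pubsub_name, "Worker#{index}")
  end
end
```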
* fix: set max process heap size to 500MB instead of 8GB
* feat: set websocket transport max heap size

WEBSOCKET_MAX_HEAP_SIZE can be used to configure it.
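For context, a minimal sketch of how a per-process heap cap is typically applied in Elixir; the env-var handling here is an assumption, and note that `:max_heap_size` is measured in machine words, not bytes:

```elixir
# Cap the current process's heap; the VM kills the process if it
# grows past the limit. Convert bytes to words (8 bytes on 64-bit).
max_bytes =
  "WEBSOCKET_MAX_HEAP_SIZE"
  |> System.get_env("500000000")
  |> String.to_integer()

Process.flag(:max_heap_size, %{
  size: div(max_bytes, 8),
  kill: true,
  error_logger: true
})
```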
Issues:

* A single gen_rpc_dispatcher can be a bottleneck if connecting takes some time.
* Many calls can land on the dispatcher when the node is already gone. If we don't validate the node, it might keep trying to connect until it times out instead of quickly giving up because the node is not actively connected.
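A minimal sketch of the quick give-up idea, assuming a hypothetical wrapper around the dispatch path (not the actual gen_rpc dispatcher code):

```elixir
defmodule GenRpcGuard do
  # Give up quickly when the target node is not actively connected,
  # instead of letting the dispatcher block on a connection attempt.
  def safe_dispatch(node, name, msg) do
    if node in Node.list() do
      :gen_rpc.abcast([node], name, msg)
    else
      {:error, :node_not_connected}
    end
  end
end
```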
Include initial_call, ancestors, registered_name, message_queue_len and total_heap_size

Also bump long_schedule and long_gc
On bad connections, we rate limit the Connect module to prevent abuse and excessive error logging.
Currently, whenever you push a commit to your branch, the old builds keep running while a new build starts. Once a new commit is added, the old test results no longer matter, so this is a waste of CI resources. It also causes confusion with multiple builds running in parallel for the same branch, possibly blocking merges.

With this small change, whenever a new commit is added, the previous build is immediately canceled and only the build for the latest commit runs.
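For reference, a minimal sketch of how this cancellation is commonly expressed in GitHub Actions (assuming that is the CI in use here, as the workflow-cleanup commits below suggest):

```yaml
# Cancel any in-progress run for the same workflow and branch
# whenever a new commit is pushed.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```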
* fix: reduce max_frame_size to 5MB
* fix: fullsweep_after=100 on gen rpc pub sub workers

---------

Co-authored-by: Eduardo Gurgel Pinho <eduardo.gurgel@supabase.io>
semantic-release-bot and others added 30 commits February 4, 2026 00:10
increase rate to avoid timing issues
* feat: expose PRESENCE_POOL_SIZE option
* feat: expose more presence configs
* feat: add per-client rate limiting for presence events

Adds rate limiting at the individual WebSocket connection level to prevent
a single client from exhausting the tenant's presence quota. Each client
is limited to a configurable number of presence calls within a time window
(defaults to 10 calls per 60 seconds).

New CLIENT_PRESENCE_MAX_CALLS and CLIENT_PRESENCE_WINDOW_MS options configure these limits.

This feature prevents individual misbehaving or malicious clients from consuming the entire tenant's presence rate limit quota, improving fairness and abuse prevention (see the sketch after this commit's notes).

* chore: add realtime channel tests

* fix: log new error message
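A minimal sketch of a per-connection sliding-window limiter like the one described; the module name and state shape are assumptions, not the actual Realtime implementation:

```elixir
defmodule ClientPresenceRateLimiter do
  @max_calls 10      # CLIENT_PRESENCE_MAX_CALLS default
  @window_ms 60_000  # CLIENT_PRESENCE_WINDOW_MS default

  # State is the list of timestamps (in ms) of recent presence
  # calls, kept per WebSocket connection.
  def allow?(timestamps, now_ms) do
    # Drop calls that fell out of the window, then check the count.
    recent = Enum.filter(timestamps, &(now_ms - &1 < @window_ms))

    if length(recent) < @max_calls do
      {:ok, [now_ms | recent]}
    else
      {:rate_limited, recent}
    end
  end
end
```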
In certain scenarios we end up with a difference between our migration metrics and the user's actual migrations. This change self-corrects by checking the real count in the user's database and updating our records.
…e#1715)

* Increasing test coverage of overall code
* Reduce flakiness on certain tests
* Simplified and removed some hardcoded variables
* Partition CI for tests
* Properly cache docker images
* Clean up actions and use Blacksmith runners