
perf(cosmos): share partition key range cache across clients per endpoint #46308

Closed

tvaron3 wants to merge 1 commit into Azure:main from tvaron3:fix/shared-pk-range-cache

Conversation

Member

@tvaron3 tvaron3 commented Apr 14, 2026

Summary

Clients targeting the same Cosmos DB endpoint now share a single CollectionRoutingMap cache instead of each maintaining an independent copy. This eliminates N-1 redundant copies of partition key range data when N clients connect to the same account.

Problem

When PPCB (per-partition circuit breaker) is enabled, each CosmosClient eagerly loads the full partition key range routing map via create_pk_range_wrapper(). Each client creates its own SmartRoutingMapProvider with its own _collection_routing_map_by_item dict, resulting in N identical copies for N clients targeting the same endpoint.

Changes

  • _routing/aio/routing_map_provider.py + _routing/routing_map_provider.py: Module-level _shared_routing_map_cache dict keyed by endpoint URL. PartitionKeyRangeCache.__init__ points _collection_routing_map_by_item to the shared entry. Added a clear_cache() method. (A sketch of this pattern follows the list.)
  • aio/_cosmos_client_connection_async.py + _cosmos_client_connection.py: refresh_routing_map_provider() calls clear_cache() instead of creating a new SmartRoutingMapProvider.
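
A minimal sketch of the pattern these changes describe, assuming a module-level dict guarded by a threading.Lock. Only _shared_routing_map_cache, _collection_routing_map_by_item, and clear_cache() are names from the diff; the _get_shared_entry helper and the constructor signature are illustrative assumptions, not the SDK's actual code:

```python
import threading

# One routing-map dict per Cosmos endpoint URL, shared by every client in the
# process that targets that endpoint.
_shared_routing_map_cache: dict[str, dict] = {}
_shared_cache_lock = threading.Lock()


def _get_shared_entry(endpoint_url: str) -> dict:
    """Return (creating on first use) the shared routing-map dict for an endpoint."""
    with _shared_cache_lock:
        return _shared_routing_map_cache.setdefault(endpoint_url, {})


class PartitionKeyRangeCache:
    def __init__(self, client, endpoint_url: str):
        self._client = client
        # Reference the shared entry instead of allocating a private dict, so
        # N clients hold N references to a single copy of the routing map.
        self._collection_routing_map_by_item = _get_shared_entry(endpoint_url)

    def clear_cache(self) -> None:
        # Clear the shared entry in place so every client targeting this
        # endpoint sees the refresh, rather than building a new provider.
        self._collection_routing_map_by_item.clear()
```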

Memory Profiling Results

Test setup: tracemalloc, ~100 partitions, 2 regions, PPCB=True, 1 read + 1 upsert per client.
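
The benchmark script itself is not part of this PR; a rough reconstruction of how the per-client-count numbers could be gathered with tracemalloc might look like the following. Endpoint, key, database, container, and item names are placeholders, and PPCB enablement is omitted:

```python
import tracemalloc

from azure.cosmos import CosmosClient


def measure_peak_mb(n_clients: int, endpoint: str, key: str) -> float:
    """Peak traced allocation (MB) for n clients doing 1 upsert + 1 read each."""
    tracemalloc.start()
    clients = [CosmosClient(endpoint, key) for _ in range(n_clients)]
    for client in clients:
        container = client.get_database_client("db").get_container_client("items")
        container.upsert_item({"id": "probe", "pk": "probe"})     # 1 upsert
        container.read_item(item="probe", partition_key="probe")  # 1 read
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak / (1024 * 1024)
```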

Current Memory (MB)

| Clients | Original PPCB=True | Shared PPCB=True | Original PPCB=False | Shared PPCB=False |
|---------|--------------------|------------------|---------------------|-------------------|
| 1       | 14.3               | 14.3             | 14.0                | 14.0              |
| 25      | 23.0               | 18.1             | 17.9                | 17.8              |
| 50      | 31.9               | 21.9             | 21.7                | 21.7              |
| 100     | 44.9               | 27.8             | 29.4                | 29.4              |
| 150     | 63.8               | 37.7             | 36.4                | 37.7              |

PPCB Overhead Reduction

| Clients | Original | Shared Cache | Reduction |
|---------|----------|--------------|-----------|
| 25      | 5.1 MB   | 0.3 MB       | -93%      |
| 50      | 10.3 MB  | 0.3 MB       | -97%      |
| 100     | 15.4 MB  | -1.6 MB      | -110%     |
| 150     | 27.4 MB  | -0.0 MB      | -100%     |

Scaling Estimate

At customer scale (200K partitions x 152 clients), one routing-map copy is roughly 14 MB, so 152 independent copies total ~2.1 GB; with a single shared copy this drops to ~14 MB.

Tests

5 unit tests in test_shared_pk_range_cache.py (an illustrative version of these checks is sketched after the list):

  • Same endpoint shares cache (identity check)
  • Different endpoints are isolated
  • First client populates, second sees it
  • clear_cache() resets the shared entry
  • clear_cache() does not affect other endpoints
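
An illustrative version of these checks, written against the helper sketched under Changes rather than the SDK's actual internals; the real assertions in test_shared_pk_range_cache.py may differ:

```python
import threading

# Repeated from the earlier sketch so this test is self-contained; these
# names are assumptions, not the SDK's actual symbols.
_shared_routing_map_cache: dict[str, dict] = {}
_shared_cache_lock = threading.Lock()


def _get_shared_entry(endpoint_url: str) -> dict:
    with _shared_cache_lock:
        return _shared_routing_map_cache.setdefault(endpoint_url, {})


def test_same_endpoint_shares_cache():
    endpoint_a = "https://account-a.documents.azure.com:443/"
    endpoint_b = "https://account-b.documents.azure.com:443/"

    # Same endpoint resolves to the same dict object (identity check).
    assert _get_shared_entry(endpoint_a) is _get_shared_entry(endpoint_a)

    # Different endpoints are isolated.
    assert _get_shared_entry(endpoint_a) is not _get_shared_entry(endpoint_b)

    # The first "client" populates; the second sees the entry.
    _get_shared_entry(endpoint_a)["collection-rid"] = "routing-map"
    assert _get_shared_entry(endpoint_a)["collection-rid"] == "routing-map"

    # Clearing resets the shared entry without touching other endpoints.
    _get_shared_entry(endpoint_b)["other-rid"] = "other-map"
    _get_shared_entry(endpoint_a).clear()
    assert _get_shared_entry(endpoint_a) == {}
    assert _get_shared_entry(endpoint_b)["other-rid"] == "other-map"
```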

perf(cosmos): share partition key range cache across clients per endpoint

Clients targeting the same Cosmos DB endpoint now share a single
CollectionRoutingMap cache instead of each maintaining an independent
copy. This eliminates N-1 redundant copies of the partition key range
data when N clients connect to the same account.

The shared cache is a module-level dict keyed by endpoint URL, protected
by threading.Lock. refresh_routing_map_provider now calls clear_cache()
on the shared entry instead of creating a new SmartRoutingMapProvider.

PPCB overhead by client count (tracemalloc, ~100 partitions):

| Clients | Original PPCB cost | Shared Cache | Reduction |
|---------|-------------------|-------------|-----------|
|      25 |           5.1 MB  |      0.3 MB |      -93% |
|      50 |          10.3 MB  |      0.3 MB |      -97% |
|     100 |          15.4 MB  |     -1.6 MB |     -110% |
|     150 |          27.4 MB  |     -0.0 MB |     -100% |

At customer scale (200K partitions x 152 clients): ~2.1 GB -> ~14 MB.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Member Author

tvaron3 commented Apr 14, 2026

Moving to PR #46297 instead.

@tvaron3 tvaron3 closed this Apr 14, 2026