This document is a normative description of the HTTP behavior Prisma Streams must implement.
It is written so an engineer can implement the server without access to the original Go source code.
Primary inputs in this repository:
- `overview.md`
- `architecture.md`
- `sqlite-schema.md`
- `schemas.md`
If any of those documents disagree, this spec is the tie-breaker for this implementation.
- Stream: an append-only ordered log addressed by a URL path segment (the “stream name”).
- Entry: one appended item. Every entry has:
  - `offset` (opaque, monotonic)
  - `append_time` (server-defined, monotonic per stream)
  - optional `key` (routing key / primary key)
  - `data` (opaque bytes)
- Offset: an opaque string checkpoint used for resumable reads.
  - The only special offset is `-1`, meaning “before the first entry”.
- Routing key: a per-entry key used for key-filtered reads.
- `PUT /v1/stream/{name}`: create stream
- `POST /v1/stream/{name}`: append
- `GET /v1/stream/{name}`: read
- `HEAD /v1/stream/{name}`: metadata
- `DELETE /v1/stream/{name}`: delete
- `GET /v1/stream/{name}/_schema`: get schema registry
- `POST /v1/stream/{name}/_schema`: update schema registry
- `GET /v1/stream/{name}/_profile`: get stream profile metadata
- `POST /v1/stream/{name}/_profile`: update stream profile
- `GET /v1/stream/{name}/_search?q=...`: search
- `POST /v1/stream/{name}/_search`: search
- `POST /v1/stream/{name}/_aggregate`: aggregate
- `GET /v1/stream/{name}/_routing_keys`: list routing keys alphabetically
- `GET /v1/stream/{name}/_index_status`: get per-stream index status
- `GET /v1/stream/{name}/_details`: get combined stream details
- `GET /v1/streams`: list streams
- `GET /v1/server/_details`: get server-scoped configured limits and live runtime state
System streams (reserved names):
- `__stream_metrics__` (metrics; see `metrics.md`)
  - uses the `metrics` profile for canonical normalization
  - intentionally installs a lean internal schema registry with no `routingKey` and no `search` config
  - therefore does not build routing, lexicon, exact, or bundled companion families for the internal system stream
- `__stream_stats__` (segment stats; see `internal/STREAM_STATS.md`) (proposal only; not implemented in the current Bun + TypeScript server)
- `__registry__` (stream lifecycle log; recommended to make listing cheap)
- `Stream-Key: <string>`
  - Optional routing key for the appended entry (byte mode) or for each entry (JSON mode when allowed).
  - If the stream has a configured schema routing-key extraction (see `schemas.md`), then JSON appends must NOT include `Stream-Key`.
- `Stream-Timestamp: <rfc3339 | rfc3339nano | unix_nanos>`
  - Optional append-time hint.
  - The server clamps timestamps so append time is monotonic per stream.
- `Stream-Seq: <string>`
  - Optional write coordination.
  - If provided, the server enforces monotonic increase (lexicographic compare).
- `Content-Type: application/json`
  - If set, the body must be a JSON array and the server appends one entry per array element.
  - Otherwise, the body is treated as opaque bytes and appended as a single entry.
- `Stream-TTL: <duration>` (examples: `24h`, `30m`, `15s`)
- `Stream-Expires-At: <rfc3339 | rfc3339nano>`

Rules:
- At most one of `Stream-TTL` and `Stream-Expires-At` may be provided.
- If provided, the stream becomes unavailable for reads/appends after expiry.
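The duration forms shown above (`24h`, `30m`, `15s`) can be parsed with a small helper. This is a sketch under the assumption that a TTL is a single integer followed by an `h`/`m`/`s` suffix; the name `parseTtlMs` is illustrative, not part of the spec:

```typescript
// Parse a Stream-TTL duration like "24h" / "30m" / "15s" into milliseconds.
// Assumption: one integer plus a single h/m/s unit suffix.
function parseTtlMs(ttl: string): number {
  const m = /^(\d+)(h|m|s)$/.exec(ttl);
  if (m === null) throw new Error(`invalid Stream-TTL: ${ttl}`);
  const unitMs = { h: 3_600_000, m: 60_000, s: 1_000 }[m[2] as "h" | "m" | "s"];
  return Number(m[1]) * unitMs;
}
```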
None required.
All successful responses should include:
- `Stream-Next-Offset: <offset>`
  - The checkpoint the client should pass as `offset=` in the next read.
  - For reads that return no new data, it should equal the request’s `offset` (or the canonicalized equivalent).

Reads should also include:
- `Stream-End-Offset: <offset>`
  - A checkpoint representing the current end of stream (at response time). Useful for UIs and diagnostics.
Caching headers:
- For non-live reads with bounded responses, return:
  - `ETag: W/"slice:<start>:<next>:key=<keyOrEmpty>:fmt=<fmt>:filter=<filterOrEmpty>"`
  - `Cache-Control: immutable, max-age=31536000`
- For live reads (`live=true` or `live=long-poll`), return `Cache-Control: no-store`.
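Assembling the weak slice ETag above is mechanical; the sketch below fills the template verbatim. The `"raw"` default for `fmt` and the empty-string collapse for absent `key`/`filter` are assumptions, not spec requirements:

```typescript
// Build the weak slice ETag from the template above.
// Assumption: absent key/filter collapse to empty strings; fmt defaults to "raw".
function sliceETag(start: string, next: string, key = "", fmt = "raw", filter = ""): string {
  return `W/"slice:${start}:${next}:key=${key}:fmt=${fmt}:filter=${filter}"`;
}
```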
If a filtered read hits the scan cap, it must also include:
- `Stream-Filter-Scan-Limit-Reached: true`
- `Stream-Filter-Scan-Limit-Bytes: 104857600`
- `Stream-Filter-Scanned-Bytes: <bytes examined>`
- `offset=<opaque>` (required for normal reads)
  - `-1` means start.
- `since=<timestamp>` (optional)
  - Seek by append time (RFC3339/RFC3339Nano or unix nanos).
  - If both `offset` and `since` are present, `offset` wins.
- `format=json` (optional)
  - If set, server returns a JSON array of messages.
  - If absent, server returns raw concatenated bytes (byte mode).
- `live=true` OR `live=long-poll` (optional)
  - If set and there is no data available after `offset`, the server waits until either:
    - new data becomes available, or
    - the timeout expires.
- `timeout=<duration>` (optional)
  - Only meaningful with `live`.
  - Default: 30s.
- `key=<string>` (optional)
  - Routing-key filtered read.
- `filter=<expr>` (optional)
  - Predicate filter for JSON streams.
  - Only schema `search.fields` may appear in the filter.
  - Supported clause forms in the current implementation:
    - exact match: `field:value`
    - comparisons: `field:>=value`, `field:>value`, `field:<=value`, `field:<value`
    - exists: `has:field`
    - boolean composition with `AND`, `OR`, `NOT`, `-`, and `(...)`
  - Exact-equality clauses may use the internal exact family to prune sealed segments.
  - Typed equality/range clauses may use `.col` companions to prune segment-local docs.
  - Remaining verification happens against the source stream.
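To make the clause forms concrete, here is a sketch that evaluates a single clause against one JSON document. It is illustrative only (no boolean composition, and it assumes colon-free field names and simple string/number values):

```typescript
// Evaluate one filter clause from the list above against a JSON doc.
// Assumptions: no AND/OR/NOT composition, field names contain no colon.
type Doc = Record<string, unknown>;

function matchClause(doc: Doc, clause: string): boolean {
  if (clause.startsWith("has:")) return clause.slice(4) in doc; // exists: has:field
  const [field, raw] = clause.split(/:(.*)/s, 2); // split on the first colon only
  if (raw === undefined) return false;
  const v = doc[field];
  if (raw.startsWith(">=")) return Number(v) >= Number(raw.slice(2));
  if (raw.startsWith("<=")) return Number(v) <= Number(raw.slice(2));
  if (raw.startsWith(">")) return Number(v) > Number(raw.slice(1));
  if (raw.startsWith("<")) return Number(v) < Number(raw.slice(1));
  return String(v) === raw; // exact match: field:value
}
```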
Path form (equivalent to `key=`):

`GET /v1/stream/{name}/pk/{url-escaped-key}?offset=...`
`GET /v1/streams`

- Returns a JSON array of stream descriptors.
- Must be efficient up to ~1,000,000 streams.
- Each descriptor should expose the stream profile. The current implementation returns a single `profile` field in the list response.
`POST /v1/stream/{name}/_aggregate` uses a JSON request body.

Required fields:
- `rollup`
- `from`
- `to`
- `interval`

Optional fields:
- `q`
- `group_by`
- `measures`
- Offsets are opaque strings.
- The only special offset is `-1`, meaning “start of stream”.
- A read with `offset=X` returns entries with offsets strictly greater than X.
- The server returns `Stream-Next-Offset`, which the client uses for the next read.
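The strictly-greater-than read contract can be sketched over an in-memory log. Numeric offsets stand in for the opaque base32 strings, and `readAfter` is an illustrative name, not a server API:

```typescript
// Strictly-greater-than read semantics: a read at offset X returns entries
// with offset > X, and next-offset checkpoints the last returned entry.
type LogEntry = { offset: number; data: string };

function readAfter(entries: LogEntry[], offset: number, limit: number) {
  const batch = entries.filter((e) => e.offset > offset).slice(0, limit);
  // With no new data, the checkpoint stays at the request's offset.
  const nextOffset = batch.length > 0 ? batch[batch.length - 1].offset : offset;
  return { batch, nextOffset };
}
```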
To be cache-friendly and sortable, offsets are encoded as Crockford base32 of a 128-bit tuple:
- `epoch` (u32)
- `hi` (u32)
- `lo` (u32)
- `in_block` (u32)

Canonical representation:
- 26 characters of Crockford base32 (case-insensitive on input; server outputs uppercase).
- Left-pad with zero bits so the string is always 26 chars.
- `-1` is accepted as input shorthand for start-of-stream; response headers are canonical 26-char offsets.

Interpretation used by this implementation:
- `epoch` fences offsets across resets/migrations.
- `hi|lo` together store the 64-bit logical entry offset within the epoch.
- `in_block` is reserved for future sub-entry slicing; this implementation sets it to `0` for all returned offsets.

This keeps the storage engine’s internal offsets simple (a u64 sequence per epoch) while matching the “128-bit opaque offset” protocol shape.
Implementation requirement:
- The server must accept both:
  - `-1`, and
  - 26-char base32 offsets.
- Decimal aliases like `0`, `1`, `2`, ... are rejected in this implementation.
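As an illustration of the tuple layout above, here is a minimal encoder sketch. It assumes big-endian field order `epoch|hi|lo|in_block` and the standard Crockford alphabet; the real server's bit layout may differ in details:

```typescript
// Encode a 128-bit (epoch, hi, lo, in_block) tuple as 26 uppercase
// Crockford base32 chars. 26 * 5 = 130 bits, so the top 2 bits are the
// zero left-padding the spec calls for. Field order is an assumption.
const CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

function encodeOffset(epoch: number, hi: number, lo: number, inBlock: number): string {
  let v =
    (BigInt(epoch) << 96n) | (BigInt(hi) << 64n) | (BigInt(lo) << 32n) | BigInt(inBlock);
  let out = "";
  for (let i = 0; i < 26; i++) {
    out = CROCKFORD[Number(v & 31n)] + out; // peel 5 bits at a time
    v >>= 5n;
  }
  return out;
}
```

Because every offset is left-padded to a fixed 26 characters, plain lexicographic string comparison preserves numeric order, which is what makes the encoding sortable and cache-friendly.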
Current implementation rules:
- all HTTP resolvers use a cooperative server-side timeout target of `5000 ms`
- if that generic resolver timeout is reached first, the server returns `408 Request Timeout` with:

```json
{
  "error": {
    "code": "request_timeout",
    "message": "request timed out"
  }
}
```

- route-specific lower limits still apply inside that outer cap
- because timeout checks are cooperative rather than preemptive, observed wall time may overshoot the target slightly while an in-flight unit of work completes
- long-poll clients should keep requested waits at `<= 5s` and reconnect after a `408`
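The cooperative (non-preemptive) deadline style described above can be sketched as a helper that work loops poll between units. `makeDeadline` and the injected clock are illustrative, not the server's actual API; the injected clock also shows why observed wall time can overshoot: expiry is only noticed at the next check:

```typescript
// Cooperative deadline: callers poll expired() between units of work
// rather than being preempted. The clock is injectable for testing.
function makeDeadline(budgetMs: number, now: () => number = Date.now) {
  const end = now() + budgetMs;
  return { expired: () => now() >= end };
}
```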
`PUT /v1/stream/{name}`

Headers:
- Optional TTL (`Stream-TTL`) or expiry (`Stream-Expires-At`).

Responses:
- `201 Created` if created
- `200 OK` if already exists (idempotent)

Profile rule:
- If no profile has been declared for the stream, the server treats it as a `generic` stream.
`POST /v1/stream/{name}` with any content-type except `application/json`.

- Body is treated as opaque bytes.
- Exactly one entry is appended.
- Routing key is taken from `Stream-Key` if present.
`POST /v1/stream/{name}` with `Content-Type: application/json`.

- Body must be a JSON array.
- Each element in the array is appended as one entry.
- If the stream has `routingKey` configured in its schema registry:
  - the server extracts routing keys per entry using the JSON pointer
  - the request must not include `Stream-Key`
- If `Stream-Timestamp` is provided, it is used as an append-time hint.
- The server clamps timestamps so they never go backwards per stream.
- If omitted, the server assigns append time.
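The per-stream timestamp clamp can be sketched as follows. The exact clamp policy (reuse the previous append time versus bump by one nanosecond) is not specified above, so this sketch picks the simplest monotone rule and should not be read as the server's definitive behavior:

```typescript
// Keep append time monotone per stream: use the hint (or the server clock)
// unless it would move time backwards, in which case clamp to the previous
// entry's append time. The clamp-to-prev policy is an assumption.
function clampAppendTime(
  prevNanos: bigint,
  hintNanos: bigint | null,
  nowNanos: bigint,
): bigint {
  const candidate = hintNanos ?? nowNanos; // no hint: server assigns append time
  return candidate > prevNanos ? candidate : prevNanos;
}
```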
If `Stream-Seq` is provided:
- The server treats it as an optimistic concurrency control value.
- The value is opaque and compared lexicographically.
- The server rejects the append if `Stream-Seq` is less than or equal to the stream's current value.
- Rejection should be `409 Conflict` with a helpful error body.
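The accept/reject rule above reduces to one lexicographic string comparison. A sketch (`acceptSeq` is an illustrative name; the `null` case for a stream with no recorded seq is an assumption):

```typescript
// Stream-Seq check: accept only strictly increasing values under plain
// lexicographic comparison; a rejection maps to 409 Conflict.
function acceptSeq(current: string | null, incoming: string): boolean {
  if (current === null) return true; // assumed: first seq is always accepted
  return incoming > current; // <= current is rejected
}
```

Note the comparison is lexicographic, not numeric: clients that want numeric ordering should zero-pad their sequence values to a fixed width.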
- `200 OK`
  - Must include `Stream-Next-Offset` (the offset of the last appended entry).
- `429 Too Many Requests` may be returned when the append queue or local backlog budget is full
Current implementation timeout behavior:
- append waits use a cooperative server-side timeout target of `3000 ms`
- this applies to `POST /v1/stream/{name}` and to `PUT /v1/stream/{name}` when it includes an initial append or close operation
- if the append wait reaches that budget first, the server returns `408 Request Timeout` with:

```json
{
  "error": {
    "code": "append_timeout",
    "message": "append timed out; append outcome is unknown, check Stream-Next-Offset before retrying"
  }
}
```

- append timeout is ambiguous from the client's perspective because the server may still complete the append after the timeout response is sent
- clients should read or `HEAD` the stream and inspect `Stream-Next-Offset` before retrying a timed-out append
`GET /v1/stream/{name}/_profile`

Response:

```json
{
  "apiVersion": "durable.streams/profile/v1",
  "profile": { "kind": "generic" }
}
```

Rules:
- `profile` is always present
- if no explicit profile was declared when the stream was created, the server returns `{ "kind": "generic" }`
`POST /v1/stream/{name}/_profile`

Request:

```json
{
  "apiVersion": "durable.streams/profile/v1",
  "profile": { "kind": "generic" }
}
```

Rules:
- supported built-ins are `evlog`, `generic`, `metrics`, and `state-protocol`
- `evlog` requires an `application/json` stream content type
- `evlog` normalizes JSON appends into a canonical request-log envelope and derives a routing key from `requestId` or `traceId` when the schema does not own routing-key extraction
- installing `evlog` also installs the canonical evlog schema version `1` and default `search` registry for that stream
- `metrics` requires an `application/json` stream content type
- `metrics` normalizes JSON appends into the canonical metrics interval envelope and derives a routing key from `seriesKey` when the schema does not own routing-key extraction
- installing `metrics` also installs the canonical metrics schema version `1`, default `search` registry, and default rollups for that stream
- `state-protocol` requires an `application/json` stream content type
- `state-protocol.touch.enabled=true` enables the `/touch/*` routes
- set `profile` to `{ "kind": "generic" }` to use the baseline durable stream behavior
`GET /v1/stream/{name}/_routing_keys`

Query parameters:
- `limit`: optional positive integer, default `100`, maximum `500`
- `after`: optional exclusive cursor; when present, only routing keys strictly greater than `after` are returned

Response fields:
- `stream`
- `source`
- `took_ms`
- `coverage`
- `timing`
- `keys`
- `next_after`

Rules:
- this endpoint is read-only
- it is supported only when the installed schema declares `routingKey`
- keys are returned in strict ascending lexicographic order
- keys are distinct
- `next_after` is `null` when the page is exhausted; otherwise the client uses the returned value as the next request’s `after`
- the server uses the routing-key lexicon run family for the indexed uploaded prefix
- once any lexicon coverage exists, uploaded sealed segments beyond the indexed prefix are not scanned in the request path
- the request path may scan at most one sealed segment from the uncovered local tail; the WAL tail is still scanned directly
- before the first `.lex` run exists, the request path may scan at most one uploaded sealed segment plus the local tail and WAL
- when uncovered uploaded history exists, the response is a best-effort alphabetical page over the indexed prefix plus the directly scanned local tail / WAL
- `coverage.complete=false` means uncovered uploaded history may still contain routing keys that sort before or between the returned keys
- `next_after` may still be non-null when `coverage.complete=false`; clients may continue paging, but they must treat cursors as best-effort while uploaded lexicon lag remains
- `coverage.indexed_segments` reports the uploaded prefix covered by lexicon runs
- `coverage.scanned_uploaded_segments`, `coverage.scanned_local_segments`, and `coverage.scanned_wal_rows` report the uncovered data that had to be scanned directly
- `coverage.possible_missing_uploaded_segments` and `coverage.possible_missing_local_segments` report uncovered sealed segments that were not scanned in-request
- `timing.lexicon_run_get_ms`, `timing.lexicon_decode_ms`, and `timing.lexicon_enumerate_ms` break down indexed-prefix serving time
- `timing.lexicon_merge_ms` breaks down the final indexed/fallback merge
- `timing.fallback_scan_ms`, `timing.fallback_segment_get_ms`, and `timing.fallback_wal_scan_ms` break down direct fallback work
- `timing.lexicon_runs_loaded` reports how many active `.lex` runs were loaded for this page
- the indexed side fetches only a page-sized candidate set from active runs; it does not expand the indexed candidate count based on fallback key volume
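The `after`/`next_after` cursor contract above can be sketched against an in-memory sorted key list standing in for the merged lexicon + tail view (`pageKeys` is an illustrative name, and real responses also carry coverage/timing fields this sketch omits):

```typescript
// Exclusive-cursor pagination over distinct, lexicographically sorted keys:
// return keys strictly greater than `after`, and a null next_after when
// the page exhausts the key space.
function pageKeys(sortedKeys: string[], after: string | null, limit: number) {
  const start = after === null ? 0 : sortedKeys.findIndex((k) => k > after);
  const keys = start === -1 ? [] : sortedKeys.slice(start, start + limit);
  const exhausted = start === -1 || start + limit >= sortedKeys.length;
  return { keys, next_after: exhausted ? null : keys[keys.length - 1] };
}
```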
`GET /v1/stream/{name}/_index_status`

Response fields:
- `stream`
- `profile`
- `segments`
- `manifest`
- `routing_key_index`
- `routing_key_lexicon`
- `exact_indexes`
- `bundled_companions`
- `search_families`

Rules:
- this endpoint is read-only
- it reports current per-stream segment and manifest state
- it reports async index/search-family progress for the current stream
- `routing_key_index` covers the routing-key tiered index
- `routing_key_lexicon` covers the alphabetical routing-key lexicon run family
- `exact_indexes` covers the internal exact-match secondary family derived from schema `search.fields`
- `exact_indexes[*].stale_configuration` is true when a configured exact field changed and the exact family has not rebuilt for that config yet
- `bundled_companions` reports current `.cix` coverage for the desired companion plan generation
- `search_families` covers bundled companion sections such as `col`, `fts`, `agg`, and `mblk`
- `manifest.last_uploaded_size_bytes` is the uploaded manifest object size as a string when known
- `routing_key_index`, `routing_key_lexicon`, each `exact_indexes[*]`, `bundled_companions`, and each `search_families[*]` report `bytes_at_rest`
- `routing_key_index`, `routing_key_lexicon`, each `exact_indexes[*]`, and each `search_families[*]` report `lag_segments` and `lag_ms`
- `search_families[*].contiguous_covered_segment_count` is the contiguous uploaded prefix covered by that bundled section
`GET /v1/stream/{name}/_details`

Response fields:
- `stream`
- `profile`
- `schema`
- `index_status`
- `storage`
- `object_store_requests`

Rules:
- this endpoint is read-only
- `stream` is the full stream summary object, not a reduced descriptor
- `profile` matches `GET /_profile`
- `schema` matches `GET /_schema`
- `index_status` matches `GET /_index_status`
- `stream` includes the head/lifecycle fields needed by an active stream page, including `created_at`, `expires_at`, `epoch`, `next_offset`, `sealed_through`, and `uploaded_through`
- `stream.total_size_bytes` is the logical payload-byte size of the stream on this node, returned as a string
- `storage.object_storage` reports current uploaded bytes and object counts for: segments, indexes, and manifest/schema metadata
- `storage.local_storage` reports current local retained bytes for: WAL, pending sealed segments, caches, and the shared SQLite footprint
- `segment_cache_bytes`, `routing_index_cache_bytes`, `exact_index_cache_bytes`, `lexicon_index_cache_bytes`, and `companion_cache_bytes` are local on-disk cache occupancy, not process heap
- segment and index caches are seeded on first read of a remote object; they may increase even when the request was initiated by a read-only UI flow
- `storage.companion_families` splits bundled companion bytes by section family (`col`, `fts`, `agg`, `mblk`)
- `object_store_requests` reports node-local per-stream object-store request counters, split into puts and reads, plus a per-artifact breakdown
- this is the supported combined descriptor endpoint for stream-management UIs
Conditional and long-poll behavior:
- responses include `ETag`
- `If-None-Match` may be used for a normal conditional `GET`
- if `If-None-Match` matches the current descriptor, return `304 Not Modified`
- `live=true` or `live=long-poll` enables long-poll mode
- in long-poll mode, if `If-None-Match` matches the current descriptor, wait until:
  - the stream head changes because new events are appended
  - descriptor-visible metadata changes, including schema/profile changes, segment/upload progress, or async index progress
  - the timeout expires
- `timeout=<duration>` or `timeout_ms=<ms>` controls the long-poll deadline; default `3000ms`
- on long-poll timeout with no visible change, return `304 Not Modified`
- the generic resolver timeout still caps the overall request at `5000 ms`, so callers should keep requested long-poll waits at `<= 5s` and reconnect on `408 Request Timeout`

The same conditional long-poll contract also applies to `GET /v1/stream/{name}/_index_status`.
`GET /v1/server/_details`

Response fields:
- `auto_tune`
- `configured_limits`
- `runtime`

Rules:
- this endpoint is read-only
- it is server-scoped, not stream-scoped
- `auto_tune` reports whether `--auto-tune` was active for this process and, when present:
  - `requested_memory_mb`
  - `preset_mb`
  - `effective_memory_limit_mb`
- `configured_limits` reports the currently configured budgets and caps for:
  - caches
  - concurrency
  - ingest queue and backlog limits
  - segmenting and upload settings
  - request / object-store timeouts
  - memory-pressure threshold
- `runtime` reports current live state for:
  - memory pressure and RSS
  - current process memory usage: `rss_bytes`, `heap_total_bytes`, `heap_used_bytes`, `external_bytes`, `array_buffers_bytes`
  - process-level attribution: `process_breakdown`
  - SQLite runtime allocator state: `sqlite`
  - forced-GC and heap-snapshot state: `gc`
  - tracked runtime memory subsystems, grouped as: `heap_estimates`, `mapped_files`, `disk_caches`, `configured_budgets`, `pipeline_buffers`, `sqlite_runtime`, `counts`
  - subtotal rollups for the tracked groups: `heap_estimate_bytes`, `mapped_file_bytes`, `disk_cache_bytes`, `configured_budget_bytes`, `pipeline_buffer_bytes`, `sqlite_runtime_bytes`
  - high-water marks with timestamps: `high_water`
  - ingest queue fill
  - local backlog pressure
  - pending uploads
  - the effective concurrency gate state for ingest, read, search, and async index
  - bounded top-N stream contributors for local storage, retained WAL, touch journals, and notifier waiters: `top_streams`
- `runtime.memory.subsystems.heap_estimates` is the operator-facing view of retained in-process bytes that the server can currently attribute directly, such as queued ingest payload bytes and in-memory index-run caches.
- `runtime.memory.subsystems.mapped_files` reports file-backed mmap bytes such as cached segment files, routing-key lexicon runs, and bundled companion files. These contribute to RSS differently from JS heap.
- `runtime.memory.subsystems.disk_caches` and `runtime.memory.subsystems.configured_budgets` are included so diagnostics UIs can contrast retained heap-like bytes with on-disk cache occupancy and configured cache budgets.
- `runtime.memory.subsystems.pipeline_buffers` reports current in-flight segmenter/uploader bytes rather than retained caches.
- `runtime.memory.subsystems.sqlite_runtime` reports SQLite process-global allocator bytes from `sqlite3_status64()` when available.
- `runtime.memory.process_breakdown.unattributed_rss_bytes` is the remaining RSS after subtracting JS-managed bytes, tracked mapped-file bytes, and tracked SQLite runtime bytes. It is a conservative approximation, not an exact ownership map.
- this endpoint is intended for operators and diagnostics UIs that need one cheap summary of the node's configured and effective runtime limits
`GET /v1/server/_mem`

Response fields:
- `ts`
- `process`
- `process_breakdown`
- `sqlite`
- `gc`
- `high_water`
- `counters`
- `runtime_counts`
- `runtime_bytes`
- `runtime_totals`
- `top_streams`

Rules:
- this endpoint is read-only
- it is server-scoped, not stream-scoped
- it is the compact memory-triage view, whereas `/_details` is the broader node descriptor
- `runtime_bytes` mirrors the byte-bearing memory subsystem groups without the `counts` section
- `runtime_totals` mirrors the byte rollups for those groups
- `top_streams` is bounded current state only; it is not part of the metrics stream because emitting top-N stream names as metrics would create unbounded series cardinality
`GET /v1/stream/{name}?offset=<off>`

- Returns a bounded batch.
- Must include `Stream-Next-Offset`.
- If `filter=` is present, `Stream-Next-Offset` still advances past scanned non-matching records.
`GET /v1/stream/{name}?offset=<off>&live=true&timeout=5s`

- If data exists after `off`, return immediately.
- Otherwise, wait for new data or timeout.
- On timeout, return an empty batch with `Stream-Next-Offset` unchanged.
- `filter=` is supported for long-poll reads.
- `live=sse` is not supported together with `filter=`.
- the generic resolver timeout still caps the overall request at `5000 ms`, so callers should keep requested long-poll waits at `<= 5s` and reconnect on `408 Request Timeout`
- Default (raw): response body is concatenated bytes of returned entries.
- `format=json`: response body is a JSON array of entry payloads (each element is the raw JSON value that was appended).
- `key=<k>` or `/pk/<k>` selects only entries whose routing key equals `<k>`.
- If the routing index has a candidate segment set, the server may plan the sealed scan up front and read only candidate indexed segments plus the uncovered uploaded tail.
- `since` + `key` cursor seeking may use the same planned sealed scan.
Correctness requirement:
- Key filtering is exact: false positives from bloom/index must still validate actual key matches.
- `filter=` is only supported on `application/json` streams.
- The server may use schema-owned exact and `.col` search families to prune sealed segments and segment-local docs.
- The server must still scan the local WAL tail so unsealed data remains visible to filtered reads.
- The current implementation stops after examining 100 MB of payload bytes for one filtered response and returns the filter scan headers above.
Current endpoints:
- `POST /v1/stream/{name}/_search`
- `GET /v1/stream/{name}/_search?q=...`

Current request fields:
- `q`
- `size`
- `search_after`
- `sort`
- `timeout_ms`: optional lower per-request budget
  - the server-side effective timeout is always clamped to `<= 3000 ms`
  - the deadline is enforced cooperatively between work units, so wall time may overshoot slightly before the partial response is returned
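The `timeout_ms` clamp above reduces to a one-line rule; the sketch below states it explicitly (the fallback-to-cap behavior when `timeout_ms` is absent is an assumption):

```typescript
// Clamp the per-request search budget to the 3000 ms server cap.
// Assumption: a missing timeout_ms falls back to the cap itself.
const SEARCH_TIMEOUT_CAP_MS = 3000;

function effectiveTimeoutMs(requested?: number): number {
  return Math.min(requested ?? SEARCH_TIMEOUT_CAP_MS, SEARCH_TIMEOUT_CAP_MS);
}
```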
Current response fields:
- `stream`
- `snapshot_end_offset`
- `took_ms`
- `timed_out`
- `timeout_ms`
- `coverage`
- `total`
- `hits`
- `next_search_after`

Current response status behavior:
- `200` when search completes within the effective timeout budget
- `408` when search reaches the effective timeout budget
  - the response body is still a valid search result
  - partial hits and coverage counters are included
  - `total.relation` is `gte`
  - observed wall time may be slightly above `timeout_ms`
- if the outer generic `5000 ms` resolver timeout fires first while an in-flight search work unit is still running, `/_search` may instead return the generic request-timeout error body described above
Current search response headers:
- `search-timed-out`
- `search-timeout-ms`
- `search-took-ms`
- `search-total-relation`
- `search-coverage-complete`
- `search-indexed-segments`
- `search-indexed-segment-time-ms`
- `search-fts-section-get-ms`
- `search-fts-decode-ms`
- `search-fts-clause-estimate-ms`
- `search-scanned-segments`
- `search-scanned-segment-time-ms`
- `search-scanned-tail-docs`
- `search-scanned-tail-time-ms`
- `search-exact-candidate-time-ms`
- `search-index-families-used`

Current search coverage fields:
- `mode`
- `complete`
- `stream_head_offset`
- `visible_through_offset`
- `visible_through_primary_timestamp_max`
- `oldest_omitted_append_at`
- `possible_missing_events_upper_bound`
- `possible_missing_uploaded_segments`
- `possible_missing_sealed_rows`
- `possible_missing_wal_rows`
- `indexed_segments`
- `indexed_segment_time_ms`
- `fts_section_get_ms`
- `fts_decode_ms`
- `fts_clause_estimate_ms`
- `scanned_segments`
- `scanned_segment_time_ms`
- `scanned_tail_docs`
- `scanned_tail_time_ms`
- `exact_candidate_time_ms`
- `index_families_used`
Current query support:
- fielded exact keyword queries
- fielded keyword prefix queries
- typed equality and range queries
- `has:field`
- bare terms over `search.defaultFields`
- fielded text queries
- quoted phrase queries on text fields with `positions=true`
- alias resolution from `search.aliases`

Current non-support:
- `contains:`
- snippets
- multi-stream search
Current request-path behavior under active ingest:
- `/_search` always reports against the current stream head via `snapshot_end_offset`
- while sealed segments are still unpublished or bundled companions are still catching up, `/_search` may intentionally omit the newest suffix instead of scanning it on the request path
- in that case `coverage.complete=false` and the `possible_missing_*` fields report an upper bound on omitted newest events
- once publish and bundled-companion work are caught up, `/_search` still omits a fresh WAL tail during active ingest
- `/_search` may search the current WAL tail locally only after the tail is quiet for the configured overlay period and still fits within the overlay budget
- `visible_through_primary_timestamp_max` and `oldest_omitted_append_at` let clients explain the freshness gap in time terms
- if the newest suffix is omitted, `total.relation` is `gte`
- if the returned page may not include every visible match, `total.relation` is `gte`
- `/_search` does not support request-time exact total-hit counting
- when exact clauses provide a candidate segment set, `/_search` may plan the sealed segment scan up front instead of iterating the full indexed sealed prefix one segment at a time
- if the request hits the effective timeout budget, `/_search` returns `408` with a valid partial search result body instead of keeping the request open
- timeout checks are cooperative rather than preemptive, so clients should treat `timeout_ms` as a bounded target rather than a strict wall-clock guarantee
Current endpoint:
- `POST /v1/stream/{name}/_aggregate`

Current request fields:
- `rollup`
- `from`
- `to`
- `interval`
- `q`
- `group_by`
- `measures`

Current response fields:
- `stream`
- `rollup`
- `from`
- `to`
- `interval`
- `coverage`
- `buckets`

Current aggregate coverage fields:
- `mode`
- `complete`
- `stream_head_offset`
- `visible_through_offset`
- `visible_through_primary_timestamp_max`
- `oldest_omitted_append_at`
- `possible_missing_events_upper_bound`
- `possible_missing_uploaded_segments`
- `possible_missing_sealed_rows`
- `possible_missing_wal_rows`
- `used_rollups`
- `indexed_segments`
- `scanned_segments`
- `scanned_tail_docs`
- `index_families_used`
Current behavior:
- rollups are schema-owned under `search.rollups`
- aligned middle windows may use `.agg` companions
- partial edge windows must still scan source segments
- while sealed segments are still unpublished or bundled companions are still catching up, `/_aggregate` may intentionally omit the newest suffix instead of scanning it on the request path
- in that case `coverage.complete=false` and the `possible_missing_*` fields report an upper bound on omitted newest events
- once publish and bundled-companion work are caught up, `/_aggregate` still omits a fresh WAL tail during active ingest
- `/_aggregate` may evaluate the current WAL tail locally only after the tail is quiet for the configured overlay period and still fits within the overlay budget
`HEAD /v1/stream/{name}`

Should return:
- `200 OK` if exists
- `404` if missing

Headers:
- `Stream-End-Offset`
- `Content-Type` for the stream if known
- `Stream-Expires-At` if the stream has TTL
`DELETE /v1/stream/{name}`

- Deletes/tombstones the stream.
- Removes the stream's local acceleration state in the same local delete transaction:
  - routing-key index state and runs
  - exact secondary index state and runs
  - routing-key lexicon state and runs
  - bundled search companion plans and per-segment companion catalog rows
- Does not synchronously delete already-published segment, manifest, schema, or index objects from remote object storage.
- Must be idempotent.
Recommended status codes:
- `400 Bad Request`: invalid parameters, invalid JSON, invalid schema/lens
- `404 Not Found`: unknown stream
- `409 Conflict`: `Stream-Seq` mismatch
- `410 Gone`: expired stream (or `404` if you prefer hiding existence; choose one and keep it consistent)
- `413 Payload Too Large`: append body too large
- `429 Too Many Requests`: transient queue or backlog backpressure
- `503 Service Unavailable`: transient server unavailability, such as shutdown
- `500 Internal Server Error`: unexpected errors

Errors should be JSON:

```json
{"error": {"code": "...", "message": "..."}}
```

Transient `429` and `503` responses should include `Retry-After` so clients can apply server-guided backoff.
See `schemas.md` for the full model.

Minimum behavior required:
- `GET /_schema` returns the schema registry JSON (or `404` if none and stream missing).
- `POST /_schema` installs the first schema only on empty streams.
- Later updates require a lens `v -> v+1` and must record a boundary at the current end offset.
- Appends validate against the current schema.
- Reads promote older events through the lens chain to the current schema.
- `POST /_schema` accepts only the supported update fields: `schema`, `lens`, `routingKey`, and `search` (plus optional `apiVersion`).
- `search` is the only supported public search/indexing model.
- search-only updates require an already-installed schema version.
- `POST /_schema` rejects registry-shaped compatibility writes, alias field names, legacy `indexes[]`, and profile-owned live/touch configuration.
POST /_schemaaccepts only the supported update fields:schema,lens,routingKey, andsearch(plus optionalapiVersion).searchis the only supported public search/indexing model.search-only updates require an already-installed schema version.POST /_schemarejects registry-shaped compatibility writes, alias field names, legacyindexes[], and profile-owned live/touch configuration.