Multi-tenancy
tsink’s multi-tenancy model gives every tenant a fully isolated data namespace with independent quotas, admission budgets, authentication tokens, and lifecycle state, all sharing a single storage engine with no cross-tenant data leakage.
How tenants are identified
Every HTTP request carries the tenant ID in the x-tsink-tenant header; X-Scope-OrgID is accepted as a compatibility alias. If both headers are present they must match.
If the header is absent the request is attributed to the built-in "default" tenant. Tenant IDs are validated on each request: they must be non-empty, not exceed the maximum label value length, and contain no control characters.
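The validation rules above can be sketched as follows. This is an illustrative sketch, not tsink's implementation; the `MAX_LABEL_VALUE_LEN` constant is an assumed placeholder for the server's actual maximum label value length.

```python
MAX_LABEL_VALUE_LEN = 1024  # assumption for illustration; tsink uses its max label value length

def validate_tenant_id(tenant_id: str) -> bool:
    """Tenant IDs must be non-empty, within the length cap, and free of control characters."""
    if not tenant_id:
        return False
    if len(tenant_id) > MAX_LABEL_VALUE_LEN:
        return False
    # ord(c) < 32 covers ASCII control characters; 127 is DEL.
    if any(ord(c) < 32 or ord(c) == 127 for c in tenant_id):
        return False
    return True

print(validate_tenant_id("team-a"))   # True
print(validate_tenant_id(""))         # False
print(validate_tenant_id("bad\x01"))  # False
```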
Protocol-specific identification
Sources that do not use HTTP headers resolve a tenant statically:
| Source | How the tenant is set |
|---|---|
| Prometheus Remote Write / Remote Read | X-Scope-OrgID or x-tsink-tenant header (must match if both are set) |
| StatsD (UDP) | --statsd-tenant <id> server flag (default: "default") |
| Graphite (TCP) | --graphite-tenant <id> server flag (default: "default") |
| Edge-sync upstream | --edge-sync-static-tenant <id> rewrites all tenant labels before forwarding |
Storage isolation
Isolation is enforced at the storage layer through the TenantScopedStorage wrapper, which sits between the HTTP handlers and the core storage engine.
On write — the reserved label __tsink_tenant__ is automatically appended to every written row. Clients cannot set this label directly; any inbound write or query that contains __tsink_tenant__ is rejected immediately.
On read — an equality matcher for __tsink_tenant__ = <tenant-id> is automatically injected into every query, series listing, and label enumeration. A tenant can only ever see data it owns.
Default tenant compatibility — the "default" tenant also performs an unlabeled fallback query so that pre-tenancy series (written without the label) remain accessible.
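The write-append and read-inject behavior described above can be sketched like this. Function names and the tuple representation of matchers are illustrative, not tsink's actual API; only the reserved label name and the reject/append/inject rules come from the text.

```python
TENANT_LABEL = "__tsink_tenant__"

def scope_write(labels: dict, tenant_id: str) -> dict:
    """Reject client-set reserved labels, then append the tenant label."""
    if TENANT_LABEL in labels:
        raise ValueError("reserved label __tsink_tenant__ is not allowed")
    return {**labels, TENANT_LABEL: tenant_id}

def scope_read(matchers: list, tenant_id: str) -> list:
    """Reject reserved-label matchers, then inject the tenant equality matcher."""
    if any(name == TENANT_LABEL for name, _, _ in matchers):
        raise ValueError("reserved label __tsink_tenant__ is not allowed")
    return matchers + [(TENANT_LABEL, "=", tenant_id)]

row = scope_write({"__name__": "http_requests_total"}, "team-a")
query = scope_read([("__name__", "=", "http_requests_total")], "team-a")
```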
Tenant configuration file
Pass a JSON policy file at startup. The file has a defaults block whose values are inherited by every tenant, and a tenants map of per-tenant overrides. Any tenant not listed in the file automatically gets the defaults policy on its first request.
Full example
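A sketch of what a policy file might look like. The field names are taken from the tables below; the exact JSON nesting (per-surface objects, the shape of token entries) is an assumption for illustration.

```json
{
  "defaults": {
    "maxWriteRowsPerRequest": 100000,
    "maxQueryLengthBytes": 8192,
    "maxInflightReads": 32,
    "maxInflightWrites": 32
  },
  "tenants": {
    "team-a": {
      "maxRangePointsPerQuery": 500000,
      "ingest": { "maxInflightRequests": 16, "maxInflightUnits": 200000 },
      "query": { "maxInflightRequests": 8 },
      "cluster": {
        "writeConsistency": "quorum",
        "readConsistency": "quorum",
        "readPartialResponse": "deny"
      },
      "auth": {
        "tokens": [
          { "token": "team-a-write-secret", "scope": "write" },
          { "token": "team-a-read-secret", "scope": "read" }
        ]
      }
    }
  }
}
```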
Quota fields
All quota fields are optional. Unset fields fall back to the defaults block, then to the server’s built-in defaults (unlimited if not specified).
| Field | Description |
|---|---|
| maxWriteRowsPerRequest | Maximum rows accepted in a single write request |
| maxReadQueriesPerRequest | Maximum queries in a single remote-read batch |
| maxMetadataMatchersPerRequest | Maximum label matchers in a single metadata request |
| maxQueryLengthBytes | Maximum byte length of a PromQL query string |
| maxRangePointsPerQuery | Maximum time-series data points returned by a range query |
A request that exceeds any of these quotas is rejected with HTTP 400 before any storage work is done.
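A minimal sketch of that pre-storage check, using the row-count quota as the example. The function name and the (status, reason) return shape are illustrative, not tsink's internals.

```python
def admit_write(row_count: int, max_write_rows_per_request):
    """Return (200, None) if admitted, else (400, reason). None means unlimited."""
    if max_write_rows_per_request is not None and row_count > max_write_rows_per_request:
        return 400, "maxWriteRowsPerRequest exceeded"
    return 200, None

print(admit_write(50_000, 100_000))   # (200, None)
print(admit_write(150_000, 100_000))  # (400, 'maxWriteRowsPerRequest exceeded')
print(admit_write(150_000, None))     # (200, None) -- unset quota is unlimited
```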
Admission budget fields
Admission limits cap concurrent in-flight work. All limits are enforced with non-blocking semaphores — a request that cannot acquire a permit is rejected immediately with HTTP 429 Too Many Requests and a Retry-After: 1 header.
| Field | Level | Description |
|---|---|---|
| maxInflightReads | Tenant | Total concurrent read requests across all read surfaces |
| maxInflightWrites | Tenant | Total concurrent write requests across all write surfaces |
| ingest.maxInflightRequests | Surface | Concurrent ingest HTTP requests |
| ingest.maxInflightUnits | Surface | Concurrent ingest rows (units = row count) |
| query.maxInflightRequests | Surface | Concurrent query requests |
| metadata.maxInflightRequests | Surface | Concurrent metadata/series/label requests |
| retention.maxInflightRequests | Surface | Concurrent retention/deletion operations |
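The non-blocking semaphore pattern described above can be sketched with a standard counting semaphore: try to acquire without waiting, and on failure reject with the 429 response and Retry-After header. The class and return shape are illustrative, not tsink's internals.

```python
import threading

class AdmissionBudget:
    """Illustrative non-blocking admission gate (not tsink's actual code)."""

    def __init__(self, max_inflight: int):
        self._sem = threading.BoundedSemaphore(max_inflight)

    def try_admit(self):
        """Return (True, {}) if a permit was acquired, else (False, headers for a 429)."""
        if self._sem.acquire(blocking=False):
            return True, {}
        return False, {"Retry-After": "1"}  # served alongside HTTP 429

    def release(self):
        self._sem.release()

budget = AdmissionBudget(max_inflight=1)
ok1, _ = budget.try_admit()        # first request takes the only permit
ok2, headers = budget.try_admit()  # second is rejected immediately, no queuing
budget.release()
```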
Per-tenant cluster consistency
The cluster block overrides the server-wide consistency defaults for a specific tenant:
| Field | Values | Description |
|---|---|---|
writeConsistency | "one", "quorum", "all" | Replication acknowledgement requirement for writes |
readConsistency | "one", "quorum", "strict" | Quorum requirement for reads |
readPartialResponse | "allow", "deny" | Whether to return partial results when some shards are unavailable |
Global admission limits
In addition to per-tenant budgets, server-wide admission guards apply across all tenants. These are controlled by environment variables:
| Environment variable | Default | Description |
|---|---|---|
| TSINK_SERVER_WRITE_MAX_INFLIGHT_REQUESTS | 64 | Global max concurrent write requests |
| TSINK_SERVER_WRITE_MAX_INFLIGHT_ROWS | 200,000 | Global max in-flight write rows |
| TSINK_SERVER_WRITE_RESOURCE_ACQUIRE_TIMEOUT_MS | 25 | Timeout (ms) waiting for the write semaphore |
| TSINK_SERVER_READ_MAX_INFLIGHT_REQUESTS | 64 | Global max concurrent read requests |
| TSINK_SERVER_READ_MAX_INFLIGHT_QUERIES | 128 | Global max in-flight query slots |
| TSINK_SERVER_READ_RESOURCE_ACQUIRE_TIMEOUT_MS | 25 | Timeout (ms) waiting for the read semaphore |
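For example, a deployment that wants a larger global write budget might set the following before starting the server (the values shown are illustrative, not recommendations):

```shell
# Double the global write request budget and widen the row and timeout budgets.
export TSINK_SERVER_WRITE_MAX_INFLIGHT_REQUESTS=128
export TSINK_SERVER_WRITE_MAX_INFLIGHT_ROWS=400000
export TSINK_SERVER_WRITE_RESOURCE_ACQUIRE_TIMEOUT_MS=50
```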
Per-tenant authentication tokens
The auth.tokens list in the tenant config file defines bearer tokens scoped to that tenant:
A write-scoped token grants write access to that tenant only; a read-scoped token grants read access to that tenant only. Tokens cannot cross tenant boundaries.
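A sketch of what the auth.tokens list might look like inside a tenant entry. The "token" and "scope" field names are assumptions for illustration; only the per-tenant placement and the read/write scoping come from the text.

```json
{
  "tenants": {
    "team-a": {
      "auth": {
        "tokens": [
          { "token": "team-a-write-secret", "scope": "write" },
          { "token": "team-a-read-secret", "scope": "read" }
        ]
      }
    }
  }
}
```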
These per-tenant tokens are evaluated before the global security manager token. See the security model for OIDC and RBAC configuration.
RBAC tenant resources
RBAC roles use the Tenant resource kind to restrict access by tenant name:
A Tenant resource with the * wildcard name grants access to all tenants; a named entry restricts access to a single tenant.
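A hypothetical role definition showing both forms. The surrounding role schema (names, the "resources" list shape) is invented for illustration; only the Tenant resource kind and the * wildcard come from the text.

```json
{
  "roles": [
    {
      "name": "team-a-reader",
      "resources": [ { "kind": "Tenant", "name": "team-a" } ]
    },
    {
      "name": "all-tenants-admin",
      "resources": [ { "kind": "Tenant", "name": "*" } ]
    }
  ]
}
```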
Managed tenants (control plane)
For deployments that need programmatic tenant provisioning, tsink includes a lightweight control-plane store. Managed tenant records carry lifecycle state, storage quotas, and ingest-rate limits that the runtime enforces alongside the static policy file.
Tenant lifecycle states
| State | Description |
|---|---|
| provisioning | Tenant is being set up; not yet accepting traffic |
| active | Fully operational |
| suspended | Writes and queries are blocked; data is retained |
| deleting | Triggered deletion is in progress |
| deleted | All data has been removed |
Provisioning a managed tenant
| Field | Type | Description |
|---|---|---|
| tenantId | string | Unique, immutable identifier |
| deploymentId | string | Logical deployment group this tenant belongs to |
| displayName | string | Human-readable name |
| lifecycle | string | Target lifecycle state |
| retentionDays | integer | Data retention window in days |
| storageLimitBytes | integer | Hard storage cap in bytes |
| ingestRateLimitPerSec | integer | Maximum ingested rows per second |
| queryConcurrencyLimit | integer | Maximum concurrent queries |
| labels | object | Arbitrary key-value metadata |
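A sample request body built from the fields above, suitable for POSTing to the /api/v1/admin/control-plane/tenants/apply endpoint listed in the admin API reference. All values are illustrative (storageLimitBytes here is 50 GiB).

```json
{
  "tenantId": "team-a",
  "deploymentId": "prod-eu",
  "displayName": "Team A",
  "lifecycle": "active",
  "retentionDays": 30,
  "storageLimitBytes": 53687091200,
  "ingestRateLimitPerSec": 50000,
  "queryConcurrencyLimit": 8,
  "labels": { "env": "prod" }
}
```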
Transitioning lifecycle state
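A transition is requested through the /api/v1/admin/control-plane/tenants/lifecycle endpoint listed in the admin API reference. A minimal body might look like the following; the exact field names are an assumption based on the provisioning fields above.

```json
{
  "tenantId": "team-a",
  "lifecycle": "suspended"
}
```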
Usage accounting
tsink records per-tenant resource consumption to an append-only NDJSON ledger. Each record captures:
- Category — ingest, query, retention, background, or storage
- Operation — the specific operation (e.g., prometheus_remote_write, promql_range_query)
- Counters — rows, matchedSeries, requestBytes, logicalStorageBytes, durationNanos
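One ledger line might look like this. The category, operation, and counter names come from the list above; the record envelope (tenant and timestamp fields) and all values are assumptions for illustration.

```json
{"tenant": "team-a", "timestamp": "2024-01-15T12:00:00Z", "category": "ingest", "operation": "prometheus_remote_write", "rows": 12000, "requestBytes": 524288, "durationNanos": 1850000}
```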
Retrieving usage data
Admin API reference
All admin endpoints require a token with Admin or System RBAC scope.
| Method | Path | Description |
|---|---|---|
| POST | /api/v1/admin/control-plane/tenants/apply | Create or update a managed tenant record |
| POST | /api/v1/admin/control-plane/tenants/lifecycle | Transition a tenant’s lifecycle state |
| GET | /api/v1/admin/control-plane/state | Retrieve full control-plane state including all tenant records |
| GET | /api/v1/admin/usage/report | Aggregated usage summary (accepts ?tenant= and ?bucket=) |
| GET | /api/v1/admin/usage/export | Stream raw usage ledger records (accepts ?tenant=) |
| POST | /api/v1/admin/usage/reconcile | Reconcile usage counters against live storage state |
| GET | /api/v1/admin/support_bundle | Download diagnostic JSON bundle (accepts ?tenant=) |
| POST | /api/v1/admin/delete_series | Tombstone-delete series for the requesting tenant |
Data-plane usage
Tenants use all standard data-plane endpoints. The only requirement is the x-tsink-tenant header.
Clustering considerations
In a cluster deployment, tenant-scoped queries are fanned out to the relevant shards with the __tsink_tenant__ matcher injected automatically. The "default" tenant keeps that scoped fanout and adds a second unlabeled-only fallback selector so legacy series remain visible without widening the primary read across every tenant.
The hotspot tracker accumulates per-tenant write skew counters. Tenants with a write skew factor exceeding 4× the cluster average are flagged in the cluster snapshot, which can inform rebalancing decisions. See the clustering internals guide for details.