
Configuration reference

Complete listing of every configuration knob in tsink — the embedded library API, the server CLI flags, and the environment variables that tune ingestion, clustering, and background workers. Sections are ordered from most commonly used to most advanced.

Contents

  1. Embedded library — StorageBuilder
  2. Server CLI flags
  3. Environment variables — server admission
  4. Environment variables — ingestion protocols
  5. Environment variables — rules engine
  6. Environment variables — cluster

1. Embedded library — StorageBuilder

These are the options exposed through the StorageBuilder Rust API and the equivalent TsinkStorageBuilder Python bindings. See the embedded library guide and Python bindings guide for usage examples.

Storage & persistence

| Builder method | Type | Default | Description |
| --- | --- | --- | --- |
| with_data_path(path) | PathBuf | (none) | Root directory for all on-disk data (WAL, segments, metadata). Required for durable storage. |
| with_object_store_path(path) | PathBuf | (none) | Root directory for tiered segment lanes (hot/, warm/, cold/). Required for tiered storage. |
| with_runtime_mode(mode) | StorageRuntimeMode | ReadWrite | ReadWrite — full local instance. ComputeOnly — query node that reads from the object store without persisting locally. |
| with_timestamp_precision(p) | TimestampPrecision | Nanoseconds | Interpretation of raw integer timestamps: Seconds, Milliseconds, Microseconds, or Nanoseconds. Must match the precision of all ingested data. |

Retention & tiering

| Builder method | Type | Default | Description |
| --- | --- | --- | --- |
| with_retention(duration) | Duration | 14 days | How long data is retained. Writes outside this window are rejected when retention_enforced is set (which with_retention enables automatically). |
| with_hot_tier_retention(duration) | Duration | (falls back to retention) | Age at which data moves from the local (hot) tier to the object-store warm tier. |
| with_warm_tier_retention(duration) | Duration | (falls back to retention) | Age at which data moves from the warm tier to the cold tier. |
| with_mirror_hot_segments_to_object_store(bool) | bool | false | Copy freshly persisted hot segments into <object_store_path>/hot/ in addition to writing locally. Useful for cross-node availability. |
| with_remote_segment_cache_policy(policy) | RemoteSegmentCachePolicy | MetadataOnly | What to hold in memory for remote (object-store) segments: MetadataOnly or Full. |
| with_remote_segment_refresh_interval(duration) | Duration | 5s | How often a ComputeOnly node refreshes its view of remote segment metadata. |
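The storage and retention options above combine naturally. The sketch below wires them together for a durable, tiered node; the crate import paths, the new()/build() constructors, the returned storage type, and the string-to-path argument conversions are assumptions — only the with_* method names come from this reference:

```rust
use std::time::Duration;
// Hypothetical import paths; adjust to the actual crate layout.
use tsink::{StorageBuilder, TimestampPrecision};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Durable, tiered setup: 30 days of total retention, the most recent
    // 24 hours kept on local disk, data older than 7 days demoted from
    // the warm to the cold object-store lane.
    let _storage = StorageBuilder::new()
        .with_data_path("/var/lib/tsink")
        .with_object_store_path("/mnt/object-store/tsink")
        .with_timestamp_precision(TimestampPrecision::Milliseconds)
        .with_retention(Duration::from_secs(30 * 24 * 3600))
        .with_hot_tier_retention(Duration::from_secs(24 * 3600))
        .with_warm_tier_retention(Duration::from_secs(7 * 24 * 3600))
        .build()?;
    // ... ingest and query via the storage handle ...
    Ok(())
}
```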

Chunk & partition tuning

| Builder method | Type | Default | Description |
| --- | --- | --- | --- |
| with_chunk_points(n) | usize | 2048 | Target number of data points per chunk before the chunk is sealed. Clamped to 1..=65535. Larger values improve compression; smaller values reduce read amplification on recent data. |
| with_partition_duration(duration) | Duration | 1 hour | Time window covered by a single partition. All series data within this window is co-located. |
| with_max_active_partition_heads_per_series(n) | usize | 8 | Maximum number of simultaneously open partition heads per series. When the limit is reached, the oldest head is sealed and compacted. |

Write pipeline

| Builder method | Type | Default | Description |
| --- | --- | --- | --- |
| with_max_writers(n) | usize | CPU count (cgroup-aware) | Size of the writer thread pool. Higher values increase throughput under concurrent write load at the cost of memory. |
| with_write_timeout(duration) | Duration | 30s | Maximum time a write call waits for a writer slot before returning a backpressure error. |

Memory & cardinality

| Builder method | Type | Default | Description |
| --- | --- | --- | --- |
| with_memory_limit_bytes(n) | usize | usize::MAX (unlimited) | Global byte budget for all in-memory chunks (active + sealed). New writes are back-pressured when the budget is exhausted. |
| with_cardinality_limit(n) | usize | usize::MAX (unlimited) | Hard cap on the total number of unique series. Writes that would create a new series beyond this limit are rejected with a cardinality error. |

WAL

| Builder method | Type | Default | Description |
| --- | --- | --- | --- |
| with_wal_enabled(bool) | bool | true | Enable or disable the write-ahead log. Disabling removes crash-safety guarantees. |
| with_wal_size_limit_bytes(n) | usize | usize::MAX (unlimited) | Maximum total on-disk size for WAL files. Oldest segments are pruned when the limit is reached. |
| with_wal_buffer_size(n) | usize | 4096 | I/O buffer size for WAL writes. Larger buffers reduce syscall overhead on high-throughput workloads. |
| with_wal_sync_mode(mode) | WalSyncMode | PerAppend | PerAppend — fsync after every write (crash-safe, higher latency). Periodic(duration) — flush without fsync on a fixed interval (higher throughput, potential data loss on crash). |
| with_wal_replay_mode(mode) | WalReplayMode | Strict | Strict — abort recovery on any corrupted WAL frame. Salvage — skip corrupted frames and recover as much data as possible. |
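For write-heavy workloads where a small crash-loss window is acceptable, the WAL can be relaxed. A sketch, under the same assumptions about crate layout and constructors as before (only the with_* methods and enum variant names come from this reference):

```rust
use std::time::Duration;
// Hypothetical import paths; adjust to the actual crate layout.
use tsink::{StorageBuilder, WalReplayMode, WalSyncMode};

fn open_relaxed_wal() -> Result<(), Box<dyn std::error::Error>> {
    let _storage = StorageBuilder::new()
        .with_data_path("/var/lib/tsink")
        // Flush once per second instead of fsyncing every append: higher
        // throughput, but up to ~1s of data may be lost on a crash.
        .with_wal_sync_mode(WalSyncMode::Periodic(Duration::from_secs(1)))
        // Cap total WAL size at 1 GiB; the oldest segments are pruned first.
        .with_wal_size_limit_bytes(1 << 30)
        // On recovery, skip corrupted frames rather than aborting startup.
        .with_wal_replay_mode(WalReplayMode::Salvage)
        .build()?;
    Ok(())
}
```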

Background workers

| Builder method | Type | Default | Description |
| --- | --- | --- | --- |
| with_background_fail_fast(bool) | bool | true | When true, a failure in any background worker (flush, compaction, remote segment refresh) immediately fences all further writes with an error. When false, the error is logged but writes continue. |

Cluster / metadata sharding

| Builder method | Type | Default | Description |
| --- | --- | --- | --- |
| with_metadata_shard_count(n) | u32 | (none, no sharding) | Partition the in-memory series metadata into N shards to reduce lock contention on high-cardinality workloads. |
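These options come together on a ComputeOnly query node, which reads shared segments from the object store instead of persisting data locally. A sketch with the same caveats as the earlier examples (import paths, constructors, and the returned storage type are assumptions):

```rust
use std::time::Duration;
// Hypothetical import paths; adjust to the actual crate layout.
use tsink::{RemoteSegmentCachePolicy, StorageBuilder, StorageRuntimeMode};

fn open_query_node() -> Result<(), Box<dyn std::error::Error>> {
    // Query-only node: no local persistence, remote segment metadata
    // re-scanned every 10s, series metadata split into 16 lock shards.
    let _storage = StorageBuilder::new()
        .with_object_store_path("/mnt/object-store/tsink")
        .with_runtime_mode(StorageRuntimeMode::ComputeOnly)
        .with_remote_segment_cache_policy(RemoteSegmentCachePolicy::MetadataOnly)
        .with_remote_segment_refresh_interval(Duration::from_secs(10))
        .with_metadata_shard_count(16)
        .build()?;
    Ok(())
}
```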

2. Server CLI flags

All flags are passed on the command line to the tsink-server binary. Defaults shown are the compiled-in values and may be overridden with the corresponding flag. Run tsink-server --help for the full list.

2.1 Networking & listeners

| Flag | Default | Description |
| --- | --- | --- |
| --listen <HOST:PORT> | 127.0.0.1:9201 | TCP address for the HTTP/HTTPS listener. |
| --statsd-listen <HOST:PORT> | (disabled) | UDP address for the StatsD listener. Omit to disable. |
| --statsd-tenant <ID> | default | Tenant that receives StatsD writes. |
| --graphite-listen <HOST:PORT> | (disabled) | TCP address for the Graphite plaintext listener. Omit to disable. |
| --graphite-tenant <ID> | default | Tenant that receives Graphite writes. |

2.2 Storage & WAL

| Flag | Default | Description |
| --- | --- | --- |
| --data-path <PATH> | (none) | Persist data under PATH. Without this flag, storage is purely in-memory. |
| --object-store-path <PATH> | (none) | Object-store root for tiered segment lanes (hot/, warm/, cold/). |
| --timestamp-precision <PRECISION> | ms | Units for raw timestamps: s, ms, us, ns. |
| --retention <DURATION> | 14d | Data retention window (e.g. 7d, 24h, 90d). |
| --hot-tier-retention <DURATION> | (same as --retention) | Age at which local segments move to the warm object-store tier. |
| --warm-tier-retention <DURATION> | (same as --retention) | Age at which warm segments move to the cold object-store tier. |
| --storage-mode <MODE> | read-write | read-write — normal full node. compute-only — query-only node backed by the object store. |
| --remote-segment-refresh-interval <DURATION> | 5s | Metadata refresh interval for compute-only nodes. |
| --mirror-hot-segments-to-object-store | false | Copy hot segments to the object store as they are sealed. |
| --wal-enabled <BOOL> | true | Enable (true) or disable (false) the WAL. |
| --wal-sync-mode <MODE> | per-append | per-append (crash-safe) or periodic (higher throughput). |
| --chunk-points <N> | 2048 | Target data points per chunk (1–65535). |
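Putting the storage flags together, a durable, tiered single-node launch might look like the following (paths and durations are illustrative, not recommendations):

```shell
tsink-server \
  --listen 0.0.0.0:9201 \
  --data-path /var/lib/tsink \
  --object-store-path /mnt/object-store/tsink \
  --timestamp-precision ms \
  --retention 30d \
  --hot-tier-retention 24h \
  --wal-sync-mode per-append \
  --chunk-points 2048
```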

2.3 Memory & cardinality

| Flag | Default | Description |
| --- | --- | --- |
| --memory-limit <BYTES> | (unlimited) | Global in-memory chunk budget, in bytes. Supports suffixes such as 1G, 512M. |
| --cardinality-limit <N> | (unlimited) | Maximum number of unique series. New series are rejected once the limit is reached. |
| --max-writers <N> | (CPU count) | Concurrent writer threads. |

2.4 Security & auth

| Flag | Default | Description |
| --- | --- | --- |
| --tls-cert <PATH> | (none) | PEM-encoded TLS certificate. Both --tls-cert and --tls-key must be set to enable TLS. |
| --tls-key <PATH> | (none) | PEM-encoded TLS private key. |
| --auth-token <TOKEN> | (none) | Static bearer token required on all non-admin requests. |
| --auth-token-file <PATH> | (none) | File or exec-based token manifest (JSON). Takes precedence over --auth-token. |
| --admin-auth-token <TOKEN> | (none) | Static bearer token required on /api/v1/admin/* endpoints. |
| --admin-auth-token-file <PATH> | (none) | File or exec-based admin token manifest. Takes precedence over --admin-auth-token. |
| --tenant-config <PATH> | (none) | JSON file defining per-tenant auth, quotas, and policies. See Multi-tenancy. |
| --rbac-config <PATH> | (none) | JSON file defining RBAC roles, service accounts, and OIDC settings. See Security model. |
| --enable-admin-api | false | Expose admin snapshot, restore, and cluster management endpoints. |
| --admin-path-prefix <PATH> | (none) | Restrict admin file I/O operations to this directory prefix. |

2.5 Cluster

These flags are only relevant when --cluster-enabled is set. See Cluster setup and Clustering internals for deployment guidance.
| Flag | Default | Description |
| --- | --- | --- |
| --cluster-enabled | false | Enable cluster mode. |
| --cluster-node-id <ID> | (required) | Stable, unique identifier for this node. Must not change after initial startup. |
| --cluster-bind <HOST:PORT> | (none) | Internal RPC bind/advertise address. Peers will connect to this address. |
| --cluster-node-role <ROLE> | hybrid | storage — data only; query — query fan-out only; hybrid — both. |
| --cluster-seeds <LIST> | (none) | Comma-separated HOST:PORT addresses of seed peers for cluster bootstrap. |
| --cluster-shards <N> | 128 | Number of logical hash-ring shards. Changing this after data is stored requires a full rebalance. |
| --cluster-replication-factor <N> | 1 | Number of replicas for each shard. |
| --cluster-write-consistency <LEVEL> | quorum | one, quorum, or all — how many replicas must acknowledge a write. |
| --cluster-read-consistency <LEVEL> | eventual | eventual, quorum, or strict — read consistency level. |
| --cluster-read-partial-response <POLICY> | allow | allow — return partial results when some shards are unavailable; deny — fail the query. |
| --cluster-internal-auth-token <TOKEN> | (none) | Shared secret for internal RPC authentication (used when mTLS is not enabled). |
| --cluster-internal-auth-token-file <PATH> | (none) | File/exec manifest for the internal RPC token. |
| --cluster-internal-mtls-enabled | false | Enable mTLS for all internal peer-to-peer RPC. |
| --cluster-internal-mtls-ca-cert <PATH> | (none) | PEM CA bundle for internal mTLS. |
| --cluster-internal-mtls-cert <PATH> | (none) | PEM client certificate for internal mTLS. |
| --cluster-internal-mtls-key <PATH> | (none) | PEM client key for internal mTLS. |
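As an illustration, one node of a three-node hybrid cluster with quorum writes and reads might be started like this (all addresses, IDs, and file paths are hypothetical):

```shell
# Node "node-a"; node-b and node-c use their own --cluster-node-id
# and --cluster-bind, with the other two nodes as --cluster-seeds.
tsink-server \
  --listen 0.0.0.0:9201 \
  --data-path /var/lib/tsink \
  --cluster-enabled \
  --cluster-node-id node-a \
  --cluster-bind 10.0.0.1:9300 \
  --cluster-seeds 10.0.0.2:9300,10.0.0.3:9300 \
  --cluster-replication-factor 3 \
  --cluster-write-consistency quorum \
  --cluster-read-consistency quorum \
  --cluster-internal-auth-token-file /etc/tsink/internal-token.json
```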

2.6 Edge sync

Edge sync replays locally written data to an upstream tsink instance. Useful for edge deployments or write aggregation.
| Flag | Default | Description |
| --- | --- | --- |
| --edge-sync-upstream <HOST:PORT> | (disabled) | Upstream server to replay writes to. Omit to disable edge sync. |
| --edge-sync-auth-token <TOKEN> | (none) | Bearer token used when writing to the upstream server. |
| --edge-sync-source-id <ID> | (none) | Stable identifier for this edge node, used to generate idempotency keys. |
| --edge-sync-static-tenant <ID> | (none) | Rewrite all tenant labels to this value before forwarding writes upstream. |
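For instance, an edge node that buffers writes locally and forwards them to a central instance under a single tenant might be launched as follows (hostnames, IDs, and the EDGE_TOKEN variable are hypothetical):

```shell
tsink-server \
  --data-path /var/lib/tsink-edge \
  --edge-sync-upstream central.example.com:9201 \
  --edge-sync-auth-token "$EDGE_TOKEN" \
  --edge-sync-source-id edge-eu-west-1 \
  --edge-sync-static-tenant edge
```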

3. Environment variables — server admission

These variables cap the number of concurrent HTTP requests and in-flight rows to protect the server under sudden load. They are read once at process start.
| Variable | Default | Description |
| --- | --- | --- |
| TSINK_SERVER_WRITE_MAX_INFLIGHT_REQUESTS | 64 | Maximum number of concurrent write HTTP requests accepted by the server. |
| TSINK_SERVER_WRITE_MAX_INFLIGHT_ROWS | 200000 | Maximum total rows across all active write requests. New requests block until below the threshold. |
| TSINK_SERVER_WRITE_RESOURCE_ACQUIRE_TIMEOUT_MS | 25 | Milliseconds to wait for a write slot before returning HTTP 429. |
| TSINK_SERVER_READ_MAX_INFLIGHT_REQUESTS | 64 | Maximum number of concurrent read HTTP requests. |
| TSINK_SERVER_READ_MAX_INFLIGHT_QUERIES | 128 | Maximum total in-flight queries across all read requests. |
| TSINK_SERVER_READ_RESOURCE_ACQUIRE_TIMEOUT_MS | 25 | Milliseconds to wait for a read slot before returning HTTP 429. |
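For example, admission limits could be loosened for a bulk-backfill window by exporting the variables before starting the server (the specific values here are illustrative, not tuning advice):

```shell
# Allow more concurrent writers and larger in-flight row totals,
# and wait longer for a slot before returning HTTP 429.
export TSINK_SERVER_WRITE_MAX_INFLIGHT_REQUESTS=128
export TSINK_SERVER_WRITE_MAX_INFLIGHT_ROWS=500000
export TSINK_SERVER_WRITE_RESOURCE_ACQUIRE_TIMEOUT_MS=100
tsink-server --data-path /var/lib/tsink
```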

4. Environment variables — ingestion protocols

These variables control per-protocol feature flags and per-request limits.
| Variable | Default | Description |
| --- | --- | --- |
| TSINK_REMOTE_WRITE_METADATA_ENABLED | true | Accept metric metadata in Prometheus remote-write requests (capped at 512 metadata entries per request). Set to false to ignore all metadata. |
| TSINK_REMOTE_WRITE_EXEMPLARS_ENABLED | true | Accept exemplar records in Prometheus remote-write requests. |
| TSINK_REMOTE_WRITE_HISTOGRAMS_ENABLED | true | Accept native histogram samples in Prometheus remote-write requests (capped at 16,384 bucket entries per request). |
| TSINK_INFLUX_LINE_PROTOCOL_ENABLED | true | Enable the InfluxDB line-protocol endpoints (POST /write, POST /api/v2/write). |
| TSINK_INFLUX_LINE_PROTOCOL_MAX_LINES_PER_REQUEST | 4096 | Maximum number of lines accepted in a single InfluxDB line-protocol request. |
| TSINK_OTLP_METRICS_ENABLED | true | Enable the OTLP HTTP/protobuf metrics ingestion endpoint (POST /v1/metrics). |
| TSINK_STATSD_MAX_PACKET_BYTES | 8192 | Maximum UDP packet size for the StatsD listener. |
| TSINK_STATSD_MAX_EVENTS_PER_PACKET | 1024 | Maximum number of StatsD events parsed from a single UDP packet. |
| TSINK_GRAPHITE_MAX_LINE_BYTES | 8192 | Maximum byte length of a single Graphite plaintext line. |
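A server that only ingests Prometheus remote-write can disable the unused protocols and tighten the remaining limits; a sketch with illustrative values:

```shell
# Turn off protocols this deployment does not use ...
export TSINK_INFLUX_LINE_PROTOCOL_ENABLED=false
export TSINK_OTLP_METRICS_ENABLED=false
# ... and skip exemplars while still accepting metric metadata.
export TSINK_REMOTE_WRITE_EXEMPLARS_ENABLED=false
tsink-server --data-path /var/lib/tsink
```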

5. Environment variables — rules engine

| Variable | Default | Description |
| --- | --- | --- |
| TSINK_RULES_SCHEDULER_TICK_MS | 1000 | Interval in milliseconds between rules-engine scheduler evaluations. |
| TSINK_RULES_MAX_RECORDING_ROWS_PER_EVAL | 10000 | Maximum rows written by a single recording rule evaluation. Evaluations that would exceed this produce a partial result and log a warning. |
| TSINK_RULES_MAX_ALERT_INSTANCES_PER_RULE | 10000 | Maximum number of alert instances tracked per alerting rule. |

6. Environment variables — cluster

These variables control every aspect of cluster internals. They are all read once at startup unless otherwise noted.

6.1 RPC & writes

| Variable | Default | Description |
| --- | --- | --- |
| TSINK_CLUSTER_RPC_TIMEOUT_MS | 2000 | Timeout in milliseconds for a single internal RPC call. |
| TSINK_CLUSTER_RPC_MAX_RETRIES | 2 | Number of retries on transient RPC failures before giving up. |
| TSINK_CLUSTER_WRITE_MAX_BATCH_ROWS | 1024 | Maximum rows per remote-write batch sent to a replica. |
| TSINK_CLUSTER_WRITE_MAX_INFLIGHT_BATCHES | 32 | Maximum number of concurrent write batches in flight to all replicas combined. |
| TSINK_CLUSTER_FANOUT_CONCURRENCY | 16 | Maximum concurrent sub-requests when fanning a write out to multiple shards. |

6.2 Reads

| Variable | Default | Description |
| --- | --- | --- |
| TSINK_CLUSTER_READ_MAX_MERGED_SERIES | 250000 | Maximum unique series returned by a distributed query. |
| TSINK_CLUSTER_READ_MAX_MERGED_POINTS_PER_SERIES | 1000000 | Maximum data points per series in a distributed query result. |
| TSINK_CLUSTER_READ_MAX_MERGED_POINTS_TOTAL | 5000000 | Maximum total data points across all series in a distributed query result. |
| TSINK_CLUSTER_READ_MAX_INFLIGHT_QUERIES | 64 | Maximum concurrent distributed read queries across the node. |
| TSINK_CLUSTER_READ_MAX_INFLIGHT_MERGED_POINTS | 20000000 | Maximum total in-flight merged points across all concurrent distributed reads. |
| TSINK_CLUSTER_READ_RESOURCE_ACQUIRE_TIMEOUT_MS | 25 | Milliseconds to wait for a distributed-read concurrency slot before returning an error. |

6.3 Hinted handoff outbox

When a replica is temporarily unreachable, writes are queued in an on-disk outbox (backed by a WAL) and replayed once the replica recovers.
| Variable | Default | Description |
| --- | --- | --- |
| TSINK_CLUSTER_OUTBOX_MAX_ENTRIES | 100000 | Maximum queued entries across all unreachable peers combined. |
| TSINK_CLUSTER_OUTBOX_MAX_BYTES | 536870912 (512 MiB) | Total in-memory size cap for the outbox. |
| TSINK_CLUSTER_OUTBOX_MAX_PEER_BYTES | 268435456 (256 MiB) | Per-peer in-memory size cap for the outbox. |
| TSINK_CLUSTER_OUTBOX_MAX_LOG_BYTES | 2147483648 (2 GiB) | Maximum on-disk WAL size for the outbox log. |
| TSINK_CLUSTER_OUTBOX_MAX_RECORD_BYTES | 2097152 (2 MiB) | Maximum size of a single outbox record. |
| TSINK_CLUSTER_OUTBOX_REPLAY_INTERVAL_SECS | 2 | Interval in seconds between replay attempts for queued entries. |
| TSINK_CLUSTER_OUTBOX_REPLAY_BATCH_SIZE | 256 | Rows per replay batch sent to a recovering replica. |
| TSINK_CLUSTER_OUTBOX_MAX_BACKOFF_SECS | 30 | Maximum backoff in seconds between replay attempts when the peer remains unresponsive. |
| TSINK_CLUSTER_OUTBOX_CLEANUP_INTERVAL_SECS | 30 | Interval at which stale delivered records are pruned from the outbox log. |
| TSINK_CLUSTER_OUTBOX_CLEANUP_MIN_STALE_RECORDS | 1024 | Minimum number of stale records required to trigger an early cleanup pass. |
| TSINK_CLUSTER_OUTBOX_STALLED_PEER_AGE_SECS | 300 | Seconds of outbox age before a peer is flagged as stalled. |
| TSINK_CLUSTER_OUTBOX_STALLED_PEER_MIN_ENTRIES | 1 | Minimum queued entries for a peer to be considered stalled. |
| TSINK_CLUSTER_OUTBOX_STALLED_PEER_MIN_BYTES | 1 | Minimum queued bytes for a peer to be considered stalled. |

6.4 Digest exchange / anti-entropy

Nodes periodically exchange fingerprint digests to detect and repair missing data without full scans.
| Variable | Default | Description |
| --- | --- | --- |
| TSINK_CLUSTER_DIGEST_INTERVAL_SECS | 30 | Interval in seconds between digest exchange rounds per node. |
| TSINK_CLUSTER_DIGEST_WINDOW_SECS | 300 | Time lookback window covered by each digest exchange. |
| TSINK_CLUSTER_DIGEST_MAX_SHARDS_PER_TICK | 64 | Maximum shards compared in a single digest tick. |
| TSINK_CLUSTER_DIGEST_MAX_MISMATCH_REPORTS | 128 | Maximum mismatch records held in memory before older ones are evicted. |
| TSINK_CLUSTER_DIGEST_MAX_BYTES_PER_TICK | 262144 (256 KiB) | Maximum payload size of the digest message sent per tick. |

6.5 Repair

Repair uses the mismatch records found during digest exchange to transfer missing data between nodes.
| Variable | Default | Description |
| --- | --- | --- |
| TSINK_CLUSTER_REPAIR_MAX_MISMATCHES_PER_TICK | 2 | Maximum diverged shards repaired per tick. |
| TSINK_CLUSTER_REPAIR_MAX_SERIES_PER_TICK | 256 | Maximum series scanned per repair tick. |
| TSINK_CLUSTER_REPAIR_MAX_ROWS_PER_TICK | 16384 | Maximum rows transferred per repair tick. |
| TSINK_CLUSTER_REPAIR_MAX_RUNTIME_MS_PER_TICK | 100 | Wall-clock budget in milliseconds per repair tick. |
| TSINK_CLUSTER_REPAIR_FAILURE_BACKOFF_SECS | 30 | Backoff in seconds after a failed repair attempt before retrying. |

6.6 Rebalance

Rebalance migrates shard ownership when nodes are added or removed.
| Variable | Default | Description |
| --- | --- | --- |
| TSINK_CLUSTER_REBALANCE_INTERVAL_SECS | 5 | Interval in seconds between rebalance loop ticks. |
| TSINK_CLUSTER_REBALANCE_MAX_ROWS_PER_TICK | 10000 | Maximum rows migrated per rebalance tick. |
| TSINK_CLUSTER_REBALANCE_MAX_SHARDS_PER_TICK | 4 | Maximum shards processed per rebalance tick. |

6.7 Control plane (Raft)

The control plane uses a Raft-based consensus protocol to manage cluster membership and shard assignments.
| Variable | Default | Description |
| --- | --- | --- |
| TSINK_CLUSTER_CONTROL_TICK_INTERVAL_SECS | 2 | Consensus heartbeat interval in seconds. |
| TSINK_CLUSTER_CONTROL_MAX_APPEND_ENTRIES | 64 | Maximum log entries per Raft AppendEntries RPC round. |
| TSINK_CLUSTER_CONTROL_SNAPSHOT_INTERVAL_ENTRIES | 128 | Compact the Raft log into a snapshot every N committed entries. |
| TSINK_CLUSTER_CONTROL_SUSPECT_TIMEOUT_SECS | 6 | Seconds of missed heartbeats before a peer is marked suspect. |
| TSINK_CLUSTER_CONTROL_DEAD_TIMEOUT_SECS | 20 | Seconds after which a suspect peer is declared dead and removed from routing. |
| TSINK_CLUSTER_CONTROL_LEADER_LEASE_SECS | 6 | Leader lease duration in seconds. |

6.8 Write deduplication

Cluster writes carry idempotency keys so that retried requests from the client or from hinted handoff replay are not applied twice.
| Variable | Default | Description |
| --- | --- | --- |
| TSINK_CLUSTER_DEDUPE_WINDOW_SECS | 900 (15 min) | How long idempotency keys are retained. Retries arriving after this window may be re-applied. |
| TSINK_CLUSTER_DEDUPE_MAX_ENTRIES | 250000 | Maximum number of idempotency keys held in memory. Oldest entries are evicted when the limit is reached. |