
Architecture

Sluice collects data from your Celery infrastructure through either the Python SDK or the Go agent, normalizes it into a unified data model, and sends it to the Sluice API for storage and real-time display.

Data collection

Python SDK path

The SDK installs a Celery Bootstep — a lifecycle hook that runs inside your worker process. It captures events in-process as they happen, with no polling delay:
  1. Celery events — task-sent, task-received, task-started, task-succeeded, task-failed, task-retried, task-revoked, and task-rejected
  2. Worker events — worker-online, worker-heartbeat, worker-offline
  3. Auto-configuration — enables the three Celery flags needed for monitoring
Events are batched and forwarded to POST /api/ingest over HTTPS with your API key.
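
For illustration, here is a minimal sketch of this path. It assumes the "three flags" are Celery's standard monitoring settings (worker_send_task_events, task_send_sent_event, task_track_started), and the batch size and API host are hypothetical; Sluice's actual SDK internals may differ:

```python
# Illustrative sketch only -- not Sluice's actual SDK source.
import threading

import requests
from celery import Celery, bootsteps

app = Celery("tasks", broker="redis://localhost:6379/0")

# The "three flags": assumed to be Celery's standard monitoring settings.
app.conf.worker_send_task_events = True  # workers emit task-received/started/...
app.conf.task_send_sent_event = True     # producers emit task-sent
app.conf.task_track_started = True       # report the STARTED state


class SluiceStep(bootsteps.StartStopStep):
    """Lifecycle hook that runs inside the worker process."""

    def start(self, worker):
        # Capture events on a background thread so the worker is not blocked.
        threading.Thread(target=self._capture, args=(worker.app,), daemon=True).start()

    def _capture(self, app):
        batch = []

        def handle(event):
            batch.append(event)
            if len(batch) >= 100:  # batch size is an assumption
                requests.post(
                    "https://api.sluice.sh/api/ingest",  # hypothetical host
                    json={"events": batch},
                    headers={"Authorization": "Bearer <SLUICE_API_KEY>"},
                    timeout=5,
                )
                batch.clear()

        # Consume Celery's event stream from inside the worker.
        with app.connection() as conn:
            recv = app.events.Receiver(conn, handlers={"*": handle})
            recv.capture(limit=None, timeout=None)


app.steps["worker"].add(SluiceStep)
```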

Go agent path

The agent runs as a separate container and connects to your Redis broker directly:
  1. PUB/SUB subscription — listens on celeryev.* channels for task and worker events
  2. Queue polling — reads queue depths via LLEN on queue keys
  3. Topology discovery — scans _kombu.binding.* keys to find queues
  4. Automatic reconnection — exponential backoff with jitter if Redis drops
The agent doesn’t modify Redis — it’s strictly read-only.
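
The agent itself is written in Go, but its read-only Redis access can be sketched in a few lines of Python with redis-py. The channel and key patterns are standard Celery/kombu conventions; the backoff parameters and the forward() helper are assumptions:

```python
# Python sketch of the agent's read-only Redis access (the real agent is Go).
import random
import time

import redis

KOMBU_SEP = "\x06\x16"  # separator kombu uses inside binding-set members


def run(url="redis://localhost:6379/0"):
    delay = 1.0
    while True:
        try:
            r = redis.Redis.from_url(url, decode_responses=True)

            # Topology discovery: each _kombu.binding.<exchange> set holds
            # "routing_key<sep>pattern<sep>queue" members.
            queues = set()
            for key in r.scan_iter(match="_kombu.binding.*"):
                for member in r.smembers(key):
                    queues.add(member.split(KOMBU_SEP)[-1])

            # Queue polling: read each queue's depth with LLEN.
            depths = {q: r.llen(q) for q in queues}
            print("queue depths:", depths)

            # PUB/SUB subscription: task and worker events.
            pubsub = r.pubsub()
            pubsub.psubscribe("celeryev.*")
            delay = 1.0  # reset backoff once connected
            for msg in pubsub.listen():
                if msg["type"] == "pmessage":
                    forward(msg["data"])
        except redis.ConnectionError:
            # Automatic reconnection: exponential backoff with jitter.
            time.sleep(delay + random.uniform(0, delay / 2))
            delay = min(delay * 2, 60.0)


def forward(raw_event):
    """Hypothetical hook: normalize the event and ship it to Sluice."""


if __name__ == "__main__":
    run()
```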

Event normalization

Both the SDK and agent convert Celery-native events into Sluice’s unified format before sending. This normalization step:
  • Maps Celery states to unified states (e.g., PENDING → unknown, SUCCESS → completed)
  • Preserves framework-specific data in an extensions field
  • Labels every record with framework: "celery"
  • Assigns timestamps and tracks state transitions
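
A sketch of what this step might look like. Only the PENDING → unknown and SUCCESS → completed mappings and the framework/extensions fields come from this page; the rest of the state map, the input shape, and the output field names are illustrative:

```python
# Illustrative normalizer -- not Sluice's actual schema.
from datetime import datetime, timezone

# Celery state -> unified state. Only the PENDING and SUCCESS entries are
# documented above; the others are plausible guesses.
STATE_MAP = {
    "PENDING": "unknown",
    "STARTED": "running",
    "SUCCESS": "completed",
    "FAILURE": "failed",
    "RETRY": "retrying",
    "REVOKED": "cancelled",
}


def normalize(event: dict) -> dict:
    celery_state = event.get("state", "PENDING")  # input shape is assumed
    return {
        "framework": "celery",  # constant label on every record
        "state": STATE_MAP.get(celery_state, "unknown"),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Framework-specific data survives intact in `extensions`.
        "extensions": {"celery": event},
    }
```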

API and storage

The Sluice API (POST /api/ingest) validates incoming events, deduplicates them, and writes them to Postgres. Each event updates the job, worker, or queue record and appends to the state history. Free tier limits: 10,000 events per day with 24-hour data retention. Events beyond the daily limit are rejected with a 429 status.
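
Client-side, the daily limit surfaces as an HTTP 429. A minimal sketch of handling it, where the API host and auth header are assumptions:

```python
import requests

events = [{"framework": "celery", "state": "completed"}]  # a normalized batch

resp = requests.post(
    "https://api.sluice.sh/api/ingest",  # host is an assumption
    json={"events": events},
    headers={"Authorization": "Bearer <SLUICE_API_KEY>"},
    timeout=5,
)
if resp.status_code == 429:
    # Past 10,000 events in a day, the free tier rejects further writes.
    print("daily quota exhausted; buffer locally and retry after reset")
else:
    resp.raise_for_status()
```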

Real-time streaming

The dashboard receives live updates via Server-Sent Events (SSE) from GET /api/events/stream. When a new job event arrives at the API, it is broadcast to every connected dashboard session. This gives sub-second visibility into your Celery infrastructure without polling.
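
A minimal SSE consumer for this stream, assuming the same bearer-token auth as the ingest endpoint and JSON event payloads:

```python
import json

import requests

with requests.get(
    "https://api.sluice.sh/api/events/stream",  # host is an assumption
    headers={
        "Authorization": "Bearer <SLUICE_API_KEY>",
        "Accept": "text/event-stream",
    },
    stream=True,
    timeout=(5, None),  # no read timeout: the stream stays open
) as resp:
    # SSE frames arrive as "data: <payload>" lines separated by blanks.
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            event = json.loads(line[len("data:"):].strip())
            print("live update:", event)
```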