Documentation Index
Fetch the complete documentation index at: https://docs.sluice.sh/llms.txt
Use this file to discover all available pages before exploring further.
# Architecture
Sluice collects data from your Celery infrastructure through either the Python SDK or the Go agent, normalizes it into a unified data model, and sends it to the Sluice API for storage and real-time display.

## Data collection
### Python SDK path
The SDK installs a Celery Bootstep — a lifecycle hook that runs inside your worker process — so it captures events as they happen, without polling:

- Celery events — `task-sent`, `task-received`, `task-started`, `task-succeeded`, `task-failed`, `task-retried`, `task-revoked`, and `task-rejected`
- Worker events — `worker-online`, `worker-heartbeat`, `worker-offline`
- Auto-configuration — enables the three Celery flags needed for monitoring

Events are sent to `POST /api/ingest` over HTTPS with your API key.
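The shipping step above can be sketched in outline. This is a hypothetical illustration, not Sluice's actual SDK internals: the API host, payload shape, and `Bearer` auth scheme are all assumptions.

```python
import json
import urllib.request

# Assumed ingest URL; the docs above specify only the path, not the host.
API_URL = "https://api.sluice.sh/api/ingest"

def build_ingest_request(events, api_key):
    """Build an HTTPS POST request carrying a batch of captured events."""
    body = json.dumps({"events": events}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )

# Example: one task-succeeded event captured inside the worker process
req = build_ingest_request(
    [{"type": "task-succeeded", "uuid": "abc123", "runtime": 0.42}],
    api_key="sk_test_example",
)
# urllib.request.urlopen(req) would send it; omitted to keep the sketch offline
```

In practice the SDK would batch events and send asynchronously so the worker's task loop is never blocked on network I/O.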
### Go agent path
The agent runs as a separate container and connects to your Redis broker directly:

- PUB/SUB subscription — listens on `celeryev.*` channels for task and worker events
- Queue polling — reads queue depths via `LLEN` on queue keys
- Topology discovery — scans `_kombu.binding.*` keys to find queues
- Automatic reconnection — exponential backoff with jitter if Redis drops
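The reconnection behavior in the last bullet is a standard pattern. A minimal sketch, assuming a base delay and cap that the docs above do not specify:

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Exponential backoff with full jitter.

    The ceiling doubles each attempt (base * 2^attempt) up to `cap`;
    the actual delay is a uniform random value below that ceiling,
    which spreads out reconnect attempts and avoids a thundering herd
    when many agents lose the same Redis broker at once.
    """
    ceiling = min(cap, base * (2 ** attempt))
    return random.uniform(0, ceiling)

# attempt 0 -> up to 0.5 s, attempt 3 -> up to 4 s, attempt 10+ -> capped at 30 s
```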
## Event normalization
Both the SDK and agent convert Celery-native events into Sluice's unified format before sending. This normalization step:

- Maps Celery states to unified states (e.g., `PENDING` → `unknown`, `SUCCESS` → `completed`)
- Preserves framework-specific data in an `extensions` field
- Labels every record with `framework: "celery"`
- Assigns timestamps and tracks state transitions
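The mapping above can be sketched as a small normalization function. The `extensions` and `framework` field names come from the text; the full state table and record shape are assumptions (only the `PENDING` and `SUCCESS` mappings are documented).

```python
from datetime import datetime, timezone

# Assumed state table: PENDING and SUCCESS are documented above,
# the remaining entries are plausible guesses for illustration.
STATE_MAP = {
    "PENDING": "unknown",
    "STARTED": "running",
    "SUCCESS": "completed",
    "FAILURE": "failed",
    "RETRY": "retrying",
    "REVOKED": "cancelled",
}

def normalize(celery_event):
    """Convert a Celery-native event into a unified record."""
    state = celery_event.get("state", "PENDING")
    return {
        "framework": "celery",
        "state": STATE_MAP.get(state, "unknown"),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # framework-specific payload is preserved verbatim
        "extensions": {"celery": celery_event},
    }
```

Keeping the raw event under `extensions` means nothing is lost in translation: the dashboard renders the unified fields, while Celery-specific details stay queryable.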
## API and storage
The Sluice API (`POST /api/ingest`) validates incoming events, deduplicates them, and writes them to Postgres. Each event updates the corresponding job, worker, or queue record and appends to the state history.
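The dedupe-and-append step might look like this in outline. Keying on a single event ID and keeping history in a list are simplifications for illustration; the real API persists to Postgres.

```python
def ingest(events, seen_ids, history):
    """Drop duplicate events by ID; append the rest to the state history.

    Returns the number of events accepted. Duplicates can arrive when
    the SDK or agent retries a delivery after a network error.
    """
    accepted = 0
    for event in events:
        event_id = event["id"]
        if event_id in seen_ids:
            continue  # duplicate delivery: silently ignore
        seen_ids.add(event_id)
        history.append(event)
        accepted += 1
    return accepted
```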
Free tier limits: 10,000 events per day with 24-hour data retention. Events beyond the daily limit are rejected with an HTTP 429 status.
## Real-time streaming
The dashboard receives live updates via Server-Sent Events (SSE) from `GET /api/events/stream`. When a new job event arrives at the API, it is broadcast to all connected dashboard sessions. This gives sub-second visibility into your Celery infrastructure without polling.
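On the client side, consuming the stream amounts to reading `data:` lines off a long-lived response, per the SSE wire format. A minimal parser sketch; the event payload shape is an assumption:

```python
import json

def parse_sse(lines):
    """Yield one JSON payload per SSE message.

    Per the Server-Sent Events format, each message is one or more
    'data:' lines, and a blank line terminates the message.
    """
    buffer = []
    for line in lines:
        if line.startswith("data:"):
            buffer.append(line[5:].strip())
        elif line == "" and buffer:
            yield json.loads("\n".join(buffer))
            buffer = []

# Hypothetical fragment of the stream from GET /api/events/stream
stream = [
    'data: {"type": "task-succeeded", "uuid": "abc123"}',
    "",
]
events = list(parse_sse(stream))
```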