All Sluice errors follow the format:
[sluice] {what happened}. {why}. See: {URL}
This page covers the most common ones, grouped by where they appear.
SDK errors
API key is empty
sluice.init() was called without an API key, and the SLUICE_API_KEY environment variable isn’t set.
Fix: Either pass the key directly or set the env var:
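Either option can be sketched as follows (the api_key keyword name is an assumption; check your SDK reference for the exact signature):

```python
import os

# Option 1: pass the key directly (assumed keyword name).
# import sluice
# sluice.init(api_key="sk_example")

# Option 2: set the environment variable before init() runs.
os.environ["SLUICE_API_KEY"] = "sk_example"  # placeholder key
```

In production, prefer the environment variable so the key never appears in source control.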
Connection ID is empty
sluice.init() was called without a connection ID, and the SLUICE_CONNECTION_ID environment variable isn’t set.
Fix: Either pass the connection ID to init() or set SLUICE_CONNECTION_ID.
Connection ID is not a valid UUID
The connection ID passed to init() does not parse as a UUID.
Fix: Check the value for truncation or copy-paste errors. A valid connection ID looks like 550e8400-e29b-41d4-a716-446655440000.
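A quick pre-flight check with Python's standard uuid module (a sketch, not part of the SDK):

```python
import uuid

def is_valid_connection_id(value: str) -> bool:
    """Return True if the value parses as a UUID, False otherwise."""
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False

is_valid_connection_id("550e8400-e29b-41d4-a716-446655440000")  # → True
is_valid_connection_id("not-a-uuid")                            # → False
```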
init() called multiple times
sluice.init() was called more than once. This is a warning, not an error — the SDK works correctly using the first call’s configuration.
Fix: Remove duplicate init() calls. This commonly happens when the call is in a module that gets imported multiple times.
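One way to guard against re-imported modules is a small idempotence wrapper (a sketch, not part of the SDK):

```python
_initialized = False

def init_sluice_once() -> bool:
    """Call sluice.init() at most once; returns True only on the first call."""
    global _initialized
    if _initialized:
        return False
    # import sluice
    # sluice.init()
    _initialized = True
    return True
```

Keep this in a single module and call init_sluice_once() everywhere else; repeated imports then become harmless.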
Failed to set up Celery integration
The SDK could not hook into Celery. To see the underlying exception, enable debug logging with logging.getLogger('sluice').setLevel(logging.DEBUG) before calling init(). Common causes:
- Celery isn’t installed (pip install celery)
- Celery version is too old (minimum 5.3)
- Celery app isn’t configured yet when init() runs
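Enabling the debug logging described above might look like:

```python
import logging

# Turn on the SDK's debug logging before init() so the underlying
# Celery-integration exception appears in the logs.
logging.getLogger("sluice").setLevel(logging.DEBUG)

# import sluice
# sluice.init()  # integration failure details are now logged
```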
API errors
400 — Validation error
The request body failed validation. Check the response’s details field for specific field-level errors.
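Reading the details field might look like this (only the details field itself is documented here; the surrounding response shape is illustrative):

```python
import json

# Hypothetical 400 response body: everything except "details" is assumed.
response_body = json.loads(
    '{"details": {"connection_id": "must be a valid UUID"}}'
)
for field, problem in response_body["details"].items():
    print(f"{field}: {problem}")
```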
401 — Unauthorized
The API key is missing or invalid. Check that the Authorization: Bearer sk_... header is present.
429 — Daily limit exceeded
Your account has hit its daily request limit. The Retry-After response header indicates seconds until the limit resets.
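A minimal helper that honors Retry-After (a sketch; the 60-second fallback is an arbitrary choice, not part of the API):

```python
import time

def wait_for_reset(headers: dict) -> None:
    """Sleep for the number of seconds given in the Retry-After header."""
    retry_after = int(headers.get("Retry-After", "60"))  # fallback is a guess
    time.sleep(retry_after)

wait_for_reset({"Retry-After": "0"})  # returns immediately
```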
Celery failure modes
The following are the top failure modes seen in production Celery deployments. Sluice detects these automatically and surfaces them in the dashboard.
Silent task stalls
Tasks that hang indefinitely without raising an exception. This happens when a task blocks on I/O (database query, HTTP call, file lock) and never returns. Celery won’t mark the task as failed unless you configure task_time_limit.
Fix: Set task_time_limit and task_soft_time_limit in your Celery config. Sluice flags tasks that exceed expected duration in the Slow Tasks view.
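Assuming a standard Celery app, the time-limit settings might look like this (the broker URL is a placeholder):

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # example broker
# Hard limit kills the worker child process; the soft limit raises
# SoftTimeLimitExceeded inside the task first, giving it time to clean up.
app.conf.task_time_limit = 300       # seconds
app.conf.task_soft_time_limit = 240  # seconds
```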
Worker OOM kills
Workers killed by the OS out-of-memory killer. Common when tasks accumulate large objects in memory across many executions. The worker process dies silently — no Celery failure event is emitted.
Fix: Set worker_max_memory_per_child to auto-restart workers after a memory threshold. Sluice tracks worker restarts and correlates them with memory growth patterns.
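A configuration sketch for the restart threshold (the broker URL is a placeholder):

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # example broker
# Restart a worker child process once it exceeds ~200 MB of resident
# memory; Celery expresses this setting in kilobytes.
app.conf.worker_max_memory_per_child = 200_000
```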
Visibility timeout duplicates
Tasks executed more than once because the broker’s visibility_timeout expired before the task completed. The broker assumes the task was lost and re-delivers it to another worker.
Fix: Increase visibility_timeout in your broker transport options to exceed your longest task runtime. Sluice detects duplicate task IDs and flags them in the Duplicates view.
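For Redis (and SQS) brokers, the timeout lives in the transport options; a sketch, with a placeholder broker URL and a value chosen to exceed a two-hour longest task:

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # example broker
# Give the broker two hours before it assumes a task was lost.
app.conf.broker_transport_options = {"visibility_timeout": 7200}  # seconds
```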
Prefetch blindness
Workers prefetch tasks into a local buffer (worker_prefetch_multiplier), making them invisible to other workers and to monitoring tools that only watch the broker queue length. Queue appears empty while tasks sit in prefetch buffers.
Fix: Set worker_prefetch_multiplier to 1 for long-running tasks, or use -Ofair for the worker. Sluice tracks task state transitions regardless of prefetch, so you see the real picture.
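The config-side fix might look like this (the broker URL is a placeholder):

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # example broker
# Each worker process reserves only one task at a time, so queued work
# stays visible on the broker instead of hiding in prefetch buffers.
app.conf.worker_prefetch_multiplier = 1
```

Alternatively, start the worker with the fair scheduling option, e.g. celery -A tasks worker -Ofair.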