Oban, the job processing framework from Elixir, has come to Python
Origins and related systems
- Oban for Python is conceptually influenced by prior Ruby job systems (Sidekiq, Resque, delayed_job) and by Faktory’s “central server + thin clients” approach, but takes the “focus on one ecosystem with a DB-backed queue” path.
- Some note Oban in Elixir is far richer than classic Sidekiq (workflows, cron, partitioning, dependent jobs, advanced failure handling).
Elixir vs Python ecosystems
- Several commenters wish more data/ML/BI workloads would move to Elixir, arguing its fault-tolerance and concurrency are a more natural fit for pipelines than Python.
- Others highlight Oban as one of the most elegant parts of the Elixir stack and predict Python users will like it.
Comparison with Celery, Temporal, and other tools
- Celery: widely used, powerful at scale (often with RabbitMQ), but seen as clunky, hard to extend (unique tasks, rate limiting, proper scheduling, asyncio), and overkill for many Django apps.
- Oban: positioned as a lighter-weight, Postgres-backed worker queue; good fit when you already have Postgres and want transactional enqueueing.
- Temporal: offers strict workflow guarantees, determinism, and strong reliability, but is considered heavyweight and verbose for simple jobs.
- Other mentioned options: RQ, Prefect, Argo Workflows, DBOS, Absurd, pgflow/pgmq, pg_timetable, Kafka+Debezium, Django Tasks (API only so far).
Database-backed queues & transactional outbox
- Strong enthusiasm for “jobs in the same DB” to get ACID transactions and avoid dual-write problems. Oban’s ability to enqueue within the same transaction as domain changes is seen as a major win.
- PostgreSQL features (LISTEN/NOTIFY, FOR UPDATE SKIP LOCKED, advisory locks) are cited as enabling high throughput in Oban/GoodJob-like designs; a sketch of the pattern follows this list.
- Some are uneasy about turning the primary database into a job nexus (deadlocks, heavy write load, scaling vs Kafka/Redis), but others report millions of jobs/minute with Postgres-backed queues.
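The pattern the thread keeps returning to, enqueueing the job in the same transaction as the domain write and claiming work with FOR UPDATE SKIP LOCKED, can be sketched in plain Python. The `orders`/`jobs` tables, their columns, and the function names below are hypothetical, not Oban’s actual schema or API; this is only an illustration of the pattern using psycopg2.

```python
import json

import psycopg2


def place_order_and_enqueue(conn, user_id, total):
    """Write the domain row and the job row in one transaction, so either
    both commit or neither does (no dual-write window)."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO orders (user_id, total) VALUES (%s, %s) RETURNING id",
            (user_id, total),
        )
        order_id = cur.fetchone()[0]
        # Illustrative jobs table: queue name, JSON payload, and a state column.
        cur.execute(
            "INSERT INTO jobs (queue, payload, state) VALUES (%s, %s, 'available')",
            ("emails", json.dumps({"order_id": order_id})),
        )
    conn.commit()


def claim_one_job(conn):
    """Claim the next available job. SKIP LOCKED lets many workers poll the
    same table without blocking on each other's row locks."""
    with conn.cursor() as cur:
        cur.execute(
            """
            UPDATE jobs
               SET state = 'executing'
             WHERE id = (
                 SELECT id FROM jobs
                  WHERE state = 'available'
                  ORDER BY id
                  LIMIT 1
                  FOR UPDATE SKIP LOCKED
             )
            RETURNING id, payload
            """
        )
        job = cur.fetchone()  # None when the queue is empty
    conn.commit()
    return job


if __name__ == "__main__":
    conn = psycopg2.connect("dbname=app")
    place_order_and_enqueue(conn, user_id=42, total=99.95)
    print(claim_one_job(conn))
```

In designs like this, LISTEN/NOTIFY typically only wakes idle workers when new rows arrive; the SKIP LOCKED query is still what hands each job to exactly one worker.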
Performance & scalability debates
- Skeptics recall big performance gains from moving off Postgres queues to Redis/Sidekiq and question whether Postgres can handle “hundreds of millions” of jobs.
- Others claim tens of millions of jobs/day are feasible on modest Postgres instances with careful design.
OSS vs Pro feature gating
- Major friction around Pro-only features: multi-process pool, smarter heartbeats/rescues, workflows, rate limiting, unique jobs, bulk ops.
- Some feel that putting basic reliability and parallelism behind the paid tier makes the OSS version feel “gimped” or demo-like and unsuitable for serious OSS projects.
- Others defend the model as necessary to fund sustained development, and note features have moved from Pro → OSS in the past; there’s openness to shifting that line over time.
- Several suggest a more “enterprise-only” paywall (compliance, encryption variants, formal support) would be easier to adopt.
Python-specific considerations
- Concern that the OSS version runs a single-threaded asyncio worker: fine for I/O-bound tasks and small services, but less attractive for CPU-bound workloads, though multiple worker processes can still be run (see the sketch after this list).
- Some argue process-based parallelism should be free, with async as a premium differentiator, given most Python libraries aren’t async-friendly.
- Interest in Django integration, particularly as a backend for Django’s new Tasks API, but that ecosystem is still immature.
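To illustrate the CPU-bound point, here is a minimal sketch (not Oban’s API) of how an asyncio-based worker can keep its event loop responsive by handing CPU-heavy job functions to a process pool. The `resize_images` function and its payload shape are made up for the example.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor


def resize_images(payload: dict) -> str:
    # Stand-in for CPU-bound work that would otherwise block the event loop.
    return f"processed {payload['path']}"


async def run_job(pool: ProcessPoolExecutor, payload: dict) -> str:
    loop = asyncio.get_running_loop()
    # run_in_executor moves the blocking call into a separate process,
    # so I/O-bound jobs keep making progress on the event loop.
    return await loop.run_in_executor(pool, resize_images, payload)


async def main() -> None:
    payloads = [{"path": f"img_{i}.png"} for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = await asyncio.gather(*(run_job(pool, p) for p in payloads))
    print(results)


if __name__ == "__main__":
    asyncio.run(main())
```

Running several such worker processes side by side is the other route the thread mentions for CPU-bound throughput.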