
Release Notes

0.8.0

Added

  • BullMQ-style producer parity for practical Python usage:
    • queue-scoped job_id duplicate suppression on Queue.add(...) and Queue.add_bulk(...),
    • BullMQ-style deduplication, throttle, and debounce semantics via deduplication={...} and debounce={...},
    • deduplication inspection and control APIs such as get_deduplication_job_id(...), get_debounce_job_id(...), and remove_deduplication_key(...).
  • Expanded queue administration and inspection APIs:
    • get_job(...),
    • get_jobs(...),
    • get_job_counts(...),
    • get_job_count_by_types(...),
    • count(...),
    • remove_job(...),
    • retry_job(...),
    • drain(...),
    • clean_jobs(...),
    • obliterate(...),
    • BullMQ-style inspection helpers for waiting, delayed, completed, failed, active, and waiting-children.
  • Durable repeatable scheduling APIs for production-oriented schedule management:
    • upsert_repeatable(...),
    • remove_repeatable(...),
    • durable backend-managed repeatable discovery across built-in backends.
  • New documentation sections for:
    • deduplication,
    • BullMQ parity,
    • backend capability differences,
    • production operations,
    • BullMQ migration guidance.
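
The deduplication semantics above mirror BullMQ's: while a deduplication key is "live" (inside its TTL window), further adds with the same key are suppressed and the original job id is returned. A minimal, self-contained sketch of that window behavior — the class and method names here are illustrative, not asyncmq's actual implementation:

```python
import time

class DedupWindow:
    # Sketch of a BullMQ-style deduplication window: while a key is
    # live (inside its TTL), adds with the same key are suppressed and
    # the original job id is returned instead of creating a new job.
    def __init__(self) -> None:
        self._live: dict[str, tuple[str, float]] = {}  # key -> (job_id, expires_at)

    def try_add(self, key: str, job_id: str, ttl: float) -> str:
        now = time.monotonic()
        entry = self._live.get(key)
        if entry and entry[1] > now:
            return entry[0]  # duplicate suppressed
        self._live[key] = (job_id, now + ttl)
        return job_id

window = DedupWindow()
first = window.try_add("welcome-email", "job-1", ttl=5.0)
second = window.try_add("welcome-email", "job-2", ttl=5.0)  # inside the window
```

In asyncmq itself, `Queue.add(..., deduplication={...})` carries this window configuration, and helpers such as `get_deduplication_job_id(...)` expose the suppressed-to id.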

Changed

  • Brought AsyncMQ to practical parity with BullMQ's producer behavior while preserving AsyncMQ's backend-neutral architecture, rather than coupling behavior to Redis-only data structures.
  • Queue producer semantics now behave consistently across single-job and bulk-job creation, including custom job identifiers, deduplication windows, delayed replacement, and duplicate suppression.
  • Repeatable scheduling now supports both local code-defined schedules and durable backend-managed schedules in one coherent runtime model.
  • Scheduler ownership for durable repeatables is now coordinated under queue-scoped locks so multiple workers do not all advance the same backend schedule at once.
  • Documentation was substantially expanded and reorganized:
    • deeper runtime guides for jobs, workers, schedulers, and flows,
    • richer production and migration guidance,
    • clearer navigation between features, reference material, and operations documentation.
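
The scheduler-lock coordination above can be illustrated with a small asyncio sketch, assuming one lock per queue: only the current holder advances that queue's durable schedules, while other workers skip the tick. All names here are illustrative, not asyncmq internals:

```python
import asyncio

# One asyncio.Lock per queue: only the current holder advances that
# queue's durable repeatable schedules; other workers skip the tick.
schedule_locks: dict[str, asyncio.Lock] = {}
advanced: list[str] = []

async def scheduler_tick(worker: str, queue: str) -> None:
    lock = schedule_locks.setdefault(queue, asyncio.Lock())
    if lock.locked():
        return  # another worker owns this queue's schedules right now
    async with lock:
        advanced.append(worker)  # stand-in for advancing the schedule
        await asyncio.sleep(0.01)

async def main() -> None:
    # Three workers tick the same queue concurrently; only one advances it.
    await asyncio.gather(*(scheduler_tick(w, "emails") for w in ("w1", "w2", "w3")))

asyncio.run(main())
```

In production the lock would live in the backend (not process memory) so the guarantee holds across worker processes.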

Fixed

  • Sandbox execution integration now respects the configured sandbox handler path consistently during worker execution.
  • PostgreSQL job identity semantics are now queue-scoped, aligning custom job_id handling with duplicate-suppression behavior.
  • Retry and job-payload persistence paths were aligned across backends so stateful metadata such as deduplication, dependency updates, and retried payload state are preserved correctly.
  • MongoDB payload replacement now removes stale job metadata fields instead of leaving outdated values behind after payload mutation.
  • RabbitMQ metadata persistence and locking fallbacks were aligned with the shared backend contract for queue inspection, schedule management, and deduplication-aware updates.

0.7.0

Added

  • Migrated the CLI to a Sayer-backed implementation while preserving existing command names and behavior.
  • Added a shared dashboard queue-count aggregation helper for consistent overview/metrics/SSE data.
  • Added richer dashboard operations features:
    • queue job filtering/search (q, task, job_id, sorting),
    • action audit trail page (/audit),
    • metrics history endpoint (/metrics/history) and richer metrics history visualizations.
  • Added high-value regression tests for:
    • MongoDB store _id update behavior,
    • worker handling of backend-wrapped payloads,
    • dashboard count aggregation and repeatables actions,
    • dashboard audit/history stores and filtering behavior,
    • project metadata validation for optional dependency extras.

Changed

  • Expanded and restructured documentation with:
    • a rewritten main entrypoint (index.md),
    • richer feature docs,
    • new how-to guides,
    • new reference section,
    • dedicated troubleshooting documentation,
    • improved dashboard operator documentation (capabilities + operations playbook).
  • Updated docs navigation for clearer onboarding and operational workflows.
  • Improved dashboard controllers to reuse consistent queue/job state aggregation and safer backend fallbacks.
  • Expanded dashboard documentation with route reference, richer runbooks, and additional architecture/workflow diagrams.

Fixed

  • RabbitMQ backend acknowledgment lifecycle:
    • dequeue no longer auto-acks,
    • in-flight messages are tracked and acknowledged explicitly via ack(...).
  • RabbitMQ delayed/repeatable and queue-state consistency:
    • normalized delayed-state handling,
    • corrected delayed listing/removal semantics,
    • improved queue pause/resume and worker heartbeat visibility behavior.
  • Dependency and stalled-job edge cases:
    • improved dependency merge/resolution behavior,
    • normalized stalled-job payload handling across worker/recovery paths.
  • DLQ terminal-state correctness across backends:
    • preserved explicit terminal statuses (failed / expired) instead of forcing non-terminal values.
  • CLI queue-info correctness:
    • now prefers backend queue_stats(...) instead of backend-internal attributes.
  • Packaging extras correctness:
    • fixed asyncmq[all] to include concrete installable dependencies.
  • Typing fix in repeatable queue definitions (Queue.add_repeatable) to satisfy strict type checking.
  • Worker completion path now tolerates minimal backend stubs that do not implement dependency-resolution APIs.
  • PostgreSQL delayed-job retrieval now removes due delayed records atomically, restoring delayed lifecycle correctness.
  • In-memory and MongoDB backends now prevent failed jobs from being double-counted as waiting in queue stats.
  • Stalled-job recovery behavior was aligned across backends for payload/state consistency in re-enqueue flows.
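
The acknowledgment-lifecycle fix above boils down to tracking in-flight messages until an explicit ack. A toy sketch of that contract — purely illustrative, since the real backend implements it over AMQP via aio-pika:

```python
class InFlightTracker:
    # Sketch of the explicit-ack lifecycle: dequeue hands out a message
    # without acknowledging it; ack(...) confirms and releases it, and
    # anything still in flight can be re-delivered on failure.
    def __init__(self, messages):
        self._pending = list(messages)
        self._in_flight = {}

    def dequeue(self):
        msg_id, payload = self._pending.pop(0)
        self._in_flight[msg_id] = payload  # not acked yet
        return msg_id, payload

    def ack(self, msg_id):
        self._in_flight.pop(msg_id)  # explicit acknowledgment

    def requeue_unacked(self):
        # On worker failure, unacked messages go back to pending.
        for msg_id, payload in self._in_flight.items():
            self._pending.append((msg_id, payload))
        self._in_flight.clear()

tracker = InFlightTracker([("m1", "a"), ("m2", "b")])
mid, _ = tracker.dequeue()
tracker.ack(mid)          # m1 processed and acknowledged
tracker.dequeue()         # m2 dequeued but never acked
tracker.requeue_unacked() # m2 returns to pending instead of being lost
```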

0.6.3

Changed

  • Updated internals to use Palfrey for the dashboard.

0.6.2

Fixed

  • When using the settings, `from __future__ import annotations` caused a type conflict: annotations became strings, so values were not cast to the right type.
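
For context, the conflict comes from PEP 563: with `from __future__ import annotations`, annotations are stored as strings, so naive type-based casting sees `"bool"` rather than `bool`. A minimal demonstration, using manually stringified annotations (equivalent to what the future import produces at runtime):

```python
import typing

class Settings:
    # These string annotations are exactly what
    # `from __future__ import annotations` (PEP 563) stores at runtime.
    debug: "bool" = False
    port: "int" = 8000

# Naive casting based on __annotations__ sees the *string* "bool":
raw = Settings.__annotations__["debug"]

# typing.get_type_hints(...) resolves the strings back to real types:
resolved = typing.get_type_hints(Settings)
```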

0.6.1

Fixed

  • Minimum requirements for RabbitMQ.

0.6.0

Added

  • Implemented AuthGateMiddleware to protect dashboard routes and handle HTMX redirects.
  • Added CORS and session middleware support in AsyncMQAdmin with customizable options.
  • Integrated DashboardConfig access via global settings (settings.dashboard_config).
  • Introduced JWTAuthBackend and database-backed authentication backends for enhanced security.
  • Expanded documentation with detailed examples for custom AuthBackend implementations and integration guides.

Changed

  • Updated the Dashboard documentation to reflect new authentication flows and backend integrations.
  • Clarified upgrade instructions to assist users transitioning from previous dashboard versions.
  • Improved authentication-related documentation for better clarity and usability.

Breaking

Check the Dashboard documentation: with the introduction of the new flow, the old integration no longer works. The standalone dashboard is no longer available; everything is now done via the AsyncMQAdmin object.

The documentation explains how to integrate and update easily (it's not too different).

0.5.1

Added

  • send() as an alternative to enqueue().

Changed

  • Allow enqueue, enqueue_delayed and delay to return the job id directly.

0.5.0

Added

  • Parallel DAG execution fixes for backends via FlowProducer and process_job.
  • Backward-compatible serialization improvements for Job to support both task_id and legacy task fields.
  • Automatic dependency resolution fallback when atomic flow addition is not supported by the backend.
  • Lifecycle hooks for worker startup and shutdown via worker_on_startup and worker_on_shutdown, supporting both sync and async callables.
  • New reusable lifecycle.py module providing utilities like normalize_hooks, run_hooks, and run_hooks_safely.
  • New section in the workers for hooks and lifecycle.
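
The hook utilities can be sketched roughly as follows, assuming the documented behavior of accepting both sync and async callables. This is an illustrative re-implementation of the idea, not the code in lifecycle.py:

```python
import asyncio
import inspect

def normalize_hooks(hooks):
    # Accept None, a single callable, or a list/tuple; always return a list.
    if hooks is None:
        return []
    return list(hooks) if isinstance(hooks, (list, tuple)) else [hooks]

async def run_hooks(hooks):
    # Run sync and async hooks uniformly, awaiting only when needed.
    for hook in normalize_hooks(hooks):
        result = hook()
        if inspect.isawaitable(result):
            await result

calls: list[str] = []

def on_startup() -> None:           # sync hook
    calls.append("sync")

async def on_shutdown() -> None:    # async hook
    calls.append("async")

asyncio.run(run_hooks([on_startup, on_shutdown]))
```

A worker would call such a runner with `worker_on_startup` hooks before registering heartbeats and with `worker_on_shutdown` hooks after deregistration, as described above.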

Changed

  • Improved worker and flow orchestration stability under mixed backends.
  • Worker startup now runs worker_on_startup hooks before registering heartbeats.
  • Worker shutdown now runs worker_on_shutdown hooks safely after deregistration.

Fixed

  • Timeout issues when running parallel DAG flows due to backend event-loop mismatches.
  • Various minor concurrency edge cases in job scanning and dependency unlocking logic.
  • Fixed BaseSettings for Python 3.14 and more complex inheritance hierarchies.
  • Sandbox task discovery.

0.4.6

Fixed

Due to internal changes, Lilya was missing from the dashboard dependencies, causing a regression.

  • Added Lilya as a dependency for AsyncMQ and removed several redundant ones.

0.4.5

Fixed

  • Lazy loading of the settings via Monkay on stores.

0.4.4

Fixed

  • Lazy loading of the settings via Monkay.

0.4.3

Added

  • Python 3.14 support.

0.4.2

Added

  • Added a config option for custom JSON loads/dumps #73 by chbndrhnns.
  • No negative numbers on workers list pagination #69 by chbndrhnns.

Changed

  • Accept a Redis client instance for RedisBackend #67 by chbndrhnns.

Fixed

  • Lazy loading for the global settings was pointing to the wrong location.
  • Error when parsing Boolean values from environment variables.
  • Do not reload dashboard test files #64 by chbndrhnns.
  • No negative numbers on workers list pagination #68 by chbndrhnns.
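
The Boolean environment-variable fix guards against a classic Python pitfall: `bool("False")` is `True`, because any non-empty string is truthy. A sketch of the explicit parsing required (ASYNCMQ_DEBUG is a hypothetical variable name used only for illustration):

```python
import os

def parse_bool(value: str) -> bool:
    # bool("False") is True (any non-empty string is truthy), so
    # environment flags need explicit string parsing.
    return value.strip().lower() in {"1", "true", "yes", "on"}

# ASYNCMQ_DEBUG is a hypothetical variable name, used only to illustrate.
os.environ["ASYNCMQ_DEBUG"] = "False"
naive = bool(os.environ["ASYNCMQ_DEBUG"])        # the classic bug
parsed = parse_bool(os.environ["ASYNCMQ_DEBUG"]) # correct result
```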

0.4.1

Changed

In the past, AsyncMQ used a dataclass to manage all settings, but we found that can be cumbersome for people used to slightly cleaner interfaces. The internal API was therefore updated to stop using @dataclass and use a typed Settings object directly.

  • Replaced the @dataclass-based Settings with plain typed objects.

Example before

from dataclasses import dataclass, field
from asyncmq.conf.global_settings import Settings


@dataclass
class MyCustomSettings(Settings):
    hosts: list[str] = field(default_factory=lambda: ["example.com"])

Example after

from asyncmq.conf.global_settings import Settings


class MyCustomSettings(Settings):
    hosts: list[str] = ["example.com"]

This makes the code cleaner and more readable.

0.4.0

Added

  • Session middleware is now read from the dashboard config and can be changed.
  • Made CLI commands more consistent across the client.
  • start_worker now uses the Worker object. This was previously added but never fully wired into the client.

Fixed

  • Register/de-register workers in run_worker.
  • CLI: catch RuntimeError when running commands via anyio.
  • Log exceptions during job execution.
  • Out-of-process workers were not picking up jobs from the queue.
  • index.html had a typo in the <script /> tag.

0.3.1

Added

  • RabbitMQBackend

    • Full implementation of the BaseBackend interface over AMQP via aio-pika:
      • enqueue / dequeue / ack
      • Dead-letter queue support (move_to_dlq)
      • Delayed jobs (enqueue_delayed, get_due_delayed, list_delayed, remove_delayed)
      • Repeatable jobs (enqueue_repeatable, list_repeatables, pause_repeatable, resume_repeatable)
      • Atomic flows & dependency resolution (atomic_add_flow, add_dependencies, resolve_dependency)
      • Queue pause/resume, cancellation/retry, job-state/progress/heartbeat tracking
      • Broker-side stats (queue_stats) and queue draining (drain_queue)
  • RabbitMQJobStore

    • Implements BaseJobStore, delegating persistence to any other store (e.g. RedisJobStore), so metadata stays flexible.

Changed

  • Unified on the default exchange for enqueueing and DLQ publishing

Fixed

  • Proper wrapping of store results into DelayedInfo and RepeatableInfo dataclasses
  • Correct scheduling semantics using epoch timestamps (time.time())

0.3.0

Fixed

  • Enqueue/delay was not returning a job id.
  • Backend was not returning the id on enqueue.

Breaking change

  • This barely affects most users, but the backend argument of .enqueue(...) moved from the first positional parameter to a keyword argument: .enqueue(backend, ...) is now .enqueue(..., backend=backend).
Example

Before:

import anyio
from asyncmq.queues import Queue

# send_welcome is a @task-decorated function defined elsewhere.
async def main():
    q = Queue("emails")
    await send_welcome.enqueue(q.backend, "alice@example.com", delay=10)
anyio.run(main)

Now:

import anyio
from asyncmq.queues import Queue

# send_welcome is a @task-decorated function defined elsewhere.
async def main():
    q = Queue("emails")
    await send_welcome.enqueue("alice@example.com", backend=q.backend, delay=10)
anyio.run(main)

0.2.3

Fixed

  • Added an alternative check to get the concurrency field from the Redis backend.
  • Ensure pagination data is checked when listing workers.
  • Added check for workers.html pagination to ensure it is not empty.

0.2.2

Changed

  • Updated minimum version of Lilya.
  • Workers controller now reflects pagination and sizes.

Fixed

  • When starting a worker from the command-line, it was not automatically registering that same worker.
  • The workers.html was not reflecting the correct information about the active workers.

0.2.1

Fixed

  • StaticFiles was not setting html=True for the dashboard.

0.2.0


Added

  • Dashboard

    • Brand-new dashboard showing how to integrate with your AsyncMQ setup. The dashboard is still in beta, so any feedback is welcome.
  • Top-Level CLI Commands: added four new top-level commands to the asyncmq CLI (powered by Click & Rich):

    • list-queues: list all queues known to the backend
    • list-workers: show all registered workers with queue, concurrency, and last heartbeat
    • register-worker <worker_id> <queue> [--concurrency N]: register or bump a worker's heartbeat and concurrency
    • deregister-worker <worker_id>: remove a worker from the registry.
    • Added AsyncMQGroup to the CLI, ensuring ASYNCMQ_SETTINGS_MODULE is always evaluated beforehand.
  • Worker Configuration

    • Worker now accepts a heartbeat_interval: float parameter (default HEARTBEAT_TTL/3) to control how often it re-registers itself in the backend.
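
The heartbeat behavior can be sketched as a periodic re-registration loop, with the default interval of HEARTBEAT_TTL/3 refreshing the registry entry well before the TTL expires. This is a simplified stand-in, not the Worker implementation:

```python
import asyncio

HEARTBEAT_TTL = 0.3  # illustrative value; the real TTL lives in settings
beats: list[str] = []

async def heartbeat_loop(worker_id: str,
                         interval: float = HEARTBEAT_TTL / 3,
                         ticks: int = 3) -> None:
    # Re-register the worker every `interval` seconds so its registry
    # entry is refreshed well before HEARTBEAT_TTL expires.
    for _ in range(ticks):
        beats.append(worker_id)  # stand-in for backend re-registration
        await asyncio.sleep(interval)

asyncio.run(heartbeat_loop("worker-1"))
```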

Changed

  • Redis Backend: Store Concurrency

    • Now register_worker stores both heartbeat and concurrency as a JSON blob in each hash field.
    • list_workers parses that JSON so the returned WorkerInfo.concurrency reflects the actual setting.
  • RedisBackend.cancel_job

    • Now walks the waiting and delayed sorted sets via ZRANGE/ZREM instead of using list operations, then marks the job in a Redis SET.
    • Eliminates "WRONGTYPE" errors when cancelling jobs.
  • InMemoryBackend.remove_job

    • Expanded to purge a job from waiting, delayed, DLQ, and to clean up its in-memory state (job_states, job_results, job_progress).
  • Postgres Backend: register_worker Fix

    • The queues column (a text[] type) now correctly receives a one-element Python list ([queue]) instead of a bare string.
    • This resolves DataError: invalid input for query argument $2.
  • General Pause-Check Safety

    • process_job now guards against backends that don't implement is_queue_paused by checking with hasattr, avoiding AttributeError on simple in-memory or dummy backends.

Fixed

  • Fixed Redis hash-scan code to handle both bytes and str keys/values, preventing .decode() errors.
  • Ensured Postgres connection pool is wired into both list_queues and all worker-heartbeat methods.
  • Cleaned up duplicate fixtures in test modules to prevent event-loop and fixture-resolution errors.
  • Worker registration heartbeat tests were previously timing out, now pass reliably thanks to the configurable heartbeat interval.

0.1.0

Welcome to the first official release of AsyncMQ!


Highlights

  • A 100% asyncio & AnyIO foundation: no more thread hacks or callback nightmares.
  • A pluggable backend system: Redis, Postgres, MongoDB, In-Memory, or your own.
  • Robust delayed and repeatable job scheduling, including cron expressions.
  • Built-in retries, exponential backoff, and time-to-live (TTL) semantics.
  • Dead Letter Queues (DLQ) for failed-job inspection and replay.
  • Rate limiting and concurrency control to protect downstream systems.
  • Sandboxed execution with subprocess isolation and fallback options.
  • Event Pub/Sub hooks for job:started, completed, failed, progress, cancelled, and expired.
  • Flow/DAG orchestration via FlowProducer, with atomic and fallback dependency wiring.
  • A powerful CLI for managing queues, jobs, and workers, with JSON output for scripting.

New Features

Core APIs

  • @task decorator
    • Define sync or async functions as tasks.
    • Attach .enqueue() (alias .delay()) for one-line scheduling.
    • Support for retries, ttl, progress flag, depends_on, and repeat_every.
  • Queue class
    • add(), add_bulk(), add_repeatable() for single, batch, and periodic jobs.
    • pause(), resume(), clean(), queue_stats(), and in-depth inspection.
    • Configurable concurrency, rate_limit, rate_interval, and scan_interval.
  • process_job & handle_job
    • End-to-end lifecycle: dequeue, pause detection, TTL, delayed re-enqueue, execution (sandbox or direct), retry logic, DLQ, and events.
  • run_worker orchestrator
    • Combines ConcurrencyLimiter, RateLimiter, delayed_job_scanner, and repeatable_scheduler into a single async entrypoint.
  • repeatable_scheduler
    • Dynamic cron and interval scheduling with smart sleep intervals and high accuracy.
    • Utility compute_next_run() for dashboards and testing.
  • FlowProducer
    • Enqueue entire job graphs/DAGs with dependencies, using atomic backend calls or safe fallback.
  • Job abstraction
    • Rich state machine (WAITING, ACTIVE, COMPLETED, FAILED, DELAYED, EXPIRED).
    • Serialization via to_dict()/from_dict(), TTL checks, custom backoff strategies, dependencies, and repeat metadata.
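
To make the @task surface concrete, here is a toy decorator that attaches .enqueue() (aliased as .delay()) to a function. The real asyncmq decorator persists the job to a configured backend rather than an in-process list, so treat this strictly as a sketch of the shape of the API:

```python
import asyncio
import uuid

def task(func):
    # Toy version of a @task-style decorator: attaches .enqueue()
    # (aliased as .delay()) to the function. The real asyncmq decorator
    # persists jobs to a configured backend instead of a local list.
    pending: list[tuple[str, tuple, dict]] = []

    async def enqueue(*args, **kwargs):
        job_id = str(uuid.uuid4())
        pending.append((job_id, args, kwargs))
        return job_id

    func.enqueue = enqueue
    func.delay = enqueue  # alias, as in the release notes
    func.pending = pending
    return func

@task
def send_welcome(email: str) -> None:
    print(f"welcome, {email}")

job_id = asyncio.run(send_welcome.enqueue("alice@example.com"))
```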

Observability & Configuration

  • Settings dataclass
    • Centralized configuration (debug, logging_level, backend, DB URLs, TTL, concurrency, rate limits, sandbox options, scan intervals).
    • Environment variable ASYNCMQ_SETTINGS_MODULE for overrides.
  • LoggingConfig protocol
    • Built-in StandardLoggingConfig with timestamped console logs.
    • Pluggable for JSON, file rotation, or third-party handlers via custom LoggingConfig implementations.
  • EventEmitter hooks for real-time job events, ideal for Prometheus metrics or Slack alerts.
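
The event hooks follow a conventional pub/sub shape; a minimal emitter sketch (illustrative, not asyncmq's EventEmitter):

```python
from collections import defaultdict

class EventEmitter:
    # Minimal pub/sub sketch of the job-event hook pattern.
    def __init__(self) -> None:
        self._handlers = defaultdict(list)

    def on(self, event: str, handler) -> None:
        self._handlers[event].append(handler)

    def emit(self, event: str, payload: dict) -> None:
        for handler in self._handlers[event]:
            handler(payload)

seen: list[str] = []
emitter = EventEmitter()
emitter.on("job:completed", lambda payload: seen.append(payload["id"]))
emitter.emit("job:completed", {"id": "job-42"})
emitter.emit("job:failed", {"id": "job-43"})  # no handler registered: no-op
```

A Prometheus or Slack integration would simply register handlers on the relevant `job:*` events.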

Developer Experience

  • CLI

  • asyncmq queue, asyncmq job, asyncmq worker, asyncmq info groups with intuitive commands and flags.

  • JSON and human-readable outputs, piping-friendly for shell scripts.

  • Documentation

  • Comprehensive Learn section with deep-dive guides on every component.

  • Features reference for quick lookups.
  • Performance Tuning and Security & Compliance guides (coming soon).

  • Testing Utilities

  • InMemoryBackend for fast, isolated unit tests.

  • Helpers for simulating delays, retries, failures, and cancellations.

Breaking Changes

  • This is the initial 0.1.0 release. There are no breaking changes yet!

Roadmap & Next Steps

  • Dashboard UI: real-time job monitoring and management interface.
  • Plugin Ecosystem: community-driven extensions for metrics, retries, and custom stores.

Thank you for choosing AsyncMQ! We can't wait to see what you build.

Happy tasking!