# Logging

AsyncMQ’s logging system provides a consistent, configurable, and extensible way to capture insights from every component, from task producers to workers and schedulers. This guide dives deep into the `LoggingConfig` protocol, the built-in `StandardLoggingConfig`, and how to customize logging to suit your needs.
## 1. What Is the `LoggingConfig` Protocol?

Located in `asyncmq.logging`, `LoggingConfig` defines the interface for configuring and initializing logging in AsyncMQ. Any implementation must:
- Expose a `level` attribute (`str`, e.g. `"INFO"`, `"DEBUG"`).
- Implement a `configure()` method that applies the configuration to Python’s `logging` module, and a `get_logger()` method that returns the logger instance in use.
This abstraction lets AsyncMQ treat logging setup as a pluggable component.
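For illustration, here is a minimal stdlib-only sketch of what satisfying such an interface looks like. The `Protocol` definition and the `ConsoleLoggingConfig` class below are illustrative stand-ins, not AsyncMQ’s actual code:

```python
import logging
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class LoggingConfig(Protocol):
    """Structural stand-in for asyncmq.logging.LoggingConfig."""

    level: str

    def configure(self) -> None: ...
    def get_logger(self) -> Any: ...


class ConsoleLoggingConfig:
    """Hypothetical implementation satisfying the protocol."""

    def __init__(self, level: str = "INFO") -> None:
        self.level = level

    def configure(self) -> None:
        # Resolve the level string, falling back to INFO if unknown
        logging.basicConfig(level=getattr(logging, self.level.upper(), logging.INFO))

    def get_logger(self) -> Any:
        return logging.getLogger("asyncmq")


# Any object with a `level`, `configure()`, and `get_logger()` fits the shape.
config: LoggingConfig = ConsoleLoggingConfig("DEBUG")
config.configure()
logger = config.get_logger()
```

Because the contract is structural, swapping in a different implementation requires no changes to the code that consumes it.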
## 2. The Built‑in `StandardLoggingConfig`

Provided in `asyncmq.core.utils.logging`, `StandardLoggingConfig` is the default implementation:
```python
import logging
from typing import Any

from asyncmq.logging import LoggingConfig


class StandardLoggingConfig(LoggingConfig):
    def configure(self) -> None:
        """
        Initialize Python logging for AsyncMQ components.

        1. Sets the root logger level.
        2. Configures a simple console handler with a timestamped format.
        3. Applies the level to the 'asyncmq' logger and all submodules.
        """
        # Convert string to logging constant
        lvl = getattr(logging, self.level.upper(), logging.INFO)
        # Basic console format
        fmt = "%(asctime)s [%(levelname)s] %(name)s: %(message)s"
        logging.basicConfig(level=lvl, format=fmt)
        # Ensure AsyncMQ loggers inherit this level
        logging.getLogger("asyncmq").setLevel(lvl)

    def get_logger(self) -> Any:
        return logging.getLogger("asyncmq")
```
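The `getattr(logging, ..., logging.INFO)` line is worth calling out: it converts the level string case-insensitively and falls back to `INFO` for unrecognized names. A quick stdlib-only illustration (the `resolve_level` helper is just a demo wrapper around that one line):

```python
import logging


def resolve_level(level: str) -> int:
    # Same conversion StandardLoggingConfig.configure performs
    return getattr(logging, level.upper(), logging.INFO)


print(resolve_level("debug"))     # 10, i.e. logging.DEBUG
print(resolve_level("WARNING"))   # 30, i.e. logging.WARNING
print(resolve_level("nonsense"))  # 20, falls back to logging.INFO
```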
### Why It Matters

- **Uniform Output**: all modules log with the same format and level.
- **Quick Setup**: one call to `configure()` and `get_logger()` and you’re done.
- **Extensible**: build your own class if you need JSON, file logging, or third‑party handlers.
## 3. How AsyncMQ Uses It

**Settings Integration**: in `asyncmq.conf.global_settings.Settings`:

```python
@property
def logging_config(self) -> LoggingConfig:
    from asyncmq.core.utils.logging import StandardLoggingConfig

    return StandardLoggingConfig(level=self.logging_level)
```

At startup, AsyncMQ then applies it:

```python
config = settings.logging_config
if config:
    config.configure()
```

This ensures the logging framework is fully configured before any commands or job processing run.
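The startup order can be sketched end to end with a stdlib-only stand-in. The `Settings` and `StubLoggingConfig` classes below are illustrative, not AsyncMQ’s real implementations:

```python
import logging


class StubLoggingConfig:
    """Illustrative stand-in for StandardLoggingConfig."""

    def __init__(self, level: str) -> None:
        self.level = level

    def configure(self) -> None:
        lvl = getattr(logging, self.level.upper(), logging.INFO)
        logging.getLogger("asyncmq").setLevel(lvl)

    def get_logger(self) -> logging.Logger:
        return logging.getLogger("asyncmq")


class Settings:
    """Illustrative stand-in for AsyncMQ's Settings."""

    logging_level = "WARNING"

    @property
    def logging_config(self) -> StubLoggingConfig:
        # Built lazily, just like the real property
        return StubLoggingConfig(level=self.logging_level)


settings = Settings()
config = settings.logging_config
if config:
    config.configure()  # runs at startup, before any jobs are processed
```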
## 4. Customizing Logging

### 4.1. Override `logging_level`

Simply set a different level in your `Settings` subclass:

```python
class Settings(BaseSettings):
    logging_level: str = "DEBUG"
```
### 4.2. Provide a Custom `LoggingConfig`

If you need advanced behavior (e.g., JSON output, file rotation), implement `LoggingConfig`:
```python
import logging
from logging import FileHandler, Formatter
from typing import Any

from asyncmq.logging import LoggingConfig


class MyLoggingConfig(LoggingConfig):
    def __init__(self, level: str, filename: str):
        super().__init__(level=level)
        self.filename = filename

    def configure(self) -> None:
        # Resolve the level case-insensitively, falling back to INFO
        lvl = getattr(logging, self.level.upper(), logging.INFO)
        fmt = "%(asctime)s %(levelname)s %(message)s"
        handler = FileHandler(self.filename)
        handler.setFormatter(Formatter(fmt))
        root = logging.getLogger()
        root.addHandler(handler)
        root.setLevel(lvl)

    def get_logger(self) -> Any:
        return logging.getLogger("myapp")
```
Then override the `logging_config` property in your settings:

```python
@dataclass
class Settings(BaseSettings):
    @property
    def logging_config(self):
        return MyLoggingConfig(level=self.logging_level, filename="asyncmq.log")
```
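To sanity-check a file-based configuration like this, you can exercise the same handler wiring with the standard library alone. The temp-file path and logger name below are arbitrary demo choices:

```python
import logging
import os
import tempfile
from logging import FileHandler, Formatter

# Mirror what MyLoggingConfig.configure() wires up, minus the asyncmq dependency.
log_path = os.path.join(tempfile.mkdtemp(), "asyncmq.log")
handler = FileHandler(log_path)
handler.setFormatter(Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("worker started")
handler.flush()

with open(log_path) as fh:
    line = fh.read().strip()
# line looks like "2024-01-01 12:00:00,000 INFO worker started"
```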
## 5. Detailed Reference

| Component | Location | Description |
|---|---|---|
| `LoggingConfig` protocol | `asyncmq/logging.py` | Interface requiring `level` & `configure()` |
| `StandardLoggingConfig` | `asyncmq/core/utils/logging.py` | Default setup: console handler, timestamped format |
| `logging_config` | `asyncmq.conf.global_settings.Settings` | Property that dynamically returns a `LoggingConfig` instance |
## 6. Best Practices & Pitfalls

- **Configure logging early**: set up logging before other imports run so early warnings are captured.
- **Avoid multiple handlers**: guard against repeated `basicConfig()` calls, e.g. by checking `is_logging_setup`.
- **Use structured logging in production**: consider JSON for logs shipped to ELK or Splunk.
- **Be mindful of performance**: expensive formatters or handlers (network/file I/O) can slow down high‑throughput workers.
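The duplicate-handler guard can be as simple as a module-level flag consulted before configuring. The `_configured` flag and `configure_once` helper below are illustrative; check AsyncMQ’s own `is_logging_setup` for the real mechanism:

```python
import logging

_configured = False  # module-level guard, akin to an is_logging_setup flag


def configure_once(level: str = "INFO") -> bool:
    """Apply basicConfig only on the first call; return whether it ran."""
    global _configured
    if _configured:
        return False
    logging.basicConfig(
        level=getattr(logging, level.upper(), logging.INFO),
        format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
    )
    _configured = True
    return True


first = configure_once("DEBUG")
second = configure_once("DEBUG")  # no-op: handlers are not attached twice
```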
With this knowledge, you can tame logging in AsyncMQ, whether you stick with the standard console output or build a fully customized observability pipeline! 🚀