🚀 The feature, motivation and pitch
Having a unified, user-configurable mechanism to control AOT log level would be beneficial for the user experience. Currently, we have many individual, file-scoped Python loggers. As a user, it is difficult to control log verbosity, and as a result we output very few logs by default. For development purposes, I often end up locally modifying the source to add temporary logs or increase verbosity.
As a user, I would like to be able to do the following:
- Enable verbose output during lowering, in order to see log messages from backends indicating why specific nodes weren't partitioned.
- Disable log output entirely or reduce verbosity to only critical warnings and errors.
From a developer perspective, this helps enable a few things:
- Allow backends to provide verbose diagnostic information on an opt-in basis.
  - One common use case is printing the XNNPACK graph.
- Avoid needing to locally modify source files to add or modify common logs.
- Provide a standardized, non-fatal way to surface warnings from backends, such as when critical ops are not partitioned or when a constraint is violated.
Design
PyTorch core provides a unified, component-level logging mechanism that we can leverage to control ExecuTorch logging in a manner consistent with the rest of the PyTorch stack. It exposes control over individual component loggers through both an environment variable (`TORCH_LOGS`) and a Python API.
PyTorch logging resources: https://pytorch.org/docs/stable/logging.html
To support this in ExecuTorch, we can initially register a single "executorch" logger. We can then update ExecuTorch Python code to use this logger instead of creating a dedicated Python logger for each file. We can also add additional component loggers to ExecuTorch as needed.
Once we do this for ExecuTorch core, we can work with backend owners to expose dedicated loggers for each backend. Backends currently use a mix of ad-hoc Python logging, direct writes to stdout/stderr, and raised exceptions to signal warnings.
As an example of what this might look like, a user might be able to enable debug-level logs for Dynamo, ExecuTorch core, and the XNNPACK backend as follows:

```bash
TORCH_LOGS="+dynamo,+executorch,+xnnpack" python run_export.py
```

Or programmatically, via `torch._logging.set_logs(...)`.
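For reference, this is roughly what the programmatic path looks like. This is a sketch: the `executorch` keyword is hypothetical, since `set_logs` does not currently know about it.

```python
import logging

import torch._logging

# Works today: raise Dynamo's component log level to DEBUG.
torch._logging.set_logs(dynamo=logging.DEBUG)

# Hypothetical, once an "executorch" logger is registered and
# set_logs is taught about it (not a valid keyword today):
# torch._logging.set_logs(executorch=logging.DEBUG)
```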
Task Breakdown
- Call `torch._logging.register_artifact` to register an "executorch" logger, perhaps from a new file, executorch/exir/logging.py. We'll need to make sure this registration code runs before anything attempts to access the logger (a sketch follows this list).
- Provide an API to set the log level for the ExecuTorch logger. PyTorch provides a `set_logs` API, but it currently takes hard-coded parameters. It might make sense to extend it in PyTorch core to accept arbitrary kwargs.
- Update logging calls under exir/ to call `torch._logging.getArtifactLogger` and use the returned logger to make log calls.
  - Searching for `logging.getLogger` is a good way to find these: link.
  - It might make sense to introduce an additional function that retrieves the ET logger, in order to ensure the logger is registered with PyTorch core. Alternatively, we may be able to find a place to put the registration code so that it always runs first.
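As a rough sketch of the first and third tasks, the new module might look like the following. The helper name `get_executorch_logger` is illustrative, and the `register_artifact` import path (it lives in `torch._logging._internal` rather than being exported from `torch._logging` directly) is an assumption worth verifying:

```python
# executorch/exir/logging.py (proposed; names are illustrative)
import logging

from torch._logging import getArtifactLogger
from torch._logging._internal import register_artifact  # import path is an assumption


_registered = False


def _ensure_registered() -> None:
    # Registration must happen before the first getArtifactLogger call,
    # so guard it here instead of relying on import order.
    global _registered
    if not _registered:
        register_artifact(
            "executorch",
            "ExecuTorch ahead-of-time (export/lowering) log messages.",
        )
        _registered = True


def get_executorch_logger(module_qname: str) -> logging.Logger:
    # Drop-in replacement for logging.getLogger(__name__) under exir/.
    _ensure_registered()
    return getArtifactLogger(module_qname, "executorch")
```

Call sites would then swap `logging.getLogger(__name__)` for `get_executorch_logger(__name__)`.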
Follow-up Tasks
- Register an "xnnpack" logger. Update log calls in the backends/xnnpack/ directory in the same way.
- Route XNNPACK WhyNoPartitioner logging through the new logger, perhaps emitting at info level with the logger defaulting to warning (a sketch follows this list).
- Work with Arm, Qualcomm, NXP, Core ML, and Vulkan backends to move logging to the unified framework.
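As an illustration of the WhyNoPartitioner item above, the routing could look like the following sketch. It assumes an "xnnpack" artifact has been registered analogously to "executorch", and `log_why_no_partition` is a hypothetical helper, not an existing API:

```python
from torch._logging import getArtifactLogger

# Assumes backends/xnnpack registers an "xnnpack" artifact, mirroring
# the "executorch" registration sketched earlier.
why_log = getArtifactLogger(__name__, "xnnpack")


def log_why_no_partition(node, reason: str) -> None:
    # Emitted at info level so it stays quiet by default and appears
    # when the user opts in, e.g. TORCH_LOGS="+xnnpack".
    why_log.info("Not partitioning node %s: %s", node, reason)
```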
Alternatives
No response
Additional context
No response
RFC (Optional)
No response