@@ -34,6 +34,23 @@ class DeviceStatsMonitor(Callback):
    r"""Automatically monitors and logs device stats during the training, validation, and testing stages.

    ``DeviceStatsMonitor`` is a special callback as it requires a ``logger`` to be passed as an argument to the ``Trainer``.

+    Logged Metrics:
+        Device statistics are logged with keys prefixed as
+        ``DeviceStatsMonitor.{hook_name}/{base_metric_name}`` (e.g.,
+        ``DeviceStatsMonitor.on_train_batch_start/cpu_percent``).
+        The source of these metrics depends on the ``cpu_stats`` flag
+        and the active accelerator.
+
+        CPU (via ``psutil``): Logs ``cpu_percent``, ``cpu_vm_percent``, and
+        ``cpu_swap_percent``, all as percentages (%).
+        CUDA GPU (via :func:`torch.cuda.memory_stats`): Logs detailed memory statistics
+        from PyTorch's allocator, e.g., ``allocated_bytes.all.current`` (in bytes) and
+        event counters such as ``num_ooms``. GPU compute utilization is not logged by default.
+        Other Accelerators (e.g., TPU, MPS): Logs device-specific stats, for example:
+
+        - TPU: ``avg. free memory (MB)``.
+        - MPS: ``mps.current_allocated_bytes``.
+
+        Observe the logs or check the accelerator documentation for details.
+
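
For illustration, here is a minimal sketch (not part of this diff) of how the documented key scheme composes metric names; the ``raw_stats`` values and ``hook_name`` below are made-up placeholders::

    # Hypothetical raw CPU stats, as a psutil-based collector might return them.
    raw_stats = {"cpu_percent": 12.5, "cpu_vm_percent": 48.0, "cpu_swap_percent": 0.0}
    hook_name = "on_train_batch_start"

    # Keys follow the ``DeviceStatsMonitor.{hook_name}/{base_metric_name}`` scheme.
    prefixed = {f"DeviceStatsMonitor.{hook_name}/{name}": v for name, v in raw_stats.items()}
    print(prefixed)
    # {'DeviceStatsMonitor.on_train_batch_start/cpu_percent': 12.5, ...}
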
    Args:
        cpu_stats: if ``None``, it will log CPU stats only if the accelerator is CPU.
            If ``True``, it will log CPU stats regardless of the accelerator.
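
As a usage sketch of the ``cpu_stats`` flag described above (assuming the standard ``lightning.pytorch.callbacks`` import path and an installed ``psutil``)::

    from lightning import Trainer
    from lightning.pytorch.callbacks import DeviceStatsMonitor

    # Force CPU stats to be logged even when training on a GPU accelerator.
    device_stats = DeviceStatsMonitor(cpu_stats=True)
    trainer = Trainer(callbacks=[device_stats])  # a logger must also be configured
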
@@ -45,6 +62,7 @@ class DeviceStatsMonitor(Callback):
        ModuleNotFoundError:
            If ``psutil`` is not installed and CPU stats are monitored.
+
    Example::
        from lightning import Trainer