Merged
4 changes: 4 additions & 0 deletions docs/source/_toctree.yml
@@ -92,6 +92,10 @@
- local: phone_teleop
title: Phone
title: "Teleoperators"
- sections:
- local: torch_accelerators
title: PyTorch accelerators
title: "Supported Hardware"
- sections:
- local: notebooks
title: Notebooks
42 changes: 42 additions & 0 deletions docs/source/torch_accelerators.mdx
@@ -0,0 +1,42 @@
# PyTorch accelerators

LeRobot supports multiple hardware acceleration options for both training and inference.

These options include:

- **CPU**: all computation runs on the CPU; no dedicated accelerator is used
- **CUDA**: acceleration with NVIDIA GPUs and AMD GPUs (via ROCm)
- **MPS**: acceleration with Apple Silicon GPUs
- **XPU**: acceleration with Intel integrated and discrete GPUs

## Getting Started

To use a particular accelerator, install a PyTorch build that supports it.

For the CPU, CUDA, and MPS backends, follow the instructions on the [PyTorch installation page](https://pytorch.org/get-started/locally).
For the XPU backend, follow the instructions in the [PyTorch documentation](https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html).
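Once PyTorch is installed, you can check which accelerator support your build was compiled with. This is a minimal sketch; the exact version strings depend on the build you installed:

```python
import torch

# The version suffix indicates the build variant, e.g. "+cu121" for CUDA 12.1.
print(torch.__version__)

# The CUDA toolkit version the build targets, or None for a CPU-only build.
print(torch.version.cuda)
```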

### Verifying the installation

After installation, you can verify that the accelerator is available by running:

```python
import torch
print(torch.<backend_name>.is_available()) # <backend_name> is cuda, mps, or xpu
```
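Beyond the availability check, a quick smoke test is to run a small computation on the backend. The sketch below uses CUDA when available and falls back to CPU; substitute `"mps"` or `"xpu"` for your hardware:

```python
import torch

# Use the accelerator if present, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4, 4, device=device)
b = torch.randn(4, 4, device=device)
c = a @ b  # the matrix multiply executes on the selected device
print(c.device)
```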

## How to run training or evaluation

To select the desired accelerator, use the `--policy.device` flag when running `lerobot-train` or `lerobot-eval`. For example, to use MPS on Apple Silicon, run:

```bash
lerobot-train \
  --policy.device=mps ...
```

```bash
lerobot-eval \
--policy.device=mps ...
```

In most cases, however, the presence of an accelerator is detected automatically, so the `--policy.device` flag can be omitted.
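The automatic selection behaves roughly like the following sketch (a simplified illustration for this guide, not LeRobot's actual implementation):

```python
import torch

def auto_select_device() -> torch.device:
    """Pick the first available accelerator, falling back to CPU."""
    if torch.cuda.is_available():  # NVIDIA GPUs, or AMD GPUs via ROCm
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple Silicon GPUs
        return torch.device("mps")
    if hasattr(torch, "xpu") and torch.xpu.is_available():  # Intel GPUs
        return torch.device("xpu")
    return torch.device("cpu")

print(auto_select_device())
```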