Commit b547085

[docs] Give instructions on how to enable GPU acceleration (#1955)
1 parent bcad105 commit b547085

File tree

1 file changed (+33, -0 lines)


docs/source/using_doctr/using_models.rst

Lines changed: 33 additions & 0 deletions
@@ -332,6 +332,39 @@ For example to disable the automatic grouping of lines into blocks:
model = ocr_predictor(pretrained=True, resolve_blocks=False)


Running the predictors on GPU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can run the predictors on GPU by specifying the appropriate device.

Here's how to do it for both **NVIDIA** and **Apple Silicon (MPS)** GPUs:

.. code:: python3

    import torch
    from doctr.models import ocr_predictor

    # For NVIDIA GPU
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    predictor = ocr_predictor(pretrained=True).to(device)
    # Alternatively: predictor = ocr_predictor(pretrained=True).cuda()

    # For Apple Silicon (MPS)
    device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')
    predictor = ocr_predictor(pretrained=True).to(device)

The same approach applies to all standalone predictors:

* `recognition_predictor`
* `detection_predictor`
* `crop_orientation_predictor`
* `page_orientation_predictor`

Just create the predictor instance and move it to the appropriate device.
To enable **half-precision inference**, you can append `.half()` after moving the predictor to the device.
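The snippets above hard-code the device choice per backend; the CUDA-then-MPS-then-CPU fallback can also be factored into a small helper. A minimal sketch, where `pick_device` is a hypothetical name (not part of the docTR API) and the commented usage assumes PyTorch and docTR are installed:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the preferred device string: CUDA first, then MPS, then CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# Hypothetical usage with the real availability checks (requires torch/doctr):
# import torch
# from doctr.models import ocr_predictor
# device = torch.device(pick_device(torch.cuda.is_available(),
#                                   torch.backends.mps.is_available()))
# predictor = ocr_predictor(pretrained=True).to(device).half()

print(pick_device(False, True))  # no CUDA, MPS present -> "mps"
```

Centralizing the fallback this way keeps the predictor-construction code identical across machines.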
What should I do with the output?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
