Commit c5a15c7

update installation instructions for mlx on CUDA. (#1661)
The intent is to let users on the CUDA backend know how to install the library. In addition, it may allow non-quantized models to work with MLX on Colab too. cc: @awni
1 parent bab82e6 commit c5a15c7

1 file changed: +1 −0 lines changed

packages/tasks/src/model-libraries-snippets.ts

Lines changed: 1 addition & 0 deletions
@@ -1804,6 +1804,7 @@ huggingface-cli download --local-dir ${nameWithoutNamespace(model.id)} ${model.i
 const mlxlm = (model: ModelData): string[] => [
 	`# Make sure mlx-lm is installed
 # pip install --upgrade mlx-lm
+# if on a CUDA device, also pip install mlx[cuda]
 
 # Generate text with mlx-lm
 from mlx_lm import load, generate
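
For context, the snippet this template renders walks a user through installing mlx-lm (plus mlx[cuda] on a CUDA machine) and then generating text. A minimal sketch of the resulting Python session, assuming a placeholder model id and mlx-lm's load/generate API:

# pip install --upgrade mlx-lm
# if on a CUDA device, also pip install mlx[cuda]

from mlx_lm import load, generate

# Load a model and its tokenizer from the Hub (model id here is a placeholder)
model, tokenizer = load("mlx-community/Some-Model-4bit")

# Generate text from a prompt
prompt = "Write a haiku about GPUs."
text = generate(model, tokenizer, prompt=prompt, verbose=True)
print(text)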
