Commit a8ded6d

committed
add Compose how-to page for Docker Model Runner support with Compose
Signed-off-by: Guillaume Lours <[email protected]>
1 parent 4a3f007 commit a8ded6d

File tree

2 files changed

+74
-0
lines changed

@@ -0,0 +1,72 @@
---
title: Use Docker Model
description: Learn how to integrate Docker Model Runner with Docker Compose to build AI-powered applications
keywords: compose, docker compose, model runner, ai, llm, artificial intelligence, machine learning
weight: 111
params:
  sidebar:
    badge:
      color: green
      text: New
---

{{< summary-bar feature_name="Compose model runner" >}}

Docker Model Runner can be integrated with Docker Compose to run AI models as part of your multi-container applications.
This allows you to define and run AI-powered applications alongside your other services.

## Prerequisites

- Docker Compose v2.35 or later
- Docker Desktop 4.41 or later
- Docker Model Runner enabled in Docker Desktop
- A Mac with Apple silicon (Model Runner is currently only available for Mac with Apple silicon)

## Enabling Docker Model Runner

Before you can use Docker Model Runner with Compose, you need to enable it in Docker Desktop, as described in the [Docker Model Runner documentation](/desktop/features/model-runner/).

## Provider services

Compose introduces a new service type called `provider` that allows you to declare platform capabilities required by your application. For AI models, you can use the `model` type to declare model dependencies.

Here's an example of how to define a model provider:

```yaml
services:
  chat:
    image: my-chat-app
    depends_on:
      - ai-runner

  ai-runner:
    provider:
      type: model
      options:
        model: ai/smollm2
```

Note the dedicated `provider` attribute in the `ai-runner` service.
This attribute specifies that the service is a model provider and lets you define options, such as the name of the model to use.

There is also a `depends_on` attribute in the `chat` service.
This attribute specifies that the `chat` service depends on the `ai-runner` service, which means the `ai-runner` service is started before the `chat` service so that the model information can be injected into the `chat` service.

## How it works

During `docker compose up`, Docker Model Runner automatically pulls and runs the specified model.
It also sends Compose the model tag name and the URL where the model runner can be reached.

This information is then passed to the services that declare a dependency on the model provider.
In the example above, the `chat` service receives two environment variables prefixed with the provider service name:

- `AI-RUNNER_URL` with the URL to access the model runner
- `AI-RUNNER_MODEL` with the model name, which can be combined with the URL to request the model

This allows the `chat` service to interact with the model and use it for its own purposes.
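As an illustration, the `chat` service could read these two variables and call the model over HTTP. The snippet below is a hypothetical sketch (the `chat` application itself is not part of this page), assuming the model runner exposes an OpenAI-compatible `chat/completions` endpoint; the exact API surface may differ:

```python
import json
import os
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str):
    """Build the endpoint URL and JSON payload for a chat completion call.

    Assumes an OpenAI-compatible API served at base_url (hypothetical).
    """
    endpoint = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return endpoint, payload


if __name__ == "__main__":
    # Compose injects these variables, prefixed with the provider service name.
    base_url = os.environ["AI-RUNNER_URL"]
    model = os.environ["AI-RUNNER_MODEL"]

    endpoint, payload = build_chat_request(base_url, model, "Hello!")
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    print(reply["choices"][0]["message"]["content"])
```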

## Reference

- [Docker Model Runner documentation](/desktop/features/model-runner/)

data/summary.yaml

+2
@@ -105,6 +105,8 @@ Compose mac address:
   requires: Docker Compose [2.23.2](/manuals/compose/releases/release-notes.md#2232) and later
 Compose menu:
   requires: Docker Compose [2.26.0](/manuals/compose/releases/release-notes.md#2260) and later
+Compose model runner:
+  requires: Docker Compose [2.35.0](/manuals/compose/releases/release-notes.md#2350) and later
 Compose OCI artifact:
   requires: Docker Compose [2.34.0](/manuals/compose/releases/release-notes.md#2340) and later
 Compose replace file: