Commit 992e5c3

Merge similar examples in offline_inference into single basic example (#12737)

1 parent: b69692a

29 files changed: +394 -437 lines

.buildkite/run-cpu-test.sh (+1 -1)

````diff
@@ -30,7 +30,7 @@ function cpu_tests() {
   # offline inference
   docker exec cpu-test-"$BUILDKITE_BUILD_NUMBER"-avx2-"$NUMA_NODE" bash -c "
     set -e
-    python3 examples/offline_inference/basic.py"
+    python3 examples/offline_inference/basic/generate.py --model facebook/opt-125m"

   # Run basic model test
   docker exec cpu-test-"$BUILDKITE_BUILD_NUMBER"-"$NUMA_NODE" bash -c "
````

.buildkite/run-gh200-test.sh (+1 -1)

````diff
@@ -24,5 +24,5 @@ remove_docker_container

 # Run the image and test offline inference
 docker run -e HF_TOKEN -v /root/.cache/huggingface:/root/.cache/huggingface --name gh200-test --gpus=all --entrypoint="" gh200-test bash -c '
-    python3 examples/offline_inference/cli.py --model meta-llama/Llama-3.2-1B
+    python3 examples/offline_inference/basic/generate.py --model meta-llama/Llama-3.2-1B
 '
````

.buildkite/run-hpu-test.sh (+1 -1)

````diff
@@ -20,5 +20,5 @@ trap remove_docker_container_and_exit EXIT
 remove_docker_container

 # Run the image and launch offline inference
-docker run --runtime=habana --name=hpu-test --network=host -e HABANA_VISIBLE_DEVICES=all -e VLLM_SKIP_WARMUP=true --entrypoint="" hpu-test-env python3 examples/offline_inference/basic.py
+docker run --runtime=habana --name=hpu-test --network=host -e HABANA_VISIBLE_DEVICES=all -e VLLM_SKIP_WARMUP=true --entrypoint="" hpu-test-env python3 examples/offline_inference/basic/generate.py --model facebook/opt-125m
 EXITCODE=$?
````

.buildkite/run-openvino-test.sh (+1 -1)

````diff
@@ -13,4 +13,4 @@ trap remove_docker_container EXIT
 remove_docker_container

 # Run the image and launch offline inference
-docker run --network host --env VLLM_OPENVINO_KVCACHE_SPACE=1 --name openvino-test openvino-test python3 /workspace/examples/offline_inference/basic.py
+docker run --network host --env VLLM_OPENVINO_KVCACHE_SPACE=1 --name openvino-test openvino-test python3 /workspace/examples/offline_inference/basic/generate.py --model facebook/opt-125m
````

.buildkite/run-xpu-test.sh (+2 -2)

````diff
@@ -14,6 +14,6 @@ remove_docker_container

 # Run the image and test offline inference/tensor parallel
 docker run --name xpu-test --device /dev/dri -v /dev/dri/by-path:/dev/dri/by-path --entrypoint="" xpu-test sh -c '
-    python3 examples/offline_inference/basic.py
-    python3 examples/offline_inference/cli.py -tp 2
+    python3 examples/offline_inference/basic/generate.py --model facebook/opt-125m
+    python3 examples/offline_inference/basic/generate.py --model facebook/opt-125m -tp 2
 '
````

.buildkite/test-pipeline.yaml (+6 -6)

````diff
@@ -215,18 +215,18 @@ steps:
   - examples/
   commands:
     - pip install tensorizer # for tensorizer test
-    - python3 offline_inference/basic.py
-    - python3 offline_inference/cpu_offload.py
-    - python3 offline_inference/chat.py
+    - python3 offline_inference/basic/generate.py --model facebook/opt-125m
+    - python3 offline_inference/basic/generate.py --model meta-llama/Llama-2-13b-chat-hf --cpu-offload-gb 10
+    - python3 offline_inference/basic/chat.py
     - python3 offline_inference/prefix_caching.py
     - python3 offline_inference/llm_engine_example.py
     - python3 offline_inference/vision_language.py
     - python3 offline_inference/vision_language_multi_image.py
     - python3 other/tensorize_vllm_model.py --model facebook/opt-125m serialize --serialized-directory /tmp/ --suffix v1 && python3 other/tensorize_vllm_model.py --model facebook/opt-125m deserialize --path-to-tensors /tmp/vllm/facebook/opt-125m/v1/model.tensors
     - python3 offline_inference/encoder_decoder.py
-    - python3 offline_inference/classification.py
-    - python3 offline_inference/embedding.py
-    - python3 offline_inference/scoring.py
+    - python3 offline_inference/basic/classify.py
+    - python3 offline_inference/basic/embed.py
+    - python3 offline_inference/basic/score.py
     - python3 offline_inference/profiling.py --model facebook/opt-125m run_num_steps --num-steps 2

 - label: Prefix Caching Test # 9min
````

docs/source/generate_examples.py (+2 -2)

````diff
@@ -147,7 +147,7 @@ def generate(self) -> str:
             return content

         content += "## Example materials\n\n"
-        for file in self.other_files:
+        for file in sorted(self.other_files):
             include = "include" if file.suffix == ".md" else "literalinclude"
             content += f":::{{admonition}} {file.relative_to(self.path)}\n"
             content += ":class: dropdown\n\n"
@@ -194,7 +194,7 @@ def generate_examples():
        path=EXAMPLE_DOC_DIR / "examples_offline_inference_index.md",
        title="Offline Inference",
        description=
-        "Offline inference examples demonstrate how to use vLLM in an offline setting, where the model is queried for predictions in batches.",  # noqa: E501
+        "Offline inference examples demonstrate how to use vLLM in an offline setting, where the model is queried for predictions in batches. We recommend starting with <project:basic.md>.",  # noqa: E501
        caption="Examples",
    ),
 }
````

docs/source/getting_started/installation/cpu/index.md (+2 -2)

````diff
@@ -170,7 +170,7 @@ vLLM CPU backend supports the following vLLM features:
   sudo apt-get install libtcmalloc-minimal4 # install TCMalloc library
   find / -name *libtcmalloc* # find the dynamic link library path
   export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4:$LD_PRELOAD # prepend the library to LD_PRELOAD
-  python examples/offline_inference/basic.py # run vLLM
+  python examples/offline_inference/basic/basic.py # run vLLM
   ```

 - When using the online serving, it is recommended to reserve 1-2 CPU cores for the serving framework to avoid CPU oversubscription. For example, on a platform with 32 physical CPU cores, reserving CPU 30 and 31 for the framework and using CPU 0-29 for OpenMP:
@@ -207,7 +207,7 @@ CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ MHZ

   # On this platform, it is recommend to only bind openMP threads on logical CPU cores 0-7 or 8-15
   $ export VLLM_CPU_OMP_THREADS_BIND=0-7
-  $ python examples/offline_inference/basic.py
+  $ python examples/offline_inference/basic/basic.py
   ```

 - If using vLLM CPU backend on a multi-socket machine with NUMA, be aware to set CPU cores using `VLLM_CPU_OMP_THREADS_BIND` to avoid cross NUMA node memory access.
````

docs/source/getting_started/quickstart.md (+1 -1)

````diff
@@ -40,7 +40,7 @@ For non-CUDA platforms, please refer [here](#installation-index) for specific in

 ## Offline Batched Inference

-With vLLM installed, you can start generating texts for list of input prompts (i.e. offline batch inferencing). See the example script: <gh-file:examples/offline_inference/basic.py>
+With vLLM installed, you can start generating texts for list of input prompts (i.e. offline batch inferencing). See the example script: <gh-file:examples/offline_inference/basic/basic.py>

 The first line of this example imports the classes {class}`~vllm.LLM` and {class}`~vllm.SamplingParams`:
````

docs/source/models/generative_models.md (+2 -2)

````diff
@@ -46,7 +46,7 @@ for output in outputs:
     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
 ```

-A code example can be found here: <gh-file:examples/offline_inference/basic.py>
+A code example can be found here: <gh-file:examples/offline_inference/basic/basic.py>

 ### `LLM.beam_search`

@@ -103,7 +103,7 @@ for output in outputs:
     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
 ```

-A code example can be found here: <gh-file:examples/offline_inference/chat.py>
+A code example can be found here: <gh-file:examples/offline_inference/basic/chat.py>

 If the model doesn't have a chat template or you want to specify another one,
 you can explicitly pass a chat template:
````

docs/source/models/pooling_models.md (+3 -3)

````diff
@@ -88,7 +88,7 @@ embeds = output.outputs.embedding
 print(f"Embeddings: {embeds!r} (size={len(embeds)})")
 ```

-A code example can be found here: <gh-file:examples/offline_inference/embedding.py>
+A code example can be found here: <gh-file:examples/offline_inference/basic/embed.py>

 ### `LLM.classify`

@@ -103,7 +103,7 @@ probs = output.outputs.probs
 print(f"Class Probabilities: {probs!r} (size={len(probs)})")
 ```

-A code example can be found here: <gh-file:examples/offline_inference/classification.py>
+A code example can be found here: <gh-file:examples/offline_inference/basic/classify.py>

 ### `LLM.score`

@@ -125,7 +125,7 @@ score = output.outputs.score
 print(f"Score: {score}")
 ```

-A code example can be found here: <gh-file:examples/offline_inference/scoring.py>
+A code example can be found here: <gh-file:examples/offline_inference/basic/score.py>

 ## Online Serving

````

examples/offline_inference/aqlm_example.py (-47)

This file was deleted.

examples/offline_inference/arctic.py (-28)

This file was deleted.

examples/offline_inference/basic/README.md (new file, +94)

# Basic

The `LLM` class provides the primary Python interface for doing offline inference, which is interacting with a model without using a separate model inference server.
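
For orientation, here is a rough sketch of that interface, along the lines of what the `basic.py` script does (the prompts and model choice are illustrative):

```python
from vllm import LLM, SamplingParams

# Sample prompts and sampling parameters for decoding.
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Create the engine; this loads the model and allocates the KV cache.
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in a single batch.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")
```
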
## Usage

The first script in this example shows the most basic usage of vLLM. If you are new to Python and vLLM, you should start here.

```bash
python examples/offline_inference/basic/basic.py
```

The rest of the scripts include an [argument parser](https://docs.python.org/3/library/argparse.html), which you can use to pass any arguments that are compatible with [`LLM`](https://docs.vllm.ai/en/latest/api/offline_inference/llm.html). Try running the script with `--help` for a list of all available arguments.

```bash
python examples/offline_inference/basic/classify.py
```

```bash
python examples/offline_inference/basic/embed.py
```

```bash
python examples/offline_inference/basic/score.py
```
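
Any engine option accepted by `LLM` can be appended to these commands in the same way; for example (the exact flags here are illustrative):

```bash
python examples/offline_inference/basic/embed.py --dtype float16 --enforce-eager
```
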
The chat and generate scripts also accept the [sampling parameters](https://docs.vllm.ai/en/latest/api/inference_params.html#sampling-parameters): `max_tokens`, `temperature`, `top_p` and `top_k`.

```bash
python examples/offline_inference/basic/chat.py
```

```bash
python examples/offline_inference/basic/generate.py
```
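
For example, assuming the usual dashed argparse spelling of those sampling parameters:

```bash
python examples/offline_inference/basic/generate.py --max-tokens 128 --temperature 0.8 --top-p 0.95 --top-k 50
```
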
## Features

In the scripts that support passing arguments, you can experiment with the following features.

### Default generation config

The `--generation-config` argument specifies where the generation config will be loaded from when calling `LLM.get_default_sampling_params()`. If set to `auto`, the generation config will be loaded from the model path. If set to a folder path, the generation config will be loaded from that folder. If it is not provided, vLLM defaults will be used.

> If `max_new_tokens` is specified in the generation config, it sets a server-wide limit on the number of output tokens for all requests.

Try it yourself with the following argument:

```bash
--generation-config auto
```
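
In Python terms this corresponds roughly to the following sketch (the model name is illustrative):

```python
from vllm import LLM

# With generation_config="auto", sampling defaults (e.g. temperature, max_new_tokens)
# declared in the model repo's generation_config.json are picked up.
llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct", generation_config="auto")
sampling_params = llm.get_default_sampling_params()
print(sampling_params)
```
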
### Quantization

#### AQLM

vLLM supports models that are quantized using AQLM.

Try one yourself by passing one of the following models to the `--model` argument:

- `ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf`
- `ISTA-DASLab/Llama-2-7b-AQLM-2Bit-2x8-hf`
- `ISTA-DASLab/Llama-2-13b-AQLM-2Bit-1x16-hf`
- `ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf`
- `BlackSamorez/TinyLlama-1_1B-Chat-v1_0-AQLM-2Bit-1x16-hf`

> Some of these models are likely to be too large for a single GPU. You can split them across multiple GPUs by setting `--tensor-parallel-size` to the number of required GPUs.
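
For example, to run one of the larger models above across two GPUs (assuming two GPUs are available):

```bash
python examples/offline_inference/basic/generate.py --model ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf --tensor-parallel-size 2
```
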
#### GGUF

vLLM supports models that are quantized using GGUF.

Try one yourself by downloading a GGUF quantized model and using the following arguments:

```python
from huggingface_hub import hf_hub_download
repo_id = "bartowski/Phi-3-medium-4k-instruct-GGUF"
filename = "Phi-3-medium-4k-instruct-IQ2_M.gguf"
print(hf_hub_download(repo_id, filename=filename))
```

```bash
--model {local-path-printed-above} --tokenizer microsoft/Phi-3-medium-4k-instruct
```
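
Putting the two together, the local path printed by the snippet above is passed straight to one of the scripts, for example:

```bash
python examples/offline_inference/basic/generate.py --model {local-path-printed-above} --tokenizer microsoft/Phi-3-medium-4k-instruct
```
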
### CPU offload

The `--cpu-offload-gb` argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, you can virtually think of it as a 34 GB GPU. You can then load a 13B model with BF16 weights, which requires at least 26 GB of GPU memory. Note that this requires a fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory on the fly in each model forward pass.

Try it yourself with the following arguments:

```bash
--model meta-llama/Llama-2-13b-chat-hf --cpu-offload-gb 10
```
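
Attached to the generate script, this matches the invocation that the updated CI pipeline in this commit runs:

```bash
python examples/offline_inference/basic/generate.py --model meta-llama/Llama-2-13b-chat-hf --cpu-offload-gb 10
```
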
File renamed without changes.
