
Commit cfac111

wangshuai09, xuedinge233 and hipudding authored
cann: add doc for cann backend (ggml-org#8867)
Co-authored-by: xuedinge233 <[email protected]>
Co-authored-by: hipudding <[email protected]>

1 parent 1b6ff90 commit cfac111

File tree

4 files changed: +329 -0 lines changed


.devops/llama-cli-cann.Dockerfile

Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
ARG ASCEND_VERSION=8.0.rc2.alpha003-910b-openeuler22.03-py3.8

FROM cosdt/cann:$ASCEND_VERSION AS build

WORKDIR /app

COPY . .

RUN yum install -y gcc g++ cmake make
ENV ASCEND_TOOLKIT_HOME=/usr/local/Ascend/ascend-toolkit/latest
ENV LIBRARY_PATH=${ASCEND_TOOLKIT_HOME}/lib64:$LIBRARY_PATH
ENV LD_LIBRARY_PATH=${ASCEND_TOOLKIT_HOME}/lib64:${ASCEND_TOOLKIT_HOME}/lib64/plugin/opskernel:${ASCEND_TOOLKIT_HOME}/lib64/plugin/nnengine:${ASCEND_TOOLKIT_HOME}/opp/built-in/op_impl/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH}
ENV PYTHONPATH=${ASCEND_TOOLKIT_HOME}/python/site-packages:${ASCEND_TOOLKIT_HOME}/opp/built-in/op_impl/ai_core/tbe:${PYTHONPATH}
ENV PATH=${ASCEND_TOOLKIT_HOME}/bin:${ASCEND_TOOLKIT_HOME}/compiler/ccec_compiler/bin:${PATH}
ENV ASCEND_AICPU_PATH=${ASCEND_TOOLKIT_HOME}
ENV ASCEND_OPP_PATH=${ASCEND_TOOLKIT_HOME}/opp
ENV TOOLCHAIN_HOME=${ASCEND_TOOLKIT_HOME}/toolkit
ENV ASCEND_HOME_PATH=${ASCEND_TOOLKIT_HOME}

# Link against the stub of libascend_hal.so, because the driver hasn't been mounted at build time.
ENV LD_LIBRARY_PATH=${ASCEND_TOOLKIT_HOME}/runtime/lib64/stub:$LD_LIBRARY_PATH

RUN echo "Building with static libs" && \
    source /usr/local/Ascend/ascend-toolkit/set_env.sh --force && \
    cmake -B build -DGGML_CANN=ON -DBUILD_SHARED_LIBS=OFF && \
    cmake --build build --config Release --target llama-cli

# TODO: use image with NNRT
FROM cosdt/cann:$ASCEND_VERSION AS runtime
COPY --from=build /app/build/bin/llama-cli /llama-cli

ENV LC_ALL=C.utf8

ENV ASCEND_TOOLKIT_HOME=/usr/local/Ascend/ascend-toolkit/latest
ENV LIBRARY_PATH=${ASCEND_TOOLKIT_HOME}/lib64:$LIBRARY_PATH
ENV LD_LIBRARY_PATH=${ASCEND_TOOLKIT_HOME}/lib64:${ASCEND_TOOLKIT_HOME}/lib64/plugin/opskernel:${ASCEND_TOOLKIT_HOME}/lib64/plugin/nnengine:${ASCEND_TOOLKIT_HOME}/opp/built-in/op_impl/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH}
ENV PYTHONPATH=${ASCEND_TOOLKIT_HOME}/python/site-packages:${ASCEND_TOOLKIT_HOME}/opp/built-in/op_impl/ai_core/tbe:${PYTHONPATH}
ENV PATH=${ASCEND_TOOLKIT_HOME}/bin:${ASCEND_TOOLKIT_HOME}/compiler/ccec_compiler/bin:${PATH}
ENV ASCEND_AICPU_PATH=${ASCEND_TOOLKIT_HOME}
ENV ASCEND_OPP_PATH=${ASCEND_TOOLKIT_HOME}/opp
ENV TOOLCHAIN_HOME=${ASCEND_TOOLKIT_HOME}/toolkit
ENV ASCEND_HOME_PATH=${ASCEND_TOOLKIT_HOME}

ENTRYPOINT ["/llama-cli"]
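
The base image version is exposed through the `ASCEND_VERSION` build argument, so the image can be rebuilt against a different CANN release without editing the Dockerfile. A minimal sketch, assuming the default tag shipped with this file (check the cosdt/cann registry for other available tags):

```sh
# Build with the default base image, or override ASCEND_VERSION to pin another CANN release.
docker build -t llama-cpp-cann -f .devops/llama-cli-cann.Dockerfile \
  --build-arg ASCEND_VERSION=8.0.rc2.alpha003-910b-openeuler22.03-py3.8 .
```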

README.md

Lines changed: 1 addition & 0 deletions
@@ -425,6 +425,7 @@ Please refer to [Build llama.cpp locally](./docs/build.md)
| [CUDA](./docs/build.md#cuda) | Nvidia GPU |
| [hipBLAS](./docs/build.md#hipblas) | AMD GPU |
| [Vulkan](./docs/build.md#vulkan) | GPU |
| [CANN](./docs/build.md#cann) | Ascend NPU |

## Tools

docs/backend/CANN.md

Lines changed: 259 additions & 0 deletions
@@ -0,0 +1,259 @@
# llama.cpp for CANN

- [Background](#background)
- [News](#news)
- [OS](#os)
- [Hardware](#hardware)
- [Model Supports](#model-supports)
- [DataType Supports](#datatype-supports)
- [Docker](#docker)
- [Linux](#linux)
- [TODO](#todo)

## Background

**Ascend NPU** is a range of AI processors built around the Neural Processing Unit. It efficiently handles matrix-matrix multiplications, dot products and scalar operations.

**CANN** (Compute Architecture for Neural Networks) is a heterogeneous computing architecture for AI scenarios, providing support for multiple AI frameworks on the top and serving AI processors and programming at the bottom. It plays a crucial role in bridging the gap between upper and lower layers, and is a key platform for improving the computing efficiency of Ascend AI processors. Meanwhile, it offers a highly efficient and easy-to-use programming interface for diverse application scenarios, allowing users to rapidly build AI applications and services based on the Ascend platform.

**Llama.cpp + CANN**

The llama.cpp CANN backend is designed to support the Ascend NPU. It utilizes the AscendC and ACLNN capabilities that are integrated into the CANN Toolkit and kernels to drive the Ascend NPU directly.

## News

- 2024.8
  - Support `Q4_0` and `Q8_0` data types for the Ascend NPU.
- 2024.7
  - Create the CANN backend for the Ascend NPU.

## OS

| OS    | Status  | Verified                     |
|:-----:|:-------:|:----------------------------:|
| Linux | Support | Ubuntu 22.04, OpenEuler22.03 |


## Hardware

### Ascend NPU

**Verified devices**

| Ascend NPU    | Status  |
|:-------------:|:-------:|
| Atlas 300T A2 | Support |

*Notes:*

- If you have trouble with your Ascend NPU device, please create an issue with the **[CANN]** prefix/tag.
- If you run successfully with your Ascend NPU device, please help update the table above.


## Model Supports

| Model Name                           | FP16 | Q8_0 | Q4_0 |
|:-------------------------------------|:----:|:----:|:----:|
| AquilaChat2-7B                       |  √   |  √   |  √   |
| Baichuan-7b                          |  √   |  √   |  √   |
| Baichuan2-7B-Chat                    |  √   |  √   |  √   |
| bitnet_b1_58-large                   |  √   |  √   |  √   |
| bloom-560m                           |  √   |  x   |  √   |
| bloomz-alpaca-560m                   |  √   |  x   |  √   |
| c4ai-command-r-35B-v01               |  x   |  x   |  x   |
| chatglm3-6B                          |  x   |  x   |  x   |
| chinese-alpaca-2-1.3b                |  √   |  √   |  √   |
| CodeShell-7B                         |  √   |  √   |  √   |
| deepseek-ai_deepseek-coder-1.3B-base |  x   |  x   |  x   |
| deepseek-ai_DeepSeek-V2-Lite         |  x   |  x   |  x   |
| deepseek-coder-6.7B-instruct         |  x   |  x   |  x   |
| DeepSeek-V2-Lite-64x1.5B             |  x   |  x   |  x   |
| falcon-7b-instruct                   |  √   |  √   |  √   |
| flan-t5-large                        |  √   |  √   |  √   |
| gemma-2-9b-it                        |  √   |  √   |  √   |
| glm-4-9B                             |  x   |  x   |  x   |
| gpt2                                 |  √   |  √   |  √   |
| Gpt2-163M                            |  √   |  √   |  √   |
| granite-3B-code-instruct             |  √   |  √   |  √   |
| GritLM-7B                            |  √   |  √   |  √   |
| internlm2_5-7b-chat                  |  √   |  √   |  √   |
| koala-7B-HF                          |  √   |  √   |  √   |
| Llama-2-7b-chat-hf                   |  √   |  √   |  √   |
| Llama-3-Smaug-8B                     |  √   |  √   |  √   |
| Llama2-Chinese-7b-Chat               |  √   |  √   |  √   |
| Llama3-8B                            |  √   |  √   |  √   |
| Llama3-8b-chinese                    |  √   |  √   |  √   |
| mamba-130m-hf                        |  √   |  √   |  √   |
| Mistral-7B-Instruct-v0.2             |  √   |  √   |  √   |
| Mixtral-8x7B-Instruct-v0.1           |  x   |  √   |  √   |
| mpt-7B                               |  √   |  √   |  √   |
| OLMo-1B-hf                           |  √   |  √   |  √   |
| OpenELM-3B-Instruct                  |  √   |  √   |  √   |
| Orion-14b-base                       |  √   |  √   |  √   |
| phi1                                 |  x   |  x   |  x   |
| phi2                                 |  x   |  x   |  x   |
| Phi-3-mini-4k-instruct               |  √   |  √   |  √   |
| plamo-13b                            |  √   |  √   |  √   |
| pythia-70M                           |  x   |  x   |  x   |
| Qwen-7B                              |  √   |  √   |  √   |
| Qwen2-1.5B-Instruct                  |  √   |  x   |  √   |
| Refact-1_6B-fim                      |  √   |  √   |  √   |
| SmolLM-135M                          |  √   |  √   |  √   |
| stablelm-zephyr                      |  x   |  x   |  x   |
| stablelm-2-zephyr-1_6b               |  x   |  x   |  x   |
| starcoderbase-1b                     |  √   |  √   |  √   |
| starcoder2-3b                        |  √   |  √   |  √   |
| vigogne-7b-chat                      |  √   |  √   |  √   |
| xverse-7b-chat                       |  √   |  √   |  √   |
| Yi-6b-Chat                           |  √   |  √   |  √   |


## DataType Supports

| DataType | Status  |
|:--------:|:-------:|
| FP16     | Support |
| Q8_0     | Support |
| Q4_0     | Support |
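
The quantized types above can be produced from an FP16 GGUF with the `llama-quantize` tool that is built alongside the other llama.cpp binaries. A minimal sketch, with placeholder paths:

```sh
# Quantize an FP16 GGUF to Q8_0 or Q4_0 (paths are placeholders).
./build/bin/llama-quantize path_to_model-f16.gguf path_to_model-q8_0.gguf q8_0
./build/bin/llama-quantize path_to_model-f16.gguf path_to_model-q4_0.gguf q4_0
```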

## Docker

### Build Images

You can build an image with llama.cpp in one command:

```sh
docker build -t llama-cpp-cann -f .devops/llama-cli-cann.Dockerfile .
```

### Run container

```sh
# Find all cards.
npu-smi info

# Select the cards that you want to use; make sure they are not in use by someone else.
# The following uses device 0.
docker run --name llamacpp --device /dev/davinci0 --device /dev/davinci_manager --device /dev/devmm_svm --device /dev/hisi_hdc -v /usr/local/dcmi:/usr/local/dcmi -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info -v /PATH_TO_YOUR_MODELS/:/app/models -it llama-cpp-cann -m /app/models/MODEL_PATH -ngl 32 -p "Building a website can be done in 10 simple steps:"
```
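
To expose more than one card to the container, add one `--device` flag per card. A sketch assuming devices 0 and 1 are free (`-sm layer`, the default split mode, spreads layers across them; see the [Linux](#linux) section for device selection):

```sh
# Same as above, but with two cards exposed to the container.
docker run --name llamacpp --device /dev/davinci0 --device /dev/davinci1 --device /dev/davinci_manager --device /dev/devmm_svm --device /dev/hisi_hdc -v /usr/local/dcmi:/usr/local/dcmi -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info -v /PATH_TO_YOUR_MODELS/:/app/models -it llama-cpp-cann -m /app/models/MODEL_PATH -ngl 32 -sm layer -p "Building a website can be done in 10 simple steps:"
```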

*Notes:*

- You may need to install the Ascend Driver and firmware on the **host** machine *(please refer to the [Linux configuration](#linux) for details)*.

## Linux

### I. Setup Environment

1. **Install Ascend Driver and firmware**

    ```sh
    # Create the driver running user.
    sudo groupadd HwHiAiUser
    sudo useradd -g HwHiAiUser -d /home/HwHiAiUser -m HwHiAiUser -s /bin/bash
    sudo usermod -aG HwHiAiUser $USER

    # Download the driver from https://www.hiascend.com/hardware/firmware-drivers/community
    # according to your system, then install it.
    sudo sh Ascend-hdk-910b-npu-driver_x.x.x_linux-{arch}.run --full --install-for-all
    ```

    Once installed, run `npu-smi info` to check whether the driver was installed successfully.

    ```sh
    +-------------------------------------------------------------------------------------------+
    | npu-smi 24.1.rc2                 Version: 24.1.rc2                                         |
    +----------------------+---------------+----------------------------------------------------+
    | NPU   Name           | Health        | Power(W)    Temp(C)           Hugepages-Usage(page)|
    | Chip                 | Bus-Id        | AICore(%)   Memory-Usage(MB)  HBM-Usage(MB)        |
    +======================+===============+====================================================+
    | 2     xxx            | OK            | 64.4        51                15   / 15            |
    | 0                    | 0000:01:00.0  | 0           1873 / 15077      0    / 32768         |
    +======================+===============+====================================================+
    | 5     xxx            | OK            | 64.0        52                15   / 15            |
    | 0                    | 0000:81:00.0  | 0           1874 / 15077      0    / 32768         |
    +======================+===============+====================================================+
    | No running processes found in NPU 2                                                       |
    +======================+===============+====================================================+
    | No running processes found in NPU 5                                                       |
    +======================+===============+====================================================+
    ```

2. **Install Ascend Firmware**

    ```sh
    # Download the firmware from https://www.hiascend.com/hardware/firmware-drivers/community
    # according to your system, then install it.
    sudo sh Ascend-hdk-910b-npu-firmware_x.x.x.x.X.run --full
    ```

    If the following message appears, the firmware was installed successfully.

    ```sh
    Firmware package installed successfully!
    ```

3. **Install CANN toolkit and kernels**

    The CANN toolkit and kernels can be obtained from the official [CANN Toolkit](https://www.hiascend.com/zh/developer/download/community/result?module=cann) page.

    Please download the version that matches your system. The minimum version required is 8.0.RC2.alpha002; the install commands are:

    ```sh
    pip3 install attrs numpy decorator sympy cffi pyyaml pathlib2 psutil protobuf scipy requests absl-py wheel typing_extensions
    sh Ascend-cann-toolkit_8.0.RC2.alpha002_linux-aarch64.run --install
    sh Ascend-cann-kernels-910b_8.0.RC2.alpha002_linux.run --install
    ```

    Set the Ascend variables:

    ```sh
    echo "source ~/Ascend/ascend-toolkit/set_env.sh" >> ~/.bashrc
    source ~/.bashrc
    ```

Upon a successful installation, CANN is enabled for the available Ascend devices.
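
As a quick sanity check that the variables took effect (assuming `set_env.sh` exports `ASCEND_TOOLKIT_HOME`, which the Dockerfile in this commit also sets; the exact path depends on where you installed the toolkit):

```sh
# Should print the toolkit location, e.g. ~/Ascend/ascend-toolkit/latest for a per-user install.
echo $ASCEND_TOOLKIT_HOME
```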

### II. Build llama.cpp

```sh
cmake -B build -DGGML_CANN=on -DCMAKE_BUILD_TYPE=release
cmake --build build --config release
```

### III. Run the inference

1. **Retrieve and prepare model**

    You can refer to the general [*Prepare and Quantize*](../../README.md#prepare-and-quantize) guide for model preparation.

    **Notes**:

    - The CANN backend currently supports only FP16/Q4_0/Q8_0 models.
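
    As a hedged sketch of that flow (the Hugging Face checkpoint directory and output path are placeholders): convert the checkpoint to an FP16 GGUF with the repository's `convert_hf_to_gguf.py`, then quantize it with `llama-quantize` if you want Q8_0/Q4_0, as shown in the [DataType Supports](#datatype-supports) section.

    ```sh
    # Convert a Hugging Face checkpoint to an FP16 GGUF (paths are placeholders).
    python3 convert_hf_to_gguf.py path_to_hf_model --outfile path_to_model-f16.gguf --outtype f16
    ```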

2. **Launch inference**

    There are two device selection modes:

    - Single device: use the one device specified by the user.
    - Multiple devices: automatically choose all devices with the same backend.

    | Device selection | Parameter                              |
    |:----------------:|:--------------------------------------:|
    | Single device    | --split-mode none --main-gpu DEVICE_ID |
    | Multiple devices | --split-mode layer (default)           |

    Examples:

    - Use device 0:

    ```sh
    ./build/bin/llama-cli -m path_to_model -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 -sm none -mg 0
    ```

    - Use multiple devices:

    ```sh
    ./build/bin/llama-cli -m path_to_model -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 -sm layer
    ```

### GitHub contribution

Please add the **[CANN]** prefix/tag to issue/PR titles to help the CANN team check and address them without delay.


## TODO

- Support more models and data types.

docs/build.md

Lines changed: 25 additions & 0 deletions
@@ -352,6 +352,31 @@ cmake --build build --config Release
# ggml_vulkan: Using Intel(R) Graphics (ADL GT2) | uma: 1 | fp16: 1 | warp size: 32
```

### CANN

This provides NPU acceleration using the AI cores of your Ascend NPU. [CANN](https://www.hiascend.com/en/software/cann) is a hierarchical API that helps you quickly build AI applications and services based on Ascend NPUs.

For more information about the Ascend NPU, visit the [Ascend Community](https://www.hiascend.com/en/).

Make sure to have the CANN toolkit installed. You can download it from here: [CANN Toolkit](https://www.hiascend.com/developer/download/community/result?module=cann)

Go to the `llama.cpp` directory and build using CMake.

```bash
cmake -B build -DGGML_CANN=on -DCMAKE_BUILD_TYPE=release
cmake --build build --config release
```

You can test with:

`./build/llama-cli -m PATH_TO_MODEL -p "Building a website can be done in 10 steps:" -ngl 32`

If the following info is printed to the screen, you are using the llama.cpp CANN backend:

```bash
llm_load_tensors:       CANN buffer size = 13313.00 MiB
llama_new_context_with_model: CANN compute buffer size = 1260.81 MiB
```

For detailed info, such as model/device support and CANN installation, please refer to [llama.cpp for CANN](./backend/CANN.md).

### Android
To read documentation for how to build on Android, [click here](./android.md)
