[Termux](https://github.com/termux/termux-app#installation) provides a way to run `llama.cpp` on an Android device (no root required). First, install the required packages:
```
apt update && apt upgrade -y
apt install git make cmake
```
It's recommended to move your model inside the `~/` directory for best performance:
```
cd storage/downloads
mv model.gguf ~/
```
[Get the code](https://github.com/ggerganov/llama.cpp#get-the-code) & [follow the Linux build instructions](https://github.com/ggerganov/llama.cpp#build) to build `llama.cpp`.
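A minimal sketch of that flow inside Termux, assuming the generic CMake-based Linux build still applies (check the linked build instructions for the current commands):

```bash
# Clone the repository and do a plain CPU build with CMake.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```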
## Building the Project using Android NDK
Obtain the [Android NDK](https://developer.android.com/ndk) and then build with CMake.
Run the following commands on your computer to avoid downloading the NDK onto your mobile device. Alternatively, you can also do this in Termux:
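A rough sketch of such a cross-compile, assuming the NDK is unpacked on your computer; the ABI and platform level below are assumptions, so pick the values that match your device:

```bash
# Cross-compile llama.cpp for a 64-bit ARM Android device with the NDK's CMake toolchain.
export NDK=/path/to/android-ndk   # location of the unpacked NDK (assumption)
cmake -B build-android \
  -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  -DANDROID_PLATFORM=android-23
cmake --build build-android --config Release
```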
Install [Termux](https://github.com/termux/termux-app#installation) on your device and run `termux-setup-storage` to get access to your SD card (on Android 11 and newer, run the command twice).
Finally, copy the built `llama` binaries and the model file to your device storage. Because file permissions on the Android sdcard cannot be changed, copy the executables to `/data/data/com.termux/files/home/bin` and then run the following commands in Termux to make them executable:
(This assumes you have pushed the built executables to `/sdcard/llama.cpp/bin` using `adb push`.)
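A sketch of that copy step, using the paths mentioned above:

```bash
# Inside Termux: copy the executables off the sdcard (where permissions cannot be
# changed) into Termux's home directory, then mark them executable.
cp -r /sdcard/llama.cpp/bin /data/data/com.termux/files/home/
cd /data/data/com.termux/files/home/bin
chmod +x ./*
```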
Download the model [llama-2-7b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_K_M.gguf), push it to `/sdcard/llama.cpp/`, and then move it to `/data/data/com.termux/files/home/model/`.
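For example (the local filename of the download is an assumption; run the `adb push` from your computer and the rest inside Termux):

```bash
# On the computer: push the downloaded model to the device's sdcard.
adb push llama-2-7b-chat.Q4_K_M.gguf /sdcard/llama.cpp/

# Inside Termux: move the model into the home directory used above.
mkdir -p /data/data/com.termux/files/home/model
mv /sdcard/llama.cpp/llama-2-7b-chat.Q4_K_M.gguf /data/data/com.termux/files/home/model/
```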
## Docker

Prerequisites:

* Docker must be installed and running on your system.
* Create a folder to store big models and intermediate files (e.g. `/llama/models`; see the example below).
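For example, on the Docker host:

```bash
# Create the host-side folder that will later be bind-mounted into the container as /models.
mkdir -p /llama/models
```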
## Images
We have three Docker images available for this project:
1. `ghcr.io/ggerganov/llama.cpp:full`: This image includes both the main executable file and the tools to convert LLaMA models into ggml and quantize them to 4 bits. (platforms: `linux/amd64`, `linux/arm64`)
2. `ghcr.io/ggerganov/llama.cpp:light`: This image only includes the main executable file. (platforms: `linux/amd64`, `linux/arm64`)
3. `ghcr.io/ggerganov/llama.cpp:server`: This image only includes the server executable file. (platforms: `linux/amd64`, `linux/arm64`)

Additionally, the following images are available, similar to the above:
- `ghcr.io/ggerganov/llama.cpp:full-cuda`: Same as `full` but compiled with CUDA support. (platforms: `linux/amd64`)
- `ghcr.io/ggerganov/llama.cpp:light-cuda`: Same as `light` but compiled with CUDA support. (platforms: `linux/amd64`)
- `ghcr.io/ggerganov/llama.cpp:server-cuda`: Same as `server` but compiled with CUDA support. (platforms: `linux/amd64`)
- `ghcr.io/ggerganov/llama.cpp:full-rocm`: Same as `full` but compiled with ROCm support. (platforms: `linux/amd64`, `linux/arm64`)
- `ghcr.io/ggerganov/llama.cpp:light-rocm`: Same as `light` but compiled with ROCm support. (platforms: `linux/amd64`, `linux/arm64`)
- `ghcr.io/ggerganov/llama.cpp:server-rocm`: Same as `server` but compiled with ROCm support. (platforms: `linux/amd64`, `linux/arm64`)

The GPU-enabled images are not currently tested by CI beyond being built. They are built exactly as defined by the Dockerfiles in [.devops/](.devops/) and the GitHub Action in [.github/workflows/docker.yml](.github/workflows/docker.yml). If you need different settings (for example, a different CUDA or ROCm library), you'll need to build the images locally for now.
## Usage
The easiest way to download the models, convert them to ggml, and optimize them is with the `--all-in-one` command of the full Docker image.
Replace `/path/to/models` below with the actual path where you downloaded the models.
```bash
docker run -v /path/to/models:/models ghcr.io/ggerganov/llama.cpp:full --all-in-one "/models/" 7B
```
On completion, you are ready to play!
```bash
docker run -v /path/to/models:/models ghcr.io/ggerganov/llama.cpp:full --run -m /models/7B/ggml-model-q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 512
```
or with a light image:
```bash
docker run -v /path/to/models:/models ghcr.io/ggerganov/llama.cpp:light -m /models/7B/ggml-model-q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 512
```

## Docker With CUDA

Assuming one has the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit) properly installed on Linux, or is using a GPU-enabled cloud, `cuBLAS` should be accessible inside the container.
You may want to pass in some different `ARGS`, depending on the CUDA environment supported by your container host, as well as the GPU architecture.
The defaults are:
- `CUDA_VERSION` set to `11.7.1`
- `CUDA_DOCKER_ARCH` set to `all`
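As a sketch, a local build that overrides those defaults might look like the following; the Dockerfile path under `.devops/` and the argument values are assumptions, so match them to your repository checkout and your host's CUDA setup:

```bash
# Build the full CUDA image locally, overriding the default build arguments.
docker build -t local/llama.cpp:full-cuda \
  --build-arg CUDA_VERSION=11.7.1 \
  --build-arg CUDA_DOCKER_ARCH=all \
  -f .devops/full-cuda.Dockerfile .
```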
The resulting images are essentially the same as the non-CUDA images:
1. `local/llama.cpp:full-cuda`: This image includes both the main executable file and the tools to convert LLaMA models into ggml and quantize them to 4 bits.
2. `local/llama.cpp:light-cuda`: This image only includes the main executable file.
3. `local/llama.cpp:server-cuda`: This image only includes the server executable file.
## Usage
After building locally, usage is similar to the non-CUDA examples, but you'll need to add the `--gpus` flag. You will also want to use the `--n-gpu-layers` flag to offload layers to the GPU.
```bash
docker run --gpus all -v /path/to/models:/models local/llama.cpp:full-cuda --run -m /models/7B/ggml-model-q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 512 --n-gpu-layers 1
docker run --gpus all -v /path/to/models:/models local/llama.cpp:light-cuda -m /models/7B/ggml-model-q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 512 --n-gpu-layers 1
docker run --gpus all -v /path/to/models:/models local/llama.cpp:server-cuda -m /models/7B/ggml-model-q4_0.gguf --port 8000 --host 0.0.0.0 -n 512 --n-gpu-layers 1
```

## Nix

The `llama-cpp` package expression is automatically updated within the [nixpkgs repo](https://github.com/NixOS/nixpkgs/blob/nixos-24.05/pkgs/by-name/ll/llama-cpp/package.nix#L164).
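A minimal sketch of installing that package, assuming `nix profile` (flakes) is enabled on your system:

```bash
# Install llama.cpp from the nixpkgs llama-cpp package into the user profile.
nix profile install nixpkgs#llama-cpp
```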
## Flox
On Mac and Linux, Flox can be used to install llama.cpp within a Flox environment.
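A sketch of the install command; the package name `llama-cpp` is an assumption, so search the Flox catalog if it differs:

```bash
# Inside (or while setting up) a Flox environment, install the llama.cpp package.
flox install llama-cpp
```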
When running the larger models, make sure you have enough disk space to store all the intermediate files.
## Memory/Disk Requirements
As the models are currently fully loaded into memory, you will need adequate disk space to save them and sufficient RAM to load them. At the moment, memory and disk requirements are the same.

| Model | Original size | Quantized size (Q4_0) |
|------:|--------------:|----------------------:|
|    7B |         13 GB |                3.9 GB |
|   13B |         24 GB |                7.8 GB |
|   30B |         60 GB |               19.5 GB |
|   65B |        120 GB |               38.5 GB |
## Quantization
Several quantization methods are supported. They differ in the resulting model disk size and inference speed.
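As an illustrative sketch of producing the Q4_0 files referenced above, assuming you already have an f16 GGUF model (the tool name and argument order are assumptions; check the quantization tool's help output):

```bash
# Quantize an f16 GGUF model down to 4-bit (Q4_0).
./quantize ./models/7B/ggml-model-f16.gguf ./models/7B/ggml-model-q4_0.gguf Q4_0
```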