README.md: 11 additions & 4 deletions
@@ -119,15 +119,15 @@ Run local LLMs on iGPU, APU and CPU (AMD, Intel, and Qualcomm (Coming Soon)). E
1. `ellm_chatbot --port 7788 --host localhost --server_port <ellm_server_port> --server_host localhost`. **Note:** To find out more about the supported arguments, run `ellm_chatbot --help`.
It is an interface that allows you to download and deploy an OpenAI API compatible server. The UI shows the disk space required to download each model.
1. `ellm_modelui --port 6678`. **Note:** To find out more about the supported arguments, run `ellm_modelui --help`.
## Compile OpenAI-API Compatible Server into Windows Executable
@@ -138,13 +138,20 @@ It is an interface that allows you to download and deploy OpenAI API compatible
5. Use it like `ellm_server`: `.\ellm_api_server.exe --model_path <path/to/model/weight>`.
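Since the server speaks the OpenAI API, any generic HTTP client can talk to it once it is up. A minimal sketch of forming such a request; note the port `5555` and the model name `local-model` are placeholder assumptions, not the project's documented defaults (check `--help` for the real flags):

```python
import json
import urllib.request

# Hypothetical host/port -- match the flags you passed to ellm_server or
# ellm_api_server.exe. The model name is a placeholder as well.
url = "http://localhost:5555/v1/chat/completions"
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Uncomment once the server is actually running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the route follows the standard OpenAI chat-completions shape, official OpenAI client libraries pointed at this base URL should also work.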
## Prebuilt OpenAI API Compatible Windows Executable (Alpha)
You can find the prebuilt OpenAI API compatible Windows executable on the Releases page.
_PowerShell/Terminal Usage (use it like `ellm_server`)_: