@@ -109,6 +109,13 @@ Add `--device ascend` in the serve command.
lmdeploy serve api_server --backend pytorch --device ascend --eager-mode internlm/internlm2_5-7b-chat
```
+Run the following command to launch a docker container for lmdeploy LLM serving:
+
+```bash
+docker run --rm -it --net=host crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:latest \
+    bash -i -c "lmdeploy serve api_server --backend pytorch --device ascend --eager-mode internlm/internlm2_5-7b-chat"
+```
+
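Once the api_server above is running, it can be exercised over its OpenAI-compatible HTTP route. A minimal sketch, assuming the server's default address `127.0.0.1:23333` (adjust if you passed a different host or port); `chat_body` is an illustrative helper, not part of lmdeploy:

```shell
#!/bin/sh
# Build the JSON body for an OpenAI-style chat-completion request.
# chat_body is a hypothetical helper: $1 = model name, $2 = user prompt.
chat_body() {
    printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}' "$1" "$2"
}

# To query a running server (assumed default address 127.0.0.1:23333):
# curl -s http://127.0.0.1:23333/v1/chat/completions \
#     -H 'Content-Type: application/json' \
#     -d "$(chat_body internlm/internlm2_5-7b-chat 'Hello!')"
```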
### Serve a VLM model

Add `--device ascend` in the serve command
@@ -117,6 +124,13 @@ Add `--device ascend` in the serve command

lmdeploy serve api_server --backend pytorch --device ascend --eager-mode OpenGVLab/InternVL2-2B
```
+Run the following command to launch a docker container for lmdeploy VLM serving:
+
+```bash
+docker run --rm -it --net=host crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:latest \
+    bash -i -c "lmdeploy serve api_server --backend pytorch --device ascend --eager-mode OpenGVLab/InternVL2-2B"
+```
+
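A VLM request pairs text with an image reference in a single user message, again over the OpenAI-compatible route. A sketch under the same assumptions as above; `vlm_body` and the image URL are purely illustrative:

```shell
#!/bin/sh
# Build a multimodal (text + image URL) chat-completion body.
# vlm_body is a hypothetical helper: $1 = model, $2 = text prompt, $3 = image URL.
vlm_body() {
    printf '{"model":"%s","messages":[{"role":"user","content":[{"type":"text","text":"%s"},{"type":"image_url","image_url":{"url":"%s"}}]}]}' \
        "$1" "$2" "$3"
}

# To query a running VLM server (assumed default address 127.0.0.1:23333):
# curl -s http://127.0.0.1:23333/v1/chat/completions \
#     -H 'Content-Type: application/json' \
#     -d "$(vlm_body OpenGVLab/InternVL2-2B 'Describe this image.' https://example.com/cat.png)"
```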
## Inference with Command line Interface

Add `--device ascend` in the serve command.
@@ -128,7 +142,7 @@ lmdeploy chat internlm/internlm2_5-7b-chat --backend pytorch --device ascend --e

Run the following command to launch lmdeploy chatting in a container:

```bash
-docker exec -it lmdeploy_ascend_demo \
+docker run --rm -it crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:latest \
    bash -i -c "lmdeploy chat --backend pytorch --device ascend --eager-mode internlm/internlm2_5-7b-chat"
```
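The chat invocation above can be parameterized over the model name so the same container image serves different models. A small sketch; `chat_cmd` and `IMAGE` are illustrative names introduced here, and the commented `docker run` line shows one way to use them:

```shell
#!/bin/sh
# Ascend image used throughout this guide.
IMAGE="crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:latest"

# Hypothetical helper: compose the in-container chat command for a model.
chat_cmd() {
    printf 'lmdeploy chat --backend pytorch --device ascend --eager-mode %s' "$1"
}

# Hand the composed command to the container, e.g.:
# docker run --rm -it "$IMAGE" bash -i -c "$(chat_cmd internlm/internlm2_5-7b-chat)"
```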
@@ -105,6 +105,13 @@ if __name__ == "__main__":
lmdeploy serve api_server --backend pytorch --device ascend --eager-mode internlm/internlm2_5-7b-chat
```
+You can also run the following command to launch a container for LLM model serving:
+
+```bash
+docker run --rm -it --net=host crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:latest \
+    bash -i -c "lmdeploy serve api_server --backend pytorch --device ascend --eager-mode internlm/internlm2_5-7b-chat"
+```
+
### VLM model serving

Add `--device ascend` to the serve command.
@@ -113,6 +120,13 @@ lmdeploy serve api_server --backend pytorch --device ascend --eager-mode internl

lmdeploy serve api_server --backend pytorch --device ascend --eager-mode OpenGVLab/InternVL2-2B
```
+You can also run the following command to launch a container for VLM model serving:
+
+```bash
+docker run --rm -it --net=host crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:latest \
+    bash -i -c "lmdeploy serve api_server --backend pytorch --device ascend --eager-mode OpenGVLab/InternVL2-2B"
+```
+
## Chat with an LLM model via the command line

Add `--device ascend` to the serve command.
@@ -124,7 +138,7 @@ lmdeploy chat internlm/internlm2_5-7b-chat --backend pytorch --device ascend --e

You can also run the following command to start lmdeploy chatting in a container:

```bash
-docker exec -it lmdeploy_ascend_demo \
+docker run --rm -it crpi-4crprmm5baj1v8iv.cn-hangzhou.personal.cr.aliyuncs.com/lmdeploy_dlinfer/ascend:latest \
    bash -i -c "lmdeploy chat --backend pytorch --device ascend --eager-mode internlm/internlm2_5-7b-chat"
```