- [Deploy on AWS with Hugging Face Inference Endpoints](#deploy-on-aws-with-hugging-face-inference-endpoints)
- [Deploy on Amazon Bedrock Marketplace](#deploy-on-amazon-bedrock-marketplace)
- [Deploy on Amazon SageMaker AI with Hugging Face LLM DLCs](#deploy-on-amazon-sagemaker-ai-with-hugging-face-llm-dlcs)
- [DeepSeek R1 on GPUs](#deepseek-r1-on-gpus)
- [Distilled models on GPUs](#distilled-models-on-gpus)
> **Note:** The team is working on enabling the deployment of DeepSeek models on Inferentia instances. Stay tuned!
### Deploy on Amazon Bedrock Marketplace
You can deploy the DeepSeek distilled models on Amazon Bedrock via the Marketplace, which deploys an endpoint in Amazon SageMaker AI under the hood. Here is a video showing how to navigate the AWS console:
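Once the Marketplace endpoint is live, it can be invoked programmatically through the Bedrock runtime API. Below is a minimal sketch, assuming `boto3` is installed and AWS credentials are configured; the `BEDROCK_ENDPOINT_ARN` environment variable, the guard around the call, and the request shape are illustrative assumptions, not values from this post:

```python
import json
import os

# The endpoint ARN is a hypothetical placeholder: Bedrock Marketplace shows
# the real ARN in the console after the endpoint finishes deploying.
endpoint_arn = os.environ.get("BEDROCK_ENDPOINT_ARN", "")

# Chat-style request body; parameters are illustrative.
body = {
    "messages": [{"role": "user", "content": "What is 2 + 2?"}],
    "max_tokens": 128,
}

if endpoint_arn:  # actual invocation needs AWS credentials and a live endpoint
    import boto3

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=endpoint_arn, body=json.dumps(body))
    print(json.loads(response["body"].read()))
```

Passing the endpoint ARN as `modelId` lets you reuse the same Bedrock runtime client you would use for first-party Bedrock models.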
### Deploy on Amazon SageMaker AI with Hugging Face LLM DLCs
#### DeepSeek R1 on GPUs
#### Distilled models on GPUs
Let’s walk through the deployment of DeepSeek-R1-Distill-Llama-70B.
You can deploy the DeepSeek distilled models on Amazon SageMaker AI with Hugging Face LLM DLCs, either directly through JumpStart or with the SageMaker Python SDK.

Here is a video showing how to navigate the AWS console:
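For the SDK path, here is a minimal sketch using the SageMaker Python SDK's Hugging Face LLM DLC helpers. The instance type, GPU count, and the `RUN_SAGEMAKER_DEPLOY` guard are illustrative assumptions, not prescriptions from this post; size the instance for the 70B model you actually deploy:

```python
import os

# Container environment for the Hugging Face LLM (TGI) DLC.
env = {
    "HF_MODEL_ID": "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    "SM_NUM_GPUS": "8",  # assumption: matches the GPUs on the target instance
}

if os.environ.get("RUN_SAGEMAKER_DEPLOY"):  # deployment needs AWS credentials and quota
    import sagemaker
    from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

    role = sagemaker.get_execution_role()
    llm_image = get_huggingface_llm_image_uri("huggingface")

    model = HuggingFaceModel(role=role, image_uri=llm_image, env=env)
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.g6e.48xlarge",  # assumption: an 8-GPU instance
    )
    print(predictor.predict({"inputs": "What is deep learning?"}))
```

`get_huggingface_llm_image_uri` resolves the region-specific DLC image, so the same script works across AWS regions without hard-coding an image URI.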