I have spent the whole day trying to deploy a custom HF model to a SageMaker endpoint and make sure it uses the GPU, with no luck, so I'm hoping to get some insight here.
Here's my code/inference.py script:
import io
import os

import torch
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Pick the GPU if CUDA is visible inside the container, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


def model_fn(model_dir, context=None):
    """
    Load the model for inference.
    """
    model_path = os.path.join(model_dir, 'model/')
    processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
    print("Loaded Processor")
    model = VisionEncoderDecoderModel.from_pretrained(model_path)
    print("Loaded Model")
    model_dict = {'model': model.to(device), 'processor': processor}
    return model_dict


def predict_fn(images, model, context=None):
    """
    Apply the model to the incoming request.
    """
    images = [Image.open(io.BytesIO(content)) for content in images]
    print("Opened Image")
    processor = model['processor']
    model = model['model']
    pixel_values = processor(images, return_tensors="pt").pixel_values
    pixel_values = pixel_values.to(device)
    generated_ids = model.generate(pixel_values)
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
    print("Generated Text: " + str(generated_text))
    return generated_text
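(For debugging, this is the kind of check that could be dropped into model_fn to confirm whether CUDA is visible inside the container at all — just a sketch, not something that's in my script above, and the helper name is made up:)

def _log_device_info():
    # Debugging sketch: log whether the endpoint container can see a GPU when the model loads.
    print("torch.cuda.is_available():", torch.cuda.is_available())
    print("torch.cuda.device_count():", torch.cuda.device_count())
    if torch.cuda.is_available():
        print("torch.cuda.get_device_name(0):", torch.cuda.get_device_name(0))
    print("selected device:", device)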
I've read the threads here and here, and followed the suggestions made by @philschmid. I tried changing the transformers_version arg, but the endpoint still doesn't use the GPU (see pic below). I tested the model in a SageMaker notebook on the same GPU instance (ml.g4dn.xlarge) and can confirm the inference code uses the GPU as expected, so I'm not sure why it doesn't once it's deployed to the endpoint with the Docker image.
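For reference, the deployment follows the usual HuggingFaceModel pattern from the sagemaker SDK — roughly the sketch below; the model_data path, role, and DLC versions are placeholders rather than my exact values:

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

# Rough sketch of the deployment call (S3 path, role, and versions are placeholders).
role = sagemaker.get_execution_role()

huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",   # placeholder S3 path to the model archive
    role=role,
    entry_point="inference.py",
    source_dir="code",
    transformers_version="4.26",                # the arg I've been changing
    pytorch_version="1.13",
    py_version="py39",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",             # same GPU instance used in the notebook test
)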
I'd appreciate any help on this, thanks!