Accessing inference endpoint environment / config from within the inference code is undocumented #2381
Unanswered · ram-nadella asked this question in Help
Replies: 3 comments
-
@ram-nadella sorry for the delayed response here! Is your custom image based on one of the prebuilt SageMaker images, or completely built from scratch?
-
Hi @laurenyu, it's completely from scratch. (We're operating in a Python environment, using a base of …)
-
Thanks for clarifying! I've passed this along to the SageMaker Hosting team to see if they can help. |
- Original question from @ram-nadella:
What did you find confusing? Please describe.
I was looking for a way to get information about the inference endpoint from within the inference code at runtime (e.g. within a Flask request handler), similar to how training and processing jobs have access to JSON config in a file under the /opt/ml/ directory structure, or to an environment variable.
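For reference, a minimal sketch of the training-side mechanism being described, assuming a standard SageMaker training container (these config file paths are documented for training jobs; no equivalent is documented for inference containers, which is the gap this issue is about):

```python
# Minimal sketch: reading the documented job config inside a SageMaker
# *training* container. Nothing equivalent under /opt/ml/ is documented
# for inference containers.
import json
from pathlib import Path

CONFIG_DIR = Path("/opt/ml/input/config")

# Hyperparameters passed to the job (all values arrive as strings).
hyperparameters = json.loads((CONFIG_DIR / "hyperparameters.json").read_text())

# Resource layout for the job (current_host, hosts, etc.).
resource_config = json.loads((CONFIG_DIR / "resourceconfig.json").read_text())
```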
Describe how documentation can be improved

Need docs for the right /opt/ml/ config file that would provide info about the current inference endpoint. (The endpoint name would be a good starting point, since the rest of the info about the model etc. can be pulled from it using the SDK.)

Additional context
Example of how this info is accessible for training: SM_TRAINING_ENV has a lot of info about the job itself. See https://docs.aws.amazon.com/sagemaker/latest/dg/docker-container-environmental-variables.html
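A minimal sketch of reading that variable from inside a training container. SM_TRAINING_ENV is documented for SageMaker script-mode training containers; the exact keys shown come from the sagemaker-training toolkit and are worth verifying against your setup:

```python
import json
import os

# SM_TRAINING_ENV is a JSON string describing the running training job.
training_env = json.loads(os.environ["SM_TRAINING_ENV"])

print(training_env["job_name"])      # name of the training job
print(training_env["current_host"])  # e.g. "algo-1"
```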
Is there a similar file/env var for inference?
FYI, we are using a custom container for inference.
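One hedged workaround while this remains undocumented: inject the endpoint name yourself at deploy time. The env argument on sagemaker.model.Model maps to the container Environment in the CreateModel API, so the container sees it as an ordinary environment variable. ENDPOINT_NAME below is our own invented variable name, not a SageMaker built-in, and the image/role/bucket values are placeholders:

```python
from sagemaker.model import Model

model = Model(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/my-inference-image:latest",
    model_data="s3://my-bucket/model.tar.gz",
    role="arn:aws:iam::<account>:role/MySageMakerRole",
    # Self-injected; SageMaker does not set this for you.
    env={"ENDPOINT_NAME": "my-endpoint"},
)
model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="my-endpoint",
)
```

Then inside the container, e.g. in the Flask request handler, read it back and fetch the rest with boto3:

```python
import os
import boto3

endpoint_name = os.environ.get("ENDPOINT_NAME")  # set by us at deploy time
if endpoint_name:
    # DescribeEndpoint is a standard SageMaker API; the endpoint's
    # execution role needs permission to call it.
    details = boto3.client("sagemaker").describe_endpoint(
        EndpointName=endpoint_name
    )
```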