This repository contains the boilerplate code needed to create a containerized evaluation function written in Python.
This chapter helps you to quickly set up a new Python evaluation function using this template repository.
> [!NOTE]
> After setting up the evaluation function, delete this chapter from the `README.md` file, and add your own documentation.
- In GitHub, choose `Use this template` > `Create a new repository` in the repository toolbar.

- Choose the owner, and pick a name for the new repository.

  > [!IMPORTANT]
  > If you want to deploy the evaluation function to Lambda Feedback, make sure to choose the Lambda Feedback organization as the owner.

- Set the visibility to `Public` or `Private`.

  > [!IMPORTANT]
  > If you want to use GitHub deployment protection rules, make sure to set the visibility to `Public`.

- Click on `Create repository`.
Clone the new repository to your local machine using the following command:

```bash
git clone <repository-url>
```
When deploying to Lambda Feedback, set the evaluation function name in the `config.json` file. Read the Deploy to Lambda Feedback section for more information.
You're ready to start developing your evaluation function. Head over to the Development section to learn more.
In the `README.md` file, change the title and description so they fit the purpose of your evaluation function.

Also, don't forget to delete the Quickstart chapter from the `README.md` file after you've completed these steps.
You can run the evaluation function either using the pre-built Docker image or by building and running it locally.
The pre-built Docker image comes with Shimmy installed.
> [!TIP]
> Shimmy is a small application that listens for incoming HTTP requests, validates the incoming data, and forwards it to the underlying evaluation function. Learn more about Shimmy in the Documentation.
The pre-built Docker image is available on the GitHub Container Registry. You can run the image using the following command:

```bash
docker run -p 8080:8080 ghcr.io/lambda-feedback/evaluation-function-boilerplate-python:latest
```
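Once the container is running, you can exercise it over HTTP. Below is a minimal sketch using only the Python standard library; it assumes Shimmy accepts a `POST` request at the root path on port `8080` with a JSON body carrying the `response`, `answer`, and `params` fields. The exact route and request schema are defined by Shimmy and the Evaluation Function API, so consult the Documentation if this request is rejected.

```python
import json
import urllib.request

# Hypothetical request body; field names are taken from this README,
# the authoritative schema is the Evaluation Function API.
body = json.dumps({
    "response": "x + 1",  # the student's response
    "answer": "x+1",      # the expected answer
    "params": {},         # optional evaluation parameters
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8080/",  # assumed route
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))
```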
You can choose between running the Python evaluation function itself, or using Shimmy to run the function.
### Raw Mode
Use the following command to run the evaluation function directly:
```bash
python -m evaluation_function.main
```
This will run the evaluation function using the input data from `request.json` and write the output to `response.json`.
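For local testing you can create the input file yourself. The snippet below is a minimal sketch: it assumes `request.json` carries the `response` and `answer` fields mentioned in this README, while the full schema (including any `params`) is defined by the Evaluation Function API.

```python
import json

# Hypothetical minimal input for a local run; field names are taken
# from this README, the full schema may include additional fields.
request = {
    "response": "x + 1",  # the student's response
    "answer": "x+1",      # the expected answer
}

with open("request.json", "w") as f:
    json.dump(request, f, indent=2)
```

After running `python -m evaluation_function.main`, inspect `response.json` for the result.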
### Shimmy
To have a more user-friendly experience, you can use Shimmy to run the evaluation function.
To run the evaluation function using Shimmy, use the following command:

```bash
shimmy -c "python" -a "-m" -a "evaluation_function.main" -i ipc
```
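Here, `-c` sets the command to execute and each `-a` appends an argument to it, so Shimmy effectively launches `python -m evaluation_function.main`; `-i ipc` selects the interface Shimmy uses to exchange data with the function. Refer to the Shimmy documentation for the full list of flags and interfaces.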
```
.github/workflows/
  build.yml                          # builds the public evaluation function image
  deploy.yml                         # deploys the evaluation function to Lambda Feedback
evaluation_function/
  main.py                            # evaluation function entrypoint
  evaluation.py                      # evaluation function implementation
  evaluation_test.py                 # evaluation function tests
  preview.py                         # evaluation function preview
  preview_test.py                    # evaluation function preview tests
config.json                          # evaluation function deployment configuration file
```
In its most basic form, the development workflow consists of writing the evaluation function in the `evaluation_function/evaluation.py` file and testing it locally. As long as the evaluation function adheres to the Evaluation Function API, a development workflow that uses Shimmy is not necessary.
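As a rough orientation, the implementation in `evaluation_function/evaluation.py` centers on a single entry point. The sketch below is an illustration under assumptions, not the boilerplate's actual code: it assumes an `evaluation_function(response, answer, params)` signature and a plain dict return value, whereas the template may define its own parameter and result types. The Evaluation Function API documentation has the authoritative shapes.

```python
def evaluation_function(response, answer, params):
    """Evaluate a student's response against the expected answer.

    Minimal sketch: a naive string comparison stands in for real
    evaluation logic. The returned dict follows the convention of
    reporting an "is_correct" flag (assumed from the Evaluation
    Function API).
    """
    is_correct = str(response).strip() == str(answer).strip()
    return {"is_correct": is_correct}
```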
Testing the evaluation function can be done by running the `dev.py` script using the Python interpreter like so:

```bash
python -m evaluation_function.dev <response> <answer>
```
> [!NOTE]
> Specify the `response` and `answer` as command-line arguments.
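For example, `python -m evaluation_function.dev "2x + 1" "2*x + 1"` evaluates the hypothetical response `2x + 1` against the answer `2*x + 1` and prints the result.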
To build the Docker image, run the following command:

```bash
docker build -t my-python-evaluation-function .
```

To run the Docker image, use the following command:

```bash
docker run -it --rm -p 8080:8080 my-python-evaluation-function
```
This will start the evaluation function and expose it on port `8080`.
This section guides you through the deployment process of the evaluation function. If you want to deploy the evaluation function to Lambda Feedback, follow the steps in the Lambda Feedback section. Otherwise, you can deploy the evaluation function to other platforms using the Other Platforms section.
Deploying the evaluation function to Lambda Feedback is straightforward, as long as the repository is within the Lambda Feedback organization.
After configuring the repository, a GitHub Actions workflow will automatically build and deploy the evaluation function to Lambda Feedback as soon as changes are pushed to the main branch of the repository.
### Configuration

The deployment configuration is stored in the `config.json` file. Choose a unique name for the evaluation function and set the `EvaluationFunctionName` field in `config.json`.
> [!IMPORTANT]
> The evaluation function name must be unique within the Lambda Feedback organization, and must be in `lowerCamelCase`.

You can find an example configuration below:

```json
{
  "EvaluationFunctionName": "compareStringsWithPython"
}
```
If you want to deploy the evaluation function to other platforms, you can use the Docker image.
Please refer to the deployment documentation of the platform you want to deploy the evaluation function to.
If you need help with the deployment, feel free to reach out to the Lambda Feedback team by creating an issue in the template repository.
If you want to pull changes from the template repository to your repository, follow these steps:
- Add the template repository as a remote:

  ```bash
  git remote add template https://github.com/lambda-feedback/evaluation-function-boilerplate-python.git
  ```

- Fetch changes from all remotes:

  ```bash
  git fetch --all
  ```

- Merge changes from the template repository:

  ```bash
  git merge template/main --allow-unrelated-histories
  ```

> [!WARNING]
> Make sure to resolve any merge conflicts, keeping only the changes you want.
If your evaluation function works fine when run locally but not when containerized, there is more to consider. Here are some common issues and approaches to solving them:
### Run-time dependencies

Make sure that all run-time dependencies are installed in the Docker image.

- **Python packages**: Make sure to add the dependency to the `pyproject.toml` file, and run `poetry install` in the Dockerfile.
- **System packages**: If you need to install system packages, add the installation command to the Dockerfile.
- **ML models**: If your evaluation function depends on ML models, make sure to include them in the Docker image.
- **Data files**: If your evaluation function depends on data files, make sure to include them in the Docker image.
### Architecture

Some packages may not be compatible with the architecture of the Docker image. Make sure to use the correct platform when building and running the Docker image.

E.g. to build a Docker image for the `linux/x86_64` platform, use the following command:

```bash
docker build --platform=linux/x86_64 .
```
### Verify Standalone Execution

If requests are timing out, it might be due to the evaluation function not being able to run. Make sure that the evaluation function can be run as a standalone script. This will help you identify issues that are specific to the containerized environment.

To run just the evaluation function as a standalone script, without using Shimmy, use the following command:

```bash
docker run -it --rm my-python-evaluation-function python -m evaluation_function.main
```
If the command starts without any errors, the evaluation function is working correctly. If not, you will see the error message in the console.