
Commit 2573699

Merge pull request #39 from lambda-feedback/test-tr124-restructuring

Restructured version of compareExpressions

2 parents: 0bf46ea + ecb5da2


56 files changed (+4627, -4342 lines)

Dockerfile

Lines changed: 0 additions & 37 deletions
This file was deleted.

README.md

Lines changed: 65 additions & 207 deletions
@@ -1,238 +1,96 @@
-# Python Evaluation Function
+# Evaluation Function Template Repository

-This repository contains the boilerplate code needed to create a containerized evaluation function written in Python.
+This template repository contains the boilerplate code needed to create an AWS Lambda function that can be written by any tutor to grade a response area in any way they like.

-## Quickstart
+This version is specifically for Python; however, the ultimate goal is to provide similar boilerplate repositories in other languages, allowing tutors the freedom to code in whatever language they feel most comfortable with.

-This chapter helps you to quickly set up a new Python evaluation function using this template repository.
+## Table of Contents
+- [Evaluation Function Template Repository](#evaluation-function-template-repository)
+- [Table of Contents](#table-of-contents)
+- [Repository Structure](#repository-structure)
+- [Usage](#usage)
+- [Getting Started](#getting-started)
+- [How it works](#how-it-works)
+- [Docker & Amazon Web Services (AWS)](#docker--amazon-web-services-aws)
+- [Middleware Functions](#middleware-functions)
+- [GitHub Actions](#github-actions)
+- [Pre-requisites](#pre-requisites)
+- [Contact](#contact)

-> [!NOTE]
-> After setting up the evaluation function, delete this chapter from the `README.md` file, and add your own documentation.
-
-#### 1. Create a new repository
-
-- In GitHub, choose `Use this template` > `Create a new repository` in the repository toolbar.
-
-- Choose the owner, and pick a name for the new repository.
-
-> [!IMPORTANT]
-> If you want to deploy the evaluation function to Lambda Feedback, make sure to choose the Lambda Feedback organization as the owner.
-
-- Set the visibility to `Public` or `Private`.
-
-> [!IMPORTANT]
-> If you want to use GitHub [deployment protection rules](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment#deployment-protection-rules), make sure to set the visibility to `Public`.
-
-- Click on `Create repository`.
-
-#### 2. Clone the new repository
-
-Clone the new repository to your local machine using the following command:
+## Repository Structure

 ```bash
-git clone <repository-url>
+app/
+    __init__.py
+    evaluation.py        # Script containing the main evaluation_function
+    docs.md              # Documentation page for this function (required)
+    evaluation_tests.py  # Unit tests for the main evaluation_function
+    requirements.txt     # List of packages needed by evaluation.py
+    Dockerfile           # For building the whole image to deploy to AWS
+
+.github/
+    workflows/
+        test-and-deploy.yml  # Testing and deployment pipeline
+
+config.json              # Specify the name of the evaluation function in this file
+.gitignore
 ```
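For orientation, here is a minimal sketch of what `app/evaluation.py` might contain. The signature and the returned fields (`is_correct`, `feedback`) are assumptions for illustration, not code taken from this commit.

```python
# app/evaluation.py -- illustrative sketch only; the signature and return
# fields below are assumptions, not code from this commit.

def evaluation_function(response, answer, params):
    """Compare the student's response against the tutor-supplied answer.

    Returns a dict that the middleware wraps into the HTTP response.
    """
    is_correct = str(response).strip() == str(answer).strip()
    feedback = "Correct!" if is_correct else "Your response does not match the expected answer."
    return {"is_correct": is_correct, "feedback": feedback}
```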

-#### 3. Configure the evaluation function
-
-When deploying to Lambda Feedback, set the evaluation function name in the `config.json` file. Read the [Deploy to Lambda Feedback](#deploy-to-lambda-feedback) section for more information.
-
-#### 4. Develop the evaluation function
-
-You're ready to start developing your evaluation function. Head over to the [Development](#development) section to learn more.
-
-#### 5. Update the README
-
-In the `README.md` file, change the title and description so they fit the purpose of your evaluation function.
-
-Also, don't forget to delete the Quickstart chapter from the `README.md` file after you've completed these steps.

 ## Usage

-You can run the evaluation function using either [the pre-built Docker image](#run-the-docker-image) or [the script itself](#run-the-script).
-
-### Run the Docker Image
-
-The pre-built Docker image comes with [Shimmy](https://github.com/lambda-feedback/shimmy) installed.
-
-> [!TIP]
-> Shimmy is a small application that listens for incoming HTTP requests, validates the incoming data and forwards it to the underlying evaluation function. Learn more about Shimmy in the [Documentation](https://github.com/lambda-feedback/shimmy).
-
-The pre-built Docker image is available on the GitHub Container Registry. You can run the image using the following command:
-
-```bash
-docker run -p 8080:8080 ghcr.io/lambda-feedback/evaluation-function-boilerplate-python:latest
-```
-
-### Run the Script
-
-You can choose between running the Python evaluation function itself, or using Shimmy to run the function.
-
-**Raw Mode**
-
-Use the following command to run the evaluation function directly:
-
-```bash
-python -m evaluation_function.main
-```
-
-This will run the evaluation function using the input data from `request.json` and write the output to `response.json`.
-
-**Shimmy**
-
-For a more user-friendly experience, you can use [Shimmy](https://github.com/lambda-feedback/shimmy) to run the evaluation function.
-
-To run the evaluation function using Shimmy, use the following command:
-
-```bash
-shimmy -c "python" -a "-m" -a "evaluation_function.main" -i ipc
-```
-
-## Development
-
-### Prerequisites
-
-- [Docker](https://docs.docker.com/get-docker/)
-- [Python](https://www.python.org)
-
-### Repository Structure
-
-```bash
-.github/workflows/
-    build.yml   # builds the public evaluation function image
-    deploy.yml  # deploys the evaluation function to Lambda Feedback
-
-evaluation_function/main.py             # evaluation function entrypoint
-evaluation_function/evaluation.py       # evaluation function implementation
-evaluation_function/evaluation_test.py  # evaluation function tests
-evaluation_function/preview.py          # evaluation function preview
-evaluation_function/preview_test.py     # evaluation function preview tests
-
-config.json                             # evaluation function deployment configuration file
-```
-
-### Development Workflow
-
-In its most basic form, the development workflow consists of writing the evaluation function in the `evaluation_function/evaluation.py` file and testing it locally. As long as the evaluation function adheres to the Evaluation Function API, a development workflow that incorporates Shimmy is not necessary.
-
-Testing the evaluation function can be done by running the `dev.py` script with the Python interpreter like so:
-
-```bash
-python -m evaluation_function.dev <response> <answer>
-```
-
-> [!NOTE]
-> Specify the `response` and `answer` as command-line arguments.
-
-### Building the Docker Image
-
-To build the Docker image, run the following command:
-
-```bash
-docker build -t my-python-evaluation-function .
-```
-
-### Running the Docker Image
-
-To run the Docker image, use the following command:
-
-```bash
-docker run -it --rm -p 8080:8080 my-python-evaluation-function
-```
-
-This will start the evaluation function and expose it on port `8080`.
-
-## Deployment
-
-This section guides you through deploying the evaluation function. To deploy it to Lambda Feedback, follow the steps in the [Deploy to Lambda Feedback](#deploy-to-lambda-feedback) section. Otherwise, see the [Deploy to other Platforms](#deploy-to-other-platforms) section.
-
-### Deploy to Lambda Feedback
-
-Deploying the evaluation function to Lambda Feedback is simple and straightforward, as long as the repository is within the [Lambda Feedback organization](https://github.com/lambda-feedback).
-
-After configuring the repository, a [GitHub Actions workflow](.github/workflows/deploy.yml) will automatically build and deploy the evaluation function to Lambda Feedback as soon as changes are pushed to the main branch of the repository.
+### Getting Started

-**Configuration**
+1. Clone this repository.
+2. Change the name of the evaluation function in `config.json`.
+3. The name must be unique. To view existing grading functions, go to:

-The deployment configuration is stored in the `config.json` file. Choose a unique name for the evaluation function and set the `EvaluationFunctionName` field in [`config.json`](config.json).
+   - [Staging API Gateway Integrations](https://eu-west-2.console.aws.amazon.com/apigateway/main/develop/integrations/attach?api=c1o0u8se7b&region=eu-west-2&routes=0xsoy4q)
+   - [Production API Gateway Integrations](https://eu-west-2.console.aws.amazon.com/apigateway/main/develop/integrations/attach?api=cttolq2oph&integration=qpbgva8&region=eu-west-2&routes=0xsoy4q)

-> [!IMPORTANT]
-> The evaluation function name must be unique within the Lambda Feedback organization, and must be in `lowerCamelCase`. You can find an example configuration below:
+4. Merge commits into the default branch.
+   - This will trigger the `test-and-deploy.yml` workflow, which builds the Docker image, pushes it to a shared ECR repository, and then calls the backend `grading-function/ensure` route to build the infrastructure needed to make the function available from the client app.

-```json
-{
-    "EvaluationFunctionName": "compareStringsWithPython"
-}
-```
-
-### Deploy to other Platforms
-
-If you want to deploy the evaluation function to other platforms, you can use the Docker image to do so.
-
-Please refer to the deployment documentation of the platform you want to deploy the evaluation function to.
-
-If you need help with the deployment, feel free to reach out to the Lambda Feedback team by creating an issue in the template repository.
-
-## FAQ
+5. You are now ready to start developing your function:
+
+   - Edit the `app/evaluation.py` file, which ultimately gets called when the function is given the `eval` command.
+   - Edit the `app/evaluation_tests.py` file to add tests (a minimal sketch follows this list), which get run:
+     - Every time you commit to this repo, before the image is built and deployed
+     - Whenever the `healthcheck` command is supplied to the deployed function
+   - Edit the `app/docs.md` file to reflect your changes. This file is baked into the function's image and made available via the `docs` command; it is used to display the function's documentation on our [Documentation](https://lambda-feedback.github.io/Documentation/) website once it has been hooked up!
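As a rough illustration of what `app/evaluation_tests.py` could look like, here is a minimal `unittest` sketch. The relative import and the `is_correct` field are assumptions and should be matched to whatever your `evaluation_function` actually returns. Run it locally with `python -m unittest app.evaluation_tests`.

```python
# app/evaluation_tests.py -- illustrative sketch only; the import path and the
# `is_correct` field are assumptions, not code from this commit.
import unittest

from .evaluation import evaluation_function


class TestEvaluationFunction(unittest.TestCase):
    def test_matching_response_is_correct(self):
        result = evaluation_function("42", "42", {})
        self.assertTrue(result.get("is_correct"))

    def test_non_matching_response_is_incorrect(self):
        result = evaluation_function("41", "42", {})
        self.assertFalse(result.get("is_correct"))
```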

-### Pull Changes from the Template Repository
+---

-If you want to pull changes from the template repository to your repository, follow these steps:
+## How it works

-1. Add the template repository as a remote:
+The function is built on top of a custom base layer, [BaseEvaluationFunctionLayer](https://github.com/lambda-feedback/BaseEvalutionFunctionLayer), which provides tools, tests and schema checking relevant to all evaluation functions.

-```bash
-git remote add template https://github.com/lambda-feedback/evaluation-function-boilerplate-python.git
-```
+### Docker & Amazon Web Services (AWS)

-2. Fetch changes from all remotes:
+The grading scripts are hosted on AWS Lambda, using containers to run a Docker image of the app. Docker is a popular tool in software development that allows programs to be hosted on any machine by bundling all their requirements and dependencies into a single file called an **image**.

-```bash
-git fetch --all
-```
+Images are run within **containers** on AWS, which gives us a lot of flexibility over which programming languages and packages/libraries can be used. For more information on Docker, read this [introduction to containerisation](https://www.freecodecamp.org/news/a-beginner-friendly-introduction-to-containers-vms-and-docker-79a9e3e119b/). To learn more about AWS Lambda, click [here](https://geekflare.com/aws-lambda-for-beginners/).

-3. Merge changes from the template repository:
+### Middleware Functions
+In order to run the algorithm and schema on AWS Lambda, some middleware functions have been provided to handle, validate and return the data, so all you need to worry about is the evaluation script and its tests.

-```bash
-git merge template/main --allow-unrelated-histories
-```
+The code needed to build the image using all the middleware functions is available in the [BaseEvaluationFunctionLayer](https://github.com/lambda-feedback/BaseEvalutionFunctionLayer) repository.
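To make the division of labour concrete, here is one way such middleware could route the `eval`, `healthcheck` and `docs` commands to the tutor-written files in `app/`. This is illustrative only: the real handler lives in the base layer, and everything other than the three command names and `evaluation_function` is an assumption.

```python
# Illustrative stand-in for the base-layer middleware, showing how the `eval`,
# `healthcheck` and `docs` commands could be routed to tutor-written code.
# The real implementation lives in BaseEvaluationFunctionLayer.
import unittest
from pathlib import Path

from app import evaluation_tests                 # tutor-written tests
from app.evaluation import evaluation_function   # tutor-written grading logic


def handle(command, body):
    """Dispatch a validated request body to the appropriate tutor code."""
    if command == "eval":
        return evaluation_function(body["response"], body["answer"], body.get("params", {}))
    if command == "healthcheck":
        # Run the tutor's unit tests and report whether they all passed.
        suite = unittest.defaultTestLoader.loadTestsFromModule(evaluation_tests)
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        return {"tests_passed": result.wasSuccessful()}
    if command == "docs":
        # Return the documentation baked into the image.
        return {"docs": Path("app/docs.md").read_text()}
    raise ValueError(f"Unknown command: {command}")
```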

-> [!WARNING]
-> Make sure to resolve any conflicts and keep the changes you want to keep.
+### GitHub Actions
+Whenever a commit is made to the GitHub repository, the new code will go through a pipeline, where it will be tested for syntax errors and code coverage. The pipeline used is called **GitHub Actions**, and the scripts for it can be found in `.github/workflows/`.

-## Troubleshooting
+On top of that, when starting a new evaluation function, you will have to complete a set of unit test scripts, which not only make sure your code is reliable, but also help you build a _specification_ for how the code should behave before you start programming.

-### Containerized Evaluation Function Fails to Start
+Once the code passes all these tests, it will be uploaded to AWS and will be deployed and ready to go in only a few minutes.

-If your evaluation function works fine when run locally, but not when containerized, there is much more to consider. Here are some common issues and approaches to solving them:
+## Pre-requisites
+Although all programming can be done through the GitHub interface, it is recommended that you work locally on your machine. To do this, you must have installed:

-**Run-time dependencies**
+- Python 3.8 or higher.

-Make sure that all run-time dependencies are installed in the Docker image.
+- GitHub Desktop or the `git` CLI.

-- Python packages: Make sure to add the dependency to the `pyproject.toml` file, and run `poetry install` in the Dockerfile.
-- System packages: If you need to install system packages, add the installation command to the Dockerfile.
-- ML models: If your evaluation function depends on ML models, make sure to include them in the Docker image.
-- Data files: If your evaluation function depends on data files, make sure to include them in the Docker image.
+- A code editor such as Atom, VS Code, or Sublime.

-**Architecture**
-
-Some packages may not be compatible with the architecture of the Docker image. Make sure to use the correct platform when building and running the Docker image.
-
-E.g. to build a Docker image for the `linux/x86_64` platform, use the following command:
-
-```bash
-docker build --platform=linux/x86_64 .
-```
-
-**Verify Standalone Execution**
-
-If requests are timing out, it might be because the evaluation function cannot run at all. Make sure that the evaluation function can be run as a standalone script. This will help you identify issues that are specific to the containerized environment.
-
-To run just the evaluation function as a standalone script, without using Shimmy, use the following command:
-
-```bash
-docker run -it --rm my-python-evaluation-function python -m evaluation_function.main
-```
+Copy this template over by clicking the **Use this template** button found in the repository on GitHub. Save it to the `lambda-feedback` Organisation.

-If the command starts without any errors, the evaluation function is working correctly. If not, you will see the error message in the console.
+## Contact
