A FastAPI service for evaluating Opentrons protocols, with asynchronous analysis and simulation.
- `/info` endpoint: Returns the application version and protocol API to robot stack version mappings
- `/evaluate` endpoint: Accepts protocol files for analysis and simulation, with optional custom labware, CSV data, and runtime parameters
- `/jobs/{job_id}/status` endpoint: Check the status of an evaluation job
- `/jobs/{job_id}/result` endpoint: Retrieve either the analysis or simulation artifact via the `result_type` query parameter
- Asynchronous processing: Evaluations run in a dedicated processor service
This service uses a two-component architecture:
- FastAPI Server (`api/main.py`): Handles file uploads and serves results
- Processor Service (`evaluate/processor.py`): Runs analysis and simulation jobs asynchronously in the background
Jobs are queued via the filesystem at `storage/jobs/{job_id}/`, and the processor picks them up for evaluation.
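A minimal sketch of how that pickup loop can look (illustrative only; the real implementation lives in `evaluate/processor.py`, and the `status.json` shape is assumed to mirror the status responses documented below):

```python
import json
from pathlib import Path

JOBS_DIR = Path("storage/jobs")  # filesystem queue root

def find_pending_jobs() -> list[Path]:
    """Return job directories whose status.json is still marked 'pending'."""
    pending = []
    for job_dir in JOBS_DIR.iterdir():
        status_file = job_dir / "status.json"
        if status_file.exists():
            status = json.loads(status_file.read_text())
            if status.get("status") == "pending":
                pending.append(job_dir)
    return pending
```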
Each job specifies a target robot server version. Supported versions range from `8.0.0` through the special `next` alias, which always points at the latest published Opentrons alpha build (configured in `evaluate/env_config.py`). The processor spins up isolated virtual environments (managed via `uv`) per version so evaluations stay reproducible.
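A sketch of the per-version environment pattern (the actual lifecycle helpers live in `evaluate/venv_manager.py`; the venv location and the assumption that the `opentrons` package version tracks the robot stack version are illustrative):

```python
import subprocess
from pathlib import Path

VENVS_DIR = Path("venvs")  # hypothetical per-version environment root

def ensure_venv(robot_version: str) -> Path:
    """Create an isolated venv for a robot server version if it doesn't exist yet."""
    venv_path = VENVS_DIR / robot_version
    if not venv_path.exists():
        subprocess.run(["uv", "venv", str(venv_path)], check=True)
        # Pin the matching opentrons release into the environment; the `next`
        # alias would instead resolve to the latest published alpha build.
        subprocess.run(
            ["uv", "pip", "install", "--python", str(venv_path),
             f"opentrons=={robot_version}"],
            check=True,
        )
    return venv_path
```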
- Python >= 3.10
- `uv` for dependency management
```bash
# Install dependencies
make setup

# Run linter (check only, no fixes)
make lint

# Run tests (unit + integration)
make test

# Run end-to-end tests (starts services automatically)
make test-e2e

# Run all tests and linting
make test-all

# Format code
make format
```

The project uses GitHub Actions for continuous integration:
- Linting: Runs `ruff` to check code quality
- Unit & Integration Tests: Fast tests without services
- End-to-End Tests: Full workflow tests with services running
All checks run automatically on pull requests and pushes to main.
TODO – RTP overrides: Runtime parameter (RTP) override scenarios are not yet implemented or tested end-to-end. Once someone needs RTP overrides, we can extend the processor/API/tests to cover the behavior.
Start the FastAPI server:

```bash
make run-api

# Or manually:
uv run fastapi dev api/main.py
```

Start the processor service (in a separate terminal):

```bash
make run-processor

# Or manually:
uv run python run_processor.py
```

For one-shot processing (process pending jobs and exit):

```bash
make run-processor-once
```

The API will be available at http://localhost:8000
Once the server is running, you can access:
- Interactive API docs: http://localhost:8000/docs
- ReDoc documentation: http://localhost:8000/redoc
Run all tests:

```bash
make test
```

Note: `make test-e2e` automatically starts both the API and processor services for you, so you don't need to run them manually before exercising full workflow tests.

Run only unit tests:

```bash
make test-unit
```

Run only integration tests:

```bash
make test-integration
```

The full list of make targets:

- `make setup` - Install dependencies with uv (including dev tools)
- `make teardown` - Remove the project virtual environment
- `make lint` - Run ruff lint + format check (no fixes)
- `make format` - Run `ruff check --fix` and `ruff format`
- `make test` - Run unit + integration tests (excludes e2e)
- `make test-unit` / `make test-integration` - Run specific suites
- `make test-e2e` - Spin up both services, run E2E tests, capture logs
- `make test-all` - Run lint, fast tests, and e2e in sequence
- `make run` - Launch both API and processor in one terminal
- `make run-api` / `make run-processor` / `make run-processor-once` - Control individual services
- `make run-client` - Execute the example client workflow
- `make clean-storage` - Delete all queued job directories
- `make clean-venvs` - Remove analysis virtual environments (recreated on demand)
- `make clean-e2e-artifacts` - Remove e2e PID + log files (`e2e-*.pid` / `e2e-*.log`)
```
protocol-evaluation/
├── api/
│ ├── __init__.py
│ ├── main.py # FastAPI application and endpoints
│ ├── file_storage.py # File storage service
│ ├── config.py # Configuration
│ └── version_mapping.py # Protocol API to robot stack version mappings
├── evaluate/
│ ├── __init__.py
│ ├── env_config.py # Robot server environment definitions
│ ├── job_status.py # Job status management helpers
│ ├── processor.py # Analysis + simulation processor service
│ └── venv_manager.py # Virtual environment lifecycle helpers
├── client/
│ ├── README.md # Usage docs for EvaluationClient
│ └── evaluate_client.py # Sync + async clients for the API
├── tests/
│ ├── unit/
│ │ ├── test_evaluate.py
│ │ ├── test_file_storage.py
│ │ ├── test_info.py
│ │ └── test_processor.py
│ ├── integration/
│ │ ├── test_evaluate.py
│ │ └── test_info.py
│ └── e2e/
│ └── test_protocol_analysis.py
├── run_processor.py # CLI script for running processor
├── Makefile # Development tasks
├── pyproject.toml # Project dependencies and configuration
└── README.md
```
The `/info` endpoint returns application information, including the version and the protocol API version mappings.
Response:
```json
{
"version": "0.1.0",
"protocol_api_versions": {
"2.0": "3.14.0",
"2.1": "3.15.2",
...
"2.26": "8.7.0"
}
}
```

The `/evaluate` endpoint accepts a protocol file for evaluation, with optional custom labware, CSV data, and runtime parameters.
Parameters:
- `protocol_file` (required): Python protocol file (`.py` extension)
- `labware_files` (optional): Array of custom labware JSON files (`.json` extension)
- `csv_file` (optional): CSV data file (`.csv` or `.txt` extension)
- `rtp` (optional): Runtime parameters as a JSON object
- `robot_version` (required): Target robot server version (e.g., `8.7.0`, `next`)
Example using curl:
```bash
curl -X POST http://localhost:8000/evaluate \
-F "robot_version=8.7.0" \
-F "protocol_file=@my_protocol.py" \
-F "labware_files=@custom_labware1.json" \
-F "labware_files=@custom_labware2.json" \
-F "csv_file=@plate_data.csv" \
  -F 'rtp={"volume": 100, "temperature": 37}'
```

Response:
```json
{
"job_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"protocol_file": "my_protocol.py",
"labware_files": ["custom_labware1.json", "custom_labware2.json"],
"csv_file": "plate_data.csv",
"rtp": {
"volume": 100,
"temperature": 37
},
"robot_version": "8.7.0"
}
```

The `job_id` is a unique identifier for this evaluation job. Files are saved to `storage/jobs/{job_id}/` with the following structure:
- `{job_id}.py` - The protocol file
- `labware/` - Directory containing custom labware JSON files
- `{original_name}.csv` - Uploaded CSV/TXT file (if provided)
- `status.json` - Job status information
- `completed_analysis.json` - Result from `opentrons.cli.analyze`
- `completed_simulation.json` - Result (or skip reason) from `opentrons.simulate`
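For example, a finished job for the curl request above might leave a directory like this (filenames illustrative):

```
storage/jobs/a1b2c3d4-e5f6-7890-abcd-ef1234567890/
├── a1b2c3d4-e5f6-7890-abcd-ef1234567890.py
├── labware/
│   ├── custom_labware1.json
│   └── custom_labware2.json
├── plate_data.csv
├── status.json
├── completed_analysis.json
└── completed_simulation.json
```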
The RTP parameters are stored with the response but not persisted to disk.
Simulation is best-effort: if runtime parameter overrides or RTP CSV inputs are provided, the processor skips simulation and records the reason in `completed_simulation.json`.
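For programmatic submission, the repo ships sync and async clients in `client/evaluate_client.py`; the standalone sketch below uses `requests` directly and mirrors the curl example above (server URL and filenames are assumptions):

```python
import json
import requests

with open("my_protocol.py", "rb") as protocol:
    response = requests.post(
        "http://localhost:8000/evaluate",
        data={
            "robot_version": "8.7.0",
            # RTP is sent as a JSON-encoded form field, as in the curl example.
            "rtp": json.dumps({"volume": 100, "temperature": 37}),
        },
        files=[("protocol_file", protocol)],
    )
response.raise_for_status()
job_id = response.json()["job_id"]
print(f"Queued evaluation job {job_id}")
```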
The `/jobs/{job_id}/status` endpoint checks the status of an evaluation job.
Response:
```json
{
"job_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"status": "completed",
"updated_at": "2024-01-15T10:30:00.123456"
}
```

Status values: `pending`, `processing`, `completed`, `failed`
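A client-side polling loop against this endpoint might look like the following sketch (the interval and timeout are arbitrary choices):

```python
import time
import requests

def wait_for_job(job_id: str, base_url: str = "http://localhost:8000",
                 poll_interval: float = 2.0, timeout: float = 300.0) -> str:
    """Poll the status endpoint until the job completes, fails, or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = requests.get(f"{base_url}/jobs/{job_id}/status").json()["status"]
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout}s")
```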
The `/jobs/{job_id}/result` endpoint retrieves either the analysis or simulation results for a completed job. Use the optional `result_type` query parameter (`analysis` by default, or `simulation`).
Response:
```json
{
"job_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"status": "completed",
"result_type": "analysis",
"result": {
"job_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"status": "success",
"files_analyzed": {
"protocol_file": "my_protocol.py",
"labware_files": ["custom_labware1.json"],
"csv_file": "plate_data.csv"
},
"analysis": {
"commands": [],
"labware": [],
"pipettes": [],
"modules": [],
"errors": [],
"warnings": []
},
"metadata": {
"protocol_api_version": "2.26",
"processed_at": "2024-01-15T10:30:00Z"
}
}
}
```

To retrieve the simulation output instead, append `?result_type=simulation` to the request URL.

The end-to-end evaluation workflow:

- Submit: Client POSTs to `/evaluate` with the protocol, optional labware/CSV, RTP payload, and robot version
- Queue: API saves the files to `storage/jobs/{job_id}/`, persists metadata, and marks the status as `pending`
- Process: Processor picks up pending jobs, ensures the correct venv exists, runs analysis, then attempts simulation (skipping it when CSV/RTP overrides are present)
- Complete: Processor writes `completed_analysis.json` and `completed_simulation.json`, and updates the status to `completed`
- Retrieve: Client GETs `/jobs/{job_id}/result?result_type=analysis|simulation` to fetch the desired artifact
The processor service can run in two modes:
- Daemon mode (default): Continuously polls for new jobs
- One-shot mode: Processes pending jobs and exits (useful for cron/scheduled tasks)
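Conceptually the two modes share one loop and differ only in whether it repeats; a sketch, with `process_pending_jobs` standing in for the real pass in `evaluate/processor.py`:

```python
import time

def process_pending_jobs() -> None:
    """Stand-in for the real analysis/simulation pass in evaluate/processor.py."""

def run(once: bool = False, poll_interval: float = 5.0) -> None:
    """Daemon mode polls forever; one-shot mode does a single pass and exits."""
    while True:
        process_pending_jobs()
        if once:
            break
        time.sleep(poll_interval)
```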
Here's a complete example workflow:
Terminal 1 - Start the API server:

```bash
make run-api
```

Terminal 2 - Start the processor:

```bash
make run-processor
```

Submit a protocol for evaluation:

```bash
curl -X POST http://localhost:8000/evaluate \
  -F "robot_version=8.7.0" \
  -F "protocol_file=@example_protocol.py" \
  | jq '.'
```

The response includes the assigned `job_id`, recorded filenames, optional RTP payload, and robot version.
Check the job status:

```bash
curl http://localhost:8000/jobs/a1b2c3d4-e5f6-7890-abcd-ef1234567890/status | jq '.'
```

Response:
```json
{
"job_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"status": "completed",
"updated_at": "2024-01-15T10:30:00.123456"
}
```

Fetch the results:

```bash
# Analysis output (default)
curl "http://localhost:8000/jobs/a1b2c3d4-e5f6-7890-abcd-ef1234567890/result" | jq '.'
# Simulation output
curl "http://localhost:8000/jobs/a1b2c3d4-e5f6-7890-abcd-ef1234567890/result?result_type=simulation" | jq '.'The analysis response contains protocol metadata, commands, and warnings. The simulation response mirrors completed_simulation.json and may indicate status: "skipped" with a reason when RTP overrides or CSV inputs prevent running simulate.