New feature #1

Closed
4 changes: 2 additions & 2 deletions .github/workflows/assess_new_production_model.yml
@@ -14,7 +14,7 @@ jobs:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
-         role-to-assume: <YOUR SERVICE PRINCIPAL ROLE ARN>
+         role-to-assume: arn:aws:iam::436283604051:role/OuterboundsServicePrincipalsRole
          aws-region: us-west-2
      - run: aws sts get-caller-identity
      - name: Set up Python 3.x
@@ -28,5 +28,5 @@ jobs:
        env:
          METAFLOW_HOME: /tmp/.metaflowconfig
        run: |
-         <YOUR OB CONFIGURE COMMAND FOR SERVICE PRINCIPALS>
+         outerbounds configure awssm-arn:eJxszc2KgzAUxfF3ydoLZnL9mOwykhEXo0MUhlmFmCbShVqMxUXpuxepdNXlgT/ndyPir9WtLJTsWv0jalFKpZUsq6YmnFwDOBNWoCR6Gz63FmqPzTJxswUenF3cGkYzmcEt/PXBM4oJSz+QIcOj4nN/gdHbefLnAdLPnKG3FHLXI+DJMTAs9dBTlvvE+jjOkESk+dJFU39Xpe7+f+UubwEOFQ6W3B8BAAD//95nQ6k=
          python evaluate_new_model_flow.py run --with card
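A note on the "aws sts get-caller-identity" step above: it only confirms that the runner assumed the expected role before the flow runs. If you want the same check inside a Python script, a minimal sketch using boto3 could look like this (the role name is taken from the ARN in this diff; the helper itself is an assumption, not part of this PR):

import boto3

def verify_assumed_role(expected_role="OuterboundsServicePrincipalsRole"):
    """Confirm the CI runner is operating under the expected IAM role."""
    identity = boto3.client("sts").get_caller_identity()
    # An assumed-role Arn looks like:
    # arn:aws:sts::<account>:assumed-role/<role-name>/<session-name>
    if expected_role not in identity["Arn"]:
        raise RuntimeError(f"Unexpected caller identity: {identity['Arn']}")
    return identity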
2 changes: 1 addition & 1 deletion constants.py
@@ -2,7 +2,7 @@
EVALUATE_DEPLOYMENT_CANDIDATES_COMMAND = ["python", "evaluate_deployment_candidates.py"]

# This is the threshold that determines whether a model is a candidate for deployment.
# In practice, you might define this by comparing the result against a baseline model's performance.
PERFORMANCE_THRESHOLDS = {
    'accuracy': 90
}
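As the comment above suggests, thresholds like these might be derived from a baseline model's performance. A hypothetical sketch of how PERFORMANCE_THRESHOLDS could be consumed downstream (the helper name and logic are illustrative, not part of this repo):

from constants import PERFORMANCE_THRESHOLDS

def is_deployment_candidate(eval_metrics: dict) -> bool:
    """Return True only if every metric meets or beats its threshold."""
    return all(
        eval_metrics.get(metric, float("-inf")) >= threshold
        for metric, threshold in PERFORMANCE_THRESHOLDS.items()
    )

# Example: a model scoring 92 on accuracy clears the 90 threshold.
assert is_deployment_candidate({'accuracy': 92})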
4 changes: 2 additions & 2 deletions evaluate_new_model_flow.py
@@ -21,7 +21,7 @@ class EvaluateNewModel(FlowSpec):
    def start(self):
        "Train and evaluate a model defined in my_data_science_module.py."

        # Import my organization's custom modules.
        from my_data_science_module import MyDataLoader, MyModel

        # Load some data.
@@ -30,7 +30,7 @@ def start(self):
        # In practice this may return a tabular dataframe or a DataLoader object for images or text.

        # Simulate scores measured on your model's performance.
        self.model = MyModel()  # When this flow passes your CI/CD criteria, this artifact will be used in production to produce predictions.
        self.eval_metrics = self.model.score(data=self.train_data)
        # In this toy example, the "model evaluation" will just add 1 to the "self.train_data" integer.

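The my_data_science_module imported above is not shown in this diff. Based on the comments in the flow, a minimal stand-in consistent with the toy behavior could look like this (both class bodies are assumptions, not the repo's actual code):

# my_data_science_module.py -- hypothetical sketch

class MyDataLoader:
    def load(self):
        # The flow's comments imply the "data" is just an integer here.
        return 1

class MyModel:
    def score(self, data):
        # Per the flow's comment, "evaluation" just adds 1 to the integer.
        return data + 1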
2 changes: 1 addition & 1 deletion predict_flow.py
@@ -4,7 +4,7 @@

def fetch_default_run_id():
    """
    Return the run id of the latest successful upstream flow's deployment_candidate.
    In practice, you will want far more rigorous conditions.
    For example, you might want to smoke test the model rather than just assert it is not None.
    """
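For reference, a hedged sketch of how fetch_default_run_id could be implemented with Metaflow's Client API (the flow name and the deployment_candidate artifact follow the names used elsewhere in this PR; a real pipeline would smoke test the model as the docstring recommends):

from metaflow import Flow

def fetch_default_run_id(flow_name="EvaluateNewModel"):
    # Latest successful run of the upstream evaluation flow.
    run = Flow(flow_name).latest_successful_run
    # Minimal check only; consider smoke testing the model instead.
    assert run.data.deployment_candidate is not None
    return run.id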