Labels: infra, stale, stat:contributions welcome, team-created, type:feature
Description
As we grow and add more and more backbones and tasks, our model testing is quickly growing beyond what our infrastructure can handle today. I think this is going to be a general pain point for scaling, and it may be worth investing in some smarter solutions here.
One option would be to only run our "large" tests for the model code that a change actually touches. We could do this for our accelerator testing with something like the following (shell pseudocode).
pytest keras_nlp/ --ignore=keras_nlp/models --run_large
for dir in keras_nlp/models/*/; do
  # Only run the large tests for a model if its directory differs from master.
  if ! git diff --quiet HEAD master -- "$dir"; then
    pytest "$dir" --run_large
  fi
done
This could be a relatively lightweight way to avoid the fundamental scaling problem we are facing. We would also need some way to manually invoke a "test everything" command for specific PRs we are worried about (for example, a change to TransformerDecoder).
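As a rough sketch of what that override could look like (assuming a hypothetical RUN_ALL_LARGE variable that CI would export when a maintainer requests a full run, e.g. via a PR label), the loop above could be wrapped like this:
# RUN_ALL_LARGE is a hypothetical flag; how it gets set (label, comment, env var)
# is an open design question.
if [ "${RUN_ALL_LARGE:-0}" = "1" ]; then
  # Full run requested: test every model regardless of the diff.
  pytest keras_nlp/models --run_large
else
  for dir in keras_nlp/models/*/; do
    if ! git diff --quiet HEAD master -- "$dir"; then
      pytest "$dir" --run_large
    fi
  done
fi
The nice property of keeping the override in the same script is that the default path (diff-scoped testing) and the escape hatch (test everything) stay in one place and are easy to audit.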