Open
Labels
good first issue (Good for newcomers), module: ci (Issues related to continuous integration), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
🚀 The feature, motivation and pitch
Regression tracking for AOT export times: add timestamps to indicate how long the Llama export takes:
executorch/.ci/scripts/test_llama.sh
Line 268 in d67fb52
$PYTHON_EXECUTABLE -m examples.models.llama.export_llama ${EXPORT_ARGS}
Then fail if the export time exceeds some threshold. For now, the threshold can be set to the time it takes to export the longest-running configuration.
The breakdown can look something like this:
1. Add timestamps for the export command in CI.
2. Observe the time each Llama test takes in CI runs. Expect different export commands to show different export times.
3. Fail the job if the export time exceeds the largest time observed in (2), plus some buffer; we expect some variability between runs.
4. Optional add-on: make (3) more fine-grained and targeted per command.
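Steps (1) and (3) could be sketched roughly like the snippet below. This is only an illustration, not the actual change to `test_llama.sh`: the variable name `EXPORT_TIME_LIMIT_S` and the default limit are hypothetical, and the export invocation is left commented out since it requires the executorch environment.

```shell
# Hypothetical timing wrapper around the export command in test_llama.sh.
# EXPORT_TIME_LIMIT_S and its default are assumed names/values, not real config.
EXPORT_TIME_LIMIT_S="${EXPORT_TIME_LIMIT_S:-600}"  # largest observed time + buffer

export_start=$(date +%s)
# $PYTHON_EXECUTABLE -m examples.models.llama.export_llama ${EXPORT_ARGS}
export_end=$(date +%s)

# Compute elapsed wall-clock seconds and log them so CI runs can be compared.
export_secs=$((export_end - export_start))
echo "Llama export took ${export_secs}s (limit: ${EXPORT_TIME_LIMIT_S}s)"

# Fail the job on a regression, i.e. when the export exceeds the threshold.
if [ "${export_secs}" -gt "${EXPORT_TIME_LIMIT_S}" ]; then
  echo "Export time regression: ${export_secs}s > ${EXPORT_TIME_LIMIT_S}s" >&2
  exit 1
fi
```

For the per-command variant in (4), the same pattern could look up a per-configuration threshold (e.g. keyed on the export arguments) instead of a single global limit.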
Alternatives
No response
Additional context
No response
RFC (Optional)
No response
Activity
lucylq commented on May 7, 2025
cc @digantdesai