Add profiling to benchmarking #2032
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2032. Note: links to docs will display an error until the docs builds have completed.
✅ No failures as of commit 3441bf9 with merge base a96eeb1. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.
Comments suppressed due to low confidence (2)
benchmarks/microbenchmarks/utils.py:477
- [nitpick] The two adjacent f-strings are concatenated without a delimiter, which may result in an improperly formatted output. Consider adding a separator or splitting the information into separate columns to align with the header.
f"{result.config.shape_name} ({result.config.m}, {result.config.k}, {result.config.n})" f"{result.model_inference_time_in_ms:.2f}",
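The concatenation issue flagged above is easy to reproduce: Python joins adjacent string literals at parse time with no separator, so the shape column and the timing column run together. A minimal sketch (names like `time_ms` are illustrative, not from the PR):

```python
m, k, n = 1024, 512, 256
time_ms = 12.34

# Adjacent f-string literals are concatenated with no delimiter between them:
joined = f"custom ({m}, {k}, {n})" f"{time_ms:.2f}"
print(joined)  # shape and time run together: "custom (1024, 512, 256)12.34"

# One possible fix: keep the values as separate table columns instead
row = [f"custom ({m}, {k}, {n})", f"{time_ms:.2f}"]
print(row)
```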
benchmarks/microbenchmarks/benchmark_inference.py:97
- The generate_model_profile function currently returns only one value (profile_file_path), but the calling code expects a tuple with two elements. Please update the function to return the expected tuple or adjust the caller accordingly.
result.profiler_json_path, result.perfetto_url = generate_model_profile(m_copy, input_data, config.profiler_file_name)
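One way to resolve the mismatch is to have the function return both values explicitly. The sketch below reuses the names from the review comment (`generate_model_profile`, `profile_file_path`); the body, and the Perfetto URL scheme in particular, are illustrative assumptions, not the PR's actual implementation:

```python
def generate_model_profile(model, input_data, profile_file_name):
    """Hypothetical sketch: return both the trace path and a Perfetto URL,
    matching the two-element unpacking at the call site. The actual
    profiling work is elided here."""
    profile_file_path = profile_file_name  # path where the trace was written
    # Assumption: the trace can be opened in the Perfetto web UI; this URL
    # format is illustrative only.
    perfetto_url = f"https://ui.perfetto.dev/#!/?url={profile_file_path}"
    return profile_file_path, perfetto_url

# The caller's tuple unpacking now matches the return value:
json_path, url = generate_model_profile(None, None, "profiles/model_trace.json")
```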
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
Comments suppressed due to low confidence (1)
benchmarks/microbenchmarks/benchmark_inference.py:96
- The function generate_model_profile returns a single value (profile_file_path) whereas this line expects a tuple; either update the function to return a tuple or assign the return value to a single variable.
result.profiler_json_path, result.perfetto_url = generate_model_profile(m_copy, input_data, config.profiler_file_name)
Please refer to the PR stack #1997 for the complete picture.
looks good, do the generated profiles make sense?
Add profiler support to benchmarks. The new config param
enable_profiler
enables profiling on a model. The Chrome trace for a model is stored in /profiles
in the output directory.
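The Chrome-trace export described above could look roughly like this; a sketch, not the merged code, assuming torch.profiler and an output layout of <output_dir>/profiles/ (the function name mirrors the one discussed in the review comments):

```python
import os

import torch
from torch.profiler import ProfilerActivity, profile


def generate_model_profile(model, input_data, profile_file_path):
    """Illustrative profiler hook: run one forward pass under torch.profiler
    and write a Chrome trace next to the other benchmark outputs."""
    os.makedirs(os.path.dirname(profile_file_path), exist_ok=True)
    activities = [ProfilerActivity.CPU]
    if torch.cuda.is_available():
        activities.append(ProfilerActivity.CUDA)
    with profile(activities=activities) as prof:
        with torch.no_grad():
            model(input_data)
    # The trace is viewable in chrome://tracing or the Perfetto UI
    prof.export_chrome_trace(profile_file_path)
    return profile_file_path
```

Gating this behind the enable_profiler config flag keeps the default benchmark path free of profiler overhead.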