
Commit ea4ca22

Auto merge of #10920 - blyxyas:speedtest, r=llogiq
Add `SPEEDTEST`

In the `master` branch, we currently don't have any way to test how a change affects the performance of a single lint. This PR adds `SPEEDTEST`, an environment variable that lets you run a speed test on a lint (or a category of tests) with various configuration options. Maybe we should merge this with `lintcheck` 🤔 See the book page for more information.

changelog: none
2 parents 83d0682 + 57923c3 commit ea4ca22
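As an illustration, combining the environment variables documented in the new book page below (the lint name and iteration count here are arbitrary choices, not defaults):

```sh
$ SPEEDTEST=ui SPEEDTEST_ITERATIONS=500 TESTNAME="allow_attributes" cargo uitest -- --nocapture
```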

File tree: 2 files changed (+63, -6 lines)


book/src/development/speedtest.md

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
# Speedtest

`SPEEDTEST` is the tool we use to measure a lint's performance; it works by executing the same test several times.

It's useful for measuring changes to current lints and deciding whether their performance changes too much. `SPEEDTEST`
is accessed through the `SPEEDTEST` (and `SPEEDTEST_*`) environment variables.

## Checking Speedtest

To do a simple speed test of a lint (e.g. `allow_attributes`), use this command.

```sh
$ SPEEDTEST=ui TESTNAME="allow_attributes" cargo uitest -- --nocapture
```

This will run all `ui` tests (`SPEEDTEST=ui`) whose names start with `allow_attributes`. By default, `SPEEDTEST` will
iterate your test 1000 times, but you can change this with `SPEEDTEST_ITERATIONS`.

```sh
$ SPEEDTEST=toml SPEEDTEST_ITERATIONS=100 TESTNAME="semicolon_block" cargo uitest -- --nocapture
```

> **WARNING**: Be sure to use `-- --nocapture` at the end of the command so that the average test time is printed. If
> you don't use `-- --nocapture` (e.g. `SPEEDTEST=ui TESTNAME="let_underscore_untyped" cargo uitest`), the timing
> summary will not show up.
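For reference, judging by the `println!` calls added in `tests/compile-test.rs` (shown below), a run prints a banner and then a summary line of roughly this shape; the timing figure here is purely illustrative:

```text
----------- STARTING SPEEDTEST -----------
...
average UI time: 1523 millis.
```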

tests/compile-test.rs

Lines changed: 39 additions & 6 deletions
@@ -217,12 +217,45 @@ fn main() {
     }
 
     set_var("CLIPPY_DISABLE_DOCS_LINKS", "true");
-    run_ui();
-    run_ui_toml();
-    run_ui_cargo();
-    run_internal_tests();
-    rustfix_coverage_known_exceptions_accuracy();
-    ui_cargo_toml_metadata();
+    // The SPEEDTEST_* env variables can be used to check Clippy's performance on your PR. It runs the
+    // affected test 1000 times and gets the average.
+    if let Ok(speedtest) = std::env::var("SPEEDTEST") {
+        println!("----------- STARTING SPEEDTEST -----------");
+        let f = match speedtest.as_str() {
+            "ui" => run_ui as fn(),
+            "cargo" => run_ui_cargo as fn(),
+            "toml" => run_ui_toml as fn(),
+            "internal" => run_internal_tests as fn(),
+            "rustfix-coverage-known-exceptions-accuracy" => rustfix_coverage_known_exceptions_accuracy as fn(),
+            "ui-cargo-toml-metadata" => ui_cargo_toml_metadata as fn(),
+
+            _ => panic!("unknown speedtest: {speedtest} || accepted speedtests are: [ui, cargo, toml, internal]"),
+        };
+
+        let iterations;
+        if let Ok(iterations_str) = std::env::var("SPEEDTEST_ITERATIONS") {
+            iterations = iterations_str
+                .parse::<u64>()
+                .unwrap_or_else(|_| panic!("Couldn't parse `{iterations_str}`, please use a valid u64"));
+        } else {
+            iterations = 1000;
+        }
+
+        let mut sum = 0;
+        for _ in 0..iterations {
+            let start = std::time::Instant::now();
+            f();
+            sum += start.elapsed().as_millis();
+        }
+        println!("average {} time: {} millis.", speedtest.to_uppercase(), sum / 1000);
+    } else {
+        run_ui();
+        run_ui_toml();
+        run_ui_cargo();
+        run_internal_tests();
+        rustfix_coverage_known_exceptions_accuracy();
+        ui_cargo_toml_metadata();
+    }
 }
 
 const RUSTFIX_COVERAGE_KNOWN_EXCEPTIONS: &[&str] = &[
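As a self-contained sketch of the same timing approach, the snippet below isolates the averaging loop; `average_millis` and the dummy workload are illustrative names, not part of the commit. Note that it divides by the actual iteration count (rather than the hard-coded 1000 above), which keeps the reported figure an average when `SPEEDTEST_ITERATIONS` is overridden.

```rust
use std::time::Instant;

/// Minimal sketch of the averaging loop: `run_suite` stands in for one of
/// Clippy's test-runner functions (e.g. `run_ui`).
fn average_millis(run_suite: fn(), iterations: u64) -> u128 {
    let mut sum: u128 = 0;
    for _ in 0..iterations {
        let start = Instant::now();
        run_suite();
        sum += start.elapsed().as_millis();
    }
    // Dividing by the actual iteration count keeps the figure an average even
    // when the caller overrides the default of 1000 iterations.
    sum / u128::from(iterations)
}

fn main() {
    // Dummy workload standing in for a real test suite.
    let avg = average_millis(|| { std::hint::black_box(40 + 2); }, 1000);
    println!("average time: {avg} millis.");
}
```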
