test/integration/scheduler_perf/README.md (+16 −15)
@@ -33,10 +33,10 @@ Currently the test suite has the following:
 
 ```shell
 # In Kubernetes root path
-make test-integration WHAT=./test/integration/scheduler_perf ETCD_LOGLEVEL=warn KUBE_TEST_VMODULE="''" KUBE_TEST_ARGS="-run=^$$ -benchtime=1ns -bench=BenchmarkPerfScheduling"
+make test-integration WHAT=./test/integration/scheduler_perf/... ETCD_LOGLEVEL=warn KUBE_TEST_VMODULE="''" KUBE_TEST_ARGS="-run=^$$ -benchtime=1ns -bench=BenchmarkPerfScheduling"
 ```
 
-The benchmark suite runs all the tests specified under config/performance-config.yaml.
+The benchmark suite runs all the tests specified under subdirectories split by topic (`<topic>/performance-config.yaml`).
 By default, it runs all workloads that have the "performance" label. In the configuration,
 labels can be added to a test case and/or individual workloads. Each workload also has
 all labels of its test case. The `perf-scheduling-label-filter` command line flag can
@@ -46,11 +46,12 @@ a comma-separated list of label names. Each label may have a `+` or `-` as prefix
 be set. For example, this runs all performance benchmarks except those that are labeled
 as "integration-test":
 ```shell
-make test-integration WHAT=./test/integration/scheduler_perf ETCD_LOGLEVEL=warn KUBE_TEST_VMODULE="''" KUBE_TEST_ARGS="-run=^$$ -benchtime=1ns -bench=BenchmarkPerfScheduling -perf-scheduling-label-filter=performance,-integration-test"
+make test-integration WHAT=./test/integration/scheduler_perf/... ETCD_LOGLEVEL=warn KUBE_TEST_VMODULE="''" KUBE_TEST_ARGS="-run=^$$ -benchtime=1ns -bench=BenchmarkPerfScheduling -perf-scheduling-label-filter=performance,-integration-test"
 ```
 
-Once the benchmark is finished, JSON file with metrics is available in the current directory (test/integration/scheduler_perf). Look for `BenchmarkPerfScheduling_benchmark_YYYY-MM-DDTHH:MM:SSZ.json`.
-You can use `-data-items-dir` to generate the metrics file elsewhere.
+Once the benchmark is finished, JSON files with metrics are available in the subdirectories (`test/integration/scheduler_perf/config/<topic>`).
+Look for `BenchmarkPerfScheduling_benchmark_YYYY-MM-DDTHH:MM:SSZ.json`.
+You can use `-data-items-dir` to generate the metrics files elsewhere.
 
 In case you want to run a specific test in the suite, you can specify the test through `-bench` flag:
 
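The `-data-items-dir` flag mentioned in the hunk above can simply be appended to the same invocation. A minimal sketch, assuming a Kubernetes checkout; the output directory path is purely illustrative:

```shell
# Sketch only: collect all metrics JSON files in one directory instead of the
# per-topic subdirectories. /tmp/scheduler-perf-results is an illustrative path.
mkdir -p /tmp/scheduler-perf-results
make test-integration WHAT=./test/integration/scheduler_perf/... ETCD_LOGLEVEL=warn \
    KUBE_TEST_VMODULE="''" \
    KUBE_TEST_ARGS="-run=^$$ -benchtime=1ns -bench=BenchmarkPerfScheduling -data-items-dir=/tmp/scheduler-perf-results"
```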
@@ -59,19 +60,19 @@ Otherwise, the golang benchmark framework will try to run a test more than once
 
 ```shell
 # In Kubernetes root path
-make test-integration WHAT=./test/integration/scheduler_perf ETCD_LOGLEVEL=warn KUBE_TEST_VMODULE="''" KUBE_TEST_ARGS="-run=^$$ -benchtime=1ns -bench=BenchmarkPerfScheduling/SchedulingBasic/5000Nodes/5000InitPods/1000PodsToSchedule"
+make test-integration WHAT=./test/integration/scheduler_perf/... ETCD_LOGLEVEL=warn KUBE_TEST_VMODULE="''" KUBE_TEST_ARGS="-run=^$$ -benchtime=1ns -bench=BenchmarkPerfScheduling/SchedulingBasic/5000Nodes/5000InitPods/1000PodsToSchedule"
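The full slash-separated name is spelled out because `go test -bench` splits the pattern on slashes and matches each element as an unanchored regular expression, so a shorter pattern can select additional workloads too. A small stand-in illustration using `grep -E` in place of Go's matcher; the workload names are made up:

```shell
# Illustration only: each -bench element is matched as an unanchored regex.
# "5000Nodes" would also select a hypothetical "5000Nodes_10000Pods" workload:
printf '%s\n' 5000Nodes 5000Nodes_10000Pods 500Nodes | grep -E '5000Nodes'
# Anchoring the pattern selects exactly one name:
printf '%s\n' 5000Nodes 5000Nodes_10000Pods 500Nodes | grep -E '^5000Nodes$'
```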