
Add benchmarks for DefaultLTT start/stop #5595

Merged · 3 commits into micrometer-metrics:main on Oct 17, 2024

Conversation

shakuzen (Member)

Start and stop are called on the critical path. We should have benchmarks for them to evaluate changes that may affect their performance.

This benchmark is structured differently from most of our other benchmarks because we want to isolate two operations: starting a sample while a fixed number of samples are already active, and stopping a sample that has not yet been stopped. Both require a specific state before the benchmarked code runs, and running the benchmark alters that state. If the state is not reset between benchmark invocations, the start benchmark measures starting a new sample with ever more samples present, which we expect to take increasingly longer; that did not seem like the right approach. Likewise, repeated invocations of the stop benchmark without a reset would stop an already-stopped sample, which we expect to be substantially faster than stopping an unstopped one. Suggestions are welcome on a better way to handle this.

See #5591
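
A minimal JMH sketch of that approach, assuming a `SimpleMeterRegistry` and a hypothetical fixed count of pre-existing active samples. Class, field, and metric names here are illustrative, not the actual benchmark merged in this PR:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import io.micrometer.core.instrument.LongTaskTimer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class LongTaskTimerStartStopBenchmark {

    // Hypothetical fixed number of samples active before each measured call.
    private static final int ACTIVE_SAMPLES = 1000;

    private LongTaskTimer ltt;

    private LongTaskTimer.Sample[] samples;

    private LongTaskTimer.Sample sampleToStop;

    // Level.Invocation rebuilds the state before every benchmark method call,
    // so start() always sees exactly ACTIVE_SAMPLES active samples and stop()
    // always acts on a sample that has not been stopped yet. JMH excludes
    // setup time from the measurement, though it warns that Level.Invocation
    // comes with its own caveats.
    @Setup(Level.Invocation)
    public void setup() {
        ltt = LongTaskTimer.builder("benchmark.ltt").register(new SimpleMeterRegistry());
        samples = new LongTaskTimer.Sample[ACTIVE_SAMPLES];
        for (int i = 0; i < ACTIVE_SAMPLES; i++) {
            samples[i] = ltt.start();
        }
        // Pick a random sample so we don't always stop one in a fixed
        // position in the active task collection.
        sampleToStop = samples[ThreadLocalRandom.current().nextInt(ACTIVE_SAMPLES)];
    }

    @Benchmark
    public LongTaskTimer.Sample start() {
        return ltt.start();
    }

    @Benchmark
    public long stop() {
        return sampleToStop.stop();
    }
}
```
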

@shakuzen added the type: task, performance, and module: benchmarks labels on Oct 16, 2024
@shakuzen added this to the 1.14.0 GA milestone on Oct 16, 2024
Follow-up commits:
- We should get a better idea of average performance with a random sample rather than a sample in a fixed position in the active tasks collection.
- Using the invocation-level setup method, we can use average-time rather than single shot.
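
For contrast, a hedged sketch of the shape those commits moved away from: if state is only rebuilt once per iteration, stop can only be measured in single-shot mode, because a second call in the same iteration would hit an already-stopped sample. All names here are hypothetical, not code from this PR:

```java
import java.util.concurrent.TimeUnit;

import io.micrometer.core.instrument.LongTaskTimer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class SingleShotStopBenchmark {

    private LongTaskTimer.Sample sampleToStop;

    // State is refreshed per iteration, so each single-shot measurement gets
    // an unstopped sample.
    @Setup(Level.Iteration)
    public void setup() {
        LongTaskTimer ltt = LongTaskTimer.builder("benchmark.ltt")
            .register(new SimpleMeterRegistry());
        sampleToStop = ltt.start();
    }

    @Benchmark
    @BenchmarkMode(Mode.SingleShotTime)
    public long stopOnce() {
        return sampleToStop.stop();
    }
}
```

Single-shot measurements are noisier per data point, which is presumably why the invocation-level setup enabling average-time mode is preferable here.
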
@jonatan-ivanov merged commit 3c252c2 into micrometer-metrics:main on Oct 17, 2024
7 checks passed
@jonatan-ivanov deleted the ltt-bench branch on Oct 17, 2024