
chore: add query and reflection benchmarks #386


Merged — 11 commits merged into main on Mar 23, 2025
Conversation

@makspll (Owner) commented Mar 23, 2025

Summary

  • Adds more interesting benchmarks around querying and reflection, revealing the slower parts of BMS and/or Bevy
  • Re-creates all plots from the current benchmarks on every push to main

@makspll makspll requested a review from Copilot March 23, 2025 17:24

@Copilot Copilot AI left a comment


Pull Request Overview

This PR adds new benchmarks for query and reflection to highlight slower aspects of BMS and/or Bevy. Key changes include updating the Xtasks::Bench variant to accept a publish flag, adjusting the corresponding bench function signature and behavior, and adding pre-benchmark hooks and random function registrations in the test harness.

Reviewed Changes

Copilot reviewed 4 out of 20 changed files in this pull request and generated no comments.

| File | Description |
| --- | --- |
| crates/xtask/src/main.rs | Updated the `Xtasks::Bench` variant and `bench` function to handle a `publish` flag |
| crates/testing_crates/script_integration_test_harness/src/lib.rs | Added `pre_bench` hook calls to the Lua and Rhai benchmark routines |
| crates/testing_crates/script_integration_test_harness/src/test_functions.rs | Added `random` and `reseed` functions for test utilities |
| crates/testing_crates/script_integration_test_harness/Cargo.toml | Added `rand` and `rand_chacha` dependencies to support the new functions |
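The `random`/`reseed` pair gives benchmarks varied but reproducible inputs. The real harness backs these with the `rand` and `rand_chacha` crates; the sketch below substitutes a tiny xorshift64 generator so it is self-contained, and the `BenchRng` type and its method names are hypothetical, not the harness's actual API.

```rust
// Hypothetical sketch of deterministic `random`/`reseed` benchmark helpers.
// A minimal xorshift64 stands in for the harness's rand_chacha-backed RNG.
struct BenchRng {
    state: u64,
}

impl BenchRng {
    fn new(seed: u64) -> Self {
        // xorshift cannot leave the all-zero state, so avoid it.
        Self { state: seed.max(1) }
    }

    /// Reset the generator so a benchmark run can be replayed exactly.
    fn reseed(&mut self, seed: u64) {
        self.state = seed.max(1);
    }

    /// Next pseudo-random u64 (xorshift64 step).
    fn random(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }
}

fn main() {
    let mut rng = BenchRng::new(42);
    let first: Vec<u64> = (0..3).map(|_| rng.random()).collect();
    rng.reseed(42);
    let second: Vec<u64> = (0..3).map(|_| rng.random()).collect();
    // Reseeding with the same seed reproduces the same sequence,
    // which keeps benchmark inputs stable across runs.
    assert_eq!(first, second);
}
```

Determinism matters here because Criterion-style benchmarks compare runs against each other; if the inputs varied between runs, noise from the data would drown out real regressions.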
Files not reviewed (16)
  • assets/benchmarks/function/call.lua: Language not supported
  • assets/benchmarks/function/call.rhai: Language not supported
  • assets/benchmarks/function/call_4_args.lua: Language not supported
  • assets/benchmarks/function/call_4_args.rhai: Language not supported
  • assets/benchmarks/math/vec_mat_ops.lua: Language not supported
  • assets/benchmarks/math/vec_mat_ops.rhai: Language not supported
  • assets/benchmarks/query/1000_entities.lua: Language not supported
  • assets/benchmarks/query/1000_entities.rhai: Language not supported
  • assets/benchmarks/query/100_entities.lua: Language not supported
  • assets/benchmarks/query/100_entities.rhai: Language not supported
  • assets/benchmarks/query/10_entities.lua: Language not supported
  • assets/benchmarks/query/10_entities.rhai: Language not supported
  • assets/benchmarks/reflection/10.lua: Language not supported
  • assets/benchmarks/reflection/10.rhai: Language not supported
  • assets/benchmarks/reflection/100.lua: Language not supported
  • assets/benchmarks/reflection/100.rhai: Language not supported
Comments suppressed due to low confidence (3)

crates/xtask/src/main.rs:726

  • [nitpick] The use of the name 'execute' for the bench parameter here is inconsistent with the 'publish' field in the enum. Consider renaming 'execute' to 'publish' for clarity and consistency.
Xtasks::Bench { publish: execute } => Self::bench(app_settings, execute),

crates/xtask/src/main.rs:1226

  • [nitpick] The function parameter 'execute' could be renamed to 'publish' to match the enum variant and clearly convey its purpose in controlling benchmark publishing versus dry-run behavior.
fn bench(app_settings: GlobalArgs, execute: bool) -> Result<()> {
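Both nitpicks above are the same rename: the match binding and the function parameter should use `publish`, matching the enum field. A minimal sketch of the suggested shape, with `Xtasks` and `bench` reduced to hypothetical stand-ins for the real xtask definitions:

```rust
// Simplified stand-in for the xtask CLI's task enum.
enum Xtasks {
    Bench { publish: bool },
}

// With the rename, the parameter name says what the flag controls:
// publish results, or do a dry run.
fn bench(publish: bool) -> Result<(), String> {
    if publish {
        // Run the benchmarks and publish the results.
        Ok(())
    } else {
        // Dry run: execute the benchmarks without publishing.
        Ok(())
    }
}

fn run(task: Xtasks) -> Result<(), String> {
    match task {
        // The binding now matches the field name, so the call site
        // reads consistently: Bench { publish } => bench(publish).
        Xtasks::Bench { publish } => bench(publish),
    }
}

fn main() {
    run(Xtasks::Bench { publish: false }).unwrap();
}
```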

crates/testing_crates/script_integration_test_harness/src/lib.rs:358

  • The pre_bencher function is retrieved with .ok() and the resulting Option is later unwrapped. Consider handling potential errors from the pre_bencher call to avoid unexpected panics during benchmarking.
let pre_bencher: Option<Function> = ctxt.globals().get("pre_bench").ok();
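One way to address this comment is to treat a missing hook as fine but surface a failing hook as an error rather than a panic. The sketch below models the idea with plain std types; `Globals`, `Function`, and `run_pre_bench` are simplified stand-ins for the scripting-engine types the harness actually uses, not its real API.

```rust
use std::collections::HashMap;

// Stand-in for a script function: callable, may fail.
type Function = fn() -> Result<(), String>;

// Stand-in for the script context's global table.
struct Globals {
    funcs: HashMap<&'static str, Function>,
}

impl Globals {
    fn get(&self, name: &str) -> Result<Function, String> {
        self.funcs
            .get(name)
            .copied()
            .ok_or_else(|| format!("global `{name}` not found"))
    }
}

// A missing `pre_bench` hook is not an error; a hook that exists but
// fails is reported instead of panicking mid-benchmark.
fn run_pre_bench(globals: &Globals) -> Result<(), String> {
    match globals.get("pre_bench") {
        Ok(pre_bench) => pre_bench().map_err(|e| format!("pre_bench failed: {e}")),
        Err(_) => Ok(()), // no hook registered: skip quietly
    }
}

fn main() {
    let mut funcs: HashMap<&'static str, Function> = HashMap::new();
    funcs.insert("pre_bench", || Ok(()));
    let globals = Globals { funcs };
    assert!(run_pre_bench(&globals).is_ok());

    // No hook registered: still Ok, no unwrap, no panic.
    let empty = Globals { funcs: HashMap::new() };
    assert!(run_pre_bench(&empty).is_ok());
}
```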

@makspll makspll merged commit bac836f into main Mar 23, 2025
20 checks passed
@makspll makspll deleted the chore/more-benchmarks branch March 23, 2025 18:13