Init property benchmarks #396
Conversation
📝 Walkthrough
The changes involve two updates in the benchmarks.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Pytest
participant TP as test_property()
participant BM as _run_benchmark()
participant AV as AwesomeVersion
Pytest->>TP: Invoke test_property(version, property)
TP->>AV: Create AwesomeVersion instance with version
loop _DEFAULT_RUNS iterations
BM->>AV: Retrieve specified property
end
BM-->>TP: Return benchmark results
TP-->>Pytest: Report benchmark metrics
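For context, a minimal sketch of the flow the diagram describes, assuming the pytest-benchmark fixture; the parameter values and the _DEFAULT_RUNS literal below are placeholders, not the ones in the PR:

import pytest
from pytest_benchmark.fixture import BenchmarkFixture

from awesomeversion import AwesomeVersion

_DEFAULT_RUNS = 10_000  # placeholder; the PR derives this from DEFAULT_RUNS * 1_000


@pytest.mark.parametrize("version", ("v1.2.3", "1.2.3"))
@pytest.mark.parametrize("class_property", ("prefix", "major", "strategy"))
def test_property(
    benchmark: BenchmarkFixture,
    version: str,
    class_property: str,
) -> None:
    """Benchmark property access on an AwesomeVersion instance."""
    obj = AwesomeVersion(version)  # constructed once, outside the timed loop

    @benchmark
    def _run_benchmark() -> None:
        for _ in range(_DEFAULT_RUNS):
            getattr(obj, class_property)  # the access being measured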
📜 Recent review details
Configuration used: CodeRabbit UI
📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (2)
Actionable comments posted: 0
🧹 Nitpick comments (5)
benchmarks/test_compare.py (1)
32-32
: Typo in function name: "_run_banchmark".
There's a typo in the function name _run_banchmark, which should be _run_benchmark. This typo appears consistently in all four benchmark function definitions.
-    def _run_banchmark() -> None:
+    def _run_benchmark() -> None:
Also applies to: 39-39, 46-46, 53-53
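For illustration, a sketch of one compare benchmark after the rename; the test name, version literals, and comparison are hypothetical, not taken from the PR:

from pytest_benchmark.fixture import BenchmarkFixture

from awesomeversion import AwesomeVersion

DEFAULT_RUNS = 1_000  # placeholder for the shared constant


def test_compare(benchmark: BenchmarkFixture) -> None:
    """Benchmark comparing two AwesomeVersion objects."""
    obj_a = AwesomeVersion("1.2.3")
    obj_b = AwesomeVersion("1.2.4")

    @benchmark
    def _run_benchmark() -> None:  # renamed from _run_banchmark
        for _ in range(DEFAULT_RUNS):
            assert obj_a < obj_b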
benchmarks/test_properties.py (4)
14-18
: Unused dictionary definition "semver_first".
The semver_first dictionary is defined but not used anywhere in the file. Consider either using it or removing it to avoid confusion.
- semver_first = {
-     "ensure_strategy": AwesomeVersionStrategy.SEMVER,
-     "find_first_match": True,
- }
If this dictionary is intended for future use, add a comment explaining its purpose:
+ # Dictionary for future use with SEMVER strategy configurations
  semver_first = {
      "ensure_strategy": AwesomeVersionStrategy.SEMVER,
      "find_first_match": True,
  }
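If the dictionary is kept, one possible wiring, assuming its keys are meant as AwesomeVersion constructor keyword arguments (an assumption, not something stated in the PR):

from awesomeversion import AwesomeVersion, AwesomeVersionStrategy

semver_first = {
    "ensure_strategy": AwesomeVersionStrategy.SEMVER,
    "find_first_match": True,
}

# Hypothetical usage: pass the options through to the constructor so the
# benchmark also exercises strategy enforcement and first-match searching.
obj = AwesomeVersion("1.2.3", **semver_first)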
38-45
: Type hint for "version" parameter could be more portable.
The type hint for the version parameter (str | int | float) is correct, but a Union type from the typing module would give better compatibility with older Python versions.
-     version: str | int | float,
+     version: Union[str, int, float],
And add the import at the top:
  from __future__ import annotations
+ from typing import Union
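Put together, the adjusted signature could look like the sketch below; the class_property parameter and the surrounding decorators are assumed from the rest of the review, and the body is elided:

from __future__ import annotations

from typing import Union

from pytest_benchmark.fixture import BenchmarkFixture


def test_property(
    benchmark: BenchmarkFixture,
    version: Union[str, int, float],
    class_property: str,
) -> None:
    ...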
46-49
: Typo in benchmark function name and opportunity for property access optimization.
There's a typo in the function name, and the current property access method using getattr could be optimized for certain properties.
Fix the typo and consider optimizing the property access:
-    def _run_banchmark() -> None:
+    def _run_benchmark() -> None:
         for _ in range(_DEFAULT_RUNS):
-            getattr(obj, class_property)
+            # Direct property access is faster than getattr for known properties
+            if class_property == "prefix":
+                obj.prefix
+            elif class_property == "modifier":
+                obj.modifier
+            elif class_property == "modifier_type":
+                obj.modifier_type
+            elif class_property == "strategy":
+                obj.strategy
+            elif class_property == "strategy_description":
+                obj.strategy_description
+            elif class_property == "major":
+                obj.major
+            elif class_property == "minor":
+                obj.minor
+            elif class_property == "patch":
+                obj.patch
+            else:
+                getattr(obj, class_property)
Alternatively, you could keep the current approach for simplicity, since this is a benchmark and you might want to test the actual performance of getattr.
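Another middle ground (a sketch, not part of the suggested diff): resolve the accessor once with operator.attrgetter outside the timed loop, which keeps the parameterized shape; attrgetter still performs a dynamic lookup per call, so any speedup over getattr is modest:

from operator import attrgetter

# Inside test_property, replacing the getattr loop:
get_property = attrgetter(class_property)  # resolved once, outside the loop

@benchmark
def _run_benchmark() -> None:
    for _ in range(_DEFAULT_RUNS):
        get_property(obj)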
48-49
: Consider benchmarking individual properties separately.
For more granular results, consider separating benchmarks for each property rather than using a parameterized approach. This would allow you to see the performance of each property access individually in the benchmark report.
Example:
@pytest.mark.parametrize(
    "version",
    ("v1.2.3", "v.1.2.3", "1.2.3"),
)
def test_property_prefix(benchmark: BenchmarkFixture, version: str) -> None:
    """Benchmark for AwesomeVersion prefix property."""
    obj = AwesomeVersion(version)

    @benchmark
    def _run_benchmark() -> None:
        for _ in range(_DEFAULT_RUNS):
            obj.prefix

This approach would give you more detailed performance insights per property.
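A lighter-weight variant (again a sketch; the property names are taken from the list discussed above): keep a single parameterized test but give each case an explicit id via pytest.param, so every property is easy to pick out in the report:

import pytest

_PROPERTIES = ("prefix", "modifier", "strategy", "major", "minor", "patch")


@pytest.mark.parametrize(
    "class_property",
    [pytest.param(name, id=f"property-{name}") for name in _PROPERTIES],
)
def test_property(class_property: str) -> None:
    """Each case gets a readable "property-<name>" id in the run output."""
    ...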
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
benchmarks/test_compare.py
(1 hunks)
benchmarks/test_properties.py
(1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Upload coverage to Codecov
- GitHub Check: Run benchmarks
🔇 Additional comments (3)
benchmarks/test_compare.py (1)
21-26
: Function parameter formatting improves readability.
The parameters of the test_compare function have been reformatted from a single line to multiple lines, which improves the readability and maintainability of the code.
benchmarks/test_properties.py (2)
1-13
: New property benchmarks file with appropriate imports and multiplier.
The file introduces property benchmarks for the AwesomeVersion class with all necessary imports. Using _DEFAULT_RUNS = DEFAULT_RUNS * 1_000 ensures sufficient iterations for meaningful benchmark results.
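For orientation, a sketch of what such a module header could look like; where DEFAULT_RUNS is imported from is an assumption, not taken from the PR:

from __future__ import annotations

import pytest
from pytest_benchmark.fixture import BenchmarkFixture

from awesomeversion import AwesomeVersion

from const import DEFAULT_RUNS  # assumed location of the shared constant

# Property access is cheap, so run far more iterations than the default
# to get stable, measurable timings.
_DEFAULT_RUNS = DEFAULT_RUNS * 1_000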
20-37
: Well-structured parameterization for comprehensive property testing.
The parameterization is well-structured and comprehensive, covering various version formats and properties. The use of list comprehensions with unpacking (*[...]) is a clean approach to generating multiple test cases.
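To make the unpacking pattern concrete, a short sketch (the versions, properties, and extra case are illustrative):

import pytest

_VERSIONS = ("v1.2.3", "v.1.2.3", "1.2.3")
_PROPERTIES = ("prefix", "major", "minor", "patch")


@pytest.mark.parametrize(
    "version,class_property",
    [
        # Cartesian product built by a comprehension, then unpacked with *[...]
        # so extra hand-picked cases can sit alongside it in the same list.
        *[(version, prop) for version in _VERSIONS for prop in _PROPERTIES],
        ("2024.1.0", "strategy"),
    ],
)
def test_property(version: str, class_property: str) -> None:
    ...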
Proposed change
Type of change
Additional information
Checklist
(make lint)