
[Issue 11312] Data test run results always show failures #11313

Open

wants to merge 2 commits into base: main

Conversation


@vglocus commented on Feb 15, 2025

Resolves #11312

Problem

Previously, a RunResult for a data test always showed 0 failures when the test passed, no matter what the DataTestResult returned. For example, a test configured to error or warn if the failure count is '> 10' that returns 4 failing rows still reports 0 failures in its RunResult, because the test passes.

Solution

For data tests, always set failures to the actual failure count, even when the result does not technically count as a failure.
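
A minimal sketch of the intended behavior; the class and function names below are hypothetical stand-ins, not dbt-core's actual DataTestResult / RunResult implementation:

```python
from dataclasses import dataclass


# Hypothetical stand-ins for illustration only; they do not mirror
# dbt-core's real result classes.
@dataclass
class DataTestResult:
    failures: int          # number of failing rows returned by the test query
    should_warn: bool
    should_error: bool


@dataclass
class RunResult:
    status: str
    failures: int


def build_run_result(result: DataTestResult) -> RunResult:
    if result.should_error:
        status = "error"
    elif result.should_warn:
        status = "warn"
    else:
        status = "pass"
    # The change proposed here: report the observed failure count even when
    # the test passes, instead of leaving failures at 0 for passing tests.
    return RunResult(status=status, failures=result.failures)
```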

Checklist

  • I have read the contributing guide and understand what's expected of me.
  • I have run this code in development, and it appears to resolve the stated issue.
  • This PR includes tests, or tests are not required or relevant for this PR.
  • This PR has no interface changes (e.g., macros, CLI, logs, JSON artifacts, config files, adapter interface, etc.) or this PR has already received feedback and approval from Product or DX.
  • This PR includes type annotations for new and modified functions.


On style: I added this line to comply with the existing code style. However, it makes the assignments of status (pre-existing) and failures (added here) before the control flow redundant.
An alternative style would be to set status and failures to their passing values before the control flow and drop the else branch; see the sketch below.
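
For illustration, a rough sketch of the two styles being compared. This is simplified, hypothetical code, not the actual dbt-core function:

```python
# Style followed in this PR: defaults assigned before the control flow,
# then every branch (including the else) assigns them again.
def build_result_current(count: int, warn_threshold: int) -> dict:
    status = "pass"
    failures = 0
    if count > warn_threshold:
        status = "warn"
        failures = count
    else:
        status = "pass"        # pre-existing, now redundant assignment
        failures = count       # line added by this PR
    return {"status": status, "failures": failures}


# Alternative style: set the passing values once up front and only
# override them when the threshold is exceeded; no else branch needed.
def build_result_alternative(count: int, warn_threshold: int) -> dict:
    status = "pass"
    failures = count           # always report the observed count
    if count > warn_threshold:
        status = "warn"
    return {"status": status, "failures": failures}
```

Both versions return the same values; the difference is purely which branch carries the assignments.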
@vglocus requested a review from a team as a code owner on February 15, 2025, 08:17

cla-bot commented on Feb 15, 2025

Thanks for your pull request, and welcome to our community! We require contributors to sign our Contributor License Agreement and we don't seem to have your signature on file. Check out this article for more information on why we have a CLA.

In order for us to review and merge your code, please submit the Individual Contributor License Agreement form attached above. If you have questions about the CLA, or if you believe you've received this message in error, please reach out through a comment on this PR.

CLA has not been signed by users: @vglocus


Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see the contributing guide.

@github-actions bot added the community label (This PR is from a community member) on Feb 15, 2025

Labels
community This PR is from a community member
Development

Successfully merging this pull request may close these issues.

[Feature] Test run results should include failure value, even if not fail or warning