
[TST] functions/(add_columns|conditional_join).py scripts are not covered by test cases #1196

Closed
Zeroto521 opened this issue Nov 9, 2022 · 10 comments

Comments

@Zeroto521
Member

Zeroto521 commented Nov 9, 2022

The codecov/project status of the latest commit 3687219 on the dev branch is 77.60% (-20.04%).

The following methods or scripts aren't covered by test cases.
Please go to codecov to see the details.

  • functions/add_columns.py::add_columns
  • functions/conditional_join.py

This suggests that, even after adding the --cov-append option, the coverage doesn't behave as we expected:
pytest tests != pytest tests -m "turtle" + pytest tests -m "not turtle"
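For context, the split run looks roughly like the following sketch (the exact flags in pyjanitor's CI may differ). By default pytest-cov erases the previous run's coverage data, so the second invocation needs --cov-append to accumulate into the same .coverage file:

```shell
# Sketch of the split CI invocation (the "turtle" marker name is from this
# repo's test suite; exact CI flags may differ). Without --cov-append, the
# second pytest run would erase the coverage data collected by the first,
# and the reported total would only reflect the "turtle" subset.
pytest tests -m "not turtle" --cov=janitor/
pytest tests -m "turtle" --cov=janitor/ --cov-append
```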

@Zeroto521
Member Author

Zeroto521 commented Nov 9, 2022

But that is weird. The testing logic is now the same whether it's a PR or the dev branch,
yet the PR's codecov/project is 97.66%, not 77.60%.

@Zeroto521
Member Author

Another solution is to run pytest tests directly:
don't split the test cases into "turtle" and "not turtle" parts,
and skip figuring out the root cause.

@samukweku
Collaborator

If I remember correctly (and correct me, @ericmjl), we used the turtle marker because some tests were taking quite some time to run, and we needed a way to let them run without timing out?

@Zeroto521
Member Author

The aim of the two PRs #1185 and #1190 is to keep the testing logic the same, no matter whether it's the dev branch or a PR.

Here is the dev branch's codecov/project history.

@ericmjl
Member

ericmjl commented Nov 12, 2022

If I remember correctly (and correct me, @ericmjl), we used the turtle marker because some tests were taking quite some time to run, and we needed a way to let them run without timing out?

That's almost right, @samukweku! We wanted the test suite to complete faster to minimize feedback loop timing. Without splitting, we would take ~40 minutes for all tests to pass. Now it's a maximum of 26 minutes, with a possibility of further reducing if we split out testing of the longest-running tests. Of course, the alternative is to reduce the testing time of those long-running tests.

@ericmjl
Member

ericmjl commented Nov 12, 2022

@Zeroto521 let's look at the latest test runs. --cov-append generally does the right thing here.

@Zeroto521
Member Author

Zeroto521 commented Nov 12, 2022

For a single environment this works: we speed things up by splitting the suite into parts (parallelizing test cases).
But the aim of #1143 is to set up multiple environments and stay compatible with all of them. So here we parallelize across environments, not across test cases. Splitting test cases doesn't speed things up in this setup; it only makes the CI output clearer.


@Zeroto521
Member Author

I just found that there is an addopts option in pyproject.toml.
It may affect the code coverage.

pyjanitor/pyproject.toml

Lines 33 to 34 in c5ba7ef

[tool.pytest.ini_options]
addopts = "--cov=janitor/ --cov-report=html --cov-report=xml --cov-report=term-missing --durations=0"

@Zeroto521
Member Author

I just found that there is an addopts option in pyproject.toml. It may affect the code coverage.

pyjanitor/pyproject.toml

Lines 33 to 34 in c5ba7ef

[tool.pytest.ini_options]
addopts = "--cov=janitor/ --cov-report=html --cov-report=xml --cov-report=term-missing --durations=0"

This was the reason. The code coverage is back to normal; see #1204.
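The interaction, as I understand it: addopts is prepended to every pytest invocation, so each of the two split runs gets its own --cov=janitor/ with no --cov-append, and the second run's coverage report overwrites the first. One plausible fix (the actual change is in #1204, not reproduced here) is to keep coverage flags out of addopts and pass them explicitly in each CI step:

```toml
# pyproject.toml — hypothetical sketch, not the actual #1204 change:
# keep addopts free of coverage flags so each CI step can control
# --cov / --cov-append explicitly.
[tool.pytest.ini_options]
addopts = "--durations=0"
```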

@Zeroto521
Member Author

This issue can be closed. It was fixed in #1204.
