
fix: enable parallel test execution with pytest-xdist in CI workflow #620


Open
wants to merge 28 commits into main

Conversation

deependujha (Collaborator) commented Jun 11, 2025

Before submitting
  • Was this discussed/agreed via a GitHub issue? (not needed for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

What does this PR do?

Partial work on #612.
Follow-up work from #608 and #338.

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃


codecov bot commented Jun 12, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 83%. Comparing base (e0c6ffe) to head (59056f0).

Additional details and impacted files
@@         Coverage Diff         @@
##           main   #620   +/-   ##
===================================
  Coverage    83%    83%           
===================================
  Files        43     43           
  Lines      6609   6610    +1     
===================================
+ Hits       5492   5500    +8     
+ Misses     1117   1110    -7     

-        run: coverage run --source litdata -m pytest tests -v --durations=100
+        run: |
+          pip install pytest-xdist
+          coverage run --source litdata -m pytest tests -v --durations=100 -n auto
Member

Suggested change
-coverage run --source litdata -m pytest tests -v --durations=100 -n auto
+coverage run --source litdata -m pytest tests -v --durations=100 -n 2

Let's start with just 2 and we can bump the scale later.

@@ -556,6 +556,8 @@ def _terminate(self) -> None:
         if self.remover and self.remover.is_alive():
             self.remover.join()

+        sleep(5)  # Give some buffer time for file creation/deletion
Member

Rather have a loop testing that it was created/deleted.
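
A minimal polling sketch of that idea (the function name, `timeout`, and `interval` are illustrative, not part of the litdata API):

import os
import time

def wait_until_deleted(path: str, timeout: float = 5.0, interval: float = 0.1) -> None:
    # Poll until `path` is gone instead of sleeping a fixed 5 seconds.
    deadline = time.monotonic() + timeout
    while os.path.exists(path):
        if time.monotonic() > deadline:
            raise TimeoutError(f"{path} still exists after {timeout}s")
        time.sleep(interval)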

@@ -47,7 +47,7 @@ jobs:

       - name: Tests
         working-directory: tests
-        run: pytest . -v --cov=litdata --durations=100
+        run: pytest . -v --cov=litdata --durations=100 -n auto --reruns 2
Member

Suggested change
-run: pytest . -v --cov=litdata --durations=100 -n auto --reruns 2
+run: pytest . -v --cov=litdata --durations=100 -n auto

Rerun only the specific flaky tests, so that if something is really crashing, the run won't take twice as long:

@pytest.mark.flaky(reruns=5)
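
For example, with the pytest-rerunfailures plugin the marker goes on the individual test (the test name below is illustrative):

import pytest

@pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_upload_eventually_completes(tmp_path):
    # Only this test is retried, up to 5 times with a 1 s pause,
    # rather than rerunning the whole suite via --reruns.
    ...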

Collaborator Author

Hi, these are just attempts to see if the tests pass. The final PR will be updated to match the standard.

A weird thing I've seen is that even after the tests have completed, the run still hangs for some reason. I'm currently investigating the cause.
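
One generic way to see where such a hang lives (a debugging sketch, not something from this PR): schedule a stack dump shortly after the session ends, so any non-daemon thread keeping the process alive shows up.

# conftest.py
import faulthandler

def pytest_sessionfinish(session, exitstatus):
    # If the process is still alive 30 s after the test session ends,
    # dump every thread's stack and force-exit.
    faulthandler.dump_traceback_later(30, exit=True)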

Borda (Member) commented Jul 1, 2025

@deependujha how much is missing?
