
remove unnecessary tests #175

Merged
merged 1 commit into main from ci/speed-up-testing on Feb 22, 2024

Conversation

ngriffiths13
Collaborator

Ignoring a set of slow tests by default and some other small cleanup
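A minimal sketch of one way to deselect slow tests by default is a pytest marker plus a collection hook in conftest.py. The "slow" marker name and the --run-slow flag below are illustrative assumptions, not necessarily what this PR actually adds:

```python
# conftest.py -- sketch of deselecting slow tests by default.
# The "slow" marker and --run-slow flag are illustrative, not taken from this PR.
import pytest


def pytest_addoption(parser):
    # Opt-in flag: slow tests only run when explicitly requested.
    parser.addoption(
        "--run-slow",
        action="store_true",
        default=False,
        help="also run tests marked as slow",
    )


def pytest_configure(config):
    # Register the marker so pytest does not warn about an unknown mark.
    config.addinivalue_line(
        "markers", "slow: slow test, skipped unless --run-slow is given"
    )


def pytest_collection_modifyitems(config, items):
    # Attach a skip marker to every @pytest.mark.slow test unless --run-slow was passed.
    if config.getoption("--run-slow"):
        return
    skip_slow = pytest.mark.skip(reason="slow test: pass --run-slow to include it")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```

A test then opts in with `@pytest.mark.slow`, and the full suite could still run (e.g. nightly or before a release) by passing `--run-slow`.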

@@ -340,16 +340,16 @@ def test_fractional_diff(pd_X):


### Temporarily commented out. Uncomment when benchmarking is ready. ###
Collaborator

I guess you can remove this comment too

@baggiponte
Collaborator

Hey there, thanks for the PR. So the proposed solution is to test fewer regressors for the moment? That can work right now to speed up the feedback loop.

I am not an expert on our internals, but I think in the long run we should add to our roadmap a refactor of the tests to run the forecasters on smaller datasets (e.g. the commodities data, or a fake one with 10 series of 100 observations each) to make sure they actually fit. What do you think?
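As a rough illustration of that fake-dataset idea, something like the following small synthetic panel could work. The column names and helper are assumptions for the sketch, not names taken from functime itself:

```python
# Sketch of a small fake panel: 10 series of 100 observations each, built with polars.
# Column names ("entity", "time", "value") are illustrative only.
from datetime import date, timedelta

import numpy as np
import polars as pl


def make_fake_panel(n_entities: int = 10, n_obs: int = 100) -> pl.DataFrame:
    rng = np.random.default_rng(seed=42)
    start = date(2020, 1, 1)
    frames = []
    for i in range(n_entities):
        frames.append(
            pl.DataFrame(
                {
                    "entity": [f"series_{i}"] * n_obs,
                    "time": [start + timedelta(days=d) for d in range(n_obs)],
                    "value": rng.normal(loc=100.0, scale=5.0, size=n_obs),
                }
            )
        )
    return pl.concat(frames)
```

Fitting the forecasters on a panel like this should be fast enough to run on every PR, while the heavy datasets stay in the benchmarks.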

@topher-lo I'd love to hear your opinion on this too. As we mentioned on Discord, the tests run too slowly and fail (after 3-4 hours). We could run the benchmarks before every release, or in a separate repo like polars does.

@topher-lo
Contributor

We can disable most of the expensive datasets

@topher-lo
Contributor

I'd also recommend splitting the feature extraction and forecasting tests into two separate workflows.

@baggiponte
Collaborator

Do you think we should do this now? Can we change the test dataset to commodities? How do we know what set of tests we can run on each PR?

topher-lo merged commit 3b911a7 into main on Feb 22, 2024
10 checks passed
topher-lo deleted the ci/speed-up-testing branch on February 22, 2024 at 00:11
topher-lo added the tests label on Feb 22, 2024