remove unnecessary tests #175
Conversation
@@ -340,16 +340,16 @@ def test_fractional_diff(pd_X):
### Temporarily commented out. Uncomment when benchmarking is ready. ###
I guess you can remove this comment too
Hey there, thanks for the PR. The proposed solution is to test fewer regressors for the moment? That works for now to speed up the feedback loop. I'm not an expert on our internals, but in the long run I think we should add to our roadmap refactoring the tests to run forecasters on smaller datasets (e.g. the commodities data, or a fake one with 10 series and 100 observations) to make sure they actually fit. What do you think? @topher-lo I'd love to hear your opinion on this too. As we mentioned on Discord, the tests run too slowly and fail (after 3-4 hours). We could run the benchmarks before every release, or in a separate repo like Polars does.
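For the "fake one with 10 series and 100 observations" idea, a minimal sketch of what such a fixture could look like (the fixture name, column names, and shapes are illustrative, not the project's actual test fixtures):

```python
# Sketch of a small synthetic panel: 10 series x 100 observations,
# just enough to check that forecasters actually fit end-to-end.
import numpy as np
import pandas as pd
import pytest


@pytest.fixture
def small_panel():
    rng = np.random.default_rng(42)
    n_series, n_obs = 10, 100
    frames = []
    for i in range(n_series):
        frames.append(
            pd.DataFrame(
                {
                    "series_id": f"series_{i}",
                    "time": pd.date_range("2020-01-01", periods=n_obs, freq="D"),
                    # Random walk so there is some signal to difference/forecast.
                    "value": rng.normal(size=n_obs).cumsum(),
                }
            )
        )
    return pd.concat(frames, ignore_index=True)
```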
We can disable most of the expensive datasets.
Also would recommend splitting up feature extraction and forecasting tests into two separate workflows.
Do you think we should do this now? Can we change the test dataset to commodities? How do we know what set of tests we can run on each PR?
Ignoring a set of slow tests by default and some other small cleanup
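For the "ignore slow tests by default" part, the standard pytest pattern is a `slow` marker plus a `conftest.py` hook that skips those tests unless an opt-in flag is passed (a sketch; the marker name and flag are illustrative, not necessarily what this PR implements):

```python
# conftest.py (sketch): deselect tests marked "slow" unless --run-slow is passed.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--run-slow",
        action="store_true",
        default=False,
        help="run tests marked as slow (full benchmark-style datasets)",
    )


def pytest_configure(config):
    config.addinivalue_line("markers", "slow: slow benchmark-style tests")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-slow"):
        return
    skip_slow = pytest.mark.skip(reason="needs --run-slow to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```

With markers in place, a plain `pytest` run skips the slow tests on every PR, `pytest --run-slow` runs everything before a release, and hypothetical markers such as `forecast` and `feature_extraction` would also let two separate CI workflows select their own subsets with `pytest -m forecast` and `pytest -m feature_extraction`.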