functime-org / functime

Time-series machine learning at scale. Built with Polars for embarrassingly parallel feature extraction and forecasts on panel data.
https://docs.functime.ai
Apache License 2.0

remove unnecessary tests #175

Closed · ngriffiths13 closed this 7 months ago

ngriffiths13 commented 8 months ago

Ignores a set of slow tests by default, plus some other small cleanup.
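A common way to deselect slow tests by default is the pattern from the pytest docs: mark them with a custom `slow` marker and skip them unless an opt-in flag is passed. The sketch below assumes this approach; the flag name and marker are illustrative, not necessarily what this PR does.

```python
# conftest.py -- sketch of opt-in slow tests, assuming pytest markers
import pytest


def pytest_addoption(parser):
    # Hypothetical flag: tests marked @pytest.mark.slow run only when passed.
    parser.addoption(
        "--run-slow",
        action="store_true",
        default=False,
        help="also run tests marked @pytest.mark.slow",
    )


def pytest_configure(config):
    # Register the marker so `pytest --strict-markers` does not complain.
    config.addinivalue_line("markers", "slow: marks a test as slow")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-slow"):
        return  # run everything
    skip_slow = pytest.mark.skip(reason="slow test; pass --run-slow to include")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```

With this in place, `pytest` skips the marked tests and `pytest --run-slow` runs the full suite.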

baggiponte commented 8 months ago

Hey there, thanks for the PR. So the proposed solution is to test fewer regressors for the moment? That can work right now to speed up the feedback loop.

I'm not an expert on our internals, but I think in the long run we should add a roadmap item to refactor the tests so that forecasters run on smaller datasets (e.g. the commodities data, or a fake one with 10 series of 100 observations each) just to make sure they actually fit. What do you think?

@topher-lo I'd love to hear your opinion on this too. As we mentioned on Discord, the tests run too slowly and fail (after 3-4 hours). We could run the benchmarks before every release, or in a separate repo like Polars does.

topher-lo commented 8 months ago

We can disable most of the expensive datasets

topher-lo commented 8 months ago

I'd also recommend splitting the feature extraction and forecasting tests into two separate workflows.
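One way to do that split is two GitHub Actions workflows gated on the paths each touches; a rough sketch, where the file names, paths, and markers are illustrative rather than the repo's actual layout:

```yaml
# .github/workflows/test-features.yml (illustrative)
name: feature-extraction tests
on:
  pull_request:
    paths:
      - "functime/feature_extraction/**"
      - "tests/test_features*.py"
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -e ".[test]"
      # Run only the fast feature tests; a mirror workflow would cover forecasting.
      - run: pytest tests -m "not slow" -k features
```

A second workflow with forecasting paths and test selectors would mirror this one, so a PR only pays for the suite it can actually break.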

baggiponte commented 8 months ago

Do you think we should do this now? Can we change the test dataset to commodities? And how do we decide which set of tests to run on each PR?