Could it be related to the differing lengths of .resample_results per model/workflow?
I noticed that doing this:

```r
predictions_tbl <- modeltime.resample::unnest_modeltime_resamples(m750_training_resamples_fitted)
View(predictions_tbl)
```

... results in a clickable, ready-to-drill-down view in RStudio.
... comparing with this, the result seems to be corrupted and is not clickable. The difference lies in the differing lengths of .resample_results.
What would happen if we were able to truncate the predictions to the minimum length of .resample_results?
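To make the idea concrete, here is a minimal sketch (my own, not anything the package offers) that counts the unnested prediction rows per model and trims every model to the shortest length. It assumes `predictions_tbl` from above keeps a `.model_id` column:

```r
library(dplyr)

# Count prediction rows per model to confirm the lengths really differ.
row_counts <- predictions_tbl %>%
  count(.model_id, name = "n_rows")

# Trim every model's predictions to the minimum common length.
min_len <- min(row_counts$n_rows)

predictions_trimmed <- predictions_tbl %>%
  group_by(.model_id) %>%
  slice_head(n = min_len) %>%
  ungroup()
```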
Update: there is more... .predictions = NULL in a GLMNET model.
Every failing model will be lost. Since we have no influence on how models are treated inside modeltime_fit_resamples(), there is no way to fix this other than hard-coding a workaround.
```
Error in if (is.numeric(args$mixture) && (args$mixture < 0 | args$mixture > : missing value where TRUE/FALSE needed
```
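A user-side workaround sketch (based on my assumptions about the table structure, not a confirmed fix): drop models whose .resample_results carry no predictions before the table is passed on, so a single failed model doesn't poison the rest:

```r
library(dplyr)
library(purrr)

# Assumption: each element of .resample_results is a resamples tibble whose
# .predictions column is NULL (or holds only NULL entries) when the fit failed.
has_predictions <- function(res) {
  !is.null(res) &&
    ".predictions" %in% names(res) &&
    !all(map_lgl(res$.predictions, is.null))
}

resamples_clean <- m750_training_resamples_fitted %>%
  filter(map_lgl(.resample_results, has_predictions))
```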
Ubuntu 16.x LTS, R latest, modeltime.ensemble latest
A submodels_tbl has 15 correctly fitted models. When I try to use them with modeltime_fit_resamples(), only a fraction of them (4 of the 15) show up in the result of that function. Is there an explanation for this?
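For what it's worth, a small diagnostic sketch to list which submodels went missing. Here `resamples_tbl` stands in for the output of modeltime_fit_resamples(), and I'm assuming both tables carry the usual .model_id/.model_desc columns:

```r
library(dplyr)

# Models present in submodels_tbl but absent from the fit_resamples() output.
missing_models <- anti_join(
  submodels_tbl %>% select(.model_id, .model_desc),
  resamples_tbl %>% select(.model_id),  # resamples_tbl: fit_resamples output
  by = ".model_id"
)

missing_models
```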
Side note: when I use fewer than the 15 models, the code breaks while fitting a glmnet meta-learner. It prompts the `missing value where TRUE/FALSE needed` error quoted above, even though the model is correctly tagged with `penalty = tune::tune()`. I noticed the same effect with lasso (mixture = 1).
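One thing worth trying (purely a sketch on my side, not a confirmed fix): give the glmnet meta-learner concrete numeric values instead of tune() placeholders, so the internal `is.numeric(args$mixture)` check never hits a missing value:

```r
library(parsnip)

# Hypothetical values; the point is that penalty and mixture are plain
# numerics rather than tune::tune() placeholders.
glmnet_spec <- linear_reg(penalty = 0.01, mixture = 0.5) %>%
  set_engine("glmnet")
```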
My guess is that it gets dropped somewhere in modeltime.ensemble's internal code. Currently I'm testing different meta-learners. An xgboost meta-learner seems to work only without xgboost submodels. Others work fine so far.
It would be nice to have a fallback/try-catch option in modeltime.resample. Otherwise, code breaks in large projects any time something fails at this point.
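Until such an option exists, a call-site sketch of the requested behaviour (my own wrapper, nothing package-provided) could look like this:

```r
library(modeltime.resample)

# Hypothetical wrapper: returns NULL instead of aborting the whole pipeline
# when something fails inside modeltime_fit_resamples().
safe_fit_resamples <- function(models_tbl, resamples) {
  tryCatch(
    modeltime_fit_resamples(models_tbl, resamples = resamples),
    error = function(e) {
      message("modeltime_fit_resamples() failed: ", conditionMessage(e))
      NULL
    }
  )
}
```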