Closed lg1000 closed 2 years ago
Hello, unfortunately I don't have a lot of time to fully look into the details right now. However, a quick run of your code (though not in parallel) surfaced a few issues. First, `top_p` appears to be attempting to select more predictors than are available after your previous recipe steps, particularly `step_corr()`. Second, I don't think this is the cause of the error, but `step_other()` should be placed before `step_dummy()`; otherwise you won't have any factor variables left to pool, because you will already have converted them all to dummy variables. Your code ran fine for me once I omitted `step_corr()`. It might also be that the recipeselectors package is not being exported to the cluster when running in parallel, so you could try exporting it explicitly via the `pkgs` argument of `control_grid()`.
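To illustrate the suggested ordering, here is a minimal sketch of such a recipe. The data, outcome name, and the specific selection step (`step_select_infgain()`) are assumptions for illustration, not taken from the original code:

```r
library(tidymodels)
library(recipeselectors)

# Hypothetical outcome/data names for illustration.
rec <- recipe(outcome ~ ., data = train_data) |>
  step_other(all_nominal_predictors(), threshold = 0.05) |>  # pool rare factor levels first
  step_dummy(all_nominal_predictors()) |>                    # then dummy-encode
  # step_corr() omitted: it can leave fewer predictors than top_p asks for
  step_select_infgain(all_predictors(), outcome = "outcome", top_p = tune())

# Export recipeselectors to the parallel workers explicitly.
ctrl <- control_grid(pkgs = "recipeselectors")
```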
Thanks a lot! I omitted `step_corr()` and used the `pkgs` argument as you proposed, and now it works. Next I will try to achieve the same with the finetune package.
As I will show in a reprex below, I ran into issues when tuning both model arguments and recipe arguments (from recipes and recipeselectors) by merging the grids. I tried numerous ways, but I always get the error message:

> preprocessor 3/3: Error: You cannot `prep()` a tuneable recipe. Argument(s) with `tune()`: 'top_p'. Do you want to use a tuning function such as `tune_grid()`?

If I tune all the model and recipe arguments except `top_p`, everything works fine. How can I understand this issue?
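For what it's worth, that error is raised whenever `prep()` is called on a recipe that still contains a `tune()` placeholder, which can happen if the merged grid has no column for `top_p`. A sketch of one way to build a grid that covers every tuned argument, assuming a workflow `wflow` and resamples `folds` are already defined:

```r
library(tidymodels)

# Extract all tuneable parameters from the workflow (model + recipe),
# so the grid includes a column for top_p as well as the model arguments.
params <- extract_parameter_set_dials(wflow)
grid   <- grid_regular(params, levels = 3)

res <- tune_grid(
  wflow,
  resamples = folds,
  grid      = grid,
  control   = control_grid(pkgs = "recipeselectors")
)
```

If any parameter column is missing from a hand-merged grid, tuning may try to `prep()` a still-tuneable recipe and fail with exactly this message.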