-
## 🚀 Deprecation
As the title says, consider deprecating passing a container (dict, DictConfig, etc.) to `LightningModule.save_hyperparameters`.
### Motivation
See #10280 for a bit more …
-
from {{XGBoostSteps}}:
{noformat}searchParams.put("_booster", new XGBoostParameters.Booster[]{ //gblinear crashes currently
XGBoostParameters.Booster.gbtree, //default, let's use it more ofte…
-
When passing 2 hyperparameters where 3 are expected, the current error is "Request failed with response 500: None
list index out of range"
This took me about 20 minutes to hunt down. A more explicit e…
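An explicit length check would surface the mismatch immediately. A minimal sketch of the idea, with invented names (`apply_hyperparameters`, the parameter names) that are not the project's actual API:

```python
def apply_hyperparameters(values, expected_names):
    """Bind positional hyperparameter values to their expected names,
    raising a descriptive error instead of letting a later lookup
    fail with a bare IndexError."""
    if len(values) != len(expected_names):
        raise ValueError(
            f"Expected {len(expected_names)} hyperparameters "
            f"({', '.join(expected_names)}), got {len(values)}: {values!r}"
        )
    return dict(zip(expected_names, values))

# A clear message instead of "list index out of range":
try:
    apply_hyperparameters([0.1, 32], ["lr", "batch_size", "epochs"])
except ValueError as e:
    print(e)
```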
-
### Bug description
I am running my trainer with the `auto_lr_find` option, but the model's saved `learning_rate` reflects the original learning rate and not the final one selected by `trainer.tune()`. Thi…
-
I’m using tsfresh to generate tabular data from my time series. I have 3 channels per time series, and it generates 775 features each, so I have 2325 features total.
Fitting an EBM on my dataset (3…
-
When using `viewPlp` on a model trained with `runPlp`, the hyperparameters searched over are not shown in the app. I tested this with a GBM model, but it could also be the case with other models.
I w…
-
I am trying to train on my custom dataset; however, it fails to find objects in many images. I tried changing threshold values, but the detections were not good. What I want to do is to improve model acc…
-
When the `eval()` function in the acquisition function class involves a gradient backward pass, because line 102 of hebo/acq_optimizers/evolution_optimizer.py is `with torch.no_grad():`, the gradient cannot …
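A common workaround for this pattern, sketched here in isolation (not HEBO's actual code), is to re-enable autograd locally with `torch.enable_grad()` inside the outer `torch.no_grad()` block, so only the acquisition evaluation tracks gradients:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)

with torch.no_grad():          # outer optimizer loop: autograd disabled
    with torch.enable_grad():  # locally re-enable it for the inner eval
        y = (x ** 2).sum()     # this computation is tracked again
        y.backward()

print(x.grad)  # tensor([4.])
```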
-
Can you share a hyperparameter configuration for optimal performance? The default parameter configuration does not reproduce the performance reported in the paper.