-
## Description
The ability to optimize a univariate hyperparameter together with an unknown/hidden constraint on that parameter, where invalid parameter values are identified by failed function evaluations…
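One way this feature could work is sketched below: a plain random search where the hidden constraint is only revealed when the objective raises an exception, so failed evaluations are recorded as infeasible and skipped. This is a minimal illustration of the idea, not any library's actual API; the `objective` and its failure region are made up for demonstration.

```python
import math
import random

def safe_minimize(objective, low, high, n_trials=200, seed=0):
    """Random search over one hyperparameter where the hidden constraint
    is only revealed by the objective raising an exception: failed
    evaluations are recorded as infeasible and otherwise ignored."""
    rng = random.Random(seed)
    best_x, best_y = None, math.inf
    failures = []
    for _ in range(n_trials):
        x = rng.uniform(low, high)
        try:
            y = objective(x)          # may raise for infeasible x
        except Exception:
            failures.append(x)        # we learn nothing except "x is invalid"
            continue
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y, failures

# Toy objective: values below 0.2 are "hidden-infeasible".
def objective(x):
    if x < 0.2:
        raise ValueError("solver diverged")
    return (x - 0.5) ** 2

best_x, best_y, failures = safe_minimize(objective, 0.0, 1.0)
```

A smarter optimizer could additionally fit a classifier on `failures` vs. successes to avoid proposing points in the invalid region.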
-
Hi,
There seem to be at least two mismatches between the paper and the repo hyperparameters.
1) SPR_weight
In paper: "We set λ SPR = 2 and λ IM = 1 during pre-training. Unless otherwise noted, all s…
-
**What would you like to be added**: In the current version, nni only allows specifying each hyperparameter's feasible region independently. But in some scenarios, users may want to specify joint …
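Until such a feature exists, a common workaround is to sample from the independent per-parameter ranges and reject configurations that violate a user-supplied joint predicate. The sketch below assumes this rejection-sampling approach; `sample_joint` and the example constraint are hypothetical, not part of nni.

```python
import random

def sample_joint(space, constraint, n, seed=0):
    """Sample configurations whose per-parameter ranges are given
    independently in `space`, then reject any configuration that
    violates the joint constraint predicate."""
    rng = random.Random(seed)
    configs = []
    while len(configs) < n:
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        if constraint(cfg):
            configs.append(cfg)
    return configs

space = {"lr": (1e-4, 1e-1), "momentum": (0.0, 0.99)}

# Joint constraint: high momentum is only allowed with a small learning rate.
def ok(cfg):
    return not (cfg["momentum"] > 0.9 and cfg["lr"] > 1e-2)

configs = sample_joint(space, ok, n=50)
```

Rejection sampling is wasteful when the feasible region is small relative to the box, which is exactly why first-class support for joint constraints in the search space definition would help.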
-
In #18 I propose using a grid search to fit the classifier hyperparameters ([notebook](https://github.com/dhimmel/machine-learning/blob/84a3271b8a11763616b62316cc589a40608e1852/1.TCGA-MLexample.ipynb)…
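The grid-search idea can be sketched generically: enumerate every combination of candidate values and keep the best-scoring one. The `score` function below is a toy stand-in for cross-validated classifier accuracy, and the parameter names are illustrative only, not taken from the linked notebook.

```python
from itertools import product

def grid_search(score, grid):
    """Exhaustively evaluate every combination in `grid` (a dict of
    parameter name -> list of candidate values); return the best."""
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy stand-in for cross-validated accuracy, peaked at C=1.0, l1_ratio=0.5.
def score(p):
    return -(p["C"] - 1.0) ** 2 - (p["l1_ratio"] - 0.5) ** 2

grid = {"C": [0.01, 0.1, 1.0, 10.0],
        "l1_ratio": [0.0, 0.25, 0.5, 0.75, 1.0]}
best_params, best_score = grid_search(score, grid)
# best_params == {"C": 1.0, "l1_ratio": 0.5}
```

In practice scikit-learn's `GridSearchCV` does this (plus cross-validation and parallelism), but the loop above is the whole algorithm.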
-
### Summary
Add an early-stopping callback and expose it as an additional training parameter.
### Detailed Description
As of now, one needs to train a model until `numb_steps` is reached. It…
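The requested callback could look roughly like the following minimal sketch: stop when the monitored validation loss has not improved for `patience` consecutive checks. This is not the project's actual API, just the standard patience-based mechanism; the class name and loss values are illustrative.

```python
class EarlyStopping:
    """Stop training when the monitored validation loss has not improved
    by at least `min_delta` for `patience` consecutive checks."""
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_checks = 0

    def step(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience  # True -> stop training

# Usage inside a loop that would otherwise run until numb_steps:
stopper = EarlyStopping(patience=2)
losses = [1.0, 0.8, 0.81, 0.79, 0.80, 0.80, 0.80]
stopped_at = None
for step, loss in enumerate(losses):
    if stopper.step(loss):
        stopped_at = step
        break
```

A `min_delta` above zero guards against stopping being deferred forever by negligible improvements.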
-
# Hyperparameters Tuning for XGBoost using Bayesian Optimization | Dr.Data.King
How to tune your XGBoost model hyperparameters? How to set up parallel computing for your model training which may take…
-
Hi,
I'm trying to reproduce the results you reported in the paper and am unable to do so with the current set of hyperparameters. One notable problem is with per_gpu_eval_batch_size=1. Keeping it as i…
-
Hello JiaoR,
Thanks for your great work!
I’m currently using some slabs (nearly 3000) from the OpenCatalyst dataset to train a model, and both val_lattice_loss and val_coord_loss are nearly 0.6 with …
-
Hello,
I've tried in vain to find suitable hyperparameters for SAC in order to solve MountainCarContinuous-v0.
Even with hyperparameter tuning (see the "add-trpo" branch of [rl baselines zoo](https:…
-
I'm using audio data from my own domain to do continued training from the WavTokenizer-medium checkpoint. However, I found that the model seemed to get worse and worse during training, and at…