-
## In a nutshell
A method that applies multi-armed bandits to hyperparameter search: more promising configurations are allocated more training time, so the final choice is made carefully. It builds on SuccessiveHalving, which repeats a cycle of "train every candidate, then drop the worse half," and allocates the training budget for that selection process efficiently.
### Paper link
https://arxiv.org/abs/1603.06560
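The SuccessiveHalving loop described above can be sketched as follows. This is a toy illustration, not the paper's implementation: `evaluate` is a hypothetical user-supplied function scoring a configuration at a given budget (higher is better), and the halving factor `eta` defaults to 2.

```python
import math

def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Repeatedly train all surviving configs, then keep the better half.

    configs:  list of hyperparameter settings (opaque to this function)
    evaluate: callable(config, budget) -> score, higher is better
    """
    budget = min_budget
    survivors = list(configs)
    while len(survivors) > 1:
        scored = [(evaluate(c, budget), c) for c in survivors]
        scored.sort(key=lambda t: t[0], reverse=True)
        # keep the top 1/eta fraction; survivors get more budget next round
        survivors = [c for _, c in scored[: math.ceil(len(scored) / eta)]]
        budget *= eta
    return survivors[0]

# toy usage: configs are learning rates, score peaks near lr = 0.1
best = successive_halving(
    configs=[0.001, 0.01, 0.1, 1.0],
    evaluate=lambda lr, b: -abs(math.log10(lr) - math.log10(0.1)) + 0.01 * b,
)
```

Hyperband then wraps this loop, rerunning it with different trade-offs between the number of configurations and the minimum budget per configuration.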
…
-
Currently, the way to specify optimization metrics is a little messy. There is the scikit-learn-like `scoring` hyperparameter to specify multi-objective optimization towards the specified metric, and pipel…
-
I like the integrated approach of your autoML package.
Can optimization be improved (lower error with less training budget) compared to random tuning and F-race optimization by including mlr hyperopt…
-
Implement additional support for optimization algorithms from hyperparameter optimization libraries. Potential options could be optuna, scipy optimize, ray tune, or hyperopt.
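As a rough illustration of what a pluggable backend interface might look like (all names here, such as `RandomSearchBackend` and `minimize`, are hypothetical and not from any of the listed libraries), with a stdlib random-search backend as the reference implementation that an optuna/hyperopt/ray-tune backend could mirror:

```python
import random
from typing import Callable, Dict, Tuple

# search space: parameter name -> (low, high) for float hyperparameters
Space = Dict[str, Tuple[float, float]]

class RandomSearchBackend:
    """Reference backend; a library-backed backend would implement
    the same minimize() signature."""

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)

    def minimize(self, objective: Callable[[dict], float],
                 space: Space, n_trials: int) -> dict:
        best_params, best_score = None, float("inf")
        for _ in range(n_trials):
            # sample a candidate uniformly from the box-shaped space
            params = {k: self.rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
            score = objective(params)
            if score < best_score:
                best_params, best_score = params, score
        return best_params

# toy usage: minimize (x - 2)^2 over x in [-5, 5]
best = RandomSearchBackend().minimize(
    objective=lambda p: (p["x"] - 2.0) ** 2,
    space={"x": (-5.0, 5.0)},
    n_trials=50,
)
```

Keeping the backends behind one `minimize()`-style surface would let users switch libraries without touching their objective function.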
-
While testing the performance of the PPO controller on the cartpole task, I encountered an issue where the training does not seem to converge, despite using the provided parameters (some changes are a…
-
Hi,
For the one in optimal_learning/python/python_version/log_likelihood.py, line 107 applies numpy.log10 to hyperparameter_optimizer.domain._domain_bounds. I think we should issue a warni…
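One possible shape for such a guard, assuming `_domain_bounds` is an iterable of `(lower, upper)` pairs (an assumption; the function below is a sketch, not the repo's code):

```python
import math
import warnings

def log10_bounds(domain_bounds):
    """Apply log10 to (lower, upper) bound pairs, warning instead of
    failing when an endpoint is non-positive (log10 undefined there)."""
    out = []
    for lower, upper in domain_bounds:
        if lower <= 0 or upper <= 0:
            warnings.warn(
                f"bound ({lower}, {upper}) has a non-positive endpoint; "
                "log10 is undefined there"
            )
            out.append(None)  # caller decides how to handle the bad pair
        else:
            out.append((math.log10(lower), math.log10(upper)))
    return out
```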
yf275 updated 8 years ago
-
Hi,
since I'm running a smaller dataset than the full COCO and doing fine-tuning, it doesn't take long to get reasonable results. So I was thinking of maybe trying to run Bayesian optimization to …
-
It would be great if it could support training BERT, LLaMA, and other models.
-
The error is pretty self-explanatory:
> ERROR mlflow.utils.async_logging.async_logging_queue: Run Id abec744c4f86451c91984386691ad733: Failed to log run data: Exception: Invalid metric name: 'eval_…
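A common workaround for this class of error is to sanitize metric names before logging them; the safe character set below is a conservative assumption, not MLflow's exact validation rule.

```python
import re

def sanitize_metric_name(name: str) -> str:
    """Replace characters outside a conservative safe set
    (letters, digits, _, -, ., /, space) with "_" so that
    mlflow.log_metric is less likely to reject the name."""
    return re.sub(r"[^0-9A-Za-z_\-./ ]", "_", name)

sanitize_metric_name("eval_loss(step=1)")  # -> "eval_loss_step_1_"
```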
-
Milos,
I took the time to break our project into steps. Each step could be a script by itself (where applicable):
1) Data Import and Wrangling
2) Exploratory Data Analysis
3) Split Data set…
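A stdlib-only sketch of what steps 1-3 could look like as one script (the file path, summary contents, and split fraction are placeholders, not project decisions):

```python
import csv
import random

def load_rows(path):
    # step 1: data import (wrangling, e.g. type casting, would go here)
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def summarize(rows):
    # step 2: minimal exploratory summary: row count and column names
    return {"n_rows": len(rows), "columns": sorted(rows[0]) if rows else []}

def train_test_split(rows, test_frac=0.2, seed=0):
    # step 3: reproducible shuffled split into train/test sets
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_frac))
    return rows[:cut], rows[cut:]
```

Splitting the project along these function boundaries keeps each step runnable and testable on its own, matching the one-script-per-step idea.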