-
MinGRU (without the LM layers) is considerably slower than the standard `nn.GRU`. My test parameters were: `input_size = 10`, `hidden_size = 100`, `seq_len = 1000`, `batch_size = 64`.
From my profiler, tested i…
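For context, the minGRU recurrence being benchmarked can be written as a plain sequential scan, which is exactly the part a parallel-scan implementation is meant to speed up. A minimal numpy sketch (not the profiled code; the weight shapes, initialization, and input-only gating follow the minGRU formulation, but everything concrete here is assumed for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def min_gru(x, W_z, W_h, h0):
    """Sequential minGRU scan: h_t = (1 - z_t) * h_{t-1} + z_t * h_tilde_t,
    where both the gate and the candidate depend only on the input x_t."""
    h = x @ W_z * 0  # placeholder shape; overwritten below
    h = h0
    outputs = []
    for t in range(x.shape[0]):
        z = sigmoid(x[t] @ W_z)   # update gate (input-dependent only)
        h_tilde = x[t] @ W_h      # candidate state (no hidden-state dependency)
        h = (1.0 - z) * h + z * h_tilde
        outputs.append(h)
    return np.stack(outputs)

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 10, 100, 1000  # same sizes as the benchmark
x = rng.standard_normal((seq_len, input_size))
W_z = rng.standard_normal((input_size, hidden_size)) * 0.1
W_h = rng.standard_normal((input_size, hidden_size)) * 0.1
h = min_gru(x, W_z, W_h, np.zeros(hidden_size))
print(h.shape)  # (1000, 100)
```

The Python-level loop over 1000 timesteps is a plausible source of the slowdown versus `nn.GRU`, whose recurrence runs in fused native kernels.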
-
Hi, sorry for this possibly trivial question: how can I fix certain hyperparameters before the optimization? For example, my prior mean function is constant with a certain value, which should be fixed…
mh510 updated
3 years ago
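One library-agnostic way to think about this (a hedged sketch, not tied to any particular GP library; the parameter names, toy objective, and random-search strategy below are invented for illustration) is to split the hyperparameters into a fixed set and a free set, and let the optimizer touch only the free ones:

```python
import random

def optimize_free_params(objective, fixed, free_space, n_trials=100, seed=0):
    """Optimize only the free hyperparameters while holding `fixed` constant:
    every candidate merges the fixed values with freshly sampled free values."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = dict(fixed)  # fixed values are never resampled
        candidate.update(
            {k: rng.uniform(lo, hi) for k, (lo, hi) in free_space.items()}
        )
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Hypothetical setup: the constant prior mean is pinned to 2.0,
# only the kernel lengthscale is optimized.
fixed = {"mean_constant": 2.0}
best = optimize_free_params(
    lambda p: -(p["lengthscale"] - 0.3) ** 2,  # toy score, assumed for illustration
    fixed,
    {"lengthscale": (0.01, 2.0)},
)
```

Most GP libraries expose the same idea directly, e.g. by marking a parameter as non-trainable so the gradient-based optimizer skips it.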
-
I am still exploring the possibilities of GuildAI. How well does it integrate with other libraries to perform hyperparameter optimization beyond grid search and random search? For example, can it int…
-
### Description
The [OptunaSearch](https://github.com/ray-project/ray/blob/master/python/ray/tune/search/optuna/optuna_search.py) currently uses in-memory storage and does not provide a way to config…
-
Currently, the way to specify optimization metrics is a little messy. There is the scikit-learn-like `scoring` hyperparameter to specify multi-objective optimization towards the specified metric, and pipel…
-
Hello all,
I've been trying to combine dask-ml's tools in the most vanilla way that I can think of and they don't seem to fit together (no pun intended).
Specifically, I want to both train model…
-
https://arxiv.org/abs/1711.09846
https://dl.acm.org/citation.cfm?doid=3292500.3330649
This is an interesting approach to hyperparameter optimization that I just came across. An implementation in DeepC…
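The first link is the Population Based Training (PBT) paper. The core loop: train a population of workers in parallel, and periodically have the worst performers copy the hyperparameters of the best (exploit) and then perturb them (explore). A toy sketch under assumptions (the objective, perturbation factors, and quantile sizes are all invented for illustration, and "training" is collapsed into a single evaluation):

```python
import random

def pbt(evaluate, perturb, init_hparams, pop_size=8, steps=20, seed=0):
    """Toy Population Based Training loop: each step, rank the population,
    replace the bottom quartile with perturbed copies of the top quartile."""
    rng = random.Random(seed)
    population = [init_hparams(rng) for _ in range(pop_size)]
    for _ in range(steps):
        scored = sorted(population, key=evaluate, reverse=True)
        top = scored[: pop_size // 4]                      # exploit source
        bottom = scored[-(pop_size // 4):]                 # workers to replace
        survivors = scored[: pop_size - len(bottom)]       # elites always survive
        replacements = [perturb(rng.choice(top), rng) for _ in bottom]
        population = survivors + replacements
    return max(population, key=evaluate)

# Toy objective (assumed for illustration): the best "learning rate" is 0.1.
evaluate = lambda h: -abs(h["lr"] - 0.1)
init = lambda rng: {"lr": rng.uniform(1e-4, 1.0)}
perturb = lambda h, rng: {"lr": h["lr"] * rng.choice([0.8, 1.2])}
best = pbt(evaluate, perturb, init)
```

Because the elites are carried over unchanged each step, the best score is monotonically non-decreasing, which is what makes the exploit/explore loop safe.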
-
# What
Hyperparameter optimization is the search for the best set of hyperparameters for a machine learning model in order to improve its performance.
# Possible Solution
Perform hyperparameter tuning us…
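As a baseline for comparison, an exhaustive grid search over a small hyperparameter grid can be sketched in a few lines (the grid values and objective are invented for illustration; in practice the score would come from training and validating the model):

```python
import itertools

def grid_search(objective, grid):
    """Exhaustive grid search: evaluate every combination of the listed
    hyperparameter values and return the best-scoring configuration."""
    names = list(grid)
    best, best_score = None, float("-inf")
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = objective(params)
        if score > best_score:
            best, best_score = params, score
    return best, best_score

# Toy objective (assumed for illustration): peak at lr=0.01, batch_size=32.
grid = {"lr": [0.001, 0.01, 0.1], "batch_size": [16, 32, 64]}
objective = lambda p: -abs(p["lr"] - 0.01) - abs(p["batch_size"] - 32) / 100
best, score = grid_search(objective, grid)
# best == {"lr": 0.01, "batch_size": 32}
```

The cost grows multiplicatively with each added hyperparameter, which is why random search or Bayesian methods are usually preferred once the grid gets large.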
-
Hello, is there any documentation detailing how to train new models?
prvst updated
3 weeks ago
-
## Information
The problem arises in chapter:
* [ ] Making Transformers Efficient in Production
## Describe the bug
While training, I am getting a proper F1 score of 0.755940
![image](https:…