-
Is there a way to specify a partial grid search in the sweep yaml file? By partial, I mean:
hyperparameter1 - [v1, v2, v3, v4, v5] - these are the values for hyperparameter1 that I want to searc…
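A minimal sketch of what such a config could look like, assuming the standard W&B sweep schema: under `method: grid`, a parameter given a `values` list is enumerated exhaustively, while a parameter given a single `value` is held fixed and not expanded (parameter names and values here are illustrative).

```yaml
method: grid
parameters:
  hyperparameter1:
    values: [v1, v2, v3, v4, v5]   # enumerated by the grid search
  hyperparameter2:
    value: 0.01                    # held fixed, not part of the grid
```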
-
Using the front-page complete MNIST example, the code below only works if _some_ hyperparameter search is added after `Sequential`:
```
model = Sequential()
model.add(Dense({{choice([100, 200])}}))  # Th…
```
-
# Problem
It would be good to have some *hyperparameter tuning* tools available for finding optimal *hyperparameters* for the different autoencoder variants.
## References
+ https://neptune.ai/blog/…
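As a starting point, a minimal sketch of what such a tuning tool could look like: an exhaustive grid search over a named search space, with a pluggable objective. The hyperparameter names and values are illustrative, and the lambda stands in for training and validating one autoencoder variant.

```python
from itertools import product

# Hypothetical search space for an autoencoder (names are illustrative).
search_space = {
    "latent_dim": [8, 16, 32],
    "learning_rate": [1e-3, 1e-4],
}

def grid_search(space, evaluate):
    """Try every combination and return the best (score, params), minimizing score."""
    keys = list(space)
    best = None
    for combo in product(*(space[k] for k in keys)):
        params = dict(zip(keys, combo))
        score = evaluate(params)  # e.g. validation reconstruction loss
        if best is None or score < best[0]:
            best = (score, params)
    return best

# Dummy objective standing in for "train this variant, report validation loss".
score, params = grid_search(search_space, lambda p: p["latent_dim"] * p["learning_rate"])
```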
-
An initial effort has been made to include many models for the modelling and classification tasks. However, there are several small improvements that could greatly benefit the package.
Hyperparamete…
-
```python
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear…
```
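For context, a self-contained sketch of the kind of pipeline those imports suggest: TF-IDF features fed to a linear classifier, tuned with `GridSearchCV`. The corpus, labels, and grid values are illustrative placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Toy corpus and binary labels (illustrative only).
texts = ["good movie", "great film", "bad movie", "terrible film",
         "nice plot", "awful acting", "loved it", "hated it"]
labels = [1, 1, 0, 0, 1, 0, 1, 0]

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("clf", LogisticRegression())])
grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],  # unigrams vs uni+bigrams
    "clf__C": [0.1, 1.0],                    # inverse regularization strength
}
search = GridSearchCV(pipe, grid, cv=2)
search.fit(texts, labels)
best = search.best_params_  # e.g. which ngram_range / C won on this split
```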
-
# Improving Tune examples
All examples should:
* demonstrate how to retrieve the best hyperparameters
* include verbose comments in the code
* set verbose=1 for a better experience
* [Load the…
-
Hello, I am trying to clarify whether keras_tuner objects such as the BayesianOptimization tuner can take prior hyperparameter combinations as input (for example, from a gridwis…
-
I think it'd be good to have suggested bounds on hyperparameters to help people choose sensible values. This may also help the optimisation-based approach to selecting hyperparameters (by putting b…
-
### Motivation
Simultaneously optimizing a general hyperparameter set, such as activation functions, and a dataset-specific hyperparameter set, such as hidden dimension size. Doing the search in para…
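One way to think about the combined space is simply crossing the two sets into a single list of candidate configurations, which can then be evaluated independently (and hence in parallel). A sketch, with all names and values illustrative:

```python
from itertools import product

# General hyperparameters shared across datasets (illustrative values).
general = {"activation": ["relu", "tanh"]}
# Dataset-specific hyperparameters (illustrative values).
per_dataset = {"hidden_dim": [64, 128, 256]}

def joint_space(a, b):
    """Cross two named search spaces into one list of configurations."""
    merged = {**a, **b}
    keys = list(merged)
    return [dict(zip(keys, combo)) for combo in product(*(merged[k] for k in keys))]

configs = joint_space(general, per_dataset)
# Each entry is one full configuration; independent trials parallelize trivially.
```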
-
The current resume-from-checkpoint experience could be improved. A few potential ways:
1. Good defaults: resuming from a checkpoint should default to using the last checkpoint saved, so the user …
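The "last checkpoint by default" behavior in point 1 could be sketched like this; the directory layout and `checkpoint-<step>.pt` naming scheme are assumptions, and sorting by the numeric step suffix is chosen over mtime (which can change when files are copied):

```python
import tempfile
from pathlib import Path

def latest_checkpoint(ckpt_dir, pattern="checkpoint-*.pt"):
    """Return the most recently saved checkpoint path, or None if there is none.

    Assumes files are named checkpoint-<step>.pt; picks the highest step.
    """
    candidates = list(Path(ckpt_dir).glob(pattern))
    if not candidates:
        return None
    return max(candidates, key=lambda p: int(p.stem.split("-")[-1]))

# Demo with a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    for step in (90, 100, 25):
        (Path(d) / f"checkpoint-{step}.pt").touch()
    best = latest_checkpoint(d)
    resumed_step = int(best.stem.split("-")[-1])
```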