Closed. bryanlimy closed this 3 months ago.
| File | Statements (new stmts) | Missing Coverage (lines missing) | Coverage |
|---|---|---|---|
| autoemulate/hyperparam_searching.py | | | |
| autoemulate/emulators/neural_net_torch.py | | | |
| autoemulate/emulators/neural_networks/mlp.py | | | |
| autoemulate/emulators/neural_networks/rbf.py | | 332-336 | |
| tests/test_torch.py | | | |
| Project Total | | | |

This report was generated by python-coverage-comment-action
Attention: Patch coverage is 91.66667%, with 2 lines in your changes missing coverage. Please review.
Project coverage is 90.83%. Comparing base (`b512fe3`) to head (`59ee578`).
| Files | Patch % | Lines |
|---|---|---|
| autoemulate/emulators/neural_networks/rbf.py | 33.33% | 2 Missing :warning: |
@mastoffel I think it is unavoidable that some of the MLP search cases return NaN, especially with the Adam optimizer.
@bryanlimy Ah, I think for me it comes from SGD; when I remove it, I don't really get NaN warnings. Is that different for you?
Sorry, yes, I meant to say SGD. Same issue as in https://github.com/alan-turing-institute/autoemulate/issues/191#issuecomment-1967106309.
@mastoffel Setting a higher minimum learning rate seems to be OK for now. I found a number of issues where users report `ValueError: Input y contains NaN.` with `BayesSearchCV`, so I will need to check whether it is caused by something else in our code. Ideally we would want the searcher to move on to the next settings if the current model reports NaN.
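To illustrate the learning-rate bound: a minimal sketch of a `skopt` search space with the lower bound raised to `1e-3`. The dimension names here are illustrative, not the actual ones in `autoemulate`.

```python
from skopt.space import Categorical, Real

# Hypothetical search space for the torch MLP. Per the discussion above,
# the learning-rate range gets a higher minimum of 1e-3, which seemed to
# avoid the settings that were producing NaN warnings.
param_space = {
    "lr": Real(1e-3, 1e-1, prior="log-uniform"),
    "optimizer": Categorical(["adam", "sgd", "lbfgs"]),
}
```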
@bryanlimy `BayesSearchCV` seems to work now. Was that related to the learning rate too?
@mastoffel To summarize the changes:

- Use `Categorical` to address https://github.com/alan-turing-institute/autoemulate/issues/199.
- `skopt` still references `np.int`, which NumPy has removed. As a quick fix, I added `np.int = np.int64` in `hyperparam_searching.py` (see the first sketch below). There is an open issue on this in `skopt`: https://github.com/scikit-optimize/scikit-optimize/issues/1171.
- Added the `LBFGS` optimizer to the NN modules to match scikit-learn's MLP, addressing https://github.com/alan-turing-institute/autoemulate/issues/192.
- Changed `initialize_optimizer` to ignore `weight_decay` if `LBFGS` is used, as LBFGS doesn't support weight decay in PyTorch (see the second sketch below).
- Set the minimum learning rate to `1e-3` to avoid gradient exploding.
- Set `verbose=0` and `error_score=np.nan` in `_optimize_params` so that the values passed to `RandomizedSearchCV` and `BayesSearchCV` are the same (see the third sketch below). The default value of `error_score` in `RandomizedSearchCV` is `np.nan`, which is different from the default of `"raise"` in `BayesSearchCV`.
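First, the `np.int` quick fix: a minimal sketch, assuming the alias is restored before `skopt` is used (only the alias assignment itself comes from the change above).

```python
import numpy as np

# NumPy 1.24 removed the deprecated np.int alias, which skopt still
# references (https://github.com/scikit-optimize/scikit-optimize/issues/1171).
# Restoring the alias keeps skopt working until the upstream fix lands.
np.int = np.int64
```

Second, a sketch of how `initialize_optimizer` can drop `weight_decay` for LBFGS. The signature here is hypothetical; the point is that `torch.optim.LBFGS` has no `weight_decay` argument, so the helper must not forward it.

```python
import torch

def initialize_optimizer(name, params, lr=1e-3, weight_decay=0.0):
    """Hypothetical helper: build an optimizer by name."""
    if name == "lbfgs":
        # torch.optim.LBFGS does not accept weight_decay, so it is ignored.
        return torch.optim.LBFGS(params, lr=lr)
    if name == "adam":
        return torch.optim.Adam(params, lr=lr, weight_decay=weight_decay)
    if name == "sgd":
        return torch.optim.SGD(params, lr=lr, weight_decay=weight_decay)
    raise ValueError(f"Unknown optimizer: {name}")
```

Third, a sketch of the `_optimize_params` alignment. The signature is invented, but it shows why passing `error_score=np.nan` explicitly matters: `RandomizedSearchCV` already defaults to it, while `BayesSearchCV` defaults to `"raise"`, so a NaN-producing candidate would abort the Bayesian search instead of being scored `nan` and skipped.

```python
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from skopt import BayesSearchCV

def _optimize_params(model, param_space, search_type="random", n_iter=20, cv=5):
    """Hypothetical sketch: build either searcher with matching settings."""
    search_cls = RandomizedSearchCV if search_type == "random" else BayesSearchCV
    return search_cls(
        model,
        param_space,
        n_iter=n_iter,
        cv=cv,
        verbose=0,           # same verbosity for both searchers
        error_score=np.nan,  # score failed fits as NaN and move on
    )
```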