kzinovjev closed this issue 1 year ago

Hi,

I noticed that if I change zeta to 2 in the MLIP_example notebook, all the predicted energies become identical, equal to just the sum of the self contributions. Is there anything else that has to be done to use zeta=2? Thanks!
Hello! After looking into this, it seems the culprit is the relatively high regularizer (lambdas=[0.1, 0.01] in train_gap_model), chosen because this example uses a very small dataset. With zeta=2 the kernel values are much smaller, so these regularizers dominate the fit and the model collapses to the baseline, which is exactly the sum of self contributions you are seeing. Changing them to lambdas=[1e-5, 1e-6] gives back a nice correlation on the test set. More generally, all hyper-parameters should be re-tuned when changing the model.
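For reference, a minimal sketch of what the change looks like in the notebook; everything other than the lambdas argument is a placeholder from memory and may not match the notebook's actual call exactly:

```python
# Sketch only: the argument list besides `lambdas` is illustrative and
# may differ from the actual MLIP_example notebook.
model = train_gap_model(
    kernel, frames, KNM, X_sparse,
    energies_train, self_contributions,
    lambdas=[1e-5, 1e-6],  # was [0.1, 0.01]; zeta=2 shrinks the kernel
                           # values, so the regularizers must shrink too
)
```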
Yep, working well with smaller regularizers. Makes sense, thanks!