lab-cosmo / librascal

A scalable and versatile library to generate representations for atomic-scale learning
https://lab-cosmo.github.io/librascal/
GNU Lesser General Public License v2.1

zeta=2 not working in MLIP example #426

Closed kzinovjev closed 1 year ago

kzinovjev commented 1 year ago

Hi,

I noticed that if I change zeta to 2 in the MLIP_example notebook, the predicted energies all become identical, equal to just the sum of the self contributions. Is there anything else that has to be done to use zeta=2? Thanks!

Luthaf commented 1 year ago

Hello! After looking into this, it seems the culprit is the regularizer (lambdas=[0.1, 0.01] in train_gap_model), which is relatively high because this example uses a very small dataset. With zeta=2 the kernel values are much smaller, so these regularizers dominate the fit. Changing them to lambdas=[1e-5, 1e-6] gives back a nice correlation on the test set. In general, all hyper-parameters should be re-tuned when the model is changed.
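The effect can be reproduced outside librascal with a toy kernel ridge regression. This is a minimal sketch (not librascal's actual `train_gap_model` internals): the base kernel entries lie in (0, 1] for normalized feature vectors, so raising them to the power `zeta=2` shrinks every off-diagonal entry, and a regularizer `lam` tuned for `zeta=1` then over-damps the fit, collapsing the predictions toward a constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "structure features": unit-normalized nonnegative vectors, so the base
# kernel k = x . x' lies in (0, 1], mimicking a polynomial (zeta) kernel on
# normalized SOAP-like vectors. All names here are illustrative, not librascal API.
X = rng.random((20, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.random(20)

def krr_predict(zeta, lam):
    """Kernel ridge regression with kernel K = (X X^T)**zeta and regularizer lam."""
    K = (X @ X.T) ** zeta
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return K @ alpha

# Squaring entries in (0, 1] only makes the kernel smaller, so a regularizer
# that was well-matched to the zeta=1 kernel scale now dominates the solve:
# with a large lam the predictions are heavily shrunk, while a much smaller
# lam lets the model actually fit the targets.
damped = krr_predict(2, 1e-1)   # regularizer too large for the zeta=2 kernel
fitted = krr_predict(2, 1e-6)   # re-tuned, much smaller regularizer
```

Here `np.linalg.norm(damped)` comes out well below `np.linalg.norm(fitted)`, which is the same collapse seen in the notebook: the energy predictions flatten out until the regularizer is re-tuned for the new kernel scale.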

kzinovjev commented 1 year ago

Yep, working well with smaller regularizers. Makes sense, thanks!