openkim / kliff

KIM-based Learning-Integrated Fitting Framework for interatomic potentials.
https://kliff.readthedocs.io
GNU Lesser General Public License v2.1

Fix bug when we try to set NN parameter with different dtype #141

Closed by yonatank93 8 months ago

yonatank93 commented 8 months ago

torch defaults to float32. But we may want to update the weights and biases in the NN model using a parameter array written as a numpy array, which defaults to using float64. This commit fixes this problem and converts the parameter array to whatever dtype is used by the NN model.
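
For context, a minimal sketch of the kind of conversion involved, assuming a toy torch.nn.Linear model in place of the actual KLIFF NN model (the names below are illustrative, not KLIFF code):

import numpy as np
import torch
import torch.nn as nn

model = nn.Linear(3, 2)  # toy model; torch parameters default to float32

# parameter values coming from numpy default to float64
new_values = [np.random.rand(2, 3), np.random.rand(2)]

for param, values in zip(model.parameters(), new_values):
    # cast to whatever dtype the model already uses (float32 here)
    param.data = torch.from_numpy(values).type(param.dtype)

print([p.dtype for p in model.parameters()])  # [torch.float32, torch.float32]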

yonatank93 commented 8 months ago

@mjwen When updating weights and biases using a parameter array, e.g., when we are running bootstrap, do we want to convert the parameter array to float32 (the dtype used by the model), or do we want to change the dtype used by the model? Do you know of any case where float32 is not enough?

mjwen commented 8 months ago

Most NN potentials are trained using float32 or even lower precision, although some other ML potentials (not NN based), such as ACE and SNAP, do use double precision.

So you can probably use single precision?

yonatank93 commented 8 months ago

So currently, I just convert the parameter array to whatever dtype is used by the model. Do we want to keep this, or is it safe to just convert everything to float32? That is, instead of using:

# cast the incoming array to whatever dtype the model already uses
dtype = param.dtype
param.data = parameters[ii].type(dtype)

what if we just do

param.data = parameters[ii].type(torch.float32)

?
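
A quick illustration of the difference between the two options, using a hypothetical double-precision toy model (not KLIFF code): casting to the model's dtype preserves float64, while forcing float32 silently downcasts it.

import torch
import torch.nn as nn

# option A: cast the incoming array to the model's dtype (keeps float64 here)
model_a = nn.Linear(3, 2).double()
new_w = torch.rand(2, 3, dtype=torch.float64)
model_a.weight.data = new_w.type(model_a.weight.dtype)
print(model_a.weight.dtype)  # torch.float64

# option B: always force float32 (downcasts a double-precision model)
model_b = nn.Linear(3, 2).double()
model_b.weight.data = new_w.type(torch.float32)
print(model_b.weight.dtype)  # torch.float32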

yonatank93 commented 8 months ago

@mjwen Since I sorted out the issue on my end, should we close this pull request?

mjwen commented 8 months ago

Yes, let's do that

yonatank93 commented 8 months ago

Closing this PR, since we found the issue and it is not caused by the current version of KLIFF.