Closed: yonatank93 closed this 8 months ago
@mjwen When updating weights and biases from a parameter array, e.g., when running bootstrap, should we convert the parameter array to float32 (the dtype used by the model), or should we change the dtype the model uses? Do you know of any case where float32 is not enough?
Most NN potentials are trained using float32 or even lower precision, although some other ML potentials (not NN based) like ACE and SNAP, do use double precision.
So you can probably use single precision.
So currently, I just convert the parameter array to whatever dtype the model uses. Do we want to keep this, or is it safe to just convert everything to float32? That is, instead of using:

```python
dtype = param.dtype
param.data = parameters[ii].type(dtype)
```

what if we just do

```python
param.data = parameters[ii].type(torch.float32)
```

?
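To make the trade-off concrete, here is a minimal sketch of the first option (convert to whatever dtype the model already uses). The two-layer `torch.nn.Linear` model and the `new_params` list are hypothetical stand-ins for the KLIFF NN model and the bootstrap parameter array, which is not shown in this thread:

```python
import numpy as np
import torch

# Hypothetical stand-in for the KLIFF NN model; parameters default to float32.
model = torch.nn.Linear(3, 1)

# Bootstrap-style parameter arrays; numpy float arrays default to float64.
new_params = [np.random.randn(*p.shape) for p in model.parameters()]

with torch.no_grad():
    for param, new in zip(model.parameters(), new_params):
        # Convert each array to the dtype the model already uses,
        # rather than hard-coding torch.float32.
        param.data = torch.from_numpy(new).type(param.dtype)

# The model's dtype is preserved regardless of the incoming array's dtype.
assert all(p.dtype == torch.float32 for p in model.parameters())
```

This version also keeps working if the model is ever switched to float64, since it never assumes a particular precision.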
@mjwen Since I sorted out the issue on my end, should we close this pull request?
Yes, let's do that
Closing this PR, since we found the issue and it is not due to the current version of KLIFF.
torch defaults to float32. But we may want to update the weights and biases in the NN model using a parameter array written as a numpy array, which defaults to float64. This commit fixes this problem and converts the parameter array to whatever dtype the NN model uses.
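The dtype mismatch the commit message describes can be seen directly from the two libraries' defaults; this is a small self-contained demonstration, not KLIFF code:

```python
import numpy as np
import torch

# numpy float arrays default to float64 ...
a = np.array([1.0, 2.0])
assert a.dtype == np.float64

# ... while torch tensors default to float32.
t = torch.tensor([1.0, 2.0])
assert t.dtype == torch.float32

# torch.from_numpy preserves float64, so an explicit conversion
# to the model's dtype is needed to avoid a mismatch.
converted = torch.from_numpy(a).type(t.dtype)
assert converted.dtype == torch.float32
```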