Closed: yonatank93 closed this issue 8 months ago
By default, it is using torch.float32 via torch.Tensor here. Are you using float64?
Yes, a PR would be great!
What I was trying to do was update the weights and biases using a parameter vector stored as a numpy array, which I believe defaults to float64.
It is a bit strange -- the torch.Tensor used here will actually convert the param to float32.
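For reference, a minimal sketch of the casting behavior in question (assuming only stock numpy and torch): the legacy `torch.Tensor(...)` constructor always produces `float32`, even from a float64 numpy array, while `torch.from_numpy` preserves the numpy dtype.

```python
import numpy as np
import torch

p = np.random.randn(4)   # numpy defaults to float64
print(p.dtype)           # float64

t1 = torch.Tensor(p)     # legacy constructor: silently casts to float32
print(t1.dtype)          # torch.float32

t2 = torch.from_numpy(p) # preserves the numpy dtype
print(t2.dtype)          # torch.float64
```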
Given that that function is called here, I thought the original one would work?
I tested it by adding the block below

```python
# sizes, _, _ = calc.get_size_opt_params()
p = np.random.randn(641)
print("@@ flag 1: p.dtype", p.dtype)
calc.update_model_params(p)
```

after line 199 in example_nn_Si.py, and everything works fine.
So, #141 may not be needed?
Are you doing something different?
It is also possible that I got the error because I was accessing low-level functions in kliff. I will send you the script I used tomorrow.
@mjwen I think I see what was wrong with the script I used. At the beginning of my script, I added the following line; if I dropped it, I had no problem at all:
```python
torch.set_default_tensor_type(torch.DoubleTensor)
```
Additionally, there might be a dtype mismatch in the calculated fingerprints that I exported. When I tried adding the line above in example_nn_Si.py, I didn't get any issue.
Yes, I believe so. Your saved fingerprints and the parameters in the model can be of different data types.
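To illustrate the mismatch (a minimal sketch, not kliff's actual code): with the double-tensor default set, freshly created model weights are float64, so feeding them fingerprints that were saved as float32 raises a dtype error.

```python
import torch

torch.set_default_dtype(torch.float64)   # same dtype effect as the DoubleTensor default
layer = torch.nn.Linear(3, 2)            # weights are now float64
fingerprint = torch.randn(3, dtype=torch.float32)  # e.g. loaded from an older export

try:
    layer(fingerprint)
except RuntimeError as err:
    print("dtype mismatch:", err)
```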
Closing because of #141
I found a bug where, if I tried to update the NN model parameters using a numpy array and then attempted to compute the predictions, I got the following error:
I think to fix it, we need to update this line so that the updated parameters have the same dtype as the original parameters. Any thoughts? I can create a PR for this.
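One possible shape of the fix (a hypothetical sketch, not kliff's actual API): cast the incoming numpy vector to each parameter's existing dtype before copying it in, so the model's dtype is preserved regardless of the numpy default.

```python
import numpy as np
import torch

def update_model_params(model: torch.nn.Module, p: np.ndarray) -> None:
    """Copy a flat numpy parameter vector into a model, preserving each
    parameter's original dtype (hypothetical helper, not kliff's own)."""
    p = np.asarray(p)
    offset = 0
    with torch.no_grad():
        for param in model.parameters():
            n = param.numel()
            # cast the slice to the parameter's dtype before copying
            chunk = torch.from_numpy(p[offset:offset + n]).to(param.dtype)
            param.copy_(chunk.view(param.shape))
            offset += n

# usage: the model stays float32 even though numpy gives float64
model = torch.nn.Linear(3, 2)
flat = np.random.randn(sum(q.numel() for q in model.parameters()))
update_model_params(model, flat)
print(model.weight.dtype)  # torch.float32
```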