Closed hajanssen closed 1 year ago
Thanks for your contribution, @hajanssen,
However, I think it might be better not to use `bd.float` directly for type casting after all. Your solution would work in this specific case, but it would break uses of `bd.float` as a dtype argument for other functions like `bd.array` and so on.
Instead, I propose to use `bd.array(..., dtype=bd.float)` directly instead of `bd.float(...)`.
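The distinction being made here can be sketched as follows. This is an illustrative mock-up, not the package's actual code: only the names `bd.float` and `bd.array` come from the thread, and the `NumpyBackend` class and `array` function below are assumptions.

```python
# Sketch of why bd.float(...) breaks while bd.array(..., dtype=bd.float)
# keeps working across backends. Names are illustrative, not the real code.
import numpy as np

class NumpyBackend:
    # For the numpy backend, bd.float is np.float64, which happens to be
    # callable as a scalar constructor, so bd.float(0.5) works by accident.
    float = np.float64

def array(value, dtype):
    # numpy version of bd.array; a torch backend would instead call
    # torch.tensor(value, dtype=dtype), where dtype is torch.float64,
    # a torch.dtype object that is NOT callable.
    return np.asarray(value, dtype=dtype)

# Passing bd.float only as a dtype argument works for every backend,
# because the dtype object is never called, just forwarded:
x = array([1.0, 2.0], dtype=NumpyBackend.float)
print(x.dtype)  # float64
```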
I updated the PR as such.
Thanks again for your input!
Ah OK, good foresight with the change, and thanks for the feedback and the quick fix!
Have a good day!
Hello,
I had an issue using the CUDA backend. The type casting with `bd.float()` in grid.py doesn't work for torch: `torch.float64` can't be called like NumPy's `np.float64()`, to my knowledge. A torch equivalent may be the following:
`pytorchValue = torch.tensor(someValue, dtype=torch.float64)`
I have proposed some changes that have done the trick for me. I hope it is an OK approach; if not, I am happy to hear a critique.
Thanks for this nice package, I enjoy it a lot :) Greetings, Hauke
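The mismatch described above can be shown in a minimal, runnable form. This sketch uses only NumPy; the torch lines are kept as comments so it runs without torch installed, and the variable names are just examples.

```python
# NumPy dtypes double as callable scalar constructors, while torch dtypes
# (torch.float32, torch.float64, ...) are plain torch.dtype objects.
import numpy as np

someValue = 0.5

# NumPy: np.float64 can be called directly to cast a Python value.
casted = np.float64(someValue)
print(type(casted).__name__)  # float64

# torch has no callable equivalent:
#   torch.float64(someValue)                      # TypeError: not callable
#   torch.tensor(someValue, dtype=torch.float64)  # the working equivalent
```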