flaport / fdtd

A 3D electromagnetic FDTD simulator written in Python with optional GPU support
https://fdtd.readthedocs.io
MIT License

bugfix with float type casting in pytorch backend #58

Closed: hajanssen closed this 1 year ago

hajanssen commented 1 year ago

Hello,

I had an issue using the CUDA backend. The type casting with bd.float() in grid.py doesn't work when bd.float is torch.float64: to my knowledge, a torch dtype can't be called to cast a value the way NumPy's np.float64() can.

A torch equivalent may be the following: pytorchValue = torch.tensor(someValue, dtype=torch.float64)
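
For illustration, a minimal sketch of the difference (the variable names are hypothetical, not taken from grid.py):

```python
import numpy as np
import torch

value = 0.25

# NumPy dtypes are callable and can be used directly as casts:
np_value = np.float64(value)

# torch dtypes are plain dtype objects and are NOT callable;
# torch.float64(value) raises TypeError: 'torch.dtype' object is not callable.
# The equivalent cast goes through torch.tensor:
torch_value = torch.tensor(value, dtype=torch.float64)
```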

I have proposed some changes that did the trick for me. I hope it is an acceptable approach; if not, I am happy to hear a critique.

Thanks for this nice package, I enjoy it a lot :) Greetings, Hauke

flaport commented 1 year ago

Thanks for your contribution, @hajanssen,

However, I think it might be better not to use bd.float directly for type casting after all. Your solution would work in this specific case, but it would break the use of bd.float as the dtype argument of other functions like bd.array and so on.

Instead, I propose to use bd.array(..., dtype=bd.float) directly in place of bd.float(...).
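
For clarity, a minimal sketch of that pattern (these are simplified stand-ins for the fdtd backend classes, not the library's actual implementation):

```python
import numpy as np
import torch

class NumpyBackend:
    # bd.float is a dtype object, usable as a dtype argument
    float = np.float64

    @staticmethod
    def array(value, dtype=None):
        return np.asarray(value, dtype=dtype)

class TorchBackend:
    # same interface for the torch backend
    float = torch.float64

    @staticmethod
    def array(value, dtype=None):
        return torch.tensor(value, dtype=dtype)

# bd.array(..., dtype=bd.float) works for both backends,
# whereas calling bd.float(...) directly only works for numpy:
for bd in (NumpyBackend, TorchBackend):
    x = bd.array(0.25, dtype=bd.float)
```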

I updated the PR as such.

Thanks again for your input!

hajanssen commented 1 year ago

Ah OK, good foresight with that change, and thanks for the feedback and the quick fix!

Have a good day!