flaport / fdtd

A 3D electromagnetic FDTD simulator written in Python with optional GPU support
https://fdtd.readthedocs.io
MIT License

Error using CUDA #48

Closed: hajanssen closed this issue 2 years ago

hajanssen commented 2 years ago

Hello,

I ran into an issue using the "torch.cuda" backend: adding an Object to the grid raises the following error.
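
For context, roughly the setup that triggers it. This is only a sketch: the grid shape, grid spacing, refractive index, and circle mask construction are placeholders of mine; only the last three statements match the notebook lines shown in the traceback below.

```python
import numpy as np
import fdtd

fdtd.set_backend("torch.cuda")  # GPU backend that triggers the error

# grid shape and spacing are assumptions; the object is placed around index 500
grid = fdtd.Grid(shape=(1000, 1000, 1), grid_spacing=155e-9)

# circular region of higher refractive index (mask construction is a guess)
refractive_index = 1.7
x, y = np.indices((180, 180))
circle_mask = (x - 90) ** 2 + (y - 90) ** 2 < 90 ** 2

permittivity = np.ones((180, 180, 1))
permittivity += circle_mask[:, :, None] * (refractive_index ** 2 - 1)

# raises the TypeError shown below
grid[500 - 180 // 2:500 + 180 // 2, 500 - 180 // 2:500 + 180 // 2, 0] = fdtd.Object(
    permittivity=permittivity, name="object"
)
```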

```
TypeError                                 Traceback (most recent call last)
<ipython-input-7-87b12cc5ca33> in <module>()
     34 permittivity = np.ones((180,180,1))
     35 permittivity += circle_mask[:,:,None]*(refractive_index**2 - 1)
---> 36 grid[500-180//2:500+180//2, 500-180//2:500+180//2, 0] = fdtd.Object(permittivity=permittivity, name="object")
     37 
     38 

/usr/local/lib/python3.7/dist-packages/fdtd/grid.py in __setitem__(self, key, attr)
    367             x=self._handle_single_key(x),
    368             y=self._handle_single_key(y),
--> 369             z=self._handle_single_key(z),
    370         )
    371 

/usr/local/lib/python3.7/dist-packages/fdtd/objects.py in _register_grid(self, grid, x, y, z)
     67             self.permittivity = self.permittivity[:, :, :, None]
     68         self.inverse_permittivity = (
---> 69             bd.ones((self.Nx, self.Ny, self.Nz, 3)) / self.permittivity
     70         )
     71 

/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in __array__(self, dtype)
    730             return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
    731         if dtype is None:
--> 732             return self.numpy()
    733         else:
    734             return self.numpy().astype(dtype, copy=False)

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
```

This happened both on my local machine and on Google Colab. As far as I understand, the error occurs because self.permittivity in objects.py is never copied to the GPU, so the division mixes a CUDA tensor with a plain numpy array. For me it can be mitigated by adding self.permittivity = bd.array(self.permittivity) on a previous line.
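
A sketch of that mitigation, applied to the lines of Object._register_grid visible in the traceback (the guard around the reshape is approximated here and the exact line placement may differ from the actual fdtd/objects.py source):

```python
# fdtd/objects.py, inside Object._register_grid -- sketch of the suggested mitigation
self.permittivity = bd.array(self.permittivity)  # copy the numpy array onto the active backend (cuda:0 for torch.cuda)
if len(self.permittivity.shape) == 3:            # guard shown here is approximate
    self.permittivity = self.permittivity[:, :, :, None]
self.inverse_permittivity = (
    bd.ones((self.Nx, self.Ny, self.Nz, 3)) / self.permittivity  # both operands are now backend tensors
)
```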

Greetings, Hauke

flaport commented 2 years ago

Thanks Hauke. This should now be fixed in fdtd>=0.2.5.

Next time, when you already know the answer to the problem, feel free to open a PR right away 😉
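
For anyone hitting this on an older install, a quick way to confirm that the installed release includes the fix (a minimal check of mine, assuming a pip-installed package and Python 3.8+):

```python
# print the installed fdtd release; it should be 0.2.5 or newer
from importlib.metadata import version

print(version("fdtd"))
```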