MLResearchAtOSRAM / tmm_fast

tmm_fast is a lightweight package to speed up optical planar multilayer thin-film device computation. Developed by Alexander Luce (@Nerrror) in cooperation with Heribert Wankerl (@HarryTheBird).

missing tmm_fast_core #20

Closed · driofernandes closed this issue 1 year ago

driofernandes commented 1 year ago

tmm_fast_torch.py seems to be broken without tmm_fast_core

Nerrror commented 1 year ago

tmm_fast_torch is deprecated, I'll remove it with the next release. The PyTorch functionality is directly integrated into the coh_vec_tmm_disp_mstack function (terrible name, I'll change that in a future release, too). You can just pass either a torch.Tensor or a np.ndarray to the function and it will compute with both.
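Something like this should work for both input types (untested sketch; the shapes are (n_stacks, n_layers, n_wavelengths) for the refractive index and (n_stacks, n_layers) for the thicknesses):

import numpy as np
import torch
from tmm_fast.vectorized_tmm_dispersive_multistack import coh_vec_tmm_disp_mstack

wl = np.linspace(400, 700, 100) * 1e-9            # vacuum wavelengths [m]
theta = np.deg2rad(np.linspace(0, 45, 10))        # angles of incidence [rad]
n = np.ones((1, 4, 100)) * 1.5                    # (stacks, layers, wavelengths)
d = np.array([[np.inf, 100e-9, 200e-9, np.inf]])  # (stacks, layers), semi-infinite ambients

# works with plain NumPy arrays ...
R_np = coh_vec_tmm_disp_mstack('s', n, d, theta, wl)['R']

# ... and with torch tensors (which enables autograd)
R_t = coh_vec_tmm_disp_mstack('s', torch.tensor(n), torch.tensor(d), torch.tensor(theta), torch.tensor(wl))['R']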

driofernandes commented 1 year ago

Thanks! Essentially, I'm trying to reproduce the example from Appendix 3 of your paper. I tried to replace tmm_fast_torch with coh_vec_tmm_disp_mstack, but I get an error (see below).

import numpy as np
import torch
from tmm_fast.vectorized_tmm_dispersive_multistack import coh_vec_tmm_disp_mstack as tmm

wl = np.linspace(500, 900, 301) * 1e-9        # vacuum wavelengths [m]
theta = np.deg2rad(np.linspace(0, 90, 301))   # angles of incidence [rad]

n_layers = 12
stack_layers = np.random.uniform(20, 150, n_layers) * 1e-9  # layer thicknesses [m]

# first and last layers are the semi-infinite ambient media
stack_layers[0] = stack_layers[-1] = np.inf
optical_index = torch.tensor(np.random.uniform(1.2, 5, n_layers * len(wl)).reshape(1, n_layers, len(wl)))
optical_index[0, -1, 0] = 1

# optimize with respect to the layer thicknesses
stack_layers = torch.tensor(stack_layers.reshape(1, n_layers), requires_grad=True)

wl = torch.tensor(wl)
theta = torch.tensor(theta)
result = tmm('s', optical_index, stack_layers, theta, wl)['R']

mse = torch.nn.MSELoss()
error = mse(result, torch.zeros_like(result))
error.backward()

NB: unlike the function in the paper, coh_vec_tmm_disp_mstack accepts either NumPy arrays or torch tensors; that's the reason for the conversions in my version of the code.

This is the error I get:

Exception has occurred: RuntimeError
  File "G:\My Drive\05_Other_projects\04_filters\test2.py", line 29, in <module>
    error.backward()
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.DoubleTensor [1, 301, 301, 12]], which is output 0 of AsStridedBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

thanks for your help!

Nerrror commented 1 year ago

Hey, I think the problem was in how the clamping was done with the delta function. I published a new release on PyPI (v0.2.1) with some minor bugfixes; this should also fix the problem you're encountering (it worked for me, at least).
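For reference, the failure mode was the usual autograd in-place trap. This standalone snippet (just an illustration, not the actual library code) triggers the same RuntimeError and shows the out-of-place fix:

import torch

x = torch.randn(5, requires_grad=True)
y = torch.exp(x)        # exp saves its output y for the backward pass

# In-place clamping bumps y's version counter, so backward() raises the same
# "modified by an inplace operation" RuntimeError:
#   y.clamp_(max=2.0)
#   y.sum().backward()

# Out-of-place clamping leaves y untouched and backward() succeeds
z = torch.clamp(y, max=2.0)
z.sum().backward()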

Btw, you can now directly do

from tmm_fast import coh_tmm

instead of importing from the unwieldy vectorized_tmm_dispersive_multistack module.
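With that, your reproduction script boils down to something like this (a sketch against v0.2.1, assuming coh_tmm takes the same (pol, N, T, Theta, lambda_vacuum) arguments as coh_vec_tmm_disp_mstack):

import numpy as np
import torch
from tmm_fast import coh_tmm   # new top-level import in v0.2.1

wl = torch.tensor(np.linspace(500, 900, 301) * 1e-9)
theta = torch.tensor(np.deg2rad(np.linspace(0, 90, 301)))

n_layers = 12
d = np.random.uniform(20, 150, n_layers) * 1e-9
d[0] = d[-1] = np.inf                           # semi-infinite ambient media
d = torch.tensor(d.reshape(1, n_layers), requires_grad=True)
n = torch.tensor(np.random.uniform(1.2, 5, (1, n_layers, len(wl))))

R = coh_tmm('s', n, d, theta, wl)['R']
torch.nn.MSELoss()(R, torch.zeros_like(R)).backward()  # runs without the RuntimeError on v0.2.1
print(d.grad.shape)   # gradients w.r.t. the layer thicknesses now flow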