kaanaksit / odak

Scientific computing library for optics, computer graphics and visual perception
https://kaanaksit.com/odak
Mozilla Public License 2.0

Gerchberg-Saxton phase retrieval method #7

Closed kaanaksit closed 3 years ago

kaanaksit commented 3 years ago

This issue will track the work on migrating code from @askaradeniz that implements Gerchberg-Saxton to Odak. There are two veins to this migration. One of them is migrating the code in a way that is suitable to work with Numpy and Cupy. The second deals with the torch implementation, which I believe @askaradeniz can immediately initiate as his code is already applicable to the torch case.

We will also add test cases to the test folder for both methods.
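For reference, the classic Gerchberg-Saxton iteration alternates between two planes linked by a Fourier transform, enforcing the known amplitude at each plane while keeping the retrieved phase. Below is a minimal textbook sketch in numpy; it is an illustration only, and odak's actual implementation (function names, propagation model, stopping criteria) may differ:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=10, seed=0):
    # Start from the target amplitude with a random phase guess.
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0., 2. * np.pi, target_amplitude.shape)
    field = target_amplitude * np.exp(1j * phase)
    for _ in range(iterations):
        # Back-propagate to the hologram plane; keep only the phase there.
        hologram = np.exp(1j * np.angle(np.fft.ifft2(field)))
        # Forward-propagate and enforce the target amplitude at the image plane.
        reconstruction = np.fft.fft2(hologram)
        field = target_amplitude * np.exp(1j * np.angle(reconstruction))
    return np.angle(hologram), np.abs(reconstruction)

# Hypothetical target: a bright square on a dark background.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.
hologram_phase, reconstruction = gerchberg_saxton(target, iterations=50)
```

The same structure carries over to cupy (swap `np` for `cupy`) and to torch, which is why the two migration veins can share one design.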

kaanaksit commented 3 years ago

Gerchberg-Saxton phase retrieval method for Numpy/Cupy case is added with commit d2c4e3f1a997f94dac3a305ec7279fb189e82bb5 .

A test routine can be found here. @askaradeniz please do not start the conversion to torch until I verify this routine with a real holography setup.

kaanaksit commented 3 years ago

This routine is verified with a real holography setup. @askaradeniz in case you are interested in transferring this piece of code to the learn module, the Numpy/Cupy version is ready.

askaradeniz commented 3 years ago

I converted the current code of the Gerchberg-Saxton method to torch with (https://github.com/kunguz/odak/commit/f5e0a16cb83360b3bc9060d0e45a327811bb2111).

The results may not match exactly because of the differences we noticed in #10 but they seem close to each other. These are the current reconstruction results from the numpy/cupy and torch versions with (https://github.com/kunguz/odak/commit/edc987256afe6bfad16aae4031e047a717999b60):

numpy/cupy: output_amplitude

torch: output_amplitude_torch
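One way to quantify how close the two reconstructions are is to compare the maximum and mean absolute differences between the output amplitudes. The sketch below is a stand-in using synthetic data, with a float32 round trip emulating a lower-precision pipeline; in practice the two arrays would come from the numpy/cupy and torch test runs:

```python
import numpy as np

# Hypothetical amplitude standing in for the numpy/cupy reconstruction.
output_amplitude = np.random.default_rng(0).random((128, 128)) * 255.
# Emulate a lower-precision pipeline via a round trip through float32.
output_amplitude_torch = output_amplitude.astype(np.float32).astype(np.float64)

# Summary statistics of the mismatch between the two reconstructions.
max_abs_diff = np.max(np.abs(output_amplitude - output_amplitude_torch))
mean_abs_diff = np.mean(np.abs(output_amplitude - output_amplitude_torch))
print(max_abs_diff, mean_abs_diff)
```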

kaanaksit commented 3 years ago

I suppose this concludes and closes this case.

kaanaksit commented 3 years ago

In fact, we may be able to overcome that tiny difference in results by comparing:

I should also highlight that when @rongduo experimented, the absolute maximum difference was 10 (she uses numpy); in my case it was 15 (I use cupy). At the very least, the above two comparisons may help us understand further. Shall we initiate and examine those two in a separate issue, @askaradeniz? Would you be willing to take the lead on that?

askaradeniz commented 3 years ago

I suspect that the difference in absolute distance you see is due to the randomization of the input field. https://github.com/kunguz/odak/blob/edc987256afe6bfad16aae4031e047a717999b60/test/test_learn_beam_propagation.py#L73
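To illustrate the point: if the input field is randomized without a fixed seed, the absolute difference will vary from run to run, whereas seeding makes the starting field, and hence the measured difference, reproducible. A small sketch (the seed value here is hypothetical, not taken from the test file):

```python
import numpy as np

# Fixing the seed makes the random starting field reproducible across runs.
np.random.seed(0)
field_a = np.random.rand(64, 64) * np.exp(1j * np.random.rand(64, 64) * 2 * np.pi)

# Re-seeding with the same value regenerates the identical field.
np.random.seed(0)
field_b = np.random.rand(64, 64) * np.exp(1j * np.random.rand(64, 64) * 2 * np.pi)

print(np.array_equal(field_a, field_b))
```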

Of course, I can take the lead on the matching issue.

kaanaksit commented 3 years ago

Makes perfect sense. Do we get the same results without it? If so, no additional issue is needed; all we need to do is comment out that line.

But wait, I thought torch and numpy comparison uses the same original field, no? https://github.com/kunguz/odak/blob/edc987256afe6bfad16aae4031e047a717999b60/test/test_learn_beam_propagation.py#L79

askaradeniz commented 3 years ago

I mean they can give a different absolute difference every time we run the test case because of the randomization. So, it is normal to have different absolute differences at each run. However, our problem is that a difference of 10 or 15 should be much smaller, as both of them use the same field.

askaradeniz commented 3 years ago

Maybe we can just leave it as is and reopen the issue when someone needs more precise matching. @kunguz Would it be OK?

kaanaksit commented 3 years ago

Sure, but at the moment we don't have an understanding of where the difference comes from. U1 returns the same result, for example, but right after the final fft2 the results diverge. Analysing set_amplitude should be straightforward.
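One plausible contributor to divergence right after an fft2 is floating-point precision: even with an identical input field, transforms carried out at different precisions accumulate different rounding errors, and the gap grows with the field energy. This numpy-only sketch emulates the effect with a complex64 versus complex128 transform; it is an assumption about the mechanism, not a confirmed diagnosis of the numpy/torch mismatch (FFT normalization conventions or backend kernels could also contribute):

```python
import numpy as np

rng = np.random.default_rng(0)
# One shared input field, as in the test routine.
u1 = rng.random((512, 512)) * np.exp(1j * rng.random((512, 512)) * 2 * np.pi)

# The inputs agree exactly before the transform...
recon64 = np.abs(np.fft.fft2(u1))
recon32 = np.abs(np.fft.fft2(u1.astype(np.complex64)))

# ...but the transforms differ by accumulated rounding error.
max_diff = np.max(np.abs(recon64 - recon32))
print(max_diff)
```

If the numpy path runs in float64 while the torch path runs in float32 (torch's default), a nonzero gap of this kind is expected even with identical inputs.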

kaanaksit commented 3 years ago

Well, actually, I think even if we don't fix it right now, having an open issue is a reminder for the future.