lilychou2000 opened 3 months ago
Here are the parameters you've provided for the 'ninja' measurement:
- Wavelength channels (lam): 26
- Height (h): 350
- Width (w): 260
- Shift direction: Up
- Shift step: 1
You can use these parameters to apply the 'shift' operator to the 'Mask' 3D data and generate the required 'mask_3d_shift' for the network input.
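A minimal sketch of how such a shifted mask cube could be built from these parameters. This is an assumption about what the repo's 'shift' operator does (replicating the 2D aperture across the spectral channels and offsetting each channel by `step` pixels along the height axis, taking "Up" to mean the row direction); the actual implementation in the repo may differ, and `mask` here is a random stand-in for the real 'Mask' array.

```python
import numpy as np

# Parameters from the thread: 26 wavelength channels, 350x260 aperture,
# shift step of 1 pixel per channel.
lam, h, w, step = 26, 350, 260, 1

mask = np.random.rand(h, w)                      # stand-in for 'Mask'
mask_3d = np.tile(mask[:, :, None], (1, 1, lam)) # replicate across channels

# Allocate the padded cube and place each channel at its shifted offset.
mask_3d_shift = np.zeros((h + (lam - 1) * step, w, lam))
for i in range(lam):
    mask_3d_shift[i * step : i * step + h, :, i] = mask_3d[:, :, i]

print(mask_3d_shift.shape)  # (375, 260, 26)
```

Each channel occupies a different band of rows, so the cube is zero outside a channel's shifted window, which is consistent with feeding it to the network as the per-wavelength sensing matrix.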
If there are any further issues or questions during the implementation, please contact me at wangxin97@bit.edu.cn
Thank you for your reply. I have resolved this issue : )
I have read your code and found that test.py handles the simulated measurement. In this code, "Mask" in "cameraSpectralResponse.mat" is used to compute dual_A, dual_At, shift, and shift_back, while "mask_3d_shift" in "mask_3d_shift.mat" is used as "Phi_batch" in the network. I compared "Mask" with "mask_3d_shift", and it seems that "mask_3d_shift" is the complete coded aperture, whereas the shifted columns of "Mask" are zeros. I want to run the code on real data, but the data provided by rTVRA (ninja in "scene01.mat" and doll in "scene02.mat") contains only "Mask". Could you please share the code for the real data? I would really appreciate it.
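A self-contained sketch of the comparison described above. If "Mask" stores the aperture with the dispersion-shifted columns zeroed out, the number of all-zero columns per channel should grow with the channel index, while the complete "mask_3d_shift" cube has no such systematic band. The zeroing pattern here is an assumption for illustration, with shapes taken from the thread (h=350, w=260, lam=26, step=1); synthetic random data stands in for the real .mat contents.

```python
import numpy as np

h, w, lam, step = 350, 260, 26, 1
rng = np.random.default_rng(0)

full = rng.random((h, w, lam))   # stand-in for the complete coded aperture
masked = full.copy()
for i in range(lam):             # zero the shifted columns of each channel
    if i * step > 0:
        masked[:, w - i * step :, i] = 0

# Count all-zero columns in each channel of the "Mask"-style array.
zero_cols = [int((masked[:, :, i] == 0).all(axis=0).sum()) for i in range(lam)]
print(zero_cols[:5])  # grows with channel index: [0, 1, 2, 3, 4]
```

Running the same count on the real "Mask" versus "mask_3d_shift" (loaded with `scipy.io.loadmat`) would confirm or refute the observation.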