mehta-lab / waveorder

Wave optical models and inverse algorithms for label-agnostic imaging of density & orientation.

Phase transfer function is numerically small #151

Open ziw-liu opened 9 months ago

ziw-liu commented 9 months ago

We have been observing inconsistent dynamic range issues with phase reconstructions across different datasets. When I sweep the Tikhonov regularization strength, the same FOV also shows different dynamic ranges:

```
regularization strength: 1e-4 std: 0.0071132425
regularization strength: 1e-3 std: 0.0012211832
regularization strength: 1e-2 std: 0.00013627978
```

It seems that the product of the regularization parameter and the width of the histogram stays roughly constant (an inverse relation). Inspecting the transfer function, its numerical values are small:

```python
>>> import zarr
>>> import numpy as np
>>> transfer_function = zarr.open("transfer_function.zarr")["real_potential_transfer_function"]
>>> tf = np.abs(transfer_function)
>>> tf.min()
0.0
>>> tf.max()
0.030031698
>>> tf.mean()
0.007650641
>>> tf.std()
0.0053157224
```

Looking at how the inverse was applied, it seems like the output would be dominated by the regularization parameter we typically use:

https://github.com/mehta-lab/waveorder/blob/366ef1b842d5fb3c577a4df65c53f1638df770b1/waveorder/util.py#L972-L978

Here, if `H_eff_abs_square` is much smaller than `reg_coeff`, the denominator is dominated by `reg_coeff`, so the output is effectively the numerator divided by the regularization coefficient alone. This produces the inverse relation between regularization strength and the dynamic range of the reconstructed image.
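
For illustration, a minimal sketch of the effect (not the actual waveorder code; `y_hat` and the function name are placeholders I made up):

```python
import numpy as np

# Sketch of a per-frequency Tikhonov inverse filter. H_eff is the effective
# transfer function in Fourier space, y_hat the Fourier transform of the data.
def tikhonov_inverse(y_hat, H_eff, reg_coeff):
    H_eff_abs_square = np.abs(H_eff) ** 2
    # When H_eff_abs_square << reg_coeff, the denominator is ~reg_coeff,
    # so the estimate scales like conj(H_eff) * y_hat / reg_coeff and a
    # 10x larger reg_coeff yields a ~10x smaller reconstruction.
    return np.conj(H_eff) * y_hat / (H_eff_abs_square + reg_coeff)

# With |H_eff| <= ~0.03 (measured above), |H_eff|^2 <= ~9e-4, which is already
# comparable to reg_coeff = 1e-3 and well below 1e-2.
```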

@mattersoflight suggests that we rescale $H_{\mathrm{eff}}$ to $[0, 1]$, so that $H_{\mathrm{eff}} \gg \rho$.

The config I used to generate the transfer function:

```yaml
input_channel_names: ["BF"]
time_indices: all
reconstruction_dimension: 3
phase:
  transfer_function:
    wavelength_illumination: 0.450
    yx_pixel_size: 0.1494
    z_pixel_size: 0.205
    z_padding: 30
    index_of_refraction_media: 1.4
    numerical_aperture_detection: 1.35
    numerical_aperture_illumination: 0.85
    invert_phase_contrast: false
  apply_inverse:
    reconstruction_algorithm: Tikhonov
    regularization_strength: 0.001
    TV_rho_strength: 0.001
    TV_iterations: 1
```

Dataset: /hpc/projects/comp.micro/mantis/2023_08_18_A549_DRAQ5/0-zarr/a549_draq5_1/a549_draq5_labelfree_1.zarr/0/0/0

talonchandler commented 9 months ago

My experience with Tikhonov-regularized least squares is that this is expected behavior. Increasing the regularization parameter penalizes large solutions, so it makes sense that your solutions get smaller as you increase the regularization parameter.
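
For concreteness, here is the per-frequency Tikhonov solution written out (a sketch in my own notation, with $\hat{y}$ the data in Fourier space and $\rho$ the regularization strength, not waveorder's exact variable names):

$$
\hat{x} = \arg\min_{x}\; \lVert H_{\mathrm{eff}}\, x - \hat{y} \rVert^{2} + \rho \lVert x \rVert^{2}
        = \frac{H_{\mathrm{eff}}^{*}\, \hat{y}}{\lvert H_{\mathrm{eff}} \rvert^{2} + \rho},
\qquad
\lvert H_{\mathrm{eff}} \rvert^{2} \ll \rho \;\Rightarrow\; \hat{x} \approx \frac{H_{\mathrm{eff}}^{*}\, \hat{y}}{\rho},
$$

so once $\rho$ dominates the transfer function, the overall scale of the reconstruction is set by $1/\rho$.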

> @mattersoflight suggests that we rescale $H_{\mathrm{eff}}$ to $[0, 1]$, so that $H_{\mathrm{eff}} \gg \rho$.

I'm not sure this will give you the behavior you're looking for. If you scale $H$ up by 100×, then your estimate of the phase will need to go down by 100× to match the data, and I expect your choice of regularization parameter will need to go up by ~100× as well. (Please prove me wrong!)


I'm guessing that the desired behavior is regularization-parameter-independent reconstructions. If so, we may need to move beyond Tikhonov-regularized least squares, with that property specifically in mind.

ziw-liu commented 9 months ago

> My experience with Tikhonov-regularized least squares is that this is expected behavior. Increasing the regularization parameter penalizes large solutions, so it makes sense that your solutions get smaller as you increase the regularization parameter.

Now that I think of it in this way, this behavior is indeed expected!

My initial observation was that with 100× the regularization, the reconstructed image does not change significantly apart from a ~0.01 scaling factor (and all versions are quite usable). So I suspected that some numerical issue was obscuring the expected smoothing effect.

ziw-liu commented 9 months ago

However, I'm still puzzled by how the same observed data can be explained by phase objects (in radians) that differ by 100× in magnitude while the structure remains almost the same.