detoma opened 1 year ago
Try replacing NaNs with 0.0 or the mean value of their neighbors before applying UNSHARP_MASK
in the enhanced intensity routine.
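A quick sketch of that NaN replacement, written here in Python/numpy rather than IDL (the function name and the 3x3 window are my choices, not the pipeline's):

```python
import numpy as np
from scipy import ndimage

def fill_nans_with_neighbor_mean(image):
    """Replace NaNs with the mean of their finite 3x3 neighbors (0.0 if none)."""
    finite = np.isfinite(image)
    filled = np.where(finite, image, 0.0)
    # sum and count of finite values in each 3x3 window; NaNs contribute nothing
    kernel = np.ones((3, 3))
    neighbor_sum = ndimage.convolve(filled, kernel, mode="constant", cval=0.0)
    neighbor_count = ndimage.convolve(finite.astype(float), kernel,
                                      mode="constant", cval=0.0)
    neighbor_mean = np.where(neighbor_count > 0,
                             neighbor_sum / np.maximum(neighbor_count, 1.0), 0.0)
    return np.where(finite, image, neighbor_mean)
```

Running this before the unsharp mask keeps the NaNs from smearing into their neighborhoods.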
Removing threshold masking from the enhanced intensity algorithm seems to make this much better.
There are some potentially variable hot pixels that are still causing problems.
The blob of high-value pixels in the line center intensity for 20220901.203329.ucomp.1074.l1.3.fts
is:
IDL> print, data[420:421, 812:813, 0]
251.432 232.484
154.359 359.754
The artifacts appear in the "enhanced peak intensity" images created in the dynamics file images, even though they don't appear in the level 1 enhanced intensity images.
The level 1 image 20220901.182014.ucomp.1074.l1.3.enhanced_intensity.gif:
The level 2 enhanced peak intensity image 20220901.182014.ucomp.1074.l2.enhanced_peak_intensity.png:
The differences between them:
The threshold masking is:
!null = where(intensity_center gt 0.4 $
and intensity_center lt 100.0 $
and intensity_blue gt 0.1 $
and intensity_red gt 0.1 $
and line_width gt 15.0 $
and line_width lt 60.0 $
and abs(doppler_shift) lt 30.0 $
and doppler_shift ne 0.0, $
complement=bad_indices, /null)
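For reference, the same cuts in a Python/numpy sketch (the function and argument names mirror the IDL variables; this is not pipeline code):

```python
import numpy as np

def bad_pixel_indices(intensity_center, intensity_blue, intensity_red,
                      line_width, doppler_shift):
    """Flat indices of pixels failing the quality thresholds, i.e. the
    complement of the IDL `where` above."""
    good = ((intensity_center > 0.4) & (intensity_center < 100.0)
            & (intensity_blue > 0.1) & (intensity_red > 0.1)
            & (line_width > 15.0) & (line_width < 60.0)
            & (np.abs(doppler_shift) < 30.0) & (doppler_shift != 0.0))
    return np.flatnonzero(~good)
```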
My understanding from when I last looked at this a few months back is that the algorithm broke down when we had negative intensities. I think this was related to the fitting code, but I don't remember whether the problem was negative input into the fit or negative output from the fit.
The threshold masking is actually done after the enhanced intensity calculation, but the Gaussian fit clips negative values.
Maybe try this?
The above still produces the black square artifacts.
I am using the following scheme:
This doesn't have the black squares, but is blotchy in the outer field around where the black square artifacts would have been.
Mike, the only other thing I can think of is to compute the Gaussian fit for all points, do the enhanced intensity, and mask later based on our criteria.
We do not have signal in the outer FOV. There will always be missing values there.
For comparison, here is the new enhanced peak intensity for the same date/time as above:
I am happy with this version. It is a little muddy where the black squares were, but there is basically no signal out there so what else can we do other than a more heavy-handed mask?
I do not like mixing line center pixels and the Gaussian fit. It looks good visually, but it is misleading. If we save the peak intensity in a FITS file, it must be the peak intensity, not a mix of the two.
Originally, you were computing the peak intensity for all pixels, creating the enhanced peak intensity for all pixels, and then masking both. Is that right?
OK, I have a change to only use the corrected pixels for the display image, not the FITS polarization file.
The peak intensity is always masked because of taking the log of negative values.
We need to find a different approach.
We can try what we did for CoMP but there is no time before the workshop.
Taking the ABS (or, equivalently, making the fit inputs complex, taking the complex log, and then retrieving the real part of the result) does not work:
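As a side note on why those two are equivalent: the real part of the complex log of a negative number is just the log of its absolute value, which a quick check confirms (the sample value is arbitrary):

```python
import cmath
import math

# for negative real x, Re(log(x + 0j)) == log(|x|),
# so the complex-log route behaves exactly like taking ABS first
x = -359.754
real_part = cmath.log(complex(x)).real
assert math.isclose(real_part, math.log(abs(x)))
```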
The next idea would be to find the negative values and interpolate across them before sending them to the gaussian fit (for display purposes only). But that is a bit tricky because the negative areas are more than a single pixel, and I don't think I can get this working before tonight.
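One way to sketch that interpolation (Python/scipy, with a nearest-neighbor fill so multi-pixel negative regions pick up values from the closest good pixels; display use only, and the function name is mine):

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_negatives(image):
    """Replace negative pixels (which may form multi-pixel regions) with
    values interpolated from the surrounding non-negative pixels."""
    bad = image < 0.0
    if not bad.any():
        return image.copy()
    good_y, good_x = np.nonzero(~bad)
    bad_y, bad_x = np.nonzero(bad)
    filled = image.copy()
    # nearest-neighbor avoids NaNs for regions outside the convex hull
    filled[bad] = griddata(np.column_stack([good_y, good_x]), image[~bad],
                           np.column_stack([bad_y, bad_x]), method="nearest")
    return filled
```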
This issue is also affecting #130. Should I leave the squares, or replace pixels with the smoothed image for the display?
For a cheap interpolation that avoids the zeros, would this work:
negative_min = min(image)
if negative_min lt 0 then begin
  ; should this be 2 * negative_min just to avoid the zeros?
  image = image - negative_min
  image = gauss_and_enhance(image)
  image = image + negative_min
endif
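A runnable Python sketch of the same scheme, with the Gaussian fit + enhancement step abstracted as a `process` callback, since `gauss_and_enhance` above is only a placeholder:

```python
import numpy as np

def shift_process_unshift(image, process):
    """Offset a signed image to be non-negative, run `process` on it,
    then undo the offset on the result."""
    negative_min = image.min()
    if negative_min < 0:
        # subtracting 2 * negative_min instead would leave a margin above zero
        shifted = image - negative_min
        return process(shifted) + negative_min
    return process(image)
```

With an identity `process`, the round trip returns the original image, which is the invariant the offset trick relies on.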
Last effort, I think this is reasonable:
This does:
negative_mask = enhanced_intensity le 0.0
negative_mask = morph_close(negative_mask, bytarr(3, 3) + 1B, /preserve_type)
negative_indices = where(negative_mask, n_negative_indices, /null)
display_enhanced_intensity = enhanced_intensity
display_enhanced_intensity[negative_indices] = enhanced_intensity_center[negative_indices]
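The equivalent of the IDL above as a Python/scipy sketch (MORPH_CLOSE becomes `binary_closing`; the names mirror the IDL variables, and this is not pipeline code):

```python
import numpy as np
from scipy import ndimage

def replace_negative_regions(enhanced_intensity, enhanced_intensity_center):
    """Display-only fix: close the <= 0 mask with a 3x3 structuring element
    and substitute line-center values at the masked pixels."""
    negative_mask = enhanced_intensity <= 0.0
    negative_mask = ndimage.binary_closing(negative_mask,
                                           structure=np.ones((3, 3)))
    display = enhanced_intensity.copy()
    display[negative_mask] = enhanced_intensity_center[negative_mask]
    return display
```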
For the sample image, it replaces 870,038 (66%) of the high-noise pixels (mostly under the occulter or in the outer field) with the enhanced intensity from the line center. The pixels replaced are:
We need to revisit this with more time. Making a change, running the pipeline and then checking one image is not efficient.
We need to talk about how to run pieces of the pipeline outside of the pipeline for debugging and testing.
I ran this code outside the pipeline. It only took about 10 seconds to produce each of those images. Do you want to run on more than one example image?
To compare peak (enhanced) intensity to line center (enhanced) intensity, try dates around the tuning changes:
Tuning changes
date        offset (nm)
2021-06-20 1.82
2021-09-23 1.87
2022-01-18 1.97
2022-02-09 1.99
2022-04-12 2.03
2022-06-14 2.07
2022-09-15 2.11
2022-11-23 2.15
The problem in the peak enhanced intensity is not the zeros or the negative values; it is the high peak values.
@detoma:
When we fit a Gaussian, there are pixels (in some images, hundreds of pixels) where the Gaussian peak values are huge because we fit noise and get a crazy Gaussian.
I will try a despike routine with a 3x3 kernel and see if this cleans it up enough.
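A sketch of a standard 3x3 median despike in Python/scipy (the deviation threshold here is a guess, not a tested value, and the function name is mine):

```python
import numpy as np
from scipy import ndimage

def despike(image, threshold=5.0):
    """Replace pixels that deviate from their 3x3 median by more than
    `threshold` times the local median absolute deviation."""
    med = ndimage.median_filter(image, size=3)
    dev = np.abs(image - med)
    # local noise estimate; small epsilon keeps flat regions from dividing by zero
    noise = ndimage.median_filter(dev, size=3) + 1e-6
    spikes = dev > threshold * noise
    return np.where(spikes, med, image)
```

Isolated hot pixels get pulled to the local median while smooth structure passes through unchanged, which is the behavior we want before the unsharp mask.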
Additionally, I do not like the radius being set to 5 in the unsharp mask. It sharpens a lot but also spreads artifacts in big 5x5 boxes.
Finally, I want to add a radial filter to see further out.
I am going to change this part of the code quite a bit, then we can test it on a few days and see if it works better.
The enhanced intensity GIFs still have problems with the "black squares" and interpolation.