Closed elevans closed 1 year ago
All modified lines are covered by tests :white_check_mark:
Comparison is base (f8c7909) 77.65% compared to head (de1ba3a) 77.65%.
:exclamation: Current head de1ba3a differs from pull request most recent head b5e16e2. Consider uploading reports for the commit b5e16e2 to get more accurate results.
I have a couple of small suggestions, just about organization. Take or leave either of them:

1. In the Decon notebook, I think it would be really nice to display the original and deconvolved images side-by-side, so you can compare the difference more easily.
2. In the GLCM notebook, I'd suggest displaying the image right away. You talk about the image, but unless I can relate those details to what I'm seeing, I immediately forget what you listed off.
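For suggestion 1, a minimal matplotlib sketch along these lines might work (the helper name, figure size, and colormap are my own, not from the notebook):

```python
import matplotlib.pyplot as plt


def show_side_by_side(original, deconvolved, cmap="gray"):
    """Display the original and deconvolved images in one row for easy comparison."""
    fig, (ax_orig, ax_decon) = plt.subplots(1, 2, figsize=(10, 5))
    ax_orig.imshow(original, cmap=cmap)
    ax_orig.set_title("Original")
    ax_decon.imshow(deconvolved, cmap=cmap)
    ax_decon.set_title("Deconvolved")
    for ax in (ax_orig, ax_decon):
        ax.axis("off")  # hide pixel-coordinate ticks
    fig.tight_layout()
    return fig
```

In a notebook, calling `show_side_by_side(img, decon_img)` after the deconvolution cell would render both panels in one output.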
Hi @elevans
I would change the paragraph that says "this notebook utilizes the Richardson-Lucy Total Variation (RLTV) algorithm, which has significantly improved axial resolution over the standard RL algorithm"
The RLTV algorithm is used to limit noise, which can be a problem with the standard RL algorithm; however, there is not a large difference in 'resolution' between the standard and TV-regularized versions. (As a side note, resolution is a pretty nuanced topic: technically, deconvolution improves contrast-based resolution measures, like the Rayleigh criterion, but does not restore out-of-band frequencies to a significant extent.)
See the description and figure below from my deconvolution/deep learning workshop. Even though the workshop used my OpenCL Python version of RL, the linked figure was actually generated a long time ago, for the 2015 ImageJ Conference, using the version of RL from imagej-ops.
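For reference, the TV-regularized RL update (in the style of Dey et al. 2004) can be sketched in a few lines of NumPy/SciPy. This is a simplified 2D illustration, not the imagej-ops or OpenCL implementation; the function name and default parameters are my own, and setting `reg=0` recovers the standard RL update that can amplify noise:

```python
import numpy as np
from scipy.signal import fftconvolve


def richardson_lucy_tv(image, psf, num_iter=25, reg=0.002, eps=1e-12):
    """Richardson-Lucy deconvolution with an optional total-variation
    regularization term in the denominator of the multiplicative update."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(num_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        correction = fftconvolve(ratio, psf_mirror, mode="same")
        if reg > 0:
            # TV term: divergence of the normalized gradient of the estimate,
            # which suppresses noise amplification while preserving edges
            gy, gx = np.gradient(estimate)
            norm = np.sqrt(gx**2 + gy**2) + eps
            div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
            estimate = estimate * correction / (1.0 - reg * div)
        else:
            # plain RL multiplicative update
            estimate = estimate * correction
    return estimate
```

With a small `reg` the result is close to standard RL but with noise growth damped across iterations.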
Thanks @bnorthan! I'll update my text to state that RLTV limits the noise that standard RL can amplify.
Awesome stuff @elevans .. you taught me some stuff today! 😄
This PR adds two new jupyter notebooks to the use case section in the PyImageJ documentation. These notebooks do the following:
Both of these notebooks only need ImageJ2 and no legacy :tada: so we don't need any changes to the code base. I also reorganized the use case section into subsections:
The goal is to use these notebooks in the upcoming I2K 2023 workshop.