Open gselzer opened 6 years ago
Half of the edge artifacts were fixed by adding the logic present in the plugins that produces different output for `IntegerType` images and non-`IntegerType` images. Honestly, I have not figured out why the plugin does this; however, for the sake of consistency I have added it to the Ops. The other half of the edge artifacts are due to the miscalculation of height and width in `DefaultOverlayService` (see imagej/imagej-common#83), meaning that the plugin is not actually smoothing/sharpening the last row/column of its input images. I could fix this issue here by adding `<=` to the driving for-loops, but that seems like avoiding the real issue.
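To illustrate the loop-bound symptom, here is a minimal, self-contained Java sketch (with hypothetical dimensions — this is not the actual `DefaultOverlayService` code): if the reported extents come out one too small, a `<` loop skips the last row and column, and switching to `<=` merely compensates for the bad extents rather than fixing them.

```java
public class LoopBounds {
    // Counts how many pixels a filter loop would visit, given extents that
    // are (hypothetically) miscalculated as one less than the true size.
    public static int countVisited(int width, int height, boolean inclusive) {
        int maxX = width - 1, maxY = height - 1; // off-by-one extents
        int count = 0;
        for (int y = 0; inclusive ? y <= maxY : y < maxY; y++)
            for (int x = 0; inclusive ? x <= maxX : x < maxX; x++)
                count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countVisited(5, 4, false)); // 12: last row/column skipped
        System.out.println(countVisited(5, 4, true));  // 20: all 5x4 pixels visited
    }
}
```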
Hi @gselzer
Do you think the code could be a bit "DRYer"?? I can get almost the same result with a script that is a couple lines long.
```groovy
#@ OpService ops
#@ ImgPlus input
#@OUTPUT ImgPlus(label="Smoothed") smoothed

// DoubleType comes from imglib2; import added for completeness.
import net.imglib2.type.numeric.real.DoubleType

// Create the kernel
kernel = ops.create().kernel([[1/9, 1/9, 1/9], [1/9, 1/9, 1/9], [1/9, 1/9, 1/9]] as double[][], new DoubleType())

// Perform the convolution
smoothed = ops.filter().convolve(input, kernel)
```
Of course, there is an issue with my script in that the output and calculations are forced to be 32-bit. Potentially we could do the following: `ops.create().kernel()`. The `NaiveConvolveC` op handles the case of an ND image and a 2D kernel properly (that is, it applies the kernel to each slice). What do you think??
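For intuition, here is a plain-Java sketch of that slice-wise behavior (a hypothetical helper on raw arrays, not the Ops API): apply a 3x3 mean kernel to each 2D slice of a 3D stack, skip out-of-bounds neighbors, and round the double intermediate back to int so the output type can match the input instead of being forced to 32-bit float.

```java
public class SlicewiseSmooth {
    // Apply a 3x3 mean to each 2D slice of a 3D stack. Out-of-bounds
    // neighbors are skipped and the sum is normalized by the actual
    // neighborhood count, then rounded back to int.
    static int[][][] smooth(int[][][] stack) {
        int d = stack.length, h = stack[0].length, w = stack[0][0].length;
        int[][][] out = new int[d][h][w];
        for (int z = 0; z < d; z++)
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    double sum = 0;
                    int n = 0;
                    for (int dy = -1; dy <= 1; dy++)
                        for (int dx = -1; dx <= 1; dx++) {
                            int yy = y + dy, xx = x + dx;
                            if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue;
                            sum += stack[z][yy][xx];
                            n++;
                        }
                    out[z][y][x] = (int) Math.round(sum / n);
                }
        return out;
    }

    public static void main(String[] args) {
        int[][][] stack = { { {9, 9, 9}, {9, 0, 9}, {9, 9, 9} } };
        System.out.println(smooth(stack)[0][1][1]); // (8 * 9 + 0) / 9 = 8
    }
}
```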
That is a good point @bnorthan; I had little understanding/awareness of the convolve Ops, so I didn't even think to use them. At this point I have `DefaultSharpen` providing the same results as before, now using those two Ops; however, there is more cleaning to do. It is also correctly handling 3D images.
(Note that the left is the result from the plugin; I was struggling to get it to display a window title.)
@bnorthan both Ops now pass most of the work on to helper Ops. Using some `System.nanoTime` calls I found that the Op and the plugin had comparable completion times. Let me know if there is anything else that you see!
Hi @gselzer it looks to me like both smooth and sharpen are almost exactly the same. Also, it would be nice to be able to take advantage of the work you have done on slice-wise processing and normalization for other kernels. Do you think it would be a good idea to make an op called something like `SliceWise2DConvolveAndNormalize`, and then `DefaultSmooth` and `DefaultSharpen` would just call `SliceWise2DConvolveAndNormalize` with the appropriate kernel?? (In that case scale would be computed in the initialize function or something.)
@bnorthan yes, I was thinking about that as well; however, I did not do it because I had no idea where we would house such an Op, since it does multiple things. I suppose maybe it could go in `filter.convolve`?
Now that I think about it, #556 is also similar in computation; however, it is slightly different in the calculation of the value that goes into the output. If we were to do this I think we would want to make the Op a `SliceWise2DConvolve`, and then let each Op do its own normalization (because in the case of `findEdges` the scale would be 0 if it were computed as in smooth and sharpen). Would you agree, or do you see a better option?
Hi @gselzer
You are right, we should make the Op a `SliceWise2DConvolve` and then let each Op do its own normalization. In the case of `findEdges`, it looks like it is two convolves and then the root of the sum of squares of the two convolutions.
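A plain-Java sketch of that two-convolve computation (hypothetical helper, not the Ops API), using the two 3x3 Sobel kernels that IJ1's Find Edges is based on: convolve with each kernel, then take the root of the sum of squares. Note that both kernels sum to zero, which is why the smooth/sharpen-style scale normalization would divide by zero here.

```java
public class FindEdgesSketch {
    static final int[][] KX = { {1, 0, -1}, {2, 0, -2}, {1, 0, -1} }; // horizontal gradient
    static final int[][] KY = { {1, 2, 1}, {0, 0, 0}, {-1, -2, -1} }; // vertical gradient

    // Edge magnitude at an interior pixel: two convolutions, then
    // sqrt(gx^2 + gy^2). No scale normalization: both kernels sum to 0.
    static double edgeAt(int[][] img, int y, int x) {
        int gx = 0, gy = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                int v = img[y + dy][x + dx];
                gx += KX[dy + 1][dx + 1] * v;
                gy += KY[dy + 1][dx + 1] * v;
            }
        return Math.sqrt((double) gx * gx + (double) gy * gy);
    }

    public static void main(String[] args) {
        int[][] step = { {0, 0, 10}, {0, 0, 10}, {0, 0, 10} }; // vertical edge
        System.out.println(edgeAt(step, 1, 1)); // gx = -40, gy = 0, so 40.0
    }
}
```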
One other quick question. Do you think it is necessary to convert to double for the intermediate calculation?? Is there a reason for that?? In the original plugin it looks like everything is done in integer math.
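For reference, a small sketch of what is at stake in that question (hypothetical sums, not the plugin's actual code): plain integer division truncates, while integer math that adds half the scale before dividing rounds, matching the rounded double intermediate in this case.

```java
public class IntMathSketch {
    // A hypothetical neighborhood sum divided by the kernel scale, three ways.
    static int truncated(int sum, int scale)   { return sum / scale; }
    static int intRounded(int sum, int scale)  { return (sum + scale / 2) / scale; } // integer rounding trick
    static long dblRounded(int sum, int scale) { return Math.round((double) sum / scale); }

    public static void main(String[] args) {
        // sum = 14 over a 3x3 smooth kernel (scale = 9):
        System.out.println(truncated(14, 9));  // 1: integer division truncates 1.555...
        System.out.println(intRounded(14, 9)); // 2: (14 + 4) / 9
        System.out.println(dblRounded(14, 9)); // 2: double intermediate, then round
    }
}
```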
Hi @gselzer
A few more thoughts after looking more closely at the IJ1 implementations.
Smooth looks to me like it is just a mean. Or am I missing a detail?? If smooth is just a 3 by 3 mean, can we just re-use `DefaultMeanFilter` and pass it a 3 by 3 rectangle??
Find Edges is a bit specialized because some of the kernel coefficients would be 0, and using convolve would result in unneeded multiplications. Maybe there is a way to implement it by extending `AbstractNeighborhoodBasedFilter`.
Sharpen is implemented with convolution. Do you think it would be worth it to add a `Normalizer` function to `NaiveConvolve` to avoid looping through the pixels twice?? (i.e., instead of calling `Convolve` followed by `Divide`, integrate the normalization into `Convolve`, as is done in IJ1.)
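A sketch of what integrating the normalization could look like (hypothetical names, plain Java rather than the Ops API): accumulate the kernel scale in the same loop as the weighted sum and divide once per output pixel, skipping the division for zero-sum kernels like Find Edges.

```java
public class NormalizedConvolve {
    // One-pass convolution at an interior pixel: the kernel scale is
    // accumulated alongside the weighted sum, so no second Divide pass
    // over the image is needed. Zero-sum kernels are left unnormalized.
    static double convolveAt(double[][] img, double[][] kernel, int y, int x) {
        int kh = kernel.length, kw = kernel[0].length;
        double sum = 0, scale = 0;
        for (int ky = 0; ky < kh; ky++)
            for (int kx = 0; kx < kw; kx++) {
                sum += kernel[ky][kx] * img[y + ky - kh / 2][x + kx - kw / 2];
                scale += kernel[ky][kx];
            }
        return scale == 0 ? sum : sum / scale;
    }

    public static void main(String[] args) {
        double[][] img = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
        // IJ1-style sharpen kernel; its coefficients sum to 4.
        double[][] sharpen = { {-1, -1, -1}, {-1, 12, -1}, {-1, -1, -1} };
        System.out.println(convolveAt(img, sharpen, 1, 1)); // (60 - 40) / 4 = 5.0
    }
}
```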
Just some thoughts. Let me know if I've misunderstood anything.
@bnorthan I think the only thing preventing us from using `DefaultMeanFilter` for `DefaultSmooth` is this line here. If you see a way around this I would be happy to use Mean instead.
As far as `FindEdges` goes, you are probably right about `AbstractNeighborhoodBasedFilter` being faster / more efficient. I can try coding something up next week if we think it would be worthwhile to do that instead of using some `SliceWise2DConvolve`. I am just hoping that it will not be too difficult to manage cutting down / aligning the ND image correctly for the 2D kernel, compared to the way we have it working now.
As far as adding a `normalizer` function to `convolve` goes, I think this could be really powerful; however, I am also thinking about separation of concerns here. I feel like adding a normalizer function as a parameter (I think this is the way we would do it?) kind of goes beyond the scope of what a `convolve` function should do. I dunno...
Hi @gselzer
Have you tested to see if `DefaultMeanFilter` and `DefaultSmooth` produce the same result?? I think the line in question is used for rounding purposes, and the `setReal` function in `AbstractIntegerType` may serve the same purpose. What do you think?
Is `FindEdges` much of a priority right now given the ongoing redesign efforts?? I'm happy to review it further, but would it make sense to hold off until after the next major iteration of Ops is released??
You are probably right. No need to make `Convolve` too complicated unless there is an important reason. When people optimize for speed they may override the op with a native or GPU implementation anyway.
This PR adds two Ops adapted from the Sharpen and Smooth ImageJ core plugins.
This PR is currently a WIP because:
1) There are edge artifacts present in the Ops, and I have not yet figured out the difference between the Ops and the plugins. As far as I can tell, the Ops are summing the neighborhood pixels the same way the plugin does. Further investigation is needed.
2) The plugins should either be deprecated or become just wrappers around these Ops.