WilhelmusLab / IceFloeTracker.jl

Julia package for ice floe tracker
https://wilhelmuslab.github.io/IceFloeTracker.jl/
MIT License

Image sharpening is dependent on image size #418

Open · danielmwatkins opened this issue 2 months ago

danielmwatkins commented 2 months ago

The default settings of IceFloeTracker.imsharpen include parameters rblocks and cblocks that divide an image into a fixed number of row and column blocks. For a small image, the default values produce many blocks with only a few pixels each, while for a large image the blocks become large and the image may be undersharpened. For consistent results, rblocks and cblocks should depend on the image size so that the blocks are approximately square and of comparable size across images.
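For illustration, a minimal sketch of the problem (the 8×8 grid here is assumed purely for the sake of the example; the real defaults live in the imsharpen docstring):

```julia
# Assumed fixed 8x8 block grid; per-block pixel footprint scales with the image.
for (h, w) in [(400, 600), (4000, 6000)]
    rblocks, cblocks = 8, 8
    println("image $(h)x$(w): blocks of about $(h ÷ rblocks)x$(w ÷ cblocks) pixels")
end
# image 400x600: blocks of about 50x75 pixels
# image 4000x6000: blocks of about 500x750 pixels
```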

cpaniaguam commented 1 month ago

@danielmwatkins Have you thought of a specific way to do this? An easy way would be to use the gcd = gcd(width, height) of the image's width and height and choose rblocks, cblocks = width/gcd, height/gcd, but this would require a careful choice of the image size.
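A minimal sketch of the gcd idea (I've taken rblocks from the height and cblocks from the width so the blocks come out square; swap if imsharpen uses the opposite convention):

```julia
h, w = 400, 600
g = gcd(h, w)                      # 200
rblocks, cblocks = h ÷ g, w ÷ g    # (2, 3) -> square 200x200 blocks
# Caveat: for a 401 x 600 image the gcd is 1, giving one-pixel blocks,
# which is why the image size would have to be chosen carefully.
```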

danielmwatkins commented 1 month ago

@cpaniaguam That's basically the approach I was thinking about, depending on what gcd stands for in this case. There is likely some best-performing partition size that depends on the spatial scale of the features we are trying to see. We'll want some tolerance for the image size not being exactly divisible by the block size. I could also see it being important to make sure that if a segment is all ice, we don't stretch the histogram as much as we would for a mix of ice and water.

I wonder if the block-based method sometimes results in discontinuities in the level of sharpening?

cpaniaguam commented 1 month ago

@danielmwatkins gcd = greatest common divisor

danielmwatkins commented 1 month ago

Gotcha. Actually I was thinking more along the lines of making the parameter something like block_size=200, and then estimating the number of blocks that would make the block size closest to the desired value.
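Something like the sketch below (block_size and estimate_blocks are hypothetical names, not part of the package); rounding also gives some tolerance when the image size isn't exactly divisible by the target block size:

```julia
# Hypothetical helper: choose the block grid that gets closest to a target block size.
function estimate_blocks(height, width; block_size=200)
    rblocks = max(1, round(Int, height / block_size))
    cblocks = max(1, round(Int, width / block_size))
    return rblocks, cblocks
end

estimate_blocks(400, 600)     # (2, 3)   -> ~200x200 pixel blocks
estimate_blocks(450, 650)     # (2, 3)   -> size not divisible, still works
estimate_blocks(4000, 6000)   # (20, 30) -> still ~200x200 pixel blocks
```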

danielmwatkins commented 3 weeks ago

@cpaniaguam Monica, Minki, and I spent some time looking through the MATLAB code. We determined that the MATLAB version uses the approach I was suggesting: it takes a set pixel dimension, then determines the number of blocks for the adaptive equalization.

There’s also a step that calculates entropy within a block to determine whether there is enough variability in pixel brightness to apply the adaptive equalization. I think with these two adjustments we could get rid of a lot of the oversharpening issue.
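Roughly, the entropy gate could look like this sketch (a sketch only: the threshold is a placeholder, pixel values are assumed to be grayscale in [0, 1], and the entropy is computed from a plain 256-bin histogram rather than whatever the MATLAB code uses):

```julia
# Shannon entropy of one tile from a 256-bin histogram of grayscale values in [0, 1].
function shannon_entropy(tile; nbins=256)
    counts = zeros(Int, nbins)
    for v in tile
        counts[clamp(floor(Int, v * nbins) + 1, 1, nbins)] += 1
    end
    p = counts ./ length(tile)
    return -sum(q * log2(q) for q in p if q > 0)
end

entropy_threshold = 4.0        # placeholder, not the value used in MASTER.m
tile = rand(200, 200)          # stand-in for one block of a grayscale image
if shannon_entropy(tile) > entropy_threshold
    println("enough variability: apply adaptive equalization to this tile")
else
    println("nearly uniform tile (e.g. all ice or open water): skip equalization")
end
```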

cpaniaguam commented 3 weeks ago

@danielmwatkins Thanks for this! Could you point to the relevant blocks of code where these operations are performed?

danielmwatkins commented 3 weeks ago

Yes, in the version that's on the Wilhelmus Lab Git repo, look at the code near line 435 of MASTER.m. It calculates the entropy of each tile, then applies the histogram equalization only if the entropy is larger than a threshold.