JuliaImages / ImageSegmentation.jl

Partitioning images into meaningful regions

add the Chan-Vese Segmentation for Gray #84

Open JKay0327 opened 2 years ago

JKay0327 commented 2 years ago

This is an implementation of Chan-Vese segmentation without vectorized code.

JKay0327 commented 2 years ago

Unlike my previous work (https://github.com/JKay0327/Chan-Vese), I have replaced the vectorized code with `for` loops. What's more, I found a lot of repeated calculations when computing the forward and backward differences, so I've done some work to eliminate them. I am not yet sure whether Chan-Vese works well on RGB images, so this version is only available for Gray images.
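
A minimal sketch of the reuse described above (illustrative only, not the PR's code): along one dimension, the forward difference at index i is the same value as the backward difference at index i+1, so a single loop can fill both buffers with one subtraction per pixel. The replicate-boundary convention at the ends is an assumption.

function finite_diffs!(fwd::AbstractVector, bwd::AbstractVector, ϕ::AbstractVector)
    n = length(ϕ)
    @inbounds for i in 1:n-1
        d = ϕ[i+1] - ϕ[i]  # computed once
        fwd[i]   = d       # forward difference at i
        bwd[i+1] = d       # backward difference at i+1
    end
    fwd[n] = 0             # replicate-boundary convention (assumption)
    bwd[1] = 0
    return fwd, bwd
end

ϕ = rand(8)
fwd, bwd = finite_diffs!(similar(ϕ), similar(ϕ), ϕ)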

JKay0327 commented 2 years ago

For 3D images, I take the image mri from TestImages as an example; here is the demo:

julia> using Images, TestImages
julia> mri = testimage("mri")
julia> mosaicview(mri, ncol=9)

[image: mri]

julia> mri_seg = chan_vese(mri, μ=0.005, λ₁=0.8, λ₂=1.0, tol=1e-3, max_iter=2000, Δt=0.4, reinitial_flag=false)

[image: mri_seg_1852]

It seems that chan_vese already works well when the input image has size 226 × 186 × 27.

I find it reaches convergence at iteration 1852, so it actually works well on 3D images!
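
For reference, here is a hedged sketch of a typical tolerance-based stopping rule for this kind of level-set evolution (the PR's exact criterion may differ; `update_step!` stands in for one Chan-Vese update):

function evolve!(ϕ, update_step!; tol=1e-3, max_iter=2000)
    ϕold = similar(ϕ)
    for iter in 1:max_iter
        copyto!(ϕold, ϕ)
        update_step!(ϕ)                       # one in-place level-set update
        δ = sum(abs, ϕ .- ϕold) / length(ϕ)   # mean absolute change per element
        δ < tol && return iter                # converged at this iteration
    end
    return max_iter
end

evolve!(rand(16, 16), ϕ -> ϕ .*= 0.5)  # toy update: converges long before max_iter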

johnnychen94 commented 2 years ago

The extension to the 3d case is great and I think it's worth a publication. Here's a very simple illustration of its superiority over frame-wise 2d processing:

using ImageSegmentation, TestImages, ImageShow, ImageCore
using StackViews

mri = Gray{Float64}.(testimage("mri"))
mri .+= 0.1 .* randn(size(mri)) # add small gaussian noise
mosaicview(mri, nrow=3, rowmajor=true)

mri_seg = chan_vese(mri, μ=0.005, λ₁=0.8, λ₂=1.0, tol=1e-3, max_iter=200, Δt=0.4, reinitial_flag=false);
mri_seg_per_frame =  chan_vese.(eachslice(mri, dims=3), μ=0.005, λ₁=0.8, λ₂=1.0, tol=1e-3, max_iter=200, Δt=0.4, reinitial_flag=false);

mosaicview(mri_seg, StackView(mri_seg_per_frame), nrow=6, rowmajor=true) .|> Gray

[image: out]

and if we check a slice:

mosaicview(mri_seg[70, :, :]', StackView(mri_seg_per_frame)[70, :, :]'; npad=10) .|> Gray

[image: out]

Quite obviously, directly processing the 3d case produces a more consistent and smoother result.
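
One way to put a number on "more consistent" (a sketch reusing mri_seg and mri_seg_per_frame from above): count the voxels whose label flips between neighbouring slices; fewer flips means a smoother result along the slice axis.

slice_flips(seg) = [count(seg[:, :, k] .!= seg[:, :, k+1]) for k in 1:size(seg, 3)-1]

sum(slice_flips(mri_seg)), sum(slice_flips(StackView(mri_seg_per_frame)))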

timholy commented 2 years ago

Other optional factors to consider:

timholy commented 2 years ago

@johnnychen94 that's a great comparison. Some might argue that the interpretation of the parameter values might need to differ between 2 and 3 dimensions (for example, the ratio of surface area-to-volume differs), so for a truly fair comparison you might need to tweak the parameters (at least μ) until you get the best match between the two results. Nevertheless, I am confident based on prior experience that the fundamental observation will hold: you'll get better, more consistent results in a natively 3d algorithm.
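
A sketch of that tweak (the μ grid is illustrative; mri and mri_seg are the variables from the comparison above): pick the μ whose frame-wise 2d result agrees best with the 3d segmentation before drawing conclusions.

agreement(a, b) = count(a .== b) / length(a)

μs = [0.001, 0.0025, 0.005, 0.01, 0.02]
scores = map(μs) do μ
    seg2d = StackView(chan_vese.(eachslice(mri, dims=3), μ=μ, λ₁=0.8, λ₂=1.0,
                                 tol=1e-3, max_iter=200, Δt=0.4, reinitial_flag=false))
    agreement(seg2d, mri_seg)
end
μs[argmax(scores)]  # best-matching μ for the frame-wise runs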

JKay0327 commented 2 years ago

This is a summary of the work we've done in this PR:

The current performance of the chan_vese algorithm in Julia is as follows:

julia> using Images, TestImages

julia> using ImageSegmentation

julia> img2d = testimage("cameraman");

julia> @btime chan_vese(img2d, μ=0.25, λ₁=1.0, λ₂=1.0, tol=1e-3, max_iter=200, Δt=0.5, reinitial_flag=false);
462.876 ms (119 allocations: 6.46 MiB)

The performance of scikit-image's implementation is as follows:

import time
from skimage import data, img_as_float
from skimage.segmentation import chan_vese

image = img_as_float(data.camera())
start = time.clock()  # time.clock() was removed in Python 3.8; use time.perf_counter() there
for i in range(100):
    cv = chan_vese(image, mu=0.25, lambda1=1, lambda2=1, tol=1e-3, max_iter=200, dt=0.5, init_level_set="checkerboard", extended_output=True)
end = time.clock()
print('Running time: %s Seconds' % ((end - start) / 100))

Running time: 3.2634375 Seconds

Using chan_vese on 2D input:

julia> using Images, TestImages, MosaicViews

julia> using ImageSegmentation

julia> img2d = testimage("cameraman");

julia> size(img2d) # 2D image
(512, 512)

julia> img2d_seg = chan_vese(img2d, max_iter=200);

julia> mosaicview(img2d, img2d_seg, nrow=1, rowmajor=true)

[image: cameraman_seg]

Using chan_vese on 3D input:

julia> img3d = testimage("mri");

julia> size(img3d) # 3D image
(226, 186, 27)

julia> img3d_seg = chan_vese(img3d, μ=0.005, max_iter=2000);

julia> mosaicview(img3d, img3d_seg, ncol=9, rowmajor=true)

[image: mri_seg]

Some problems still remain; we can solve them in the next PR:

JKay0327 commented 2 years ago

A list to do before merge:

If there is anything I have omitted, please tell me and I'll extend this todo-list.

Unresolved comments: (Edited by: @johnnychen94)

After merging this:

timholy commented 2 years ago

You've brought this to an excellent state, and both the generalization to 3d and the performance advantage over skimage are gratifying. WRT the performance advantage, can you clarify whether both converged after the same number of iterations? It's important because different algorithms might, in principle, simply have different tolerances on convergence (I haven't checked), and it would not be fair to penalize skimage if the only difference is that it chooses to reach a higher standard before termination. Just checking the numeric value of the tolerance is sometimes insufficient unless you're certain that the two are being applied to the same measure (e.g., identical objective function). If anything is unclear about the comparison, you could try forcing early convergence in both after, say, 10 iterations, as in that case it seems almost certain that both will use the full 10 iterations (you could be certain if you can show their timing is linear in the number of iterations you allow) and then you know you'll be using comparable timing data.
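
A sketch of that check on the Julia side (the skimage run would be capped the same way): fix max_iter at a few small values and confirm the runtime grows roughly linearly, so the timings reflect per-iteration cost rather than different stopping rules.

using BenchmarkTools, ImageSegmentation, TestImages

img2d = testimage("cameraman")
for n in (10, 20, 40)
    t = @belapsed chan_vese($img2d, μ=0.25, max_iter=$n, reinitial_flag=false)
    println("max_iter = ", n, ": ", round(t, digits=3), " s")
end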

we can solve these problems in next PR: Further improve the performance of chan_vese. Study whether to split chan_vese into "objective function" and "solver".

I agree that both of these are better left for a future PR.

But the algorithm for colorant images is different from the one for 3D images

Can you elaborate on this? AFAICT this is merely an issue of using ColorVectorSpace.Future.abs2 (https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r716017533) rather than (x-y)^2, dropping/modifying a couple of type annotations or single lines (https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r715359471, https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r715359834, and deleting img = float64.(channelview(img))). That's just dropping some unnecessarily-specific choices, could be done with a couple minutes of work, and if it suffices it's a big advance. (If it turns out to be more complicated, then I agree it's better left for a future PR.)
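
To illustrate the substitution (a sketch; sqdiff is a hypothetical helper, not the PR's code): ColorVectorSpace.Future.abs2 yields a real-valued squared difference for both Gray and RGB pixels, whereas squaring a color difference directly is not defined for RGB.

using ColorVectorSpace, Colors

sqdiff(px, c) = ColorVectorSpace.Future.abs2(px - c)

sqdiff(Gray(0.8), Gray(0.5))                    # ≈ 0.09, a plain real number
sqdiff(RGB(0.8, 0.2, 0.1), RGB(0.5, 0.2, 0.1))  # also a plain real number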

If there is anything I have omitted, please tell me and I'll extend this todo-list.

Hah, I see @johnnychen94 and I are almost simultaneous again!

JKay0327 commented 2 years ago

Any reason not to do https://github.com/JuliaImages/ImageSegmentation.jl/pull/84/files#r716011493? If there is, I'm fine with it, but worth checking whether it's an oversight.

It is shown that this causes a nearly 100 ms performance reduction, and I have given a benchmark in https://github.com/JuliaImages/ImageSegmentation.jl/pull/84/files#r716011493.

timholy commented 2 years ago

Sorry if the sequence was unclear. I made a suggestion, you pointed out the problem, I realized I had made an error and amended the suggestion. The performance should be identical because it produces the same output as your solution, for substantially less code, while also being fully general across dimensionality.

EDIT: now I see you already added the missing .... Sorry I failed to understand the point you were trying to make.

JKay0327 commented 2 years ago

In my test, I've already changed the code to:

# using kernelfactors (from ImageFiltering)
initial_level_set(sz) = broadcast(*, kernelfactors(ntuple(i -> sin.((pi / 5) * (0:sz[i]-1)), length(sz)))...)
...
julia> @btime chan_vese(img_gray, μ=0.25, λ₁=1.0, λ₂=1.0, tol=1e-3, max_iter=200, Δt=0.5, reinitial_flag=false);
  575.990 ms (115 allocations: 6.46 MiB)

It produces the same output, but performance is worse. Do you mean that we first have to find a way to make initial_level_set fully general across dimensionality, and then consider the performance of this method?

timholy commented 2 years ago

EDIT: now I see that you had already fixed my first concern in your initial post on this topic, so I failed to realize that the performance problem you were worried about wasn't an artifact. Sorry about the confusion. The text below should still be useful, however.

So this is what makes it surprising:

julia> using ImageFiltering

julia> function initial_level_set_yours(shape::Tuple{Int64, Int64})
           x₀ = reshape(collect(0:shape[begin]-1), shape[begin], 1)
           y₀ = reshape(collect(0:shape[begin+1]-1), 1, shape[begin+1])
           𝚽₀ = @. sin(pi / 5 * x₀) * sin(pi / 5 * y₀)
       end
initial_level_set_yours (generic function with 1 method)

julia> initial_level_set_mine(sz) = broadcast(*, kernelfactors(ntuple(i -> sin.((pi / 5) * (0:sz[i]-1)), length(sz)))...)
initial_level_set_mine (generic function with 1 method)

julia> ϕyours = initial_level_set_yours((5, 8));

julia> ϕmine = initial_level_set_mine((5, 8));

julia> ϕyours == ϕmine
true

However

julia> println(summary(ϕyours))
5×8 Matrix{Float64}

julia> println(summary(ϕmine))
5×8 OffsetArray(::Matrix{Float64}, 1:5, 1:8) with eltype Float64 with indices 1:5×1:8

so the types are different. This suggests you have an inferrability problem (slower and more allocations are typical symptoms). While the best fix is to solve the inferrability problem, in the short term perhaps you can use parent:

julia> println(summary(parent(ϕmine)))
5×8 Matrix{Float64}

which gives the exact same type, too.

Do you mean that we first have to find a way to make initial_level_set fully general across dimensionality, and then consider the performance of this method?

There's no absolute obligation, but there seems to be no reason to avoid being general (shorter and more general code is typically preferred).
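
A sketch combining the two points above: kernelfactors makes the initializer dimension-general, and parent strips the offset wrapper so the container type matches the hand-written 2d version (the short-term fix; solving the inferrability issue itself would still be preferable).

using ImageFiltering  # provides kernelfactors

initial_level_set(sz) =
    parent(broadcast(*, kernelfactors(ntuple(i -> sin.((pi / 5) * (0:sz[i]-1)), length(sz)))...))

summary(initial_level_set((5, 8)))     # "5×8 Matrix{Float64}"
summary(initial_level_set((5, 8, 4)))  # works unchanged for 3d sizes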