Thanks for reporting this.

What is the Julia code snippet that you used? MarchingCubes supports large integers:

```julia
mc = MC(vol, Int64)
march(mc)
```

I also suspect a 1-based indexing porting error here.
> The source data is 30 GB, so we'd need to arrange some way to transfer this if you are interested in trying to reproduce the result.
I'm always interested in correctness! Especially in this case, where the same algorithm is used and should return the same number of points/faces. Crafting a smaller reproducer would indeed help ...
The type of `mc` is: `typeof(mc) = MC{Float16, Int64}`
```julia
using CSV
using MarchingCubes
using Meshes
using SparseArrays
using FileIO
using StaticArrays
using Tables
using PlyIO

# Write vertices, normals and faces of a marched volume to a PLY file.
output(PlyIO::Module, m::MC, fn::AbstractString = "test.ply") = begin
    ply = PlyIO.Ply()
    push!(
        ply,
        PlyIO.PlyElement(
            "vertex",
            PlyIO.ArrayProperty("x", map(v -> Float32(v[1]), m.vertices)),
            PlyIO.ArrayProperty("y", map(v -> Float32(v[2]), m.vertices)),
            PlyIO.ArrayProperty("z", map(v -> Float32(v[3]), m.vertices)),
            PlyIO.ArrayProperty("nx", map(n -> Float32(n[1]), m.normals)),
            PlyIO.ArrayProperty("ny", map(n -> Float32(n[2]), m.normals)),
            PlyIO.ArrayProperty("nz", map(n -> Float32(n[3]), m.normals)),
        ),
    )
    vertex_indices = PlyIO.ListProperty("vertex_indices", UInt8, Int32)
    for i ∈ eachindex(m.triangles)
        push!(vertex_indices, m.triangles[i] .- 1)  # PLY face indices are 0-based
    end
    push!(ply, PlyIO.PlyElement("face", vertex_indices))
    PlyIO.save_ply(ply, fn, ascii = true)
    nothing
end

# Read one CSV slice into the preallocated `dfmap` matrix.
function read_microns(filename, dfmap)
    dfmap::Matrix{Int64} .= CSV.File(filename) |> Tables.matrix
end

# Fill a slab of the level-set array `phitemp`: voxels whose label matches
# `unique` get -1, all others get +1.
function fill_phitemp(phitemp, countslice, startframe, xval, yval, k_start, k_end, leni, lenj, dfmap, unique, dir_name, frame_name)
    layers = countslice + startframe - 1
    current_z = (countslice - 1) * 5
    println(layers)
    filename = string(dir_name, frame_name, layers, "_", xval, "_", yval, ".csv")
    read_microns(filename, dfmap)
    for k in k_start:k_end
        for j in 1:lenj
            for i in 1:leni
                phitemp[i, j, current_z + k] = Float16(Int8(dfmap[i, j] == unique) * (-2) + 1)
            end
        end
    end
end

function make_ply(uniquelist, slices, startframe, xval, yval, dir_name, frame_name)
    filename = string(dir_name, frame_name, startframe, "_", xval, "_", yval, ".csv")
    dfmap_temp = CSV.File(filename) |> Tables.matrix
    leni, lenj = size(dfmap_temp)
    @show leni, lenj
    dfmap = Matrix{Int64}(undef, leni, lenj)
    phitemp = zeros(Float16, leni, lenj, (slices - 1) * 5)
    current_z = 0
    for unique in uniquelist
        # The first slice fills k = 1:3, interior slices k = -1:3 and the last
        # slice k = -1:0, so adjacent slabs overlap at the slice boundaries.
        countslice = 1
        k_start = 1
        k_end = 3
        fill_phitemp(phitemp, countslice, startframe, xval, yval, k_start, k_end, leni, lenj, dfmap, unique, dir_name, frame_name)
        for countslice in 2:slices-1
            k_start = -1
            k_end = 3
            fill_phitemp(phitemp, countslice, startframe, xval, yval, k_start, k_end, leni, lenj, dfmap, unique, dir_name, frame_name)
        end
        countslice = slices
        k_start = -1
        k_end = 0
        fill_phitemp(phitemp, countslice, startframe, xval, yval, k_start, k_end, leni, lenj, dfmap, unique, dir_name, frame_name)
        mc = MC(phitemp)
        @show typeof(mc)
        march(mc)
        filename_out = string(dir_name, unique, "_", startframe, "_", xval, "_", yval, "_", "16.ply")
        output(PlyIO, mc, filename_out)
    end
end

uniquelist = [864691136812081779]
slices = 400
startframe = 17241
xval = 170398
yval = 61241
dir_name = string(@__DIR__, "/../Neuron_transport_data/")
frame_name = "frame2_"
make_ply(uniquelist, slices, startframe, xval, yval, dir_name, frame_name)
```
> `vertex_indices = PlyIO.ListProperty("vertex_indices", UInt8, Int32)`

Why `Int32` there? Could you please add `@show length(m.vertices) length(m.triangles) length(m.normals)` after the mentioned line, so that we can make sure the bug isn't PlyIO related?
I've checked against the C++ code of http://thomas.lewiner.org/publication_page.php?pubkey=marching_cubes_jgt.html for all simple cases (torus, hyperboloid, sphere, plane, ...). Here is the debug output; we have the same results in Julia and from the vanilla C++ code:
```
(i, case) = (1, :cushin)
Writing 302 vertices and 600 triangles using `PlyIO`.
nverts=302 ntrigs=0.
Marching Cubes ran in 0.001886 secs.
acc=0 nverts=302 ntrigs=600.
Marching Cubes::writePLY(ply/cpp-1.ply)... 302 vertices written
600 triangles written
(i, case) = (2, :torus2)
Writing 11333 vertices and 22620 triangles using `PlyIO`.
nverts=11333 ntrigs=0.
Marching Cubes ran in 0.002755 secs.
acc=0 nverts=11333 ntrigs=22620.
Marching Cubes::writePLY(ply/cpp-2.ply)... 11333 vertices written
22620 triangles written
(i, case) = (3, :sphere)
Writing 792 vertices and 1572 triangles using `PlyIO`.
nverts=792 ntrigs=0.
Marching Cubes ran in 0.002061 secs.
acc=0 nverts=792 ntrigs=1572.
Marching Cubes::writePLY(ply/cpp-3.ply)... 792 vertices written
1572 triangles written
(i, case) = (4, :plane)
Writing 7038 vertices and 13720 triangles using `PlyIO`.
nverts=7038 ntrigs=0.
Marching Cubes ran in 0.002188 secs.
acc=0 nverts=7038 ntrigs=13720.
Marching Cubes::writePLY(ply/cpp-4.ply)... 7038 vertices written
13720 triangles written
(i, case) = (5, :cassini)
Writing 554 vertices and 1104 triangles using `PlyIO`.
nverts=554 ntrigs=0.
Marching Cubes ran in 0.002004 secs.
acc=0 nverts=554 ntrigs=1104.
Marching Cubes::writePLY(ply/cpp-5.ply)... 554 vertices written
1104 triangles written
(i, case) = (6, :blooby)
Writing 2168 vertices and 4352 triangles using `PlyIO`.
nverts=2168 ntrigs=0.
Marching Cubes ran in 0.002215 secs.
acc=0 nverts=2168 ntrigs=4352.
Marching Cubes::writePLY(ply/cpp-6.ply)... 2168 vertices written
4352 triangles written
(i, case) = (7, :hyperboloid)
Writing 12297 vertices and 24118 triangles using `PlyIO`.
nverts=12297 ntrigs=0.
Marching Cubes ran in 0.002534 secs.
acc=0 nverts=12297 ntrigs=24118.
Marching Cubes::writePLY(ply/cpp-7.ply)... 12297 vertices written
24118 triangles written
```
I'm now testing with a resolution of (500, 1_000, 2_000) (1 billion cells) ...

=> I have the same results as the vanilla C++ code.
> mesh generated in C with the LewinerMarchingCubes from NeurolabUSC, modified to accept Int64

How was the code modified?
Could you try https://github.com/JuliaGeometry/MarchingCubes.jl/pull/22 ?
> `vertex_indices = PlyIO.ListProperty("vertex_indices", UInt8, Int32)`

~~Why `Int32` there? Could you please add `@show length(m.vertices) length(m.triangles) length(m.normals)` after the mentioned line, so that we can make sure the bug isn't PlyIO related?~~
`Int32` is more than sufficient in my PLY case and should be okay in almost any circumstance, as a large model has tens of millions of vertices, not billions. The `Int32` problem comes when the number of voxels is > 2e9, which is actually pretty easy to reach. However, even in your test with 1 billion voxels, you won't hit the `Int32` limit.
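For illustration, a quick REPL check shows how the voxel count from the original report (2000 × 2000 × 1995) overflows `Int32`:

```julia
julia> typemax(Int32)
2147483647

julia> 2000 * 2000 * 1995  # linear voxel count of the 30 GB case
7980000000
```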
Does your test check for zero face volumes? The meshes look nearly identical to the LewinerMarchingCubes output, but with about 0.1% of faces having zero volume.
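A minimal sketch of such a check, assuming the `vertices`/`triangles` fields used in the reproducer above, and treating SpaceClaim's zero-volume faces as zero-area triangles; `count_zero_area` is a hypothetical helper, not part of MarchingCubes.jl or PlyIO:

```julia
using LinearAlgebra

# Count triangles whose vertices are (nearly) collinear, i.e. zero-area faces.
count_zero_area(mc; atol = 1e-6) = count(mc.triangles) do t
    a, b, c = mc.vertices[t[1]], mc.vertices[t[2]], mc.vertices[t[3]]
    norm(cross(b .- a, c .- a)) ≤ atol  # the cross product norm is twice the triangle area
end
```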
My modifications were just to change some `int` to `long` and increase `ALLOC_SIZE`, because I had > 2e9 voxels. You can diff the `MarchingCubes.h` and `MarchingCubes1.2.c` against the LewinerMarchingCubes to see the differences. Note that these files also have changes associated with reading data from CSV files. You can find the files at https://github.com/elbert5770/julia_educational_examples.
I don't know if this is related, but SpaceClaim finds no problems with cushin from LewinerMarchingCubes, while it finds 'Inconsistent triangles' with MarchingCubes.jl. All the other examples look fine in SpaceClaim. The models look the same visually.
> However, even in your test with 1 billion voxels, you won't hit the `Int32` limit.

Silly me, I'll check with a bigger case.
> Does your test check for zero face volumes?

No, I don't recall the C++ code doing that either.
> You can diff the `MarchingCubes.h` and `MarchingCubes1.2.c` against the LewinerMarchingCubes to see the differences.

:+1:
> I don't know if this is related, but SpaceClaim finds no problems with cushin from LewinerMarchingCubes, while it finds 'Inconsistent triangles' with MarchingCubes.jl

I'll try to check the :cushin test case again. Could you specify at what resolution this happened (is it different from the nx=60, ny=60, nz=60 defaults)?
> You can diff the `MarchingCubes.h` and `MarchingCubes1.2.c` against the LewinerMarchingCubes to see the differences.

Diffing is difficult because you are using a C++ -> C translated algorithm and not the vanilla implementation from the paper. It would help if we could reduce the bug to the cushin example.
Here's the C version I started from: https://github.com/neurolabusc/LewinerMarchingCubes. The diff is pretty straightforward, but don't look in the `cpp` folder; use `MarchingCubes.c` and `MarchingCubes.h`. I assume their LUTs are identical to the original code. They have an interesting quote: "This is a simple NIfTI to mesh implementation using the AFNI marching cubes code. Ziad Saad ported the C++ algorithm of Thomas Lewiner to C. Ziad's original port used the lookup table dated 13/07/2002, whereas this repository updates this to the tables from 12/08/2002. This updated table provides consistent winding order for triangles."
I used the default resolution for the test case. It would be nice if the cushin example isolated the bug. If you find it, I can quickly test against the 30 GB case.
@elbert5770 and @t-bltg, the rationale for my repository was to fix winding errors in AFNI, as described here. I would suggest you visualize results in a tool that uses winding order to distinguish front and back faces (e.g. turn off Back Face Lighting), for example Surfice or the web-based NiiVue.
I do also have a WASM port that allows you to validate your results with an online web demo.
I note that Julia already has QuadricMeshSimplification, which I ported from Sven Forstmann's incredibly efficient implementation. I would suggest that the README page for MarchingCubes also provide an example that applies simplification following meshification, as huge volumes can create immensely complicated meshes.
Thanks @neurolabusc for the clarification and the pointers here.
What I fail to understand is that we already use the latest table from 12/08/2002. So I still need to figure out where the winding fix is defined and what we missed here ...
> I would suggest that the README page for MarchingCubes also provide an example that applies simplification following meshification

Good point, I've opened a dedicated issue about this (https://github.com/JuliaGeometry/MarchingCubes.jl/issues/23).
Side note: how can I read an `.nii` file in Julia? => https://github.com/JuliaNeuroscience/NIfTI.jl
This is a nice reproducer to play with (bet.nii from https://github.com/neurolabusc/LewinerMarchingCubes):

```julia
using MarchingCubes
using PlyIO
using NIfTI

main() = begin
    ni = niread("bet.nii")
    @show size(ni)
    mc = MC(Float32.(ni.raw))
    println("lewiner")
    march(mc, 128)
    MarchingCubes.output(PlyIO, mc)
    println("legacy")
    march_legacy(mc, 128)
    MarchingCubes.output(PlyIO, mc)
    return
end

main()
```
And the corresponding output from https://github.com/afni/afni/issues/342:

```
# old 1.0.20020713
$ ./MarchingCubes bet.nii 128 0
input voxels: 142 178 142
output mesh vert: 422219 tris: 842822 ms: 63
# patched 1.0.20020812
$ ./MarchingCubes bet.nii 128 0
input voxels: 142 178 142
output mesh vert: 422216 tris: 842664 ms: 66
# legacy
$ ./MarchingCubes bet.nii 128 1
input voxels: 142 178 142
output mesh vert: 421286 tris: 840992 ms: 63
# MarchingCubes.jl
$ julia bug.jl
size(ni) = (142, 178, 142)
lewiner
Writing 422229 vertices and 842798 triangles using `PlyIO`.
legacy
Writing 421286 vertices and 840992 triangles using `PlyIO`.
```
@elbert5770 I think I've found the bug, despite being careful in porting the algorithm (see https://github.com/JuliaGeometry/MarchingCubes.jl/pull/24). Here is the new output, matching the C code:

```
$ julia bug.jl
size(ni) = (142, 178, 142)
lewiner
Writing 422216 vertices and 842664 triangles using `PlyIO`.
legacy
Writing 421286 vertices and 840992 triangles using `PlyIO`.
```
Could you check the `main` branch against your 30 GB file?
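For reference, one standard way to try the unreleased branch (plain Pkg usage, nothing project-specific):

```julia
using Pkg
# Track the development branch instead of the registered release:
Pkg.add(url = "https://github.com/JuliaGeometry/MarchingCubes.jl.git", rev = "main")
```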
Unfortunately, it doesn't seem to impact the tests, and I don't want to add a binary file (bet.nii) to the repository, so this won't be covered by CI unless we find a small test.
Working on it now
@t-bltg

> Unfortunately, it doesn't seem to impact the tests, and I don't want to add a binary file (bet.nii) to the repository, so this won't be covered by CI unless we find a small test.
Can I suggest that for your test cases you generate the voxel-based images using a simple formula? This will provide a robust test without requiring much disk space in the repository. The most popular shape for this would be a static scene with some 3D metaballs. The figure from the Lewiner paper includes interlocking tori to show the edge cases where the classic marching cubes does poorly. For my own software (e.g. MRIcroGL) I use the borg surface. I am not familiar with Julia, but here is a minimal Matlab script for borg voxels. The formula creates voxels with continuous intensities, and you choose a threshold brightness to define the mesh isosurface.
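A rough Julia sketch of that idea (assuming the usual borg formula sin(xy) + sin(yz) + sin(zx); grid size and isovalue chosen arbitrarily):

```julia
using MarchingCubes

# Implicit "borg" surface: continuous intensities, meshed at a chosen isovalue.
borg(x, y, z) = sin(x * y) + sin(y * z) + sin(z * x)

n = 64
ax = range(-3, 3; length = n)
vol = [Float32(borg(x, y, z)) for x in ax, y in ax, z in ax]

mc = MC(vol)
march(mc, 0.1f0)  # the threshold brightness defining the isosurface
@show length(mc.vertices) length(mc.triangles)
```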
I get exactly the same errors in SpaceClaim, and 8514 faces with zero volume, with the following packages:

```
Status `~/.julia/environments/v1.9/Project.toml`
[6e4b80f9] BenchmarkTools v1.3.2
[336ed68f] CSV v0.10.11
[944b1d66] CodecZlib v0.7.3
[a93c6f00] DataFrames v1.6.1
[5789e2e9] FileIO v1.16.1
[f67ccb44] HDF5 v0.17.1
[42fd0dbc] IterativeSolvers v0.9.3
[033835bb] JLD2 v0.4.38
[da04e1cc] MPI v0.20.19
[3da0fdf6] MPIPreferences v0.1.10
[299715c1] MarchingCubes v0.1.8 `https://github.com/JuliaGeometry/MarchingCubes.jl.git#main`
[eacbb407] Meshes v0.36.6
[42171d58] PlyIO v1.1.2
[90137ffa] StaticArrays v1.7.0
[286e6d88] SymRCM v0.2.1
[bd369af6] Tables v1.11.1
[64499a7a] WriteVTK v1.18.1
[37e2e46d] LinearAlgebra
[2f01184e] SparseArrays
```
Hi @neurolabusc,

I've just added more examples from the MarchingCubes code.

> Can I suggest that for your test cases you generate the voxel-based images using a simple formula?

Do you mean evaluating an implicit function to generate data on the fly? I think that this is already what we do in the tests:

https://github.com/JuliaGeometry/MarchingCubes.jl/blob/5d1b4695571b161aa3ce1cb4c5b44bdc809c4ae9/test/runtests.jl#L8-L105
https://github.com/JuliaGeometry/MarchingCubes.jl/blob/5d1b4695571b161aa3ce1cb4c5b44bdc809c4ae9/src/example.jl#L48

Unfortunately, they do not seem to hit the bogus line as the surface in bet.nii does. How was this file generated?
> I get exactly the same errors in SpaceClaim, and 8514 faces with zero volume, with the following packages:

Damn, there might be more than one bug then :confused:. I'll inspect the code more closely tomorrow.
@t-bltg I wonder if you can run the marching cubes on your existing tests but adjust the isosurface threshold until you find a level that fails. The bet.nii is a T1-weighted MRI scan after using FSL's brain extraction technique (BET). I added two spheres to the image, one solid and one hollow, to provide a test for the hole filling.
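A minimal sketch of that sweep, reusing the hypothetical `count_zero_area` helper from earlier and assuming `vol` holds one of the existing test volumes:

```julia
# Probe a range of isovalues, reporting any level that yields zero-area faces.
for iso in range(-1.0f0, 1.0f0; length = 21)
    mc = MC(vol)
    march(mc, iso)
    n = count_zero_area(mc)
    if n > 0
        @show iso n
    end
end
```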
I would actually recommend looking at OffsetArrays.jl (https://github.com/JuliaArrays/OffsetArrays.jl) to see if you can set up the arrays to be zero-indexed and use the C lookup table as-is, rather than potentially introducing errors caused by 1-indexing the LUT.
> I would actually recommend looking at OffsetArrays.jl

I went down that road, and also tried regular n-dimensional Julia arrays, but I remember performance issues compared with compiled languages such as C++. The only way to match performance for the LUT was to use statically sized data (`Tuple`s).
@elbert5770 would you be able to reduce the size of the 30 GB file for a reproducer, or to share it somehow, so that I can try to narrow down the issue as I did with the bet.nii example?
> I went down that road, and also tried regular n-dimensional Julia arrays, but I remember performance issues compared with compiled languages such as C++. The only way to match performance for the LUT was to use statically sized data (`Tuple`s).
Do you still have that code? Because there are potentially other reasons for the slowdown. Perhaps we could start a repository to play around in.
I think the first version in the git history of `src/lut.jl` was the vanilla LUT (0-based indexing). It shouldn't be too difficult to adapt to OffsetArrays.
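Something along these lines, for instance (a sketch with placeholder values standing in for the real tables):

```julia
using OffsetArrays

raw = collect(0:255)          # placeholder for one of the C lookup tables

# Wrap the table so it is indexed 0:255, matching the C convention.
lut = OffsetArray(raw, 0:255)

cube_case = 0x05              # an 8-bit cube configuration
entry = lut[Int(cube_case)]   # no off-by-one adjustment needed
```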
> if you can set up the arrays to be zero-indexed and use the C lookup table as-is, rather than potentially introducing errors caused by 1-indexing the LUT
@elbert5770, I've added a check script in https://github.com/JuliaGeometry/MarchingCubes.jl/pull/26 to enforce checks for 0/1-based indexing errors in the LUTs. Some LUTs were changed for consistency, but this shouldn't change the MarchingCubes algorithm output.

Can you check the 30 GB case against the latest `main` branch?
I'm closing this, since there hasn't been any answer over the last few weeks.
The following is a mesh analysis with SpaceClaim of a mesh generated in C with the LewinerMarchingCubes from NeurolabUSC, modified to accept Int64 (and thus large numbers of voxels) and arrays instead of NIfTI.

The listed 'errors' are not actually errors.

However, with MarchingCubes.jl, there are 8514 faces with zero volume out of 5,421,474 faces.

I suspect that the 2000 × 2000 × 1995 voxels are hitting an Int32 limit somewhere, or that there is a slight error in the LUTs due to the 1-index conversion; I think the latter is more plausible. The source data is 30 GB, so we'd need to arrange some way to transfer this if you are interested in trying to reproduce the result.