seung-lab / RealNeuralNetworks.jl

A unified framework for skeletonization, morphological analysis, and connectivity analysis.
Apache License 2.0

index out of bound error #11

Closed xiuliren closed 7 years ago

xiuliren commented 7 years ago

I have already generated SWC files for more than 10 neurons, so this error only happens occasionally.

skeletonization from global point cloud and dbf using TEASAR algorithm...
total number of points: 827520
ERROR: LoadError: ArgumentError: An index is out of bound.
 in sparsevec(::Array{Int64,1}, ::UnitRange{Int64}, ::UInt32, ::Function) at ./sparse/sparsevector.jl:117
 in create_node_lookup(::Array{UInt32,2}) at /home/nico/.julia/v0.5/TEASAR/src/NodeNets.jl:347
 in #NodeNet#3(::Array{Float64,1}, ::TEASAR.NodeNets.#alexs_penalty, ::Tuple{UInt32,UInt32,UInt32}, ::Type{T}, ::Array{UInt32,2}) at /home/nico/.julia/v0.5/TEASAR/src/NodeNets.jl:75
 in (::Core.#kw#Type)(::Array{Any,1}, ::Type{TEASAR.NodeNets.NodeNet}, ::Array{UInt32,2}) at ./<missing>:0
 in macro expansion at ./util.jl:188 [inlined]
 in trace(::TEASAR.Manifests.Manifest, ::UInt32) at /home/nico/.julia/v0.5/TEASAR/src/Manifests.jl:75
 in #trace#1(::String, ::String, ::UInt32, ::Function, ::UInt32) at /home/nico/.julia/v0.5/TEASAR/scripts/skeletonize.jl:20
 in (::#kw##trace)(::Array{Any,1}, ::#trace, ::UInt32) at ./<missing>:0
 in macro expansion at /home/nico/.julia/v0.5/TEASAR/scripts/skeletonize.jl:90 [inlined]
 in macro expansion at /home/nico/.julia/v0.5/ProgressMeter/src/ProgressMeter.jl:478 [inlined]
 in main() at /home/nico/.julia/v0.5/TEASAR/scripts/skeletonize.jl:89
 in include_from_node1(::String) at ./loading.jl:488
 in process_options(::Base.JLOptions) at ./client.jl:265
 in _start() at ./client.jl:321
while loading /home/nico/.julia/v0.5/TEASAR/scripts/skeletonize.jl, in expression starting on line 99
xiuliren commented 7 years ago

this error comes from a super big neuron (id: 76197), which spans the whole XY extent of the volume!

[image attachment]

xiuliren commented 7 years ago

suggestions from @nicholasturner1

I'd try the same thing I mentioned before - compare `prod(max_dims)` to the max index into the sparse vector (`maximum(Int[sub2ind(max_dims, points[i,:]... ) for i=1:num_points ])`) (edited)

the `prod` value should always be greater than or equal to the other one unless the indexing scheme is somehow wrong

Indeed, it is this problem: max_dims = (3276, 975, 1750)

ERROR: LoadError: AssertionError: prod(max_dims) > maximum(Int[sub2ind(max_dims,points[i,:]...) for i = 1:num_points])
 in create_node_lookup(::Array{UInt32,2}) at /home/nico/.julia/v0.5/RealNeuralNetworks/src/NodeNets.jl:349
 in #NodeNet#3(::Array{Float64,1}, ::RealNeuralNetworks.NodeNets.#alexs_penalty, ::Tuple{UInt32,UInt32,UInt32}, ::Type{T}, ::Array{UInt32,2}) at /home/nico/.julia/v0.5/RealNeuralNetworks/src/NodeNets.jl:74
 in (::Core.#kw#Type)(::Array{Any,1}, ::Type{RealNeuralNetworks.NodeNets.NodeNet}, ::Array{UInt32,2}) at ./<missing>:0
 in macro expansion at ./util.jl:188 [inlined]
 in trace(::RealNeuralNetworks.Manifests.Manifest, ::Int64) at /home/nico/.julia/v0.5/RealNeuralNetworks/src/Manifests.jl:78
 in #trace#1(::String, ::String, ::UInt32, ::Function, ::Int64) at /home/nico/.julia/v0.5/RealNeuralNetworks/scripts/skeletonize.jl:20
 in (::#kw##trace)(::Array{Any,1}, ::#trace, ::Int64) at ./<missing>:0
 in main() at /home/nico/.julia/v0.5/RealNeuralNetworks/scripts/skeletonize.jl:94
 in include_from_node1(::String) at ./loading.jl:488
 in process_options(::Base.JLOptions) at ./client.jl:265
 in _start() at ./client.jl:321
while loading /home/nico/.julia/v0.5/RealNeuralNetworks/scripts/skeletonize.jl, in expression starting on line 99
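The suggested check can be sketched in plain Julia (a minimal sketch: `check_index_bounds`, `points`, and `max_dims` are hypothetical stand-ins for the variables inside `create_node_lookup`, and the column-major linear index is written out by hand instead of using the v0.5 `sub2ind`):

```julia
# Sanity check from the suggestion above: the largest linear index implied by
# `points` must never exceed prod(max_dims), otherwise sparsevec throws
# "An index is out of bound." The index is computed in Int so the check
# itself cannot overflow.
function check_index_bounds(points::Matrix{<:Integer}, max_dims::NTuple{3,Int})
    dx, dy = max_dims[1], max_dims[2]
    max_index = maximum(
        Int(points[i, 1]) +
        (Int(points[i, 2]) - 1) * dx +
        (Int(points[i, 3]) - 1) * dx * dy
        for i in 1:size(points, 1))
    prod(max_dims) >= max_index
end
```

For a consistent indexing scheme this returns `true` even for the far corner of the volume, e.g. `check_index_bounds([3276 975 1750], (3276, 975, 1750))`; a coordinate outside `max_dims` makes it return `false`.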
nicholasturner1 commented 7 years ago

You mentioned that your chunk size is (512,512,512) before, and that the coordinates were local. If that's the case, the maximum possible value of max_dims should be (512,512,512), since it should be the maximum value in each dimension of the passed coordinates. Perhaps you're just making the data structures for the graph once you've finished chunk-wise processing.

nicholasturner1 commented 7 years ago

You're using UInt32, so this is another overflow problem: `prod((3276, 975, 1750)) > typemax(UInt32)`.
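The overflow is easy to reproduce in isolation (a minimal reproduction using the dimensions from this issue, not code from the package):

```julia
# In UInt32 arithmetic the product of the three dimensions wraps modulo 2^32,
# so a linear index computed in UInt32 can silently come out far too small.
dims = (UInt32(3276), UInt32(975), UInt32(1750))
wrapped = dims[1] * dims[2] * dims[3]   # UInt32 multiply wraps around
exact   = Int(3276) * 975 * 1750        # 5_589_675_000, needs more than 32 bits

exact > typemax(UInt32)                 # true: the true product cannot fit
Int(wrapped) == mod(exact, 2^32)        # true: the UInt32 result is exact mod 2^32
```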

xiuliren commented 7 years ago

Exactly, this is an overflow. It is not caused by the 512x512x512 chunk size; rather, the neuron spans such a big space that max_dims becomes too big: `prod((3276,975,1750)) > Int(typemax(UInt32))` evaluates to `true`.
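One overflow-safe pattern is to widen the UInt32 coordinates to Int before forming linear indices (a sketch of the idea with a hypothetical helper, not the actual patch from the commits below):

```julia
# Convert UInt32 coordinates to Int before computing column-major linear
# indices, so products like 3276 * 975 * 1750 never wrap in 32 bits.
function linear_indices(points::Matrix{UInt32}, max_dims::NTuple{3,<:Integer})
    dx, dy = Int(max_dims[1]), Int(max_dims[2])
    [Int(points[i, 1]) +
     (Int(points[i, 2]) - 1) * dx +
     (Int(points[i, 3]) - 1) * dx * dy
     for i in 1:size(points, 1)]
end
```

With this, the far corner of the volume maps to `prod(max_dims) == 5_589_675_000` without wrapping.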

Fixed by commits 6f581de5e512f152b86f51ba54a9f0a8e6f48bd7 and a5c7d2f3e23bf3da22c7f5e3ce519a67c2008dc7.