seung-lab / igneous

Scalable Neuroglancer compatible Downsampling, Meshing, Skeletonizing, Contrast Normalization, Transfers and more.
GNU General Public License v3.0

Extract skeletons using igneous #73

Closed: albertplus007 closed this issue 4 years ago

albertplus007 commented 4 years ago

Hi all, nice library! Recently I have been using it to extract neuron skeletons, but I am curious why the skeleton I extracted is broken and discontinuous, as shown below. I converted the skeleton to an SWC file and displayed it with Vaa3d:

[screenshot of the extracted skeleton rendered in Vaa3d]

In the red box the skeleton is broken, but the actual neuron is intact. Why does this happen, and how can I solve it?

william-silversmith commented 4 years ago

Hi Albert,

There are two possible reasons for this. Either there is a hole in the segmentation, or you tripped a bug I've encountered where, if the scale factor is too high, it sometimes seems to cause problems (other times not). Try reducing the scale factor to 4 and let me know if that helps. Usually when I see this issue, it is not easily reproducible and disappears, e.g. after a fresh install or something.

Will

albertplus007 commented 4 years ago

Actually, I have tried scale=2, scale=3, and scale=4, but it is still broken. I am just using the Google segmentation to extract the skeleton; the label I use here is 1099405435. So you mean I should reinstall igneous?

william-silversmith commented 4 years ago

What resolution level are you running the skeletons against? Sometimes a sufficiently high downsample can introduce breaks in the segmentation. How are you running Igneous? Is it against a local copy?

albertplus007 commented 4 years ago

I use the 256x256x320 nm resolution; does the low resolution influence the result? I downloaded the segmentation to a local file and use that local file to extract the skeleton. Here is my code:

import igneous.task_creation as tc
from cloudvolume.lib import Vec
from taskqueue import MockTaskQueue  # executes tasks in-process; newer taskqueue versions favor LocalTaskQueue

cloudpath1 = 'file:///mnt/d/braindata/google_segmentation/google_256.0x256.0x320.0/'
mip = 0

# First Pass: Generate Skeletons
tasks1 = tc.create_skeletonizing_tasks(
    cloudpath1,
    mip, # Which resolution to skeletonize at (near isotropic is often good)
    shape=Vec(512, 512, 512), # size of individual skeletonizing tasks (not necessary to be chunk aligned)
    sharded=False, # (True) generate concatenated .frag files, (False) single skeleton fragments
    spatial_index=False, # Generate a spatial index so skeletons can be queried by bounding box
    info=None, # provide a cloudvolume info file if necessary (usually not)
    fill_missing=True, # Use zeros if part of the image is missing instead of raising an error

    # see Kimimaro's documentation for the below parameters
    teasar_params={
        'scale': 4,
        'const': 20, # physical units
        'pdrf_exponent': 4,
        'pdrf_scale': 100000,
        'soma_detection_threshold': 1100, # physical units
        'soma_acceptance_threshold': 3500, # physical units
        'soma_invalidation_scale': 1.0,
        'soma_invalidation_const': 300, # physical units
        'max_paths': None, # default None
    },
    object_ids=[1099405435], # Only skeletonize these ids
    mask_ids=None, # Mask out these ids
    fix_branching=True, # (True) higher quality branches at speed cost
    fix_borders=True, # (True) Enable easy stitching of 1 voxel overlapping tasks
    dust_threshold=0, # Don't skeletonize below this physical distance
    progress=False, # Show a progress bar
    parallel=1, # Number of parallel processes to use (more useful locally)
)

tq = MockTaskQueue()
tq.insert_all(tasks1)

william-silversmith commented 4 years ago

256 nm resolution is far coarser than my typical usage and is highly likely to be fragmented, producing the holes you are witnessing. Ordinarily, I run skeletonization at 32x32x40 nm resolution. It can be run at 64x64x40, but that causes the skeletons to start snaking along the sides of labels as the labels become very thin.

That's kind of a big computational leap for you guys, so maybe at least try using 128x128x160 first?
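
For reference, one way to check which resolutions a downloaded layer actually contains is to read the layer's info with CloudVolume and pick the mip whose resolution is closest to the target. This is a minimal sketch, not from the original thread; the path is illustrative and should point at whichever local layer you downloaded:

from cloudvolume import CloudVolume

# Path is illustrative; point it at whichever local layer you downloaded.
cv = CloudVolume('file:///mnt/d/braindata/google_segmentation/google_256.0x256.0x320.0/')
for mip_level, scale in enumerate(cv.info['scales']):
    print(mip_level, scale['resolution'])  # e.g. look for something near [128, 128, 160]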

william-silversmith commented 4 years ago

Also, you can try using the LocalTaskQueue to execute jobs in parallel or the new FileQueue protocol to have multiple worker processes attack a large job.

from taskqueue import TaskQueue

# Task creation process
tq = TaskQueue('fq:///mnt/d/braindata/queue') # for example
tq.insert(tasks)

# worker processes
tq = TaskQueue('fq:///mnt/d/braindata/queue') # for example
tq.poll(verbose=True, tally=True)
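
The LocalTaskQueue route mentioned above looks roughly like this. This is only a sketch, since the exact call for executing inserted tasks has shifted between taskqueue versions:

from taskqueue import LocalTaskQueue

tq = LocalTaskQueue(parallel=8)  # run 8 worker processes on this machine
tq.insert(tasks1)                # recent taskqueue versions execute on insert;
                                 # older versions use tq.insert_all(tasks1) instead
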
albertplus007 commented 4 years ago

Now I have run the skeleton task at 64x64x80 nm resolution, using the same code and the same label as above, but the result seems broken again, as shown below:

[screenshot of the skeleton extracted at 64x64x80 nm]

It seems better than at the lower resolution, but it is still broken in some regions. I am also trying the 32x32x40 nm resolution, but it has not finished yet.

Did something go wrong during the fusion? I set dust_threshold = 2 and tick_threshold = 0, both very small, to get a complete result.
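
For context, the "fusion" referred to here is Igneous's second skeletonization pass, which merges the per-task fragments into whole skeletons and is where tick_threshold applies. The sketch below is hedged: the function name and defaults match recent Igneous releases and may differ in older ones.

import igneous.task_creation as tc

# Second pass: fuse per-task skeleton fragments (unsharded layout).
# Function name as in recent Igneous versions; older releases may name it differently.
tasks2 = tc.create_unsharded_skeleton_merge_tasks(
    cloudpath1,
    magnitude=3,       # see the Igneous README for this parameter
    dust_threshold=2,  # discard skeletons shorter than this physical length (nm)
    tick_threshold=0,  # prune terminal "tick" branches shorter than this length (nm)
)
tq.insert_all(tasks2)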

william-silversmith commented 4 years ago

I think the gaps are real. neuroglancer link

[screenshot from the neuroglancer view above]
william-silversmith commented 4 years ago

You can try fixing them with kimimaro.join_close_components but that technique is just a heuristic and you need to make sure it is doing something sensible.

https://github.com/seung-lab/kimimaro/
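A minimal sketch of that heuristic, assuming the broken fragments for one label have already been loaded as cloudvolume Skeleton objects (the variable name skel_fragments is hypothetical); the radius is a physical distance in the same units as the skeleton vertices (nm here), and 1500 is purely illustrative:

import kimimaro

# skel_fragments: list of Skeleton objects for the same label (hypothetical variable)
joined = kimimaro.join_close_components(skel_fragments, radius=1500)  # nm
joined = joined.consolidate()  # clean up duplicate vertices/edges after joining

As noted above, inspect the joined result, since a radius that is too large can bridge pieces that should stay separate.
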

albertplus007 commented 4 years ago

Thanks for the tip, I forgot to check the label in neuroglancer. If the segmentation is broken, then of course the skeleton is broken too. Did you use this method, kimimaro.join_close_components, to put them together? It seems that the radius needs to be adjusted very precisely.

william-silversmith commented 4 years ago

I didn't do the skeletonization for the Google segmentation; they used a derivative of another library. It looks like they joined the skeletons in post-processing, though, as the skeletons terminate at the furthest point of the broken pieces, which is characteristic of TEASAR. I don't know what method they used, though they may have described it in the methods section of a paper somewhere.

albertplus007 commented 4 years ago

Thanks a lot. Another question: I extract a skeleton from the 64x64x80 nm resolution with mip = 0, and I also extract one from the 32x32x40 nm resolution with mip = 1 (I just want a 2x downsample so the effective resolution matches 64x64x80 nm). What is the difference between the two methods? Will the result for the same label be the same, or is there something special about it?

william-silversmith commented 4 years ago

Assuming everything is configured correctly, there should be no or minor differences between the two skeletons. The only reason for a difference would be if different downsampling methods were used to achieve the lower resolution layer (e.g. 2x2x2 striding vs 2x2x2 mode). The mip level determines how CloudVolume downloads an image, but once the image is in memory Kimimaro takes over and is agnostic to where the image came from. If there are minor differences in the bounding box of the layers, that could cause some differences too.
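
One quick way to confirm that the two configurations really see the same effective resolution (and the same bounds) is to compare the CloudVolume metadata at each mip. This is a sketch; the paths below are hypothetical placeholders for the two layers described above:

from cloudvolume import CloudVolume

cv_64 = CloudVolume('file:///path/to/64x64x80_layer', mip=0)   # hypothetical path
cv_32 = CloudVolume('file:///path/to/32x32x40_layer', mip=1)   # hypothetical path

print(cv_64.resolution, cv_32.resolution)  # both should report roughly [64, 64, 80]
print(cv_64.bounds, cv_32.bounds)          # differing bounding boxes can also explain differences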

william-silversmith commented 4 years ago

Closing this question due to inactivity. Please reopen or open a new issue if you need help! ^_^