seung-lab / cross-section

Tools for computing properties of cross sections of 3D voxelized images.
GNU Lesser General Public License v3.0

Add an option to output ids of voxels that are intercepted by the plane? #5

Open wanqingy opened 7 months ago

wanqingy commented 7 months ago

Hi!

I am actually looking for a way to get the cross-sectional surface that is orthogonal to the direction of travel along an axon. More specifically, I need the outline of this cross-sectional surface. I was excited to see your package (through Forrest), but later realized that it currently only outputs the area of the cross-sectional surface.

Would it be possible to add an option to output all the voxel ids of the cross sectional surface?

Thanks! Wan-Qing

william-silversmith commented 7 months ago

Hi Wan-Qing!

That's certainly something I can do. Would you prefer the output as a list of voxel locations or as an image where only the intercepted voxels are set to foreground?

The algorithm already does something similar to the latter internally; I sometimes extract it to visualize for debugging purposes.

Will

wanqingy commented 7 months ago

Image output would be great! Thanks, Will!

william-silversmith commented 7 months ago

Hi Wan-Qing,

The current master branch (not on PyPI yet) has a new function, xs3d.cross_section, which returns a float32 image where each voxel's value is its contribution to the area. This should be easy to manipulate into whatever form you want. Let me know if that works for you and I can release it on PyPI.
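A minimal usage sketch; the exact call signature, volume, and coordinates here are assumptions for illustration, not confirmed API details:

```python
import numpy as np
import xs3d

# Illustrative binary volume: a solid cube inside a 64^3 grid.
labels = np.zeros((64, 64, 64), dtype=bool)
labels[16:48, 16:48, 16:48] = True

# float32 volume where each voxel's value is its contribution to the area
contrib = xs3d.cross_section(labels, [32, 32, 32], [0, 0, 1])

intercepted = np.argwhere(contrib > 0)  # ids of voxels touched by the plane
area = float(contrib.sum())
```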

Will

wanqingy commented 7 months ago

Yes, this works! Looking forward to trying it out! Thank you!


william-silversmith commented 7 months ago

It's released! Let me know if you run into any problems.


william-silversmith commented 7 months ago

Hi @wanqingy! I released the feature we discussed in version 1.4.0 as projection = xs3d.slice(labels, position, normal)

So long as you don't care about the orientation of the projection, I think this should be pretty good for your purposes. If you need the orientation to be more similar to a standard basis, let me know. I haven't figured out a good system for doing that yet (right now I'm using whatever pops out of a cross product).
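A minimal sketch of that call (the volume and coordinates are illustrative):

```python
import numpy as np
import xs3d

labels = np.zeros((64, 64, 64), dtype=bool)
labels[16:48, 16:48, 16:48] = True

# 2D image of the cross section through `position` with plane normal `normal`
projection = xs3d.slice(labels, [32, 32, 32], [0, 0, 1])
```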

wanqingy commented 7 months ago

Thank you, Will!

I just tested it using a binary image and it worked great! I will put it into real work and keep you posted!


wanqingy commented 7 months ago

Hi Will!

I tested it out on a real case, and I think the projection image does not correct for anisotropic voxel resolution, right?

For reference:

- my code: https://gist.github.com/wanqingy/0d1a38cd852f42a49080e4bd9cee783e
- my manual slicing: https://neuroglancer.neuvue.io/?json_url=https://global.daf-apis.com/nglstate/api/v1/4863895665639424

william-silversmith commented 7 months ago

You're right. The entire computation is performed in voxel space. For area computation that's not such a big deal (you can adjust the normal to compensate, I think), but for visualization... yeah, it becomes important again.
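For illustration, a sketch of that compensation, assuming the anisotropy gives physical units per voxel along each axis:

```python
import numpy as np

anisotropy = np.array([4.0, 4.0, 40.0])  # e.g. nm per voxel
n_physical = np.array([1.0, 1.0, 1.0])   # desired plane normal in physical space

# A plane with physical-space normal n corresponds, in voxel space, to a
# plane whose normal is n scaled componentwise by the per-axis resolution.
n_voxel = n_physical * anisotropy
n_voxel /= np.linalg.norm(n_voxel)
```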

I'll have to look into this.

wanqingy commented 7 months ago

More important than visualization for me is that the width of the myelin gets squished when slicing off-axis, which makes myelin detection inconsistent with my current strategy. I could probably find a way to compensate for it, but it would be better if you have a way to account for anisotropy! Thanks as always!

william-silversmith commented 7 months ago

I think I may have fixed it!

What do you think? This is taken from minnie65. I believe the difference in orientation is due to the difference in how the vertical axis is displayed in Neuroglancer vs Microviewer.

Before anisotropy correction. (normal: [0,1,0])

[image]

After anisotropy correction. (normal: [0,1,0])

[image]

The location in minnie65:

[image]

https://neuroglancer-demo.appspot.com/#!%7B%22dimensions%22:%7B%22x%22:%5B8e-9%2C%22m%22%5D%2C%22y%22:%5B8e-9%2C%22m%22%5D%2C%22z%22:%5B4e-8%2C%22m%22%5D%7D%2C%22position%22:%5B122074.1875%2C96123.5%2C21339.3671875%5D%2C%22crossSectionOrientation%22:%5B0.5%2C0.5%2C-0.5%2C-0.5%5D%2C%22crossSectionScale%22:2.170592127183442%2C%22projectionScale%22:262144%2C%22layers%22:%5B%7B%22type%22:%22segmentation%22%2C%22source%22:%22precomputed://gs://iarpa_microns/minnie/minnie65/seg%22%2C%22tab%22:%22segments%22%2C%22segments%22:%5B%22864691135368664562%22%5D%2C%22colorSeed%22:3662937279%2C%22name%22:%22seg%22%7D%5D%2C%22selectedLayer%22:%7B%22visible%22:true%2C%22layer%22:%22seg%22%7D%2C%22layout%22:%22yz%22%7D

wanqingy commented 7 months ago

It looks great!!!

william-silversmith commented 7 months ago

Released in 1.5.0. Let me know how it goes.

wanqingy commented 6 months ago

It worked for my data, but it now takes about 10x longer to get the projection image. Does that sound right to you?

william-silversmith commented 6 months ago

Unfortunately, while 10x seems a bit much, the way the algorithm works is that it moves fewer units per step in the most distorted direction (the step along the highest-resolution axis is taken to be 1; along the most distorted axis it is min/max units per step).
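A sketch of one way to read those step sizes (an interpretation, not the library's actual code):

```python
anisotropy = [4, 4, 40]
finest = min(anisotropy)

# The finest axis advances a full voxel per step; coarser axes advance
# proportionally less, so more steps are needed to cover the same extent.
steps = [finest / w for w in anisotropy]  # [1.0, 1.0, 0.1]
```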

I wasn't sure exactly how you were using this, so I didn't make high performance a priority like I have for the area calculation. What are your performance criteria?

wanqingy commented 6 months ago

I am in the testing phase, running my myelin detection through the axons of about 100 proofread cells. If it works well, I guess we could apply it to the whole minnie65 dataset if the cloud-computing cost is reasonable (~$5k). I need to figure out the computing time I can spend on each center point under that budget.

william-silversmith commented 6 months ago

For reference, the area calculation that I've been working on seems to take a bit under 2 hours to analyze every vertex in every skeleton in a 512^3 cutout from a Fly brain. Minnie65 has less gnarly neurons, but if those measurements carried over, I estimate it would cost:

153,090 tasks × 6,600 core-seconds/task × $0.01/core-hour ÷ 3,600 seconds/hour ≈ $2,806
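The same arithmetic in code:

```python
tasks = 153_090
core_seconds_per_task = 6_600
dollars_per_core_hour = 0.01

# core-seconds -> core-hours -> dollars
cost = tasks * core_seconds_per_task / 3600 * dollars_per_core_hour
print(f"${cost:,.2f}")  # $2,806.65
```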

I have some more work to do on that (dealing with boundaries) which will probably inflate the number somewhat. There may also be ways to speed up the algorithm. The number 6600 is not tested; it is extrapolated from improvements to a previous version that took about 8000 seconds.

I have reason to suspect that the slicing operation can be made faster more easily than the area calculation.

wanqingy commented 6 months ago

Wow, that's super fast for the Fly brain! How did you get the number of tasks (153,090) for minnie65? Is it the total number of cells?

william-silversmith commented 6 months ago

Skeletonizing tasks run at 512x512x512 voxels and have 1 voxel of overlap (so 513x513x513). It's easy to estimate based on the dataset dimensions, provided a representative task is measured.
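A sketch of that estimate; the dimensions below are placeholders for illustration, not minnie65's real extent:

```python
import math

def estimate_task_count(dataset_shape, task_shape=(512, 512, 512)):
    """Number of grid-aligned tasks needed to tile the dataset."""
    return math.prod(math.ceil(d / t) for d, t in zip(dataset_shape, task_shape))

# Hypothetical dimensions for illustration only:
print(estimate_task_count((40_000, 40_000, 20_000)))
```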

It's actually a bit faster in some ways as there are black regions. It can also be slower on e.g. glia (so many branches) or soma (larger cross sections).

By way of comparison, the same task takes only 5 minutes (~300 seconds) to skeletonize, so I'm a bit salty that measuring the cross section alone takes so much longer (but it makes some sense due to the nature of the problem...).

william-silversmith commented 6 months ago

On this test:

```python
import numpy as np
import xs3d
from tqdm import tqdm

# A solid 501^3 volume in Fortran order.
labels = np.ones((501, 501, 501), dtype=bool, order="F")

# Slice at 500 positions along x with an oblique normal and anisotropic voxels.
for i in tqdm(range(500)):
    slc = xs3d.slice(labels, [i, 251, 251], [1, 1, 1], anisotropy=[4, 4, 40])
```

I'm getting roughly 20 iterations per second. For isotropic volumes, I'm getting 83 it/sec. By contrast, for area estimation, I get an average iteration speed of 7 it/sec (range: 6 to 9).

I did a little experimentation, and I suspect it will be difficult to raise the performance substantially. I looked at your code again and noticed that the resolution used is [4,4,40], which means that, given each output pixel is roughly evenly physically spaced, there can be a 10x increase in pixels in the z direction depending on the plane orientation.
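Where that factor comes from, assuming output pixels are spaced at roughly the finest physical resolution:

```python
anisotropy = (4, 4, 40)

# Each z voxel spans 40 physical units; at 4 units per output pixel that
# is 10 output pixels per voxel.
inflation = max(anisotropy) / min(anisotropy)  # 10.0
```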

With normal [1,0,0] and anisotropy [4,4,40], I'm getting 8 iterations per second, which is pretty sensible given the amount of extra work being done.

It might be possible to do better. The code is outputting 16 to 50 MB/sec in the YZ plane depending on data width, whereas it's possible to get processing speeds in excess of 100 or even 1000 MB/sec for some algorithms.

wanqingy commented 6 months ago

Thanks for testing!

I also noticed that I was using mip 0 segmentation to test your code. After changing it to mip 2 with [32,32,40] resolution, it is much faster. I guess that's because mip 2 is less anisotropic. I am testing the code on an example neuron now.

william-silversmith commented 6 months ago

It seems that the main cost of the algorithm isn't the slicing operation at all, but the memory allocation of the output array. This is something I can do something about.

wanqingy commented 6 months ago

BTW: how did you decide to go with a [512,512,512] bounding box? The main time cost for me now is downloading the segmentation cutouts. Currently, I am using [1024,1024,80].

william-silversmith commented 6 months ago

At mip 2, it gives a good balance between memory usage and visual context for the skeletonization algorithm (which takes much longer than the image download).

I've done some more experiments and it looks like I can make the slice operator much faster. It's still giving weird results, so it's not ready yet, but the general method is giving much higher performance for highly anisotropic volumes.

william-silversmith commented 6 months ago

Okay, got something working! I'm still iffy about some of the logic, but it seems to be working and it is much faster.

EDIT: I was right about the iffy logic. It's not quite capturing the real bounding box just yet. Need to think about this some more.