Open Kaonashi22 opened 8 months ago
Yes, we can certainly do that. I will implement it after we merge #8
Can we update the script to extract the metrics in all tracts?
sure-- i'll try my best to do it today
how do you want the results? aggregated into a single CSV file, or one CSV file per tract?
Thank you! A csv file per tract would be more convenient
Another consideration: some tracts are veeeeeery small (i.e. less than 1 mm² per axial slice). When considering tracts in isolation, metrics will be more prone to noise, so if there is a pathology-related effect, it might not reach statistical significance. Whereas when aggregating across tracts, the statistical sensitivity to detect changes will be higher. So my recommendation would be to aggregate tracts instead of studying them in isolation. Aggregation should follow hypotheses in terms of the tracts that are expected to be affected (e.g. all descending tracts?)
I'm particularly interested in the intermedio-lateral columns, located in the lateral horns (in green). I'll look at it more in detail.
could you suggest which tracts to aggregate from the info_label.txt
The region that I'm interested in (intermedio-lateral columns) should be located in these GM areas (~16 voxels in the axial plane for each side):
32, GM left intermediate zone, PAM50_atlas_32.nii.gz
33, GM right intermediate zone, PAM50_atlas_33.nii.gz
Do you think it can be studied given the size?
We can also look at the GM globally:
52, gray matter, 30:35
These could be "control regions":
51, white matter, 0:29
53, dorsal columns, 0:3
54, lateral funiculi, 4:13
55, ventral funiculi, 14:29
30, GM left ventral horn, PAM50_atlas_30.nii.gz
31, GM right ventral horn, PAM50_atlas_31.nii.gz
34, GM left dorsal horn, PAM50_atlas_34.nii.gz
35, GM right dorsal horn, PAM50_atlas_35.nii.gz
If not too small:
4, WM left lateral corticospinal tract, PAM50_atlas_04.nii.gz
5, WM right lateral corticospinal tract, PAM50_atlas_05.nii.gz
ok great, this is very helpful. Just to clarify, the regions you listed per paragraph (e.g. [32, 33] vs. [32], [33]): are you suggesting to merge them? (again, the pro of merging is better robustness to noise)
I think we can merge right and left (eg: 32 and 33)
Good, so I'll produce the following extraction strategies (one extraction per bullet point):
We can merge 54 and 55, but I would keep 53 separate unless you think it's too small. Sounds good for the rest.
I'm also thinking it could be interesting to merge all ascending vs descending tracts
could you please list the numbers to be included for the ascending vs. descending? that would save me some time 😊
descending: 4 5 8 9 10 11 16 17 18 19 20 21 22 23 24 25 26 27
ascending: 0 1 2 3 6 7 12 13 14 15
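As an aside, these lists can be turned mechanically into the comma-separated `-l` argument that sct_extract_metric takes together with `-combine 1` (as in the commands shown later in this thread); a small sketch, with placeholder input/output file names:

```python
# Sketch: build the -l argument for sct_extract_metric from the
# ascending/descending PAM50 tract lists above.
# FA.nii.gz, atlas, and DWI_FA_descending.csv are placeholders.
DESCENDING = [4, 5, 8, 9, 10, 11, 16, 17, 18, 19, 20, 21,
              22, 23, 24, 25, 26, 27]
ASCENDING = [0, 1, 2, 3, 6, 7, 12, 13, 14, 15]

def label_arg(ids):
    """Format atlas label IDs as the comma-separated string passed
    to -l; pair with -combine 1 to aggregate them into one region."""
    return ",".join(str(i) for i in sorted(ids))

cmd = ("sct_extract_metric -i FA.nii.gz -f atlas "
       f"-l {label_arg(DESCENDING)} -combine 1 -o DWI_FA_descending.csv")
print(cmd)
```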
Thanks Julien!
Great, here is the updated list:
Thank you!
@Kaonashi22 can you give it a try with https://github.com/sct-pipeline/spine-park/commit/9ad4422f84001a3f5d70c70b3939dea30538faa8 ?
Using the latest version (https://github.com/sct-pipeline/spine-park/commit/9ad4422f84001a3f5d70c70b3939dea30538faa8), here is the output I got: DWI_FA.csv MTR_in_DC.csv
The voxels, map, and std columns are empty... Also, the results corresponding to one subject are on "non-contiguous" rows
I also got this error:
Spinal Cord Toolbox (git-master-72ff27030efda92431ee831b45f31e2858b8552d)
sct_extract_metric -i sub-VS251_chunk-1_DWI_moco_FA.nii.gz -f label_sub-VS251_chunk-1_DWI_moco/atlas -l 52 -combine 1 -vert 2:12 -vertfile label_sub-VS251_chunk-1_DWI_moco/template/PAM50_levels.nii.gz -o /dagher/dagher11/lydia11/SPINE_park/RESULTS_0205/results/DWI_FA_52.csv -append 1
--
Load metric image...
Estimation for label: 52
Traceback (most recent call last):
File "/export02/data/lydiac/spinalcordtoolbox/spinalcordtoolbox/scripts/sct_extract_metric.py", line 390, in <module>
main(sys.argv[1:])
File "/export02/data/lydiac/spinalcordtoolbox/spinalcordtoolbox/scripts/sct_extract_metric.py", line 379, in main
agg_metric = extract_metric(data, labels=labels, slices=slices, levels=levels, perslice=perslice,
File "/export02/data/lydiac/spinalcordtoolbox/spinalcordtoolbox/aggregate_slicewise.py", line 440, in extract_metric
labels_sum = np.sum(labels[..., label_struc[id_label].id], axis=labels.ndim-1) # (nx, ny, nz, 1)
IndexError: index 52 is out of bounds for axis 3 with size 37
https://github.com/sct-pipeline/spine-park/issues/10#issuecomment-2091831202 this error is caused by an issue which I've fixed in https://github.com/spinalcordtoolbox/spinalcordtoolbox/pull/4469. Once it is merged, you can git pull from SCT's master branch and from spine-park, and the bug should go away.
After doing "git pull" and using the new script (cb51fb6), I still have the same error
can you send me:
ah! i see, because https://github.com/spinalcordtoolbox/spinalcordtoolbox/pull/4469 is not merged yet-- once it is merged you will be able to do it-- tagging @joshuacwnewton who can inform you once merged--
EDIT 20240505_235301 OK i just merged it-- can you pls give it another try
The first run gave me this error for a few subjects; when I rerun it, it works.
*** Generating Quality Control (QC) html report ***
Resample images to 0.600000x0.600000 vox
Converting image from type 'uint8' to type 'float64' for linear interpolation
Find the center of each slice
fig.savefig(img_path, format='png', transparent=True, dpi=300)
The first run gave me this error:
Traceback (most recent call last):
File "/export02/data/lydiac/spinalcordtoolbox/spinalcordtoolbox/scripts/sct_deepseg.py", line 349, in <module>
main(sys.argv[1:])
File "/export02/data/lydiac/spinalcordtoolbox/spinalcordtoolbox/scripts/sct_deepseg.py", line 329, in main
qc2.sct_deepseg(
File "/export02/data/lydiac/spinalcordtoolbox/spinalcordtoolbox/reports/qc2.py", line 339, in sct_deepseg
File "/export02/data/lydiac/spinalcordtoolbox/python/envs/venv_sct/lib/python3.9/contextlib.py", line 126, in __exit__
next(self.gen)
File "/export02/data/lydiac/spinalcordtoolbox/spinalcordtoolbox/reports/qc2.py", line 90, in create_qc_entry
with mutex(name="sct_qc"):
File "/export02/data/lydiac/spinalcordtoolbox/python/envs/venv_sct/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/export02/data/lydiac/spinalcordtoolbox/spinalcordtoolbox/utils/fs.py", line 281, in mutex
semaphore.acquire()
File "/export02/data/lydiac/spinalcordtoolbox/python/envs/venv_sct/lib/python3.9/site-packages/portalocker/utils.py", line 493, in acquire
raise exceptions.AlreadyLocked()
portalocker.exceptions.AlreadyLocked
These columns, [vox] | MAP() | STD(), are still empty: DWI_FA_30-31.csv
Just to summarize the 2 different issues:
1. MTR_in_DC.csv: IndexError: index 52 is out of bounds. Since this issue has been fixed, I will minimize some of the "Resolved" comments on this issue so that it is a little easier to navigate.
2. DWI_FA_30-31.csv: empty [vox] | MAP() | STD() columns. I am a bit confused, as the columns appear to be non-empty in the CSV file you shared. Could you clarify what the issue is?
(I fixed the upload of the initial MTR data, then deleted the re-uploads.)
These columns are empty when I open the CSV files on the Linux station where I'm running the analyses, but they are visible on my laptop; weird...
Ahhh, that is quite strange! It looks like for some reason, the column titles are shifted?
Right, the display is correct after changing the separator when opening the csv file, thank you!!
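For future reference, the separator mismatch can also be handled in code rather than in the spreadsheet import dialog; a minimal sketch that guesses the delimiter from the header line before parsing (the column names here just mimic the sct_extract_metric output):

```python
import csv
import io

def read_rows(text):
    """Parse CSV text whose field separator (',' vs ';') is unknown:
    pick whichever character appears more often in the header line."""
    header = text.splitlines()[0]
    sep = ";" if header.count(";") > header.count(",") else ","
    return list(csv.DictReader(io.StringIO(text), delimiter=sep))

# Columns line up the same whichever separator the writing tool used:
semi = "Filename;VertLevel;MAP()\nsub-01.nii.gz;3;0.71\n"
comma = "Filename,VertLevel,MAP()\nsub-01.nii.gz,3,0.71\n"
print(read_rows(semi)[0]["MAP()"], read_rows(comma)[0]["MAP()"])
```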
Perfect! I think we can close this issue, then? (Since @jcohenadad previously closed this issue in https://github.com/sct-pipeline/spine-park/commit/9ad4422f84001a3f5d70c70b3939dea30538faa8.)
A few more questions/comments before you close the issue!
- Is there a way to order the results by subject and level? I guess they are printed in the order in which they are generated. If not, I can do it manually.
- On the screenshot attached, there are values corresponding to "slice 2" (C2 level) on the three DWI chunks; same for "slice 5" (chunks 1 and 2) and "slice 9" (chunks 2 and 3).
- For some levels, values are equal to zero, but the maps look fine.
Is there a way to order the results by subject and level? I guess they are printed in the order in which they are generated. If not, I can do it manually.
Yes, exactly. I imagine that this is because of parallel processing + using -append, so the rows are added as they're generated. I think manual ordering is the easiest solution to this.
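The manual ordering step can also be scripted; a minimal sketch, assuming the results CSV has Filename and VertLevel columns (as in the sct_extract_metric output discussed here) with one vertebral level per row:

```python
import csv
import io

def sort_results(csv_text):
    """Sort sct_extract_metric result rows by subject (Filename) and
    then by vertebral level, so rows appended out of order by parallel
    jobs (-append 1) end up contiguous per subject. Assumes VertLevel
    holds a single integer per row."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: (r["Filename"], int(r["VertLevel"])))
    return rows
```

Writing the sorted rows back out is then a matter of csv.DictWriter with the original fieldnames.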
On the screenshot attached, there are values corresponding to "slice 2" (C2 level) on the three DWI chunks; same for "slice 5" (chunks 1 and 2) and "slice 9" (chunks 2 and 3).
Ahh, this is very strange. These seem like bugs to me? The script uses -vert 2:12 for every image, even when the chunks only cover a subsection of that range. (Maybe the code is automatically creating an entry for the first value (2) by default, even if vertebral level 2 is not in the image, which would definitely be a bug.)

For some levels, values are equal to zero, but the maps look fine
This seems related to the problem above (vertlevels at the start and vertlevels at the interface between the chunks). I have a feeling if we solve the erroneous table entries, then the zero values will go away too.
Slice "2" is not located at the same physical location (i.e. absolute scanner coordinates) across chunks.
Is there a way to know the corresponding vertebral level for each chunk except looking at the images?
There are different ways to do that, depending on how you are going to use the information; some programmatic approaches are more suitable than others. To give you an example, here is a syntax:
for chunk in 1 2 3 ; do sct_extract_metric -i sub-BB277_chunk-${chunk}_DWI_moco_FA.nii.gz -l 51 -f label_sub-BB277_chunk-${chunk}_DWI_moco/atlas/ -vert 1:12 -perlevel 1 -vertfile label_sub-BB277_chunk-${chunk}_DWI_moco/template/PAM50_levels.nii.gz -o levels.csv -append 1 ; done
That generates this output file: levels.csv, which lists what levels are present in a chunk, and what are the corresponding slices for each level.
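A small sketch of how that levels.csv could be digested programmatically, assuming a Filename column containing "chunk-<n>" and one VertLevel per row (as produced with -perlevel 1); the function name is made up:

```python
import csv
import io
from collections import defaultdict

def levels_per_chunk(csv_text):
    """Group the vertebral levels listed in levels.csv by chunk,
    extracting the chunk number from the 'chunk-<n>' token in the
    Filename column."""
    found = defaultdict(set)
    for row in csv.DictReader(io.StringIO(csv_text)):
        chunk = row["Filename"].split("chunk-")[1].split("_")[0]
        found[chunk].add(int(row["VertLevel"]))
    return {chunk: sorted(levels) for chunk, levels in found.items()}

sample = ("Filename,VertLevel\n"
          "sub-BB277_chunk-1_DWI_moco_FA.nii.gz,2\n"
          "sub-BB277_chunk-1_DWI_moco_FA.nii.gz,3\n"
          "sub-BB277_chunk-2_DWI_moco_FA.nii.gz,5\n")
print(levels_per_chunk(sample))  # {'1': [2, 3], '2': [5]}
```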
I see on the csv file that some vertebral levels are still extracted from different chunks: for example level 10 from chunks 1 and 3; chunk 1 doesn't cover this level (upper part of the cord)
can you please share the CSV file and the data so I can reproduce/understand
This is the csv file that you shared here:
That generates this output file: levels.csv, which lists what levels are present in a chunk, and what are the corresponding slices for each level.
I overlaid the chunks on the T2w image, and noticed that chunk 1 is not the top chunk but the middle one, which explains the overlap of vertebral levels in the CSV file (chunk number is in white):
The reason is that I was working on a dataset that was created before I fixed https://github.com/sct-pipeline/spine-park/issues/16. If you run this with the latest version of the code, it should work fine.
Got it, thank you! Is there a way to add this sct_extract_metric command to the script, or will I have to run it at the end?
Implemented in ad7a4ba1df751f2ee116efe17f2c3a11b7b70adf. I haven't run the code so please run in one subject to make sure there is no bug
The code exited with this error: FileNotFoundError: [Errno 2] No such file or directory: '/dagher/dagher11/lydia11/SPINE_park/Test_1505/data_processed/sub-AM267/dwi/sub-AM267_chunk-1_DWI_moco/template/PAM50_levels.nii.gz'
I think the actual name of the folder it's looking for is label_sub-AM267_chunk-1_DWI_moco. Chunks 2 and 3 were not processed.
Sorry about that-- it is now fixed in ecd02d1. I tested the code and it runs on my end.
Thank you! I'm testing the new version
Using the latest version of the script, I got these results. There is a level 2 for all three chunks, but only chunk 1 has values as expected (zero for chunks 2 and 3). Is there a reason? For this specific subject, chunk 1 and, partially, chunk 2 cover level 4, but together they fully cover it. Should I take the mean of the values in the two chunks?
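If averaging across chunks turns out to be the right call, a mean weighted by each chunk's voxel count would be more principled than a plain average, since a chunk covering only a sliver of the level should count less; a hypothetical sketch, assuming the per-chunk voxel counts are available (e.g. from the [vox] column discussed above):

```python
import math

def weighted_mean(values, vox_counts):
    """Combine per-chunk metric values for one vertebral level,
    weighting each chunk by the voxels it contributed. Chunks
    reporting zero voxels (like the spurious zero rows) drop out."""
    total = sum(vox_counts)
    if total == 0:
        return math.nan
    return sum(v * n for v, n in zip(values, vox_counts)) / total

# e.g. level 4 measured in chunk 1 (0.72 over 120 vox)
# and in chunk 2 (0.68 over 40 vox)
print(weighted_mean([0.72, 0.68], [120, 40]))
```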
Can we add a loop to extract metrics (MT, T1, diffusion) in all regions and tracts in the "info_label.txt" file?
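A rough sketch of what such a loop could look like, assuming info_label.txt lines follow the "<id>, <name>, <file>" pattern quoted earlier in the thread; parse_labels, extract_all, and all path arguments are hypothetical names wrapping the sct_extract_metric call shown above, not the pipeline's actual implementation:

```python
import subprocess

def parse_labels(info_label_text):
    """Collect (id, name) pairs from lines shaped like
    '4, WM left lateral corticospinal tract, PAM50_atlas_04.nii.gz',
    skipping headers and comments."""
    labels = []
    for line in info_label_text.splitlines():
        parts = [p.strip() for p in line.split(",")]
        if len(parts) >= 3 and parts[0].isdigit():
            labels.append((parts[0], parts[1]))
    return labels

def extract_all(info_label_path, metric_img, atlas_dir, vertfile, out_prefix):
    """Run sct_extract_metric once per label listed in info_label.txt,
    mirroring the command used earlier in this thread."""
    with open(info_label_path) as f:
        labels = parse_labels(f.read())
    for label_id, _name in labels:
        subprocess.run([
            "sct_extract_metric", "-i", metric_img, "-f", atlas_dir,
            "-l", label_id, "-vert", "2:12", "-vertfile", vertfile,
            "-o", f"{out_prefix}_{label_id}.csv", "-append", "1",
        ], check=True)
```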