Closed · rohanbanerjee closed this issue 8 months ago
The centerline/segmentation and disc labels are a mandatory requirement for the `preprocess_normalize.py` pipeline to run. For my dataset, I am using the centerline and disc labels as my derivatives.

My first guess was that there might be some issue with the centerline, i.e. the centerline might not be covering the whole spinal cord, or the disc labels might not have all 26 discs labeled. I have verified this, and it is not the case. Attaching some files to confirm this.

Hope this clarifies the issue a little more.
Hmm. I am trying the reproduction steps on the `master` branch, however I'm running into an error inside the `preprocess_normalize.py` script:
```
Status:   0%|          | 0.00/1.00 [00:00<?, ?B/s]sub-HarshmanDobby SC segmentation exists. Extracting centerline from /home/joshua/repos/template/bids_data/derivatives/labels/sub-HarshmanDobby/anat/sub-HarshmanDobby_T2_label-SC_mask.nii.gz
Status: 100%|##########| 1.00/1.00 [00:02<00:00,  2.22s/B]
Traceback (most recent call last):
  File "/home/joshua/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/223.7571.203/plugins/python/helpers/pydev/pydevd.py", line 1496, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/joshua/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/223.7571.203/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/joshua/repos/template/preprocess_normalize.py", line 698, in <module>
    main(sys.argv[1])
  File "/home/joshua/repos/template/preprocess_normalize.py", line 672, in main
    points_average_centerline, position_template_discs = average_centerline(list_centerline = list_centerline,
  File "/home/joshua/repos/template/preprocess_normalize.py", line 273, in average_centerline
    distances_discs_from_C1[Centerline.regions_labels[disc_number]] = distances_discs_from_C1[Centerline.regions_labels[disc_number - 1]] + average_length[Centerline.regions_labels[disc_number - 1]][1]
KeyError: 'L4'
```
The error occurs inside the `average_centerline()` function of the script.
The error occurs because:

- `disc_number` is 24
- `Centerline.regions_labels[disc_number - 1]` is `'L4'`
- `average_length` is a dict that only goes up to `'L3'`, hence the error.

Digging in further:
- `average_length` comes from `length_vertebral_labels`, which comes from `new_vert_length`, which comes from `list_centerline[0].distance_from_C1label`.
- `centerline.distance_from_C1label` comes from `centerline.index_disc`, which comes from `centerline.list_labels`.
- `list_labels` is a hardcoded list inside of `spinalcordtoolbox/types.py` that stops at 23.

So, it seems like, before the straightening, there is indeed an issue with the centerline. The issue appears to be because SCT hardcodes a list of labels inside the `Centerline` object that stops at 23. I wonder why?
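To make the failure mode concrete, here is a toy reproduction of the `KeyError`. The names mirror the ones in the traceback, but the dict contents are made-up illustrative values, not SCT's real tables:

```python
# Toy reproduction of the KeyError (illustrative values, not SCT's real data).
# regions_labels maps disc numbers to names; average_length is only populated
# for discs covered by the hardcoded list_labels, which stops at 23.
regions_labels = {22: 'L3', 23: 'L4', 24: 'L5'}
average_length = {'L3': ('L3', 30.0)}            # stops at 'L3'
distances_discs_from_C1 = {'L3': 400.0, 'L4': 430.0}

disc_number = 24
missing = None
try:
    # Same expression as line 273 of preprocess_normalize.py:
    distances_discs_from_C1[regions_labels[disc_number]] = (
        distances_discs_from_C1[regions_labels[disc_number - 1]]
        + average_length[regions_labels[disc_number - 1]][1]
    )
except KeyError as err:
    missing = err.args[0]    # average_length has no 'L4' entry

print(f"KeyError: {missing!r}")  # KeyError: 'L4'
```

The lookup `distances_discs_from_C1['L4']` succeeds (it was filled on the previous iteration), so it is the `average_length['L4']` lookup that raises.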
On a whim, I tried extending the `list_labels` list to 26 to see what might happen. And, the error goes away! Plus, the straightening runs fine? I'm looking at the straightened image in FSLeyes and I think it has 26 discs, but I'm not 100% sure.

So the question now is: why was the `list_labels` list in SCT limited to 23 in the first place?
> So the question now is: why was the `list_labels` list in SCT limited to 23 in the first place?
It looks like `list_labels` used to go up to 30, but it was shortened in 2017 during a large refactoring effort: https://github.com/spinalcordtoolbox/spinalcordtoolbox/pull/1378. Specifically, commit https://github.com/spinalcordtoolbox/spinalcordtoolbox/commit/ecb0a6a8d25d648b0ff8ac51b1d38decea9754b8.
This commit doesn't really provide context for the change, though. I wonder if @benjamindeleener could provide some context as the author of the commit? :thinking:
I am not sure of the exact reason, but it might be because the PAM50 template did not cover the full spinal cord.
@joshuacwnewton Do you think that if the script `preprocess_normalize.py` dynamically created the `list_labels` instead of taking the values from `spinalcordtoolbox/types.py`, it would solve this problem?
> @joshuacwnewton Do you think that if the script `preprocess_normalize.py` dynamically created the `list_labels` instead of taking the values from `spinalcordtoolbox/types.py`, it would solve this problem?
Good question. Yes, this works. To test quickly, I did the following in the `preprocess_normalize.py` script:
```diff
 from spinalcordtoolbox import utils as sct
-from spinalcordtoolbox.types import Centerline
 from spinalcordtoolbox import straightening
 from spinalcordtoolbox.centerline.core import ParamCenterline
 from spinalcordtoolbox.centerline.core import get_centerline
 from spinalcordtoolbox.image import Image
 from spinalcordtoolbox.download import download_data, unzip
+from spinalcordtoolbox.types import Centerline
+
+# list_labels currently stops at 23, but we may have more discs.
+# So, temporarily patch the Centerline class to have a wider range.
+# See: https://github.com/neuropoly/template/issues/65
+Centerline.list_labels = [50, 49] + list(range(27))
```
And, the script functions as intended. (Of course, you can do this dynamically, as you mention, based on `last_disc + 1` instead of a hardcoded `27`.)
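The dynamic version could look something like the sketch below. `build_list_labels` is a hypothetical helper name (not an SCT function); the leading `50, 49` values simply mirror the patched list in the diff above, and `last_disc` would come from the pipeline's configuration:

```python
def build_list_labels(last_disc):
    """Hypothetical helper: rebuild the label list from a configurable last_disc.

    The special values 50 and 49 at the front mirror the patched list used in
    the diff above; last_disc would be read from configuration.json rather
    than hardcoded.
    """
    return [50, 49] + list(range(last_disc + 1))

# With last_disc = 26, this reproduces the hardcoded [50, 49] + list(range(27)):
labels = build_list_labels(26)
print(labels[:4])   # [50, 49, 0, 1]
print(len(labels))  # 29
```

The class attribute could then be set once, early in `main()`, via `Centerline.list_labels = build_list_labels(last_disc)`, instead of relying on the default in `spinalcordtoolbox/types.py`.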
Hello, I am opening this issue regarding this: https://github.com/neuropoly/template/issues/60#issuecomment-1666162316

SCT version used: 6.0 (branch: jn/jn/60-fix-indexerror-during-straightening)

Data for reproducing the issue is at: `duke/temp/rohan/bids_data`

To reproduce the issue:

1. `git clone https://github.com/neuropoly/template.git`
2. Copy `configuration_default.json` and rename it as `configuration.json`
3. Edit `configuration.json`
4. Run `python preprocess_normalize.py configuration.json`

Current output: `sub-HarshmanDobby_T2_straight_norm.nii.gz` under `BIDS_DATA/derivatives/sct_straighten_spinalcord` has 23 disc levels.

Expected output: `sub-HarshmanDobby_T2_straight_norm.nii.gz` under `BIDS_DATA/derivatives/sct_straighten_spinalcord` should have 26 disc levels. This is because we set the value of `last_disc` as 26.
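One way to verify the current vs. expected output is to count the distinct disc values in the label volume. A minimal sketch with NumPy, using a synthetic array for illustration (with real data, the array would be loaded from the NIfTI file via nibabel, e.g. `np.asanyarray(nib.load(path).dataobj)`):

```python
import numpy as np

# Synthetic label volume standing in for the straightened disc labels;
# discs 1..26 are placed along the last (superior-inferior) axis.
labels = np.zeros((4, 4, 30), dtype=int)
for disc in range(1, 27):
    labels[2, 2, disc] = disc

disc_values = np.unique(labels)
disc_values = disc_values[disc_values > 0]   # drop the background value 0

print(len(disc_values))  # 26
```

If the output file were affected by the `list_labels` truncation, the same count on the real data would come out as 23 instead of the expected 26.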