CompImg / LST-AI

LST-AI - Deep Learning Ensemble for Accurate MS Lesion Segmentation
https://doi.org/10.1016/j.nicl.2024.103611
MIT License

Run the Docker and singularity file on HPC #21

Closed: lyhoo23618-csu closed this issue 4 weeks ago

lyhoo23618-csu commented 2 months ago

Hello everyone,

I am trying to run this great toolbox on HPC, so I used Apptainer to build a SIF image from the corresponding Docker container (jqmcginnis/lst-ai:latest) and then ran that image with Apptainer on my cluster. However, it failed in the middle, and the log file showed:

```
... running postprocessing...
exporting segmentation...
Limiting the number of threads to 64
Limiting the number of threads to 64
Running LST Segmentation.
Running segmentation on /GPU:0.
Running model 0.
/usr/local/lib/python3.10/dist-packages/keras/src/layers/activations/leaky_relu.py:41: UserWarning: Argument `alpha` is deprecated. Use `negative_slope` instead.
  warnings.warn(
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/keras/src/ops/operation.py", line 234, in from_config
    return cls(**config)
  File "/usr/local/lib/python3.10/dist-packages/keras/src/layers/convolutional/conv3d_transpose.py", line 120, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/dist-packages/keras/src/layers/convolutional/base_conv_transpose.py", line 94, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/dist-packages/keras/src/layers/layer.py", line 266, in __init__
    raise ValueError(
ValueError: Unrecognized keyword arguments passed to Conv3DTranspose: {'groups': 1}
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "/usr/local/bin/lst", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/custom_apps/lst_directory/LST-AI/LST_AI/lst", line 320, in <module>
    unet_segmentation(mni_t1=path_mni_stripped_t1w,
  File "/custom_apps/lst_directory/LST-AI/LST_AI/segment.py", line 102, in unet_segmentation
    mdl = load_custom_model(model, compile=False)
  File "/custom_apps/lst_directory/LST-AI/LST_AI/custom_tf.py", line 28, in load_custom_model
    return tf.keras.models.load_model(model_path, custom_objects=custom_objects, compile=compile)
  File "/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_api.py", line 189, in load_model
    return legacy_h5_format.load_model_from_hdf5(
  File "/usr/local/lib/python3.10/dist-packages/keras/src/legacy/saving/legacy_h5_format.py", line 133, in load_model_from_hdf5
    model = saving_utils.model_from_config(
  File "/usr/local/lib/python3.10/dist-packages/keras/src/legacy/saving/saving_utils.py", line 85, in model_from_config
    return serialization.deserialize_keras_object(
  File "/usr/local/lib/python3.10/dist-packages/keras/src/legacy/saving/serialization.py", line 495, in deserialize_keras_object
    deserialized_obj = cls.from_config(
  File "/usr/local/lib/python3.10/dist-packages/keras/src/models/model.py", line 521, in from_config
    return functional_from_config(
  File "/usr/local/lib/python3.10/dist-packages/keras/src/models/functional.py", line 477, in functional_from_config
    process_layer(layer_data)
  File "/usr/local/lib/python3.10/dist-packages/keras/src/models/functional.py", line 457, in process_layer
    layer = saving_utils.model_from_config(
  File "/usr/local/lib/python3.10/dist-packages/keras/src/legacy/saving/saving_utils.py", line 85, in model_from_config
    return serialization.deserialize_keras_object(
  File "/usr/local/lib/python3.10/dist-packages/keras/src/legacy/saving/serialization.py", line 504, in deserialize_keras_object
    deserialized_obj = cls.from_config(cls_config)
  File "/usr/local/lib/python3.10/dist-packages/keras/src/ops/operation.py", line 236, in from_config
    raise TypeError(
TypeError: Error when deserializing class 'Conv3DTranspose' using config={'name': 'conv3d_transpose', 'trainable': True, 'dtype': 'float32', 'filters': 140, 'kernel_size': [4, 4, 4], 'strides': [2, 2, 2], 'padding': 'same', 'data_format': 'channels_last', 'groups': 1, 'activation': 'linear', 'use_bias': False, 'kernel_initializer': {'class_name': 'HeUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None, 'output_padding': None}.
```

Exception encountered: Unrecognized keyword arguments passed to Conv3DTranspose: {'groups': 1}
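From the traceback, this looks like a Keras version mismatch: the model's saved H5 config contains a groups argument that the installed Keras build's Conv3DTranspose no longer accepts. One possible workaround is to filter such keys out of the saved config before the layer is reconstructed. The sketch below shows the filtering idea; strip_unknown_kwargs and the CompatConv3DTranspose subclass are my own illustrative names, not part of LST-AI.

```python
def strip_unknown_kwargs(config, unknown=("groups",)):
    """Return a copy of a saved layer config without keys that the
    installed Keras version's Conv3DTranspose does not accept."""
    return {k: v for k, v in config.items() if k not in unknown}

# Hypothetical use with Keras (class and function names taken from the
# traceback above; untested against the actual LST-AI models):
#
#   class CompatConv3DTranspose(keras.layers.Conv3DTranspose):
#       @classmethod
#       def from_config(cls, config):
#           return cls(**strip_unknown_kwargs(config))
#
#   tf.keras.models.load_model(
#       model_path,
#       custom_objects={"Conv3DTranspose": CompatConv3DTranspose},
#       compile=False,
#   )

cfg = {"filters": 140, "kernel_size": [4, 4, 4], "groups": 1}
print(strip_unknown_kwargs(cfg))  # {'filters': 140, 'kernel_size': [4, 4, 4]}
```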

Any help or guidance would be greatly appreciated.

Hao

jqmcginnis commented 2 months ago

@lyhoo23618-csu, I will look into this and report back.

However, in the meantime, I would like to disclose that @darkstorm4hack is not a maintainer of this repository. Additionally, I am unsure which file the links posted by @darkstorm4hack are directing us to, as there is no description and the same link has been posted three times. Please be careful!

lyhoo23618-csu commented 2 months ago

Hi, here’s an update: the Singularity image (SIF) built from the CPU version of the Docker container (docker pull jqmcginnis/lst-ai_cpu) works well.
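For reference, the Apptainer workflow is roughly the following; the image name comes from this thread, while the output filename is an arbitrary placeholder, so adjust for your cluster:

```shell
# Build a SIF image directly from the CPU Docker image on Docker Hub;
# apptainer pulls and converts the layers itself, so no local Docker
# daemon is needed on the cluster
apptainer build lst-ai_cpu.sif docker://jqmcginnis/lst-ai_cpu

# Sanity-check that the container starts and the lst entry point is reachable
apptainer exec lst-ai_cpu.sif lst --help
```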

However, I’m a bit confused about the annotation file. Based on the publications describing this tool, I understood that label 1 represents periventricular (PV) lesions, label 2 juxtacortical, label 3 subcortical, and label 4 infratentorial. In my segmentation output, lesions located in both the periventricular and subcortical regions are labeled 2 (green in the figure), while part of the subcortical region is labeled 3 (blue in the figure). Did I misunderstand the meaning of these labels?

[Figure: 2D slice of the segmentation output showing lesions labeled 2 (green) and 3 (blue)]

twiltgen commented 2 months ago

Hi @lyhoo23618-csu, thank you very much for using our tool and providing feedback!

You have understood the labeling correctly.

We use 3D connected components to identify and label lesions. This method can sometimes result in lesions located in the subcortical region being labeled as juxtacortical if they are adjacent to another juxtacortical lesion. Because the labeling is based on 3D connected components, it is difficult to determine from the 2D image alone if this is the case in your example.
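To illustrate why a lesion can carry a single label even when it spans two anatomical zones, here is a toy, pure-Python sketch of 6-connected 3D component labeling. The function name, the flood-fill implementation, and the zone assignments in the comments are illustrative only; the actual pipeline presumably uses an optimized library.

```python
from collections import deque

def label_components_3d(voxels):
    """Group (x, y, z) voxel coordinates into 6-connected components,
    mimicking the 3D connected-component step described above."""
    voxels = set(voxels)
    labels, current = {}, 0
    for start in sorted(voxels):
        if start in labels:
            continue
        current += 1                      # new lesion found; assign next label
        labels[start] = current
        queue = deque([start])
        while queue:                      # flood-fill across face-adjacent voxels
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                               (0,-1,0), (0,0,1), (0,0,-1)):
                nb = (x + dx, y + dy, z + dz)
                if nb in voxels and nb not in labels:
                    labels[nb] = current
                    queue.append(nb)
    return labels

# A lesion spanning two zones: these voxels touch in 3D, so they merge into
# ONE component and would receive a single location label.
lesion = [(0, 0, 0), (0, 0, 1), (0, 0, 2),  # e.g. juxtacortical part (toy zones)
          (1, 0, 2)]                        # connected voxel in a subcortical zone
separate = [(5, 5, 5)]                      # an unconnected second lesion
labels = label_components_3d(lesion + separate)
print(len(set(labels.values())))  # 2 components: the touching voxels are one lesion
```

The key point is visible in the output: the subcortical voxel (1, 0, 2) inherits the same component as the juxtacortical voxels it touches, which is exactly the merging behavior described above.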

I hope this explanation is helpful. If you need further assistance, we can also get in touch via email and take a closer look at your 3D results.