I trained a model for only one fold and tested it with nnUNet_predict, which seems to show good performance. So I am wondering whether there is a way to serve the trained model using TorchServe. The trained model I tried to use is located at:
nnUNet_trained_models/nnUNet/3d_fullres/.../nnUNetTrainerV2__nnUNetPlansv2.1/fold_0
I first tried to use the output model model_final_checkpoint.model directly.
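In practice that means archiving the checkpoint with torch-model-archiver and registering the resulting .mar with TorchServe. A representative sketch of that step (the handler and paths are placeholders, not the exact invocation; the model name and version are the ones visible in the worker log below):

```sh
# Sketch only: placeholder handler and paths; model name/version match the log below.
torch-model-archiver \
  --model-name nnunet-lungseg64 \
  --version 0.9 \
  --serialized-file model_final_checkpoint.model \
  --handler nnunet_handler.py \
  --export-path model_store

torchserve --start --ncs --model-store model_store \
  --models nnunet-lungseg64=nnunet-lungseg64.mar
```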
This was not successful due to the following error:
2022-11-13T00:16:33,159 [INFO ] W-9000-nnunet-lungseg64_0.9-stdout MODEL_LOG - File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/ts/torch_handler/base_handler.py", line 115, in _load_torchscript_model
2022-11-13T00:16:33,161 [INFO ] W-9000-nnunet-lungseg64_0.9-stdout MODEL_LOG - return torch.jit.load(model_pt_path, map_location=self.device)
2022-11-13T00:16:33,161 [INFO ] W-9000-nnunet-lungseg64_0.9-stdout MODEL_LOG - File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/jit/_serialization.py", line 162, in load
2022-11-13T00:16:33,162 [INFO ] W-9000-nnunet-lungseg64_0.9-stdout MODEL_LOG - cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
2022-11-13T00:16:33,162 [INFO ] W-9000-nnunet-lungseg64_0.9-stdout MODEL_LOG - RuntimeError: PytorchStreamReader failed locating file constants.pkl: file not found
So the trained model did not appear to be in TorchScript format, and I changed the nnU-Net code (in nnunet/training/network_training/network_trainer.py) so that the trained model is also saved in TorchScript format.
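In essence the change just scripts the trained network and saves the result next to the regular checkpoint. A minimal sketch of that kind of hook (the function name and the exact call site are assumptions, not the verbatim patch):

```python
import torch
import torch.nn as nn

def export_torchscript(network: nn.Module, out_path: str) -> None:
    """Script the trained network and save it in TorchScript format so that
    TorchServe's default handler can load it with torch.jit.load()."""
    network.eval()
    scripted = torch.jit.script(network)  # this is the call that fails below
    scripted.save(out_path)

# called at checkpoint time, roughly like:
# export_torchscript(self.network, fname.replace(".model", ".pt"))
```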
This caused an error while running torch.jit.script:
RuntimeError:
Expected integer literal for index. ModuleList/Sequential indexing is only supported with integer literals. Enumeration is supported, e.g. 'for index, v in enumerate(self): ...':
File "/home/ubuntu/code/nnUNet/nnunet/network_architecture/generic_UNet.py", line 400
# module: ModuleInterface = self.conv_blocks_context[d]
# x = module(x)
x = self.conv_blocks_context[d](x)
~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
skips.append(x)
if not self.convolutional_pooling:
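The error points at the loop in Generic_UNet's forward pass that indexes self.conv_blocks_context with the loop variable d. As the message says, TorchScript only allows ModuleList indexing with integer literals, while iterating the list with enumerate() is supported. A self-contained toy illustration of the scriptable pattern (not a patch of generic_UNet.py):

```python
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Toy stand-in for an encoder path whose ModuleList is enumerated
    rather than indexed with a loop variable, which torch.jit.script accepts."""

    def __init__(self) -> None:
        super().__init__()
        self.conv_blocks_context = nn.ModuleList(
            [nn.Conv3d(1 if i == 0 else 8, 8, kernel_size=3, padding=1)
             for i in range(3)]
        )

    def forward(self, x: torch.Tensor):
        skips = []
        # 'self.conv_blocks_context[d]' with a loop variable d is rejected;
        # enumerating the ModuleList is the form TorchScript supports.
        for d, block in enumerate(self.conv_blocks_context):
            x = block(x)
            if d < len(self.conv_blocks_context) - 1:
                skips.append(x)
        return x, skips

scripted = torch.jit.script(ToyEncoder())  # compiles without the indexing error
```

Every place in the forward pass that indexes a ModuleList with a loop variable would need the same treatment before torch.jit.script can succeed on the full network.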
I dug a bit deeper but was not able to get it to work. Has anyone tried the same? Thanks.