yhygao / CBIM-Medical-Image-Segmentation

A PyTorch framework for medical image segmentation
Apache License 2.0
260 stars 46 forks

prediction.py #19

Closed snaka99 closed 1 year ago

snaka99 commented 1 year ago
Hello @yhygao! I'm trying to run prediction.py. I copied the preprocessing code from the training script into the preprocessing step of prediction.py as you suggested, but I get an error: "RuntimeError: Sizes of tensors must match except in dimension 1. Expected 170 but got size 169 for tensor number 2 in the list." Do you have any idea what could be causing it?

Thank you in advance!

Originally posted by @snaka99 in https://github.com/yhygao/CBIM-Medical-Image-Segmentation/issues/18#issuecomment-1490539019

yhygao commented 1 year ago

Could you please provide more context so I can identify the problem, e.g. which line of the script raises the error? This kind of error typically means a tensor has the wrong size, possibly because of interpolation. You could also debug to check whether the tensor sizes are what you expect.
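For context, this kind of off-by-one mismatch usually comes from stride-2 downsampling of an odd-sized axis: the floor behaviour of pooling or strided convolution drops one element, so the upsampled decoder feature map no longer matches the encoder skip connection when the two are concatenated. A minimal sketch of the arithmetic (pure Python; `pooled` is a hypothetical helper, not part of the framework):

```python
def pooled(n, kernel=2, stride=2):
    """Output length of one stride-2 pooling stage (floor behaviour)."""
    return (n - kernel) // stride + 1

# An odd axis of 339 pools to 169, while a path that expects the even
# size 340 yields 170 -- the off-by-one behind the reported error
# "Expected 170 but got size 169".
print(pooled(339))  # 169
print(pooled(340))  # 170
```

Padding or cropping the input to a size divisible by the total downsampling factor is the usual way to avoid this.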

snaka99 commented 1 year ago

Yes, the error looks like this (screenshot attached):

Dataset: acdc, model: medformer, dimension: 2d, checkpoint: fold_0_best.pth

yhygao commented 1 year ago

I see the problem. The prediction.py I provide is for 3D models. For 2D models, you need to modify the corresponding config file and turn on sliding-window inference by setting sliding_window: True. Also add a new argument window_size: [256, 256]. 256 is just an example; make sure it is consistent with training.
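The change described above would look roughly like this in the dataset/model config file (a sketch; the exact key placement in the YAML is an assumption, and the window size must match whatever was used during training):

```yaml
# inference settings for a 2D model
sliding_window: True        # enable sliding-window inference
window_size: [256, 256]     # example value; must match the training crop size
```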

I haven't tested this yet, as I've been very busy recently; I will have time to fix it later this week. The modification above should work for 2D models, and even if it doesn't, it won't be difficult to fix. You can try debugging the tensor sizes or the order of the axes.

snaka99 commented 1 year ago

Hi @yhygao, I'm sorry to bother you again, but I tried the 3D medformer model this time and I get this error:

"NotImplementedError: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::slow_conv3d_forward' is only available for these backends: [CPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher]."

Could it be a problem with CUDA, or have I done something wrong? P.S. I am running your framework in Google Colab.

yhygao commented 1 year ago

I have never had this issue; it might be caused by the environment. It would help if you could provide more context, like the model, dataset, and whether it is a 2D or 3D model, so I can reproduce the error.

yhygao commented 1 year ago

I've pushed a new update. The new prediction.py supports both 2D and 3D inference.