Closed snaka99 closed 1 year ago
Could you please provide more context so I can identify the problem? For example, which line of the script raises this error? This error typically means a tensor has an unexpected size, possibly because of interpolation. You can also debug to check whether the tensor sizes are what you expect.
Yes, the error looks like this:
Dataset: acdc, model: medformer, dimension: 2d, and fold_0_best.pth for the checkpoint.
I see the problem. The prediction.py I provide is for 3D models. For 2D models, you need to modify the corresponding config file and turn on sliding-window inference by setting `sliding_window: True`. Also add a new argument, `window_size: [256, 256]`. 256 is just an example; make sure it is consistent with the window size used during training.
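Based on the description above, the config edit would look roughly like this (the key names follow the comment above, but the exact schema of the repository's config files may differ, so check your own config):

```yaml
# Hypothetical excerpt of a 2D inference config.
sliding_window: True      # enable sliding-window inference for 2D models
window_size: [256, 256]   # must match the patch size used during training
```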
I haven't tested it yet, as I've been very busy recently. I will have time to fix it later this week. The modification above should work for 2D models; even if it doesn't, it should not be difficult to fix. You can try debugging the tensor sizes or the order of the axes.
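For anyone debugging this, the sliding-window idea the config enables can be sketched framework-agnostically. This is a minimal illustration, not the repository's actual implementation; `run_model`, `window`, and `stride` are hypothetical stand-ins:

```python
import numpy as np

def tile_starts(size, window, stride):
    # Start offsets that tile one axis, always including the last valid start
    # so the image edge is covered even when stride does not divide evenly.
    return sorted(set(list(range(0, size - window, stride)) + [size - window]))

def sliding_window_inference_2d(image, window, stride, run_model):
    """Tile a 2D image, run the model on each patch, and average overlaps.

    image: (H, W) array; run_model: patch -> per-pixel score, same shape.
    """
    H, W = image.shape
    scores = np.zeros((H, W), dtype=np.float32)
    counts = np.zeros((H, W), dtype=np.float32)
    for y0 in tile_starts(H, window, stride):
        for x0 in tile_starts(W, window, stride):
            patch = image[y0:y0 + window, x0:x0 + window]
            scores[y0:y0 + window, x0:x0 + window] += run_model(patch)
            counts[y0:y0 + window, x0:x0 + window] += 1
    return scores / counts  # every pixel is covered at least once

# Sanity check with an identity "model": stitching must reproduce the image.
img = np.arange(16, dtype=np.float32).reshape(4, 4)
out = sliding_window_inference_2d(img, window=2, stride=2, run_model=lambda p: p)
```

Printing `out.shape` at each step like this is also a quick way to find where a 2D/3D size mismatch enters the pipeline.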
Hi @yhygao, I am sorry to bother you again, but I tried the 3D MedFormer model this time and I get this error:
"NotImplementedError: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::slow_conv3d_forward' is only available for these backends: [CPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher]."
Could it be a problem with CUDA, or have I done something wrong? P.S. I am running your framework in Google Colab.
I have never had this issue. It might be caused by the environment. It would be better to provide more context, such as the model, the dataset, and whether it is a 2D or 3D model, so I can reproduce this error.
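One environment check worth trying (an assumption on my part, not confirmed for this thread): on the CUDA backend, Conv3d normally dispatches to cuDNN, and `aten::slow_conv3d_forward` is a CPU-only fallback, so this error can show up when cuDNN is unavailable or disabled in the installed PyTorch build. A quick diagnostic:

```python
import torch

# Check that the PyTorch build in this environment (e.g. Colab) can
# actually use CUDA and cuDNN for 3D convolutions.
print("torch version: ", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
print("cuDNN available:", torch.backends.cudnn.is_available())
print("cuDNN enabled:  ", torch.backends.cudnn.enabled)
```

If cuDNN shows as unavailable, reinstalling a PyTorch build that matches the runtime's CUDA version is the usual fix.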
I've pushed a new update. The new prediction.py supports both 2D and 3D inference.
thank you in advance!
Originally posted by @snaka99 in https://github.com/yhygao/CBIM-Medical-Image-Segmentation/issues/18#issuecomment-1490539019