Alisoltan82 opened 5 months ago
Looks like a CPU-GPU tensor device conversion issue. Are you using only GPUs for inference? Are there any issues or inconsistencies in the CPU/GPU input/output conversions?
I defined the device as torch.device('cuda'), and the same in the inference.json file.
Noticeably, when I change it to 'cpu', it works.
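In case it helps to illustrate the device-consistency point above, here is a minimal sketch (not the project's actual inference script; the input shape is a placeholder, and Model_lowers.pt is simply the file named in this issue) that loads the TorchScript model and keeps both the model and the input tensor on the same device:

```python
# Minimal sketch, assuming Model_lowers.pt is available locally.
# The input shape below is a placeholder, not the model's real one.
import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Load the scripted model directly onto the target device and move it there
# explicitly, so its parameters/buffers live on the same device as the input.
model = torch.jit.load('Model_lowers.pt', map_location=device)
model.to(device)
model.eval()

# Placeholder 3D volume: batch of 1, 1 channel, 96^3 voxels.
x = torch.randn(1, 1, 96, 96, 96, device=device)

with torch.no_grad():
    out = model(x)

print(out.shape, out.device)
```

If this still raises the same NotImplementedError on GPU but works with the device set to 'cpu', the convolution is likely being dispatched to the slow reference path, which, as far as I know, only has a CPU kernel.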
Good day,
Trying to run the whole-body segmentation model (Model_lowers.pt) on a Colab GPU and getting this error:
error message: NotImplementedError: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDA' backend.
- Device detected by torch: Tesla 4
- CUDA version: 12.1
- PyTorch installed: 2.1.0+cu121
- inference.json device line: torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
Any advice on how to resolve this?
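To help narrow this down, here is a hedged diagnostic sketch (generic PyTorch checks, not something prescribed by this model's documentation). Since aten::slow_conv3d_forward is the fallback convolution path that, to my understanding, has no CUDA kernel, it is worth checking whether cuDNN is actually usable in the Colab runtime and where the scripted model's parameters end up after loading:

```python
# Hedged diagnostic sketch: report where the scripted model's parameters live
# and whether cuDNN is usable, since aten::slow_conv3d_forward is the
# non-cuDNN fallback convolution. The file name is taken from the issue.
import torch

print('torch version:', torch.__version__)
print('cuda available:', torch.cuda.is_available())
print('cudnn available:', torch.backends.cudnn.is_available())
print('cudnn enabled:', torch.backends.cudnn.enabled)

model = torch.jit.load('Model_lowers.pt', map_location='cpu')
param_devices = {str(p.device) for p in model.parameters()}
print('parameter devices after load:', param_devices)
```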