KumoLiu opened 4 months ago
It looks to have died on the second-to-last cell:
# rand_elastic (a Rand3DElasticd transform) and data_dict are defined in earlier notebook cells
deformed_data_dict = rand_elastic(data_dict)
print(f"image shape: {deformed_data_dict['image'].shape}")
image, label = deformed_data_dict["image"][0], deformed_data_dict["label"][0]
plt.figure("visualise", (8, 4))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[:, :, 5], cmap="gray")
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[:, :, 5])
plt.show()
I can't tell from that whether it's the elastic deformation or the matplotlib calls causing the crash; if the environment isn't right, compiled code in either could have crashed.
It would be worth running with CUDA_LAUNCH_BLOCKING=1 if the test runs pre-processing on the GPU.
I don't think the transforms are run on the GPU; you can see this in the tutorial: https://github.com/Project-MONAI/tutorials/blob/main/modules/3d_image_transforms.ipynb
There may be some interaction with the CUDA components in PyTorch anyway, so it's worth trying. Are there any other environment variables we could set to get more debug output? We don't have much else to go on, since we can't replicate the issue locally.
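For reference, the environment variables mentioned in this thread can be combined as below; TORCH_SHOW_CPP_STACKTRACES is an additional suggestion not discussed above, and whether each one helps depends on where the crash actually occurs:

```shell
export CUDA_LAUNCH_BLOCKING=1        # serialize CUDA kernel launches so errors surface at the failing call
export PYTHONFAULTHANDLER=1          # dump Python tracebacks on a fatal signal (segfault, abort)
export TORCH_SHOW_CPP_STACKTRACES=1  # include C++ frames in PyTorch error messages
```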
Tried with CUDA_LAUNCH_BLOCKING=1; the same error occurred.
Steps to reproduce:
docker pull nvcr.io/nvidia/pytorch:24.03-py3
docker run ...
# install monai
git clone https://github.com/Project-MONAI/MONAI.git
python -m pip install --upgrade pip wheel
python -m pip install -r requirements-dev.txt
BUILD_MONAI=0 python setup.py develop
# install tutorial
git clone https://github.com/Project-MONAI/tutorials.git
python -m pip install -r requirements.txt; python -m pip list
# run notebook
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=1 ./runner.sh -t modules/3d_image_transforms.ipynb
Next thing to try is enabling fault handling (https://docs.python.org/3/library/faulthandler.html): add PYTHONFAULTHANDLER=1 to the command line and see if we get a stack trace when the fault happens. If this doesn't work, it's a matter of tracing the code somehow to see which line is the last to execute before the segfault. That's painful, but we can possibly find a tool to do it cleanly.
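When the environment variable can't be set easily (e.g. inside a notebook cell), the same fault handler can be enabled programmatically; this is a minimal sketch, equivalent to running with PYTHONFAULTHANDLER=1 or `python -X faulthandler`:

```python
import faulthandler

# On a fatal signal (SIGSEGV, SIGABRT, SIGFPE, ...), dump the Python
# traceback of every thread to stderr before the process dies.
faulthandler.enable()

print(faulthandler.is_enabled())  # True
```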
I traced the error back to the convolution in the Gaussian filter used by the Rand3DElasticd transform. I suspect this issue is related to a previous bug I encountered: the PyTorch 24.03 container may not include the commit mentioned in that ticket, since the code runs successfully under PyTorch's nightly build. I will confirm this today.
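For readers unfamiliar with the operation that crashed: the Gaussian filter inside the transform is a separable smoothing convolution applied to a 3D volume. The sketch below is a CPU-only NumPy illustration of that technique, not MONAI's implementation; the kernel radius rule and sigma are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel1d(sigma: float, radius: int) -> np.ndarray:
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_smooth3d(vol: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Separable 3D Gaussian smoothing: one 1-D convolution per axis."""
    radius = int(3 * sigma + 0.5)  # common truncation heuristic (assumption)
    k = gaussian_kernel1d(sigma, radius)
    out = vol.astype(float)
    for axis in range(3):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out
        )
    return out

np.random.seed(0)
vol = np.random.rand(8, 8, 8)
smoothed = gaussian_smooth3d(vol, sigma=1.0)
print(smoothed.shape)  # (8, 8, 8)
```

Smoothing a random volume like this reduces its voxel-wise variance, which is a quick sanity check that the filter is doing something reasonable.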