Same issue here, literally caused my server to shut down :(
Just wanted to follow up. It seems I was wrong to assume that the input file size was the main factor in the memory issues; it is actually the spatial dimensions of the input. Inference required predictions over 630 patches, which then had to be resampled for export. The resampling step appears to be the culprit, as outlined in Issue #2192.
Rather than breaking my image up into chunks, I used SimpleITK to resample my input image to the original_median_spacing_after_transp listed in the output plans.json file (luckily this was created before the process crashed). Inference on the resampled image completed without issue. I will close this issue, as it is similar to #2192.
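For reference, here is a minimal sketch of that resampling step, assuming a .nii.gz input and the "original_median_spacing_after_transp" entry from the plans.json written by nnU-Net. The file paths are placeholders, and reversing the spacing order to match SimpleITK's (x, y, z) convention is an assumption you should verify against your own data.

```python
# Sketch: resample an input image to the target spacing from nnU-Net's plans.json
import json
import SimpleITK as sitk

input_path = "case_0000.nii.gz"              # hypothetical input image
output_path = "case_0000_resampled.nii.gz"   # hypothetical output path
plans_path = "plans.json"                    # plans file produced by nnU-Net

image = sitk.ReadImage(input_path)

with open(plans_path) as f:
    plans = json.load(f)
# Assumption: plans.json stores spacing in nnU-Net's transposed (z, y, x) order,
# while SimpleITK expects (x, y, z), so we reverse it here.
target_spacing = list(reversed(plans["original_median_spacing_after_transp"]))

# Compute the output size so the physical extent of the image is preserved.
original_spacing = image.GetSpacing()
original_size = image.GetSize()
target_size = [
    int(round(sz * sp / tsp))
    for sz, sp, tsp in zip(original_size, original_spacing, target_spacing)
]

resampled = sitk.Resample(
    image,
    target_size,
    sitk.Transform(),        # identity transform
    sitk.sitkLinear,         # linear interpolation for CT intensities
    image.GetOrigin(),
    target_spacing,
    image.GetDirection(),
    0,                       # default (background) pixel value
    image.GetPixelID(),
)

sitk.WriteImage(resampled, output_path)
```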
Hello @andy-s-ding,
The model that I am using has a very low spacing in its plans.json:
"original_median_spacing_after_transp": [0.43164101243019104, 0.31200000643730164, 0.43164101243019104]
When I resample my file to this spacing, the resulting file is huge (>2 GB), and I still run into memory issues. Is there a way to work around this?
Hi, I had a question about inference. I trained nnUNet on a set of high-resolution CT scans (512x512x512, 0.1 mm^3/voxel), and inference has worked well on other CT scans of the same resolution and size. I recently tried to run inference on a CT scan obtained from a C-arm (300x300x300, 0.5 mm^3/voxel), which has a significantly smaller file size, but the process hangs before inference even starts. Checking my System Monitor, I can see that memory usage slowly creeps up over the course of a couple of hours until the process crashes completely.
I am running this on Ubuntu 22.04 LTS, and the file format is .nii.gz. Below is the output from my terminal. Any advice? Thank you!