Open uto-lt opened 11 months ago
Hi, thank you for your interest in our work! What are your batch size and cache rate for training? I believe the preprocessing for the FLARE dataset resamples it to a spacing of 1.0x1.0x1.2. It shouldn't cause any problems for either training or validation.
It would also be great if you could let me know the minimum and maximum resolutions along the x, y, and z axes across all samples in the dataset.
Hi, thanks for your reply @leeh43
I found a solution by passing device=torch.device('cpu') to sliding_window_inference: only the small patch currently being inferred stays on the GPU, while the remaining patches and the stitched output are cached in CPU memory.
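The idea behind that workaround can be sketched without MONAI. Below is a simplified 1-D stand-in (not MONAI's actual implementation): only the current patch is handed to the predictor, standing in for the GPU forward pass, while the aggregation buffers live in ordinary host memory, which is what device=torch.device('cpu') achieves in monai.inferers.sliding_window_inference.

```python
import numpy as np

def sliding_window_1d(volume, roi, step, predictor):
    """Simplified 1-D sketch of sliding-window inference.

    Only the current patch is passed to `predictor` (the GPU forward
    pass in the real MONAI call); the stitched output and overlap-count
    buffers stay in host memory, mirroring device=torch.device('cpu').
    """
    out = np.zeros_like(volume, dtype=float)   # aggregation buffer ("CPU")
    cnt = np.zeros_like(volume, dtype=float)   # how many windows hit each voxel
    for start in range(0, len(volume) - roi + 1, step):
        patch = volume[start:start + roi]      # small patch -> "GPU"
        pred = predictor(patch)                # model forward pass
        out[start:start + roi] += pred         # stitched back on "CPU"
        cnt[start:start + roi] += 1.0
    return out / np.maximum(cnt, 1.0)          # average overlapping regions

# With an identity predictor the stitched result reproduces the input.
vol = np.arange(8, dtype=float)
res = sliding_window_1d(vol, roi=4, step=2, predictor=lambda p: p)
```

Averaging the overlap regions is what keeps patch boundaries seamless; in MONAI the equivalent knobs are the overlap and mode arguments.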
By the way, I still have a question. I am participating in the FLARE23 competition, where the resolution along the x, y, and z axes can reach 512x512x512 across the dataset, and the competition requires us to use less than 28 GB of memory. If I want to use less memory, should the spacing be as large as possible or as small as possible?
Looking forward to your help~
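On the spacing question: the physical extent of a scan is fixed, so resampling to a larger spacing yields fewer voxels and therefore less memory (doubling the spacing along all three axes cuts the voxel count by 8x), at the cost of spatial detail. A rough sketch with hypothetical numbers (the helper and figures below are illustrative, not from the repository):

```python
# Hypothetical helper: physical extent stays fixed, so the new shape is
# old_shape * old_spacing / new_spacing, rounded per axis.
def voxels_after_resample(shape, old_spacing, new_spacing):
    return [round(s * o / n) for s, o, n in zip(shape, old_spacing, new_spacing)]

orig = (512, 512, 512)
fine = voxels_after_resample(orig, (1.0, 1.0, 1.2), (1.0, 1.0, 1.2))    # unchanged
coarse = voxels_after_resample(orig, (1.0, 1.0, 1.2), (2.0, 2.0, 2.4))  # 2x spacing

# Doubling the spacing on every axis shrinks the volume 8x.
ratio = (fine[0] * fine[1] * fine[2]) / (coarse[0] * coarse[1] * coarse[2])

# Rough float32 footprint of one single-channel volume, in GB.
def gb(shape):
    return shape[0] * shape[1] * shape[2] * 4 / 1024**3
```

So for a tight memory budget, larger spacing helps; the trade-off is that small structures may be lost at coarse resolution.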
Hello, thanks for your excellent work; I am very interested in it. Everything goes well when I use the public dataset from the code, but when I trained on my own dataset, which is large and has high-resolution images, it raised the following error.
It worked well during the training phase, but the code reported an error during the validation phase. Do you know how to solve this? I would appreciate it.