JuliaWolleb / Diffusion-based-Segmentation

This is the official Pytorch implementation of the paper "Diffusion Models for Implicit Image Segmentation Ensembles".
MIT License

Input of size #29

Open Lateryears opened 1 year ago

Lateryears commented 1 year ago

Hi @JuliaWolleb: I encountered the following error while running: `RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [1, 5, 240, 224, 139]`. How can I solve this? Should I resize the input after squeezing it?
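For context, a minimal sketch of what triggers this error (assuming the tensor layout is [batch, channels, depth, height, width], which is not stated in the traceback): `conv2d` only accepts 3D or 4D inputs, while the reported tensor is a 5D batched 3D volume. The slicing below is only an illustration of the 4D shape `conv2d` expects, not a recommendation from the repository.

```python
import torch
import torch.nn as nn

# A 5D tensor: [batch, channels, depth, height, width] -- a batched 3D volume.
x = torch.randn(1, 5, 240, 224, 139)

conv2d = nn.Conv2d(in_channels=5, out_channels=8, kernel_size=3, padding=1)
# conv2d(x)  # RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d ...

# A single slice along the depth axis has the 4D shape conv2d accepts:
slices = x.permute(2, 0, 1, 3, 4)   # [240, 1, 5, 224, 139]
one_slice = slices[0]               # [1, 5, 224, 139]
print(conv2d(one_slice).shape)      # torch.Size([1, 8, 224, 139])
```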

JuliaWolleb commented 1 year ago

Are you working with 2D or 3D data? If you are working on 3D volumes, you will need to change the U-Net architecture to 3D.
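As a rough illustration of that change (a sketch only, not the repository's exact code): guided-diffusion-derived codebases typically build convolutions through a dimension-agnostic helper, so switching to a 3D U-Net essentially means constructing `Conv3d` layers instead of `Conv2d`. The helper below mimics that pattern.

```python
import torch
import torch.nn as nn

def conv_nd(dims, *args, **kwargs):
    """Create a 1D, 2D, or 3D convolution, mirroring the dimension-agnostic
    helper found in guided-diffusion-style codebases (illustration only)."""
    if dims == 1:
        return nn.Conv1d(*args, **kwargs)
    if dims == 2:
        return nn.Conv2d(*args, **kwargs)
    if dims == 3:
        return nn.Conv3d(*args, **kwargs)
    raise ValueError(f"unsupported dimensionality: {dims}")

x = torch.randn(1, 5, 240, 224, 139)                # [batch, channels, D, H, W]
conv = conv_nd(3, 5, 8, kernel_size=3, padding=1)   # Conv3d instead of Conv2d
print(conv(x).shape)                                # torch.Size([1, 8, 240, 224, 139])
```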

ZhangxinruBIT commented 1 year ago

> Are you working with 2D or 3D data? If you are working on 3D volumes, you will need to change the U-Net architecture to 3D.

Hello @JuliaWolleb, I appreciate your work. Regarding GPU memory consumption for 3D data, do you have any insights? I used 3D data as input with a batch size of 1 and modified the U-Net model with dims=3 to use the 3D architecture, but I still ran into out-of-memory errors on a Tesla V100 GPU with 32 GB of memory.
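For intuition on why batch size 1 can still exhaust 32 GB, here is a back-of-envelope estimate with assumed numbers (128 base channels, fp32; neither value is taken from this repository's configuration). A U-Net keeps many such activation maps alive for backpropagation, on top of model weights, optimizer state, and EMA copies.

```python
# Rough activation-size estimate for one full-resolution fp32 feature map.
channels = 128               # assumed base channel count of the U-Net
voxels = 240 * 224 * 139     # spatial size reported above
bytes_fp32 = 4

one_feature_map_gb = channels * voxels * bytes_fp32 / 1024**3
print(f"{one_feature_map_gb:.1f} GB")   # ~3.6 GB for a single layer's activations
```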

JuliaWolleb commented 1 year ago

Hi, yes, processing 3D data can exceed the memory capacity of your GPU. We solved this problem with a patch-based approach; see our paper "Diffusion Models for Memory-efficient Processing of 3D Medical Images", available at https://arxiv.org/abs/2303.15288.
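To give a flavour of the memory-saving idea, below is a generic sketch of training on random sub-volumes. This is only an illustration; it is not the PatchDDM implementation, whose actual method is described in the paper linked above.

```python
import torch

def random_patch(volume, mask, patch_size=(64, 64, 64)):
    """Crop a random sub-volume (and matching mask) from a 3D image.

    Illustrates the generic idea of patch-based training to reduce memory;
    not the PatchDDM method from the paper.
    """
    _, _, d, h, w = volume.shape
    pd, ph, pw = patch_size
    zd = torch.randint(0, d - pd + 1, (1,)).item()
    zh = torch.randint(0, h - ph + 1, (1,)).item()
    zw = torch.randint(0, w - pw + 1, (1,)).item()
    crop = (slice(None), slice(None),
            slice(zd, zd + pd), slice(zh, zh + ph), slice(zw, zw + pw))
    return volume[crop], mask[crop]

vol = torch.randn(1, 4, 240, 224, 139)                    # [batch, channels, D, H, W]
seg = torch.randint(0, 2, (1, 1, 240, 224, 139)).float()  # binary segmentation mask
v_patch, s_patch = random_patch(vol, seg)
print(v_patch.shape, s_patch.shape)   # [1, 4, 64, 64, 64], [1, 1, 64, 64, 64]
```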

ZhangxinruBIT commented 1 year ago

> Hi, yes, processing 3D data can exceed the memory capacity of your GPU. We solved this problem with a patch-based approach; see our paper "Diffusion Models for Memory-efficient Processing of 3D Medical Images", available at https://arxiv.org/abs/2303.15288.

Hello @JuliaWolleb, thank you for sharing your paper on PatchDDM. It looks like a promising approach for handling large volumes with limited GPU resources. May I ask whether you plan to make the code publicly available soon? It would be great to see your method in action and try it on our own datasets. Thank you!