ellisdg / 3DUnetCNN

Pytorch 3D U-Net Convolution Neural Network (CNN) designed for medical image segmentation
MIT License

model name #345

Open lgh010319 opened 2 months ago

lgh010319 commented 2 months ago

Hello author, sorry to disturb you. I would like to ask how the model is imported. If I want to modify the model, where should I go to do so? I couldn't find a model called DynUNet. Did you import it from the MONAI package?

ellisdg commented 2 months ago

Good questions!

I couldn't find a model called DynUNet. Did you import it from the MONAI package?

Yes, MONAI models are imported here: https://github.com/ellisdg/3DUnetCNN/blob/master/unet3d/models/pytorch/__init__.py#L1. So you can specify different MONAI models and play around with the parameters.
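For reference, here is a minimal sketch of instantiating a MONAI model directly; the parameter values below are illustrative assumptions, not the project's defaults:

```python
# Minimal sketch: building MONAI's DynUNet directly. The parameter
# values here are illustrative assumptions, not the project's defaults.
from monai.networks.nets import DynUNet

model = DynUNet(
    spatial_dims=3,              # 3D volumes
    in_channels=4,               # e.g. four MRI contrasts
    out_channels=3,              # e.g. three segmentation labels
    kernel_size=[3, 3, 3, 3, 3],
    strides=[1, 2, 2, 2, 2],
    upsample_kernel_size=[2, 2, 2, 2],
)
```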

If I want to modify the model, where should I go to modify it?

If you want to change the code of a model, you can create a new model class in unet.py and change the name in the configuration file to use that class. If you want to specifically modify the DynUNet model, you could create a new class that inherits from MONAI's DynUNet and modify select parts as you see fit.
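A minimal sketch of that subclassing approach; the class name and the tweak shown are hypothetical placeholders:

```python
# Hypothetical sketch: subclass MONAI's DynUNet and override only the
# parts you want to change; the post-processing below is a placeholder.
from monai.networks.nets import DynUNet

class MyDynUNet(DynUNet):
    def forward(self, x):
        out = super().forward(x)
        # ...insert your custom behavior here...
        return out
```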

You can also use the UNet models that I wrote for this project and play around with the code for those, but I don't think they perform as well as some of the MONAI models, which is why DynUNet is the default for the example.

lgh010319 commented 1 month ago

Thank you, author. I have already solved this problem, but now I have encountered a new one. You said that data loading and augmentation became roughly 10-20 times faster in the last update, but I couldn't find how the data loading is implemented. Is it the segmentation.py code? I would like to learn about your data import method and transfer it to my model. Another question: since the images themselves are large, does your code uniformly resample the dataset to a consistent size for 3D medical image segmentation? I hope you can help me answer this question.

lgh010319 commented 1 month ago

Hello author, I have another question. What is the configuration of your computer? Why do I have no problems during training, but my memory explodes as soon as I run validation or prediction? My memory is 32 GB; how much should I add?

ellisdg commented 1 month ago

Thank you, author. I have already solved this problem, but now I have encountered a new one. You said that data loading and augmentation became roughly 10-20 times faster in the last update, but I couldn't find how the data loading is implemented. Is it the segmentation.py code? I would like to learn about your data import method and transfer it to my model.

Yes, segmentation.py is where the data is loaded using MONAI functions.

Another question is, due to the large volume of the images themselves, did your code uniformly resample the dataset into data of consistent size for 3D medical image segmentation? I hope you can help me answer this question

See line 53 of segmentation.py, where the image is resized to the desired shape. Depending on the configuration parameters, it can also crop/pad the image to the desired shape, or randomly crop it to the desired shape.
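For a rough idea of the kind of resizing/cropping that MONAI's dictionary transforms provide (not the project's exact pipeline; the keys and target shape below are assumptions):

```python
# Illustrative only: uniform resampling with MONAI dictionary transforms.
from monai.transforms import RandSpatialCropd, Resized, ResizeWithPadOrCropd

target_shape = (128, 128, 128)  # assumed desired shape

# interpolate to the target shape
resize = Resized(keys=["image", "label"], spatial_size=target_shape)
# or crop/pad to the target shape without interpolation
crop_or_pad = ResizeWithPadOrCropd(keys=["image", "label"], spatial_size=target_shape)
# or take a random crop of the target shape
random_crop = RandSpatialCropd(keys=["image", "label"], roi_size=target_shape, random_size=False)
```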

ellisdg commented 1 month ago

Hello author, I have another question. What is the configuration of your computer? Why do I have no problems during training, but my memory explodes as soon as I run validation or prediction? My memory is 32 GB; how much should I add?

It is odd that the memory would explode for prediction rather than training. Prediction should be relatively easy compared to doing all the backpropagation during training. I can train the BraTS model using a rather old GPU with 11 GB of memory.
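As a general check (not specific to this repo), make sure prediction runs without gradient tracking, since the autograd graph is what makes training so much more memory-hungry than inference; `model` and `volume` below are placeholders:

```python
# Generic PyTorch inference pattern; `model` and `volume` are placeholders.
import torch

model.eval()                    # disable dropout / batch-norm updates
with torch.no_grad():           # do not build the autograd graph
    prediction = model(volume)
```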

lgh010319 commented 1 month ago

Thank you, author. I have already solved this problem, but now I have encountered a new one. You said that data loading and augmentation became roughly 10-20 times faster in the last update, but I couldn't find how the data loading is implemented. Is it the segmentation.py code? I would like to learn about your data import method and transfer it to my model.

Yes, segmentation.py is where the data is loaded using MONAI functions.

Another question: since the images themselves are large, does your code uniformly resample the dataset to a consistent size for 3D medical image segmentation? I hope you can help me answer this question.

See line 53 of segmentation.py, where the image is resized to the desired shape. Depending on the configuration parameters, it can also crop/pad the image to the desired shape, or randomly crop it to the desired shape.

Yes, I know segmentation.py is where the data is loaded, but I want to know which file references this data loading; I couldn't find where the data loading actually happens.

ellisdg commented 1 month ago

load_dataset_class loads the dataset class according to the name specified in the configuration file. If the configuration file specifies "SegmentationDatasetPersistent", it will load the class defined in segmentation.py. You can also write your own dataset class and specify that in your configuration file instead.
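A bare-bones sketch of what such a custom dataset class might look like, assuming the project resolves the class named in your configuration file; all names below, including the loader function, are hypothetical:

```python
# Hypothetical example of a custom dataset; the loader function is a stand-in.
from torch.utils.data import Dataset

class MySegmentationDataset(Dataset):
    def __init__(self, filenames, transform=None):
        self.filenames = filenames
        self.transform = transform

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, index):
        # load_image_and_label is a hypothetical helper that reads one case
        sample = load_image_and_label(self.filenames[index])
        if self.transform is not None:
            sample = self.transform(sample)
        return sample
```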