mobaidoctor / med-ddpm


Multi_gpus #11

Closed XiaobingDean closed 6 months ago

XiaobingDean commented 9 months ago

Hi, I am wondering if I can set multi-GPUs in your model.

mobaidoctor commented 9 months ago

Of course! You can use multi-GPU training if you have more than one GPU in your hardware. You can simplify the process by using the "accelerate" library, available at https://github.com/huggingface/accelerate.
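For reference, here is a minimal sketch of how a training loop can be wrapped with `accelerate` for multi-GPU use. The `build_model`, `build_dataloader`, and loss-call names are placeholders, not the actual med-ddpm training code:

```python
# Minimal multi-GPU training sketch using Hugging Face accelerate.
# `build_model`, `build_dataloader`, and the loss call below are placeholders
# standing in for the med-ddpm training objects.
import torch
from accelerate import Accelerator

accelerator = Accelerator()

model = build_model()            # placeholder: your diffusion model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
dataloader = build_dataloader()  # placeholder: your 3D volume loader

# accelerate moves everything to the right device(s) and wraps the model for DDP
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    optimizer.zero_grad()
    loss = model(batch)          # placeholder: diffusion training loss
    accelerator.backward(loss)   # replaces loss.backward()
    optimizer.step()
```

Run `accelerate config` once to describe your GPU setup, then launch training with `accelerate launch train.py` instead of `python train.py`.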

XiaobingDean commented 9 months ago

I am also wondering if you have a recommended number of epochs for training on BraTS from scratch. I trained for 10,000 iterations to check the performance, without using your pre-trained model weights, and the samples were all noise with no recognizable images.

mobaidoctor commented 9 months ago

@XiaobingDean Which dataset from the BraTS challenge did you use for training, and how many images were involved? For instance, the BraTS 2021 challenge dataset contains 1,251 samples. If you used 10,000 iterations, referred to as epochs in our code (details available at: https://github.com/mobaidoctor/med-ddpm/issues/10), this equates to approximately 8 epochs based on the dataset size. This number of epochs is considered insufficient; ideally, you should aim for 100 epochs, which equates to 125,100 iterations. Additionally, it's important to note that the BraTS 2021 dataset includes some images of distorted and poor quality. Removing these low-quality outlier images could potentially accelerate the convergence of your training.
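For reference, a quick back-of-the-envelope conversion between iterations and epochs, assuming a batch size of 1 as in the calculation above:

```python
# Rough iterations <-> epochs conversion, assuming batch size 1
# (matching the arithmetic in the comment above).
dataset_size = 1251                                 # BraTS 2021 samples
iterations = 10_000
epochs = iterations / dataset_size                  # ~8 epochs

target_epochs = 100
target_iterations = target_epochs * dataset_size    # 125,100 iterations

print(f"{iterations} iterations ≈ {epochs:.1f} epochs")
print(f"{target_epochs} epochs ≈ {target_iterations} iterations")
```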

XiaobingDean commented 9 months ago

I am using a combined dataset of approximately 1,000 images, and I only want to train 2 modalities. You mentioned you trained a single modality for 125,000 iterations, which took about 3 days? Should I follow the same strategy you used? And is the all-noise output probably caused by insufficient training? I also changed the depth size from 144 to 32 due to a CUDA OOM error; does this affect the performance? Thanks for your help and clarification.

XiaobingDean commented 8 months ago

I am also curious why I have trained for over 100,000 epochs on my BraTS 2023 dataset and still get all noise when I try to produce samples.

mobaidoctor commented 8 months ago

It's not possible to get all noise unless there's a step you missed, like preprocessing. How many images are in your training set? Could you share a batch of your training data with us? (to: mobaidoctor@gmail.com) We'd like to perform a quick check.

mobaidoctor commented 8 months ago

@XiaobingDean Hi, yesterday we released our preprocessing script for the BraTS dataset in our repository. You can modify the script according to your dataset's requirements. Please do not hesitate to contact us if you need any clarification or have any questions. Good luck!
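This is not the released script itself, but a minimal sketch of the kind of preprocessing typically applied to BraTS volumes (loading a NIfTI file, clipping intensity outliers, and scaling to a fixed range); the file path and clipping percentiles are illustrative assumptions:

```python
# Illustrative BraTS-style preprocessing sketch (not the released med-ddpm script).
# Assumes nibabel and numpy are installed; the path and percentiles are
# placeholder choices for demonstration only.
import numpy as np
import nibabel as nib

def preprocess_volume(nifti_path: str) -> np.ndarray:
    """Load a NIfTI volume, clip intensity outliers, and scale to [-1, 1]."""
    volume = nib.load(nifti_path).get_fdata().astype(np.float32)

    # Clip extreme intensities (placeholder percentiles) computed over nonzero voxels.
    lo, hi = np.percentile(volume[volume > 0], [0.5, 99.5])
    volume = np.clip(volume, lo, hi)

    # Min-max normalize to [0, 1], then shift to [-1, 1], a range commonly
    # used for diffusion-model inputs.
    volume = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)
    return volume * 2.0 - 1.0

if __name__ == "__main__":
    vol = preprocess_volume("BraTS2021_00000_t1.nii.gz")  # placeholder path
    print(vol.shape, vol.min(), vol.max())
```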

mobaidoctor commented 7 months ago

@XiaobingDean Hi, if your issue is resolved and you have no further questions, I will proceed to close this issue.

XiaobingDean commented 7 months ago

I still cannot get high-quality images. Could you release the high-quality images you used for training, so I can check whether something is wrong with my data?

mobaidoctor commented 7 months ago

@XiaobingDean Did you use our preprocessing script? If yes, then you've completed all necessary steps. It's important to ensure all images are high quality across all four modalities, with clear visuals in each plane and no distortions or artifacts. By using only high-quality images and removing any distorted samples, your model should produce outputs similar to our pretrained model. We used a private dataset for single-modality synthesis and the BraTS 2021 dataset for multi-modality synthesis. Unfortunately, we can't make these datasets publicly available.