Open Jason-u opened 2 months ago
If I want to preprocess smaller image patch sizes, do you have any recommended parameters to set?
@Jason-u Thank you for your interest in our work and for your inquiry. Apologies for the delayed response; we have been occupied with tight research schedules. Yes, you can train the model by adjusting the height (H) and width (W) according to your available GPUs. However, if you plan to train with a smaller patch size, please note that our model is not designed for patch-based training; it is intended for full-image processing. If you wish to use patch-wise training and then reconstruct the image from patches, you may need to add extra layers to arrange the patches effectively and avoid edge artifacts.
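For illustration, one common way to suppress the edge artifacts mentioned above is to extract overlapping patches and average them back together at reconstruction time. This is a generic NumPy sketch, not code from this repository; the patch size and stride are arbitrary examples:

```python
import numpy as np

def extract_patches(volume, patch_size=64, stride=32):
    """Extract overlapping cubic patches from a 3D volume.

    Using stride < patch_size makes neighbouring patches overlap, so the
    reconstruction can blend them and reduce seams at patch borders.
    """
    D, H, W = volume.shape
    patches, coords = [], []
    for z in range(0, D - patch_size + 1, stride):
        for y in range(0, H - patch_size + 1, stride):
            for x in range(0, W - patch_size + 1, stride):
                patches.append(volume[z:z + patch_size,
                                      y:y + patch_size,
                                      x:x + patch_size])
                coords.append((z, y, x))
    return np.stack(patches), coords

def reconstruct(patches, coords, shape, patch_size=64):
    """Average overlapping patches back into a full volume."""
    out = np.zeros(shape, dtype=np.float32)
    weight = np.zeros(shape, dtype=np.float32)
    for p, (z, y, x) in zip(patches, coords):
        out[z:z + patch_size, y:y + patch_size, x:x + patch_size] += p
        weight[z:z + patch_size, y:y + patch_size, x:x + patch_size] += 1.0
    # Avoid division by zero in any uncovered region
    return out / np.maximum(weight, 1e-8)
```

Uniform averaging is the simplest blend; a Gaussian weighting per patch is a common refinement that down-weights patch borders further.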
Thank you for your response. I have adapted your preprocess_brats_data.py script to preprocess other datasets in a single-modality format, resizing the images to (128, 128, 128). I am training on BraTS 2023, but after many epochs the model's performance is still unstable: some results are good and others poor. I have seen in other answers that you mentioned the need to remove poor-quality images from the dataset. I have a question about this: why is the model so sensitive to noise? Is this due to the masks or to the loss function? Additionally, could you provide the code for evaluating the 3D-FID, MS-SSIM, and MMD metrics? I would like to use these to assess the quality of my generated images. Thank you again!
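For reference, resizing a volume to (128, 128, 128) can be sketched as below. This is a NumPy-only nearest-neighbour version, not the script's actual code; `scipy.ndimage.zoom` with `order=1` (trilinear) gives smoother intensity images, while nearest-neighbour (`order=0`) should be used for segmentation masks so label values are not blended:

```python
import numpy as np

def resize_volume(volume, target=(128, 128, 128)):
    """Nearest-neighbour resize of a 3D volume to `target` shape.

    For each output axis, pick the nearest source index; this preserves
    original voxel values exactly (important for label masks).
    """
    idx = [np.linspace(0, s - 1, t).round().astype(int)
           for s, t in zip(volume.shape, target)]
    return volume[np.ix_(*idx)]
```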
These are all images inferred by a single model, yet they show significant quality differences, and there is a lot of noise in the background.
Hi @Jason-u, thank you for your inquiries. We apologize for the delayed response; we have been occupied with multiple research tasks. Regarding the evaluation metrics: since previous studies have already evaluated synthetic MRI images, we used the evaluation code directly from their official repositories:
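Since the exact metric code lives in those external repositories, here is only a generic sketch of one of the requested metrics, MMD with an RBF kernel, assuming you already have feature vectors (e.g. encoder activations) for real and generated volumes. The bandwidth `sigma` and the feature source are placeholders, not values from the cited repositories:

```python
import numpy as np

def gaussian_mmd(x, y, sigma=1.0):
    """Squared MMD with an RBF kernel between two feature sets.

    x, y: (n, d) and (m, d) arrays of feature vectors.
    Returns a non-negative score; ~0 means the two sets are
    indistinguishable under this kernel.
    """
    def k(a, b):
        # Pairwise squared distances, then RBF kernel values
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```

This is the biased V-statistic estimator; the official metric code may use the unbiased variant or a multi-scale kernel, so treat this only as a sanity-check tool.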
As for BraTS 2023, we observed that some images are of very poor quality. If you plan to train with this dataset, we highly recommend starting with selected high-quality images and then fine-tuning on the lower-quality ones; note that this fine-tuning may require additional epochs. Our current method is sensitive to low-quality images because our model architecture is not very deep, due to GPU constraints. We are currently working on the next version of our model, which will include multi-condition capabilities. In the meantime, selecting high-quality images from BraTS 2023 will help your model converge faster and achieve better results. If you have any further questions or need clarification on any point, please feel free to reach out. Thank you.
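As a hypothetical illustration of how such a pre-selection could be automated (this is not the authors' actual selection procedure): one simple heuristic is to score each volume by the noise level of its near-zero background, then hold high-scoring volumes out for the fine-tuning stage. The threshold below is a made-up example value:

```python
import numpy as np

def background_noise_score(volume, bg_thresh=0.05):
    """Hypothetical quality heuristic for splitting a training set.

    Normalises the volume to [0, 1], treats near-zero voxels as
    background, and returns the standard deviation of that background.
    A clean scan has near-constant background (score ~0); a noisy scan
    scores higher and can be deferred to a later fine-tuning stage.
    """
    v = volume.astype(np.float32)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)
    bg = v[v < bg_thresh]
    return float(bg.std()) if bg.size else 0.0
```

Manual visual inspection remains the safer option for a final split; a heuristic like this only helps triage a large dataset.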
Hello, may I ask whether I can run the model by adjusting H and W on V100 32 GB GPUs, especially when using multiple GPUs?
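A back-of-the-envelope way to check whether a given H and W might fit in 32 GB before launching a run is to estimate activation memory from the input voxel count. The multiplier below is a made-up fudge factor for how many feature-map copies a 3D network keeps alive, not a measured property of this model; real usage also depends on depth, attention layers, and mixed precision:

```python
def rough_memory_gb(batch, channels, d, h, w,
                    activation_multiplier=40, bytes_per_el=4):
    """Crude upper-bound estimate of activation memory for a 3D model.

    activation_multiplier is a hypothetical fudge factor; bytes_per_el
    is 4 for fp32, 2 for fp16/bf16.
    """
    voxels = batch * channels * d * h * w
    return voxels * activation_multiplier * bytes_per_el / 1024 ** 3
```

Note that plain data parallelism replicates the model on each GPU, so multiple V100s raise throughput but do not reduce the per-GPU activation footprint of a single full-resolution volume; shrinking H and W (memory scales linearly in each) or using half precision does.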