mazurowski-lab / segmentation-guided-diffusion

[MICCAI 2024] Easy diffusion models (optionally with segmentation guidance) for medical images and beyond.
https://arxiv.org/abs/2402.05210

How to generate new images using the shipped models? #4

Open lewesliu opened 2 months ago

lewesliu commented 2 months ago

In the Evaluation/sampling section, img_dir and seg_dir are required to generate images, but where do I find them? Also, why are image inputs needed for sampling at all? As I understand it, DDPM/DDIM sampling starts from pure noise (optionally conditioned on a segmentation mask).

nickk124 commented 2 months ago

Hi, thanks for your questions!

Regarding where to find the segmentations and images: for licensing/ownership reasons, I can't directly provide a link to the preprocessed versions used in the paper (DBC and CT-Organ), but they are publicly available at https://www.cancerimagingarchive.net/collection/duke-breast-cancer-mri/ (under "3D Breast and FGT MRI Segmentations") and https://www.cancerimagingarchive.net/collection/ct-org/, respectively.

For your second question, you're right: image inputs are not needed for sampling; requiring them was an unintended relic of the experimental image translation/partial noising option that was implemented a while ago. I fixed this just now in commit 2c03d7297e2f288fd7e1345ae994244d747da9f3 and reflected it in the tutorial, so you no longer need to provide img_dir for sampling.
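
For intuition, here's a minimal sampling sketch using the HuggingFace diffusers API (an illustration only, not this repo's eval script; the checkpoint path is a placeholder): generation starts from pure Gaussian noise, so the only inputs are a trained UNet and a scheduler.

```python
# Illustration only (not this repo's eval script): unconditional DDIM sampling
# starts from pure Gaussian noise, so no input images are required.
import torch
from diffusers import UNet2DModel, DDIMScheduler, DDIMPipeline

unet = UNet2DModel.from_pretrained("path/to/trained/unet")  # placeholder path
scheduler = DDIMScheduler(num_train_timesteps=1000)

pipeline = DDIMPipeline(unet=unet, scheduler=scheduler)
if torch.cuda.is_available():
    pipeline.to("cuda")

samples = pipeline(batch_size=4, num_inference_steps=50).images  # list of PIL images
samples[0].save("sample_0.png")
```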

Please let me know if you have any questions or run into any issues.

Thanks!

lewesliu commented 2 months ago

Thanks for the updates. After pulling your commit, I'm still missing the config.json file needed to run your ddim-breast_mri-256 model; can you point me to it?

nickk124 commented 2 months ago

Hi, my apologies for the missing files! I have added the config files to the Google Drive folder (https://drive.google.com/drive/folders/1OaOGBLfpUFe_tmpvZGEe2Mv2gow32Y8u) and updated the README tutorial to explain how to use them.

lewesliu commented 2 months ago

@nickk124 It now loads the model, but I'm still getting an error when running the ddim-breast_mri-256 model. I noticed that line 383 in eval.py says WIP, so is this pipeline working, or should I try a segmentation-guided model instead?

nickk124 commented 2 months ago

Hi,

Thanks for your patience with this bug (and my apologies for insufficient testing)! I've made a fix and tested sampling using the pretrained unconditional models on my end and it works (see sampled images below), so please try the latest commit and let me know if it doesn't work for you. Also, please note that using the pretrained checkpoints and config files requires renaming them once they are in the unet directory; please see the updated README evaluation section where I explain this.

[attached sample images: 0004, 0001]
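
For reference, here's a rough sketch of what the load ends up looking like once the downloaded config and weights are renamed and placed in the unet folder (placeholder paths and filenames; the authoritative naming is in the README evaluation section).

```python
# Rough sketch with placeholder paths/filenames; the exact names the eval
# script expects are documented in the README evaluation section.
from diffusers import UNet2DModel

# Expected folder layout (standard diffusers convention):
#   ddim-breast_mri-256/unet/
#     config.json                          <- downloaded config file
#     diffusion_pytorch_model.safetensors  <- downloaded weights (or .bin)
unet = UNet2DModel.from_pretrained("ddim-breast_mri-256/unet")
print(unet.config.sample_size, unet.config.in_channels)
```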

lewesliu commented 2 months ago

Thanks, I'm able to get results on the CPU, but it still gets stuck with my 2080 Ti and CUDA 11.8. I'll try training next.

nickk124 commented 2 months ago

Weird, what error are you getting? For reference, I'm running on CUDA 12.2. I just updated the requirements.txt in the latest commit to show the exact versions of the packages I use, if that helps.

lewesliu commented 2 months ago

It just freezes with no progress; nvidia-smi shows about 3 GB of GPU memory used but no compute activity.


nickk124 commented 2 months ago

That's strange; that can sometimes happen when there is an internal CUDA issue. Did you try running the code with CUDA_LAUNCH_BLOCKING=1? That could help with debugging.

Also, just to check: you installed torch for your CUDA version 11.8 with pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118, right?

Finally, are you using the same package versions that I posted in requirements.txt?

I'm updating the README and requirements.txt to make clear that the installed PyTorch version needs to match your CUDA version.
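
In case it helps narrow down the freeze, a quick sanity check along these lines (illustrative, not part of the repo's code) can confirm that the installed PyTorch build matches your CUDA setup and that a trivial kernel actually runs on the GPU:

```python
# Illustrative GPU sanity check (not part of the repo's code).
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # set before any CUDA work for clearer errors

import torch
print(torch.__version__, torch.version.cuda)             # torch build and its CUDA version
print(torch.cuda.is_available(), torch.cuda.get_device_name(0))

x = torch.randn(1024, 1024, device="cuda")
print((x @ x).sum().item())  # if even this hangs, the issue is the environment, not the repo
```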

nickk124 commented 3 weeks ago

Hey @lewesliu, any progress on this? Just wanted to touch base.