pfriedri / wdm-3d

PyTorch implementation for "WDM: 3D Wavelet Diffusion Models for High-Resolution Medical Image Synthesis" (DGM4MICCAI 2024)
https://pfriedri.github.io/wdm-3d-io
MIT License

the appearance of the generated images #7

Closed Jason-u closed 38 minutes ago

Jason-u commented 23 hours ago

Dear author, I have some questions about the images generated by your model. Their appearance seems a bit strange to me: in layman's terms, they don't closely resemble the images from the original dataset, mainly in terms of style. Why do the generated images not look like they came from the same dataset the model was trained on? I don't mean to offend; I'm just puzzled, and I apologize if my question seems inappropriate. [Two screenshots attached.]

pfriedri commented 5 hours ago

@Jason-u The orientation of the brains is indeed different. This is because we save the generated volumes with a plain `np.eye(4)` affine matrix rather than the affine used in BraTS. This can, however, easily be changed.
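As a rough illustration of the point above (a minimal numpy sketch; the specific matrix values and the `nibabel` file names are hypothetical, not taken from the repository):

```python
import numpy as np

# np.eye(4) tells NIfTI viewers the volume is 1 mm isotropic with axes
# aligned to the identity orientation, which generally does NOT match
# the orientation stored in the original BraTS headers.
identity_affine = np.eye(4)

# A BraTS-style affine (hypothetical example values): still 1 mm
# isotropic, but with flipped x/y axes and a translation placing the
# volume origin in scanner space. Viewers like Slicer use this matrix
# to orient the brain, so saving with np.eye(4) instead makes the
# generated volumes appear rotated/flipped relative to the originals.
brats_like_affine = np.array([
    [-1.0,  0.0, 0.0, 120.0],
    [ 0.0, -1.0, 0.0, 120.0],
    [ 0.0,  0.0, 1.0, -78.0],
    [ 0.0,  0.0, 0.0,   1.0],
])

# The two affines describe different orientations:
same = np.allclose(identity_affine, brats_like_affine)
print(same)  # False

# To keep the original orientation, one could reuse the affine of a
# reference BraTS file when saving, e.g. with nibabel (not run here,
# file names are placeholders):
#   ref = nib.load("BraTS_reference.nii.gz")
#   nib.save(nib.Nifti1Image(sample, ref.affine), "sample.nii.gz")
```

Copying the reference affine only changes the header, not the voxel data, so it is a cheap fix for the orientation mismatch.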

Regarding other features of the image, it always depends on how long the network was trained, which resolution was chosen, and how the volumes are displayed in Slicer. In addition, the original BraTS images are preprocessed as described in the paper, which can also lead to slight changes.

In general, however, it makes little sense to simply compare two randomly selected images and draw conclusions about the entire dataset. The images in BraTS are very heterogeneous and show considerable differences in quality and contrast.

Jason-u commented 38 minutes ago

Thank you for your reply, I understand now.