Closed by Jason-u 38 minutes ago
@Jason-u The orientation of the brains is indeed different. This is because we save the generated volumes with a plain identity affine (np.eye(4)) rather than the affine used in BraTS. This can, however, easily be changed.
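To illustrate why the identity affine changes the displayed orientation, here is a minimal sketch (NumPy only, no nibabel required) that derives anatomical axis codes from a 4x4 voxel-to-world affine, similar to what viewers like Slicer do. The `brats_like` affine below is a hypothetical example of an LPS-style matrix, not the actual BraTS header:

```python
import numpy as np

def axis_codes(affine):
    # Derive anatomical axis codes (R/L, A/P, S/I) from the
    # direction cosines of a 4x4 voxel-to-world affine.
    labels = (("L", "R"), ("P", "A"), ("I", "S"))
    codes = []
    rotation = affine[:3, :3]
    for col in range(3):
        # Dominant world axis for this voxel axis, and its sign.
        world_axis = int(np.argmax(np.abs(rotation[:, col])))
        positive = 1 if rotation[world_axis, col] > 0 else 0
        codes.append(labels[world_axis][positive])
    return tuple(codes)

identity = np.eye(4)  # affine the generated volumes are saved with

# Hypothetical BraTS-style affine: axis flips typical of an LPS layout.
brats_like = np.diag([-1.0, -1.0, 1.0, 1.0])

print(axis_codes(identity))    # -> ('R', 'A', 'S')
print(axis_codes(brats_like))  # -> ('L', 'P', 'S')
```

Because the two affines imply different axis codes, the same voxel array is rendered with flipped orientations. Re-saving the generated volume with the affine copied from an original BraTS header would make both display identically.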
As for other characteristics of the images, the result always depends on how long the network was trained, which resolution was chosen, and how the volumes are displayed in Slicer. In addition, the original BraTS images are preprocessed as described in the paper, which can also lead to slight differences in appearance.
In general, however, it makes little sense to simply compare two randomly selected images and draw conclusions about the entire dataset. The images in BraTS are very heterogeneous and show considerable differences in quality and contrast.
Thank you for your reply, I understand now.
Dear author, I have some questions about the images generated by your model. Their appearance seems a bit strange to me: in layman's terms, they don't closely resemble the images from the original dataset, mainly in terms of style. Why do the generated images not look like they were trained on the same dataset? I don't mean to offend, I'm just a bit puzzled. I apologize if my question seems inappropriate.