Hi,
SynthSeg performs its own intensity normalisation (rescaling between 0 and 1), so this is not the problem.
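For context, by intensity normalisation I just mean a min-max rescaling of the input volume; roughly something like this (an illustrative sketch of the idea, not SynthSeg's actual preprocessing code):

```python
# Illustrative sketch of min-max intensity rescaling to [0, 1];
# not SynthSeg's actual code, just the general idea.
import numpy as np

def rescale_intensities(volume: np.ndarray) -> np.ndarray:
    vmin, vmax = volume.min(), volume.max()
    return (volume - vmin) / (vmax - vmin + 1e-8)  # small epsilon avoids division by zero
```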
I think the problem is rather that you are using the --robust version on a high-quality scan. The "robust" version has been developed to segment clinical scans, for which we often have poor SNR, low tissue contrast, low resolution, severe artefacts, etc. (hence the name "robust"). Here you use a high-quality atlas, so SynthSeg-robust is really not the ideal solution. So I would very strongly suggest that you drop the --robust flag.
One more thing: SynthSeg-robust expects scans that are NOT skull-stripped, since skull-stripping is almost never performed on clinical acquisitions. Moreover, SynthSeg-robust segments the extra-cerebral CSF, which barely appears in your template, if at all. I also retrained the normal version of SynthSeg (i.e. the one without --robust) to segment extra-cerebral CSF, but this version is much more flexible with skull-stripped data. So again, I would expect that dropping --robust would get you much better results.
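For reference, the command without --robust would simply be (same input/output arguments as in your original command):
python SynthSeg_predict.py --i input_img --o output_img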
Anyway, let me know how it goes ! :)
Hi!!
Thanks a lot for your prompt reply!
I actually first tried the version without --robust on my template, but the result is even worse (without robust vs. with robust):
I understand your method is designed for more general cases. I used non-linear registration, which is why skull stripping is usually applied. Any more suggestions are most welcome!
Hmm, this is really intriguing, because I would have expected SynthSeg to perform well on this nice image/template! Could you please try to run the same command but adding the --v1 flag? This will use the older version of SynthSeg, where we do not segment the extra-cerebral CSF (which might be the reason why the current version of SynthSeg is failing). --v1 doesn't work with --robust, so the new command would be something like this:
python SynthSeg_predict.py --i input_img --o output_img --v1
Let me know how it goes :)
Hi! Thanks for your reply! I tried the command as you suggested, but the result didn't seem better (left: segmentation with --v1; right: input image):
Maybe it would be interesting to retrain the model on template-like data and see whether the performance improves. I might consider this in the future and will try to let you know if I get something interesting.
Hmm, okay I'm running out of options here hahaha.
It's just surprising that you obtain such bad results on such a nice image. I would expect SynthSeg to give you a very good segmentation for this type of easy image...
Out of curiosity, what dataset did you use to build your template? Because I have already seen SynthSeg fail dramatically on nice T1s, and this happened when I tested SynthSeg on the T1 images from which we got the training label maps. This didn't happen for all these T1s, but for 1 or 2 out of 20.
Hi!
Yes, I used the public OASIS dataset to create the templates. Haha, maybe more effort needs to be put here to decrease the gap between clinical use (i.e., raw data are more common) and AI use (i.e., data typically need some harmonisation steps).
Nice work! Thanks for spending your time answering! I appreciate it!
Alright, this makes sense: the training data for SynthSeg was actually taken from OASIS, so your results confirm what I saw before (i.e. SynthSeg sometimes fails on images that correspond to the training label maps). Actually, our collaborators had already spotted this on some OASIS cases. The reason why this happens remains unclear, but it looks like mixing training and testing data is not a good idea in this case. Maybe I should say something somewhere about avoiding running it on OASIS...
I know the results you showed are not pleading in my favour, but if we set them aside (as they are a particular case of mixing training/testing data), the goal of SynthSeg is to avoid having to harmonise/preprocess data. As opposed to many other works, which seek to align the test data with the operating space of the network (either by heavy preprocessing or domain adaptation), here we try to augment our training data as much as possible, such that the trained network can operate on many different domains, including clinical (low-quality) and research (high-quality) data. For example, no bias field correction, intensity normalisation, or even registration to a template is required beforehand. Now, I know that SynthSeg is obviously not perfect, and I agree that there are many improvements to be made, but I just wanted to highlight this point :)
Anyway, thanks for your feedback, really appreciated, that's how we improve our methods :)
Actually, I have one more question: have you checked that the image header is not wrong?
That is very important, because SynthSeg uses the header to align test scans to the training space. So if the header is wrong, then SynthSeg will give you very bad results.
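If it helps, one quick way to sanity-check the header is with nibabel (a minimal sketch; the file name is a placeholder):

```python
# Sketch: inspect the voxel-to-world affine and orientation stored in a NIfTI header.
# 'template.nii.gz' is a placeholder path.
import nibabel as nib

img = nib.load('template.nii.gz')
print(img.affine)                   # voxel-to-world matrix from the header
print(nib.aff2axcodes(img.affine))  # axis orientation codes, e.g. ('R', 'A', 'S')
print(img.header.get_zooms())       # voxel sizes in mm
```

If the orientation codes or voxel sizes don't match what you expect for your template, the header is likely the culprit.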
EDIT: I don't know if you are allowed to send me this atlas, but I would be curious to have a look myself. My email is benjamin.billot.18@ucl.ac.uk
Okay, so @Fjr9516 sent me the image, and it turns out that the header was indeed corrupted. So once I corrected it, I was able to get a good segmentation from both SynthSeg (see below) and SynthSeg-robust.
I guess the lesson is that if SynthSeg gives you a very bad segmentation, you might want to check the header of your images. :)
Hi! Thanks for your help in figuring it out! Yes, I had wrongly used the LIA matrix as the default in my code.
PS: I found a great reference if you are new to the field: Orientation and Voxel-Order Terminology: RAS, LAS, LPI, RPI, XYZ and All That
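For anyone running into the same problem, here is a minimal sketch of how the header could be re-written with a corrected affine using nibabel (the 'correct_affine' matrix and file names are placeholders; you need to supply the true voxel-to-world matrix of your template):

```python
# Sketch: re-save a NIfTI image with a corrected voxel-to-world affine.
# 'correct_affine' is a placeholder; replace it with the true matrix for your data.
import nibabel as nib
import numpy as np

img = nib.load('template.nii.gz')
correct_affine = np.eye(4)  # placeholder: the real affine of the template
fixed = nib.Nifti1Image(img.get_fdata(), correct_affine)
nib.save(fixed, 'template_fixed_header.nii.gz')
```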
Hi,
I tried to test SynthSeg on a constructed template T1w scan, and it gave me an undesired segmentation, like the following:
The test image is constructed by a DL-based deformable template creation model, and its intensities are rescaled to the range [0,1]. So I also tried restoring the intensities to the original HU space and rerunning the algorithm, but it gave me a similar result.
Here is the command I used: python SynthSeg_predict.py --i input_img --o output_img --robust
Any suggestion would be appreciated!