Open malteekj opened 2 years ago
Unfortunately, I haven't been able to get anything meaningful either. I think the model might be functional if the authors provided more information, such as the windowing used. But as I also described, I don't have complete faith in the model working on new data.
It might be better to train a new network on the data (which is publicly available).
When I used the publicly available data with a pre-trained model, the results are okay, but nowhere near as good as the demo shows. I will now try to use this data to train a new network and update you on the results. However, the training process does not seem all that clear. Please let me know if you find anything meaningful in your deep-dive of the code and public data. Here are a few images that I curated using the demo:
And here is what I got when using the publicly available LSFM data:
Thanks for the interesting work and paper. Can I suggest updating this repository with better documentation for the setup, as well as a tutorial and pipeline for inference? I am very interested in using your pretrained networks, but they would be quite difficult to use in their current form.
The data preparation pipeline is quite cumbersome to use. It would be preferable to load the data directly from NIfTI or DICOM rather than generating TIFF files. Also, the resampling step should read the metadata of new incoming data and resample it to the same spatial resolution (mm per voxel) rather than just the same number of voxels. The standardization step is also not commonly performed for CT, since Hounsfield units are comparable across scans. Rather, windowing and normalization would be better.
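To make the suggestion concrete, here is a minimal NumPy sketch of the two steps I mean: resampling a volume to a target spacing (mm per voxel, taken from the scan's metadata) instead of a fixed voxel count, and HU windowing followed by normalization to [0, 1]. The soft-tissue window (level 40, width 400) and the nearest-neighbour resampling are my assumptions for illustration; a real pipeline would read spacing from the NIfTI/DICOM header (e.g. via nibabel or SimpleITK) and likely use linear interpolation.

```python
import numpy as np

def window_ct(hu, level=40.0, width=400.0):
    """Clip HU values to a window and scale to [0, 1].
    Soft-tissue window (L=40, W=400) is an assumed default; choose per task."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def resample_to_spacing(vol, spacing, target_spacing):
    """Nearest-neighbour resample so each voxel covers target_spacing mm,
    instead of forcing every scan to the same number of voxels.
    `spacing` would come from the scan's NIfTI/DICOM metadata."""
    spacing = np.asarray(spacing, dtype=float)
    target = np.asarray(target_spacing, dtype=float)
    new_shape = np.round(np.array(vol.shape) * spacing / target).astype(int)
    # Build per-axis source indices and gather with an outer-product index.
    idx = [np.clip(np.round(np.linspace(0, s - 1, n)).astype(int), 0, s - 1)
           for s, n in zip(vol.shape, new_shape)]
    return vol[np.ix_(*idx)]

# Example: a 1 mm isotropic volume resampled to 2 mm isotropic,
# then windowed and normalized.
vol = np.random.uniform(-1000, 3000, size=(10, 10, 10))
resampled = resample_to_spacing(vol, spacing=(1, 1, 1), target_spacing=(2, 2, 2))
prepared = window_ct(resampled)  # shape (5, 5, 5), values in [0, 1]
```

This keeps the physical field of view consistent across scans with different acquisition resolutions, which matters far more to a CNN than matching array sizes.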
I managed to run inference on one scan after some code-parkour, and it didn't really yield anything meaningful. This might be due to problems with using new data with your pipeline, so I hoped you might update the code so that the community can use it.