thatsvenyouknow1 opened 3 weeks ago
Hi @thatsvenyouknow1, thanks for testing VISTA3D out! I appreciate the feedback. I believe the issue might be due to how you save the LIDC data to a NIfTI file. You should not perform that intensity clip yourself, because that is not how MONAI's `ScaleIntensityRange` transform (`clip=True`) works; see the sketch at the end of this comment. I'm not familiar with pylidc, so I can't say whether the header part is correct. I downloaded 'LIDC-IDRI-0001' from https://www.cancerimagingarchive.net/collection/lidc-idri/ and used the following command to convert it to NIfTI files (change the path to your downloaded DICOM folder):
```
dcm2niix -z y -o folder_to_dcm_files .
```
Then I use the MONAI bundle, which wraps this VISTA3D repo and is much more memory-efficient:
```
pip install monai==1.4.0
python -m monai.bundle download "vista3d" --bundle_dir "bundles/"
cd bundles/vista3d
```
Next, change the `input_dict` in `configs/inference.json` to `"input_dict": "${'image': 'lidc.nii.gz'}"`.
I ran this command on an old 12 GB GPU and got the results:

```
python -m monai.bundle run --config_file configs/inference.json
```
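Regarding the clipping: `clip=True` in `ScaleIntensityRange` clips *after* the linear rescaling, so you should pass in the raw HU values and let the transform handle it. A minimal sketch (the `a_min`/`a_max` values here are illustrative placeholders, not the ones VISTA3D uses):

```python
import numpy as np
from monai.transforms import ScaleIntensityRange

# clip=True clips the *scaled* output to [b_min, b_max]; the raw input
# is not modified first. a_min/a_max below are illustrative placeholders.
scale = ScaleIntensityRange(a_min=-1000.0, a_max=1000.0, b_min=0.0, b_max=1.0, clip=True)
hu = np.array([-2000.0, 0.0, 3000.0], dtype=np.float32)
print(scale(hu))  # -> [0.0, 0.5, 1.0]; out-of-window values land on the bounds
```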
Hi @heyufan1995, thanks a lot for the quick reply. I applied the "pre-clipping" because of what the documentation for the `ScaleIntensityRange` transform says about the `clip` option. With pre-clipping, the pylidc data is normalized to values between 0 and 1, whereas without clipping the data ends up between -0.54 and 1.99. I thought the goal was probably to have something in the range [0, 1]?
However, thanks to your example, I figured out that the actual problem was the affine, which appears to be used by the `Orientation` transform.
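In case it helps anyone else, a minimal sketch of the kind of fix involved, assuming nibabel and that the voxel array is already in the same axis order as the dcm2niix output (`lidc_from_dcm2niix.nii.gz` and `vol` are placeholder names):

```python
import nibabel as nib

# A naive diagonal affine (just voxel spacings) drops the DICOM orientation,
# so MONAI's Orientation transform cannot reorient the volume correctly.
# Reusing the affine from a dcm2niix conversion of the same series avoids that.
ref = nib.load("lidc_from_dcm2niix.nii.gz")                # reference with a correct affine
img = nib.Nifti1Image(vol.astype("float32"), ref.affine)   # vol: the pylidc voxel array
nib.save(img, "lidc.nii.gz")
```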
Thanks again!
Hey,
I have been playing around with the VISTA3D model because I want to pseudo-label a few CT images from the LIDC dataset for MAISI. Unfortunately, I am running into a problem where the segmentations look quite fuzzy.
For reproduction: I am running inference as explained in the README via

```
export CUDA_VISIBLE_DEVICES=0; python -m scripts.infer --config_file 'configs/infer.yaml' - infer_everything --image_file '<filename>.nii.gz'
```
And this is the minimal code I use to get the LIDC sample:
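(A condensed sketch: pylidc to fetch the scan, numpy to clip, nibabel to save; the clip bounds are placeholders, and the diagonal affine here is exactly the part that turned out to be wrong.)

```python
import numpy as np
import nibabel as nib
import pylidc as pl

# Fetch one LIDC scan and load its voxel volume (Hounsfield units).
scan = pl.query(pl.Scan).filter(pl.Scan.patient_id == "LIDC-IDRI-0001").first()
vol = scan.to_volume()

# The "pre-clipping" step (bounds are illustrative placeholders).
vol = np.clip(vol, -1000, 1000)

# Naive diagonal affine from the voxel spacings -- this ignores the DICOM
# orientation that MONAI's Orientation transform relies on.
affine = np.diag([scan.pixel_spacing, scan.pixel_spacing, scan.slice_spacing, 1.0])
nib.save(nib.Nifti1Image(vol.astype(np.float32), affine), "lidc.nii.gz")
```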
I have also tried varying the transforms and the patch size in `infer.yaml` without being able to improve the result by much. I would appreciate any hint as to what might be the problem (e.g. data input format, config settings, ...).
Thanks in advance!