WCHN / CTseg

Brain CT image segmentation, normalisation, skull-stripping and total brain/intracranial volume computation.
GNU General Public License v3.0

Missing probabilistic atlas #2

Closed squll1peter closed 3 years ago

squll1peter commented 4 years ago

Hello, I tried to put the toolbox in spm12 and run it, but the probabilistic atlas seems to be missing, so the code ends in an error. Did I miss anything, or is there a place/website I should visit first?

brudfors commented 4 years ago

Hello @squll1peter,

The probabilistic atlas is now included in the GitHub repository via Git LFS.

I am sorry that resolving this issue has taken so long. This code was in a very experimental phase when I first uploaded it to GitHub and has required significant modifications to work in a general way (improvements to the underlying mathematical models, re-training of the atlas, etc.). However, now it should work well as I have tested it on a large cohort of routine clinical CT images. I don't know if the task of segmenting and normalising CT scans still interests you, but I wanted to give you a heads up anyway.

Best wishes,

Mikael

squll1peter commented 4 years ago

Hello Mikael @brudfors: Thanks for your reply, I still think that brain CT segmentation is an interesting task. Congratulations on your research publication; this piece of code is really wonderful work. You don't have to feel sorry; instead, I have to thank you. It's very generous of you to share the atlas and code with the public!

I'm here to report my initial running status: After adding the CTSeg and dependency folders to the path (addpath), the first problem I encountered was that Matlab reported that "grad" is not a supported parameter of spm_diffeo(). This issue was resolved after updating SPM to the newest version. I could then complete the segmentation process without errors (around 5 hours per case on an i5-9400F/16GB RAM desktop PC). Orig: (screenshots)

The following are the outputs:

wc*n: (screenshot)

c*n: (screenshot)

mu [Volume 1-6]: (screenshot)

y [Volume 1-3]: (screenshot)

I'm not sure if the results are as expected, but it seems to me that segmentation has failed. I'm wondering if I missed anything in the pre-processing/software-usage steps, or whether it has anything to do with the different image texture on different CT scanners (the case was acquired on a GE Revolution HD scanner). Meanwhile, I'm trying other cases to see if there is something different. Again, thanks for sharing the code and atlas.

brudfors commented 4 years ago

Hi @squll1peter

Thank you for your kind words, and thank you for testing the software!

Could you try setting the fifth argument of the CTseg function (correct_header) to true? Sometimes the orientation matrix in the nifti header is messed up so that the atlas does not align with the image data; that option should fix this issue.

Also, for a potential speed up, have a look at compiling SPM with OpenMP (if you have a compatible OS).

Best wishes,

Mikael

brudfors commented 4 years ago

Hi again @squll1peter

Just wanted to say that, for quicker testing, consider uncommenting lines 116 - 119 in CTseg.m. If the algorithm works, you should still get GM, WM and CSF (although not as accurate).

Best wishes,

Mikael

squll1peter commented 4 years ago

Hello, Mikael @brudfors I've tested it on the original case and an additional case with correct_header set to YES, but no luck. However, there are several interesting findings:

  1. Behavior is a bit different in c3 of Case 2.
  2. If I uncomment the lines you mentioned (Ln115-119), there is a Figure output (spm_mb) Segmentation with seemingly more correct segmentation. I'm currently re-running Case 1 on a faster machine with only L115 uncommented (i.e. the default setting with figure output).

I think I'll report the current status in a more official way, together with the nifti file that I use.

Steps to Reproduce

  1. Start matlab
  2. addpath to spm12, CTseg, diffeo-segment, auxiliary-functions
  3. run spm
  4. select PET & VBM
  5. select Batch in GUI
  6. add CT Segmentation module
  7. Set CT input, set correct orientation matrix to YES, then Run
  8. After it finishes, the results in the input directory are opened with MRIcron.

Expected Results: segmented CT brain in the c*n output.

Actual Results

==== Case 1 (same as the one in the previous comment)
Orig: (screenshot) r: (screenshot) c*n: (screenshot)

wmc*n: (screenshot)

y [Volume 1-3]: (screenshot)

==== Case 2 (new case)
Orig: (screenshot)

r: (screenshot)

c*n: (screenshot)

==== Case 1 with lines 115-119 uncommented in CTseg.m
Figure (spm_mb) Segmentation: (screenshot)

Figure (spm_mb) template model: (screenshot)

c*n: (screenshot)
Image file: Link (expires indefinitely). Scanner: GE Revolution HD CT, brain protocol (axial scan mode with gantry tilting), converted to NIFTI with dcm2nii.

Run log in matlab
Case 1 Case 2

Environment: Windows 10 1903, Matlab R2019a, SPM12 v7771. CPU: Intel i5-9400F, 16GB RAM.

brudfors commented 4 years ago

Hi @squll1peter

Thanks to your enabling plotting in the script, I can see that the model files have not been found. Could you make sure that the files spm_mb_model.mat and spm_mb_mu.nii exist in the CTseg directory? They are in the model.zip file, which should get automatically unzipped when the code is executed for the first time.

I have only tested the code on Linux and there might be some issue with Windows. I don't have easy access to a Windows machine, but I am setting up a virtual machine at the moment (but it will take some time).

If everything works, the figure (spm_mb) template model should show a model with K1=9, and the tissue classes should also model the brain much better. What has happened now is that, since the model files were not found, the model was initialised as uninformative.

Once I have the virtual machine set up then I will debug it!

squll1peter commented 4 years ago

Hi Mikael @brudfors Your reply is precise; I think I've now found out what happened. After I downloaded model.zip from GitHub LFS, I decompressed its content directly into the CTSeg dir, but I also left the cloned dummy model.zip in the directory. So there were 1. the dummy model.zip and 2. spm_mb_model.mat & spm_mb_mu.nii in that directory at the same time.

I then deleted spm_mb_model.mat & spm_mb_mu.nii from the directory, replaced the dummy model.zip with the one with the true content, and ran a quick test. This time I got the correct K1=9 message. The CT segmentation outputs are also as expected. It's marvelous! (screenshots) On my machine, it seems that spm_mb_model.mat & spm_mb_mu.nii are extracted to the current working directory instead of CTSeg, so in my initial run the script couldn't find these two files, and the dummy model.zip did not contain valid content either. The above quick test was run on an Ubuntu machine with a 9980XE. I'll do a more comprehensive test on several cases these days, and on the Windows platform too. Thanks again for your patience :) If you're interested in further testing results, you can keep this issue open for a while and I will post them here.

brudfors commented 4 years ago

Great you got it working @squll1peter! As you suggested, I will keep this issue open and also investigate how to make sure that the .zip file gets extracted into the correct directory.

Thank you!

brudfors commented 4 years ago

@squll1peter , just wanted to let you know that I just pushed a fix to the unzip issue, thanks again for pointing it out!
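For reference, the essence of this kind of fix is to resolve the extraction target relative to the toolbox code's own location rather than the current working directory. Below is a minimal, stdlib-only Python sketch of the idea (the actual CTseg fix is in MATLAB; the function name here is made up for illustration):

```python
import zipfile
from pathlib import Path

def extract_model(script_file):
    """Extract model.zip into the directory of the toolbox code itself,
    not into whatever the current working directory happens to be."""
    toolbox_dir = Path(script_file).resolve().parent  # directory of the code
    with zipfile.ZipFile(toolbox_dir / "model.zip") as zf:
        zf.extractall(toolbox_dir)                    # lands beside the code
    return toolbox_dir
```

With this pattern the model files (spm_mb_model.mat, spm_mb_mu.nii) always end up where the toolbox looks for them, regardless of where MATLAB/Python was launched from.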

squll1peter commented 4 years ago

Hello Mikael @brudfors: A brief update on the current status & a newly found issue with coordinate inconsistency in the exported niftis. I've started to run the code on several clinical brain scans performed on machines from different vendors, and on cases with brain pathology. Results are still pending; here are some current screen captures (still fascinating): (screenshots)

The time required for segmentation is around 4000-5000 sec (70-85 min) per case on the Ubuntu machine with the 9980XE (SPM already compiled with OpenMP support). Running status on the Windows platform will be available tomorrow.

When I put the exported segmentations together to verify their correctness, I found that they are misaligned when loaded into 3D Slicer: (screenshot) (c1: green, c2: yellow, c3: red, background: resliced image r). It seems that c1 is aligned with the resliced image, while c2 and c3 are aligned to each other. To verify that it is not a compatibility issue with 3D Slicer, I compared the metadata of the exported niftis (the fields shown are those with different content): (screenshots) It shows that the difference indeed exists before the niftis are loaded into 3D Slicer. In addition, the stored info is the same between c2, c3 and y. (For your reference, mwc*n and wc*n have the same coordinates.)

I'll continue running the cases despite this issue. I expect to finish them all in 2-3 days.

brudfors commented 4 years ago

Hi @squll1peter

Thanks to your detailed reporting I quickly found the error. It was an indexing issue: when I modified the orientation matrix of the c* segmentations, I looped not over the tissue classes but over the number of input images; therefore, only the c1 image had the correct orientation matrix. I just pushed a fix, so your issue should now be resolved.
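This class of bug is easy to reproduce in miniature. The Python sketch below (with made-up data structures, not the actual SPM code) loops over the wrong collection when writing the corrected orientation matrix, so with a single input image only the first tissue class is updated:

```python
def fix_orientations_buggy(segmentations, images, corrected_mat):
    # BUG: iterates over the input images (often just one),
    # not over the tissue-class segmentations.
    for i in range(len(images)):
        segmentations[i]["mat"] = corrected_mat

def fix_orientations_fixed(segmentations, images, corrected_mat):
    # Correct: every c* segmentation gets the corrected orientation matrix.
    for seg in segmentations:
        seg["mat"] = corrected_mat
```

With one input scan and three tissue classes, the buggy loop runs once and leaves c2/c3 with stale matrices, which is exactly the misalignment pattern reported above.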

Also, it might be worth re-downloading the model file at some point, as I found a small issue with the intensity prior. It should not change the results much though. Btw, I changed from using Git LFS to hosting elsewhere, because LFS did not have much storage. Just deleting the model files should automatically download the updated model (at least on Linux and Mac..).

Thank you,

Mikael

squll1peter commented 4 years ago

Hi Mikael @brudfors Sorry to keep you waiting, it's been a busy week. I've re-downloaded the model and pulled the newest code from git for CTseg and its dependencies, so the following results were obtained by running the newest version of the model and code. This post would be too lengthy if I put everything here, so I've separated the summary and test results into two posts.

Summary:

  1. The code runs on my Windows machine. It took about 10-12k sec/case (2.5-3.5 hours) with an i5-9400F + 16GB RAM.

  2. After your latest update to the script, the coordinates stored in c*n now match each other and the primary (or resliced, if reslice is turned ON) volume: (screenshot) But I have so far failed to test the forward deformation map; I think 3D Slicer does not natively support this kind of deformation map. It would be very interesting to map the template brain regions back to the original CT (e.g. frontal lobe, Wernicke's area, etc.). Maybe I should turn to SPM instead, but I'm rather new to SPM, so any help/hint on how to use the deformation map would be useful :)

  3. Some observations on the segmentation results: 3a. If the segmentation result is rendered in 3D, gyrus segmentation seems to perform better in the frontal and parietal lobe regions. This might correspond to the anatomical fact that the sulci are wider in these regions. (frontal view) (right lateral view) (posterior view) (inferior view, with up being front) 3b. Some scenarios where segmentation might go wrong: (screenshots) 3b-1, 4, 6: the junction of cortex and CSF is segmented as grey matter; 2: the posterior limb of the internal capsule is not segmented (as other material, like an old infarct or something else?); 3: white matter is segmented as grey matter; 5, 8: additional structures like venous sinuses and vessels are segmented as grey matter; 7: a white-matter lesion with low density is segmented as ventricle; 9: beam-hardening artifact makes the segmentation in the pons inaccurate.

  4. Some notes on performance: 4a. Turning reslicing ON increases processing time by about 20-30% most of the time; I think this is related to the increased matrix size.
    4b. Compiling SPM with OpenMP did improve processing speed.
    4c. I tried increasing defaults.stats.maxmem as described here, but didn't get any performance improvement.

  5. Some notes on NIFTI preparation: 5a. Brain CTs are scanned in step-and-shoot mode. For some reason, older versions of dcm2nii will separate each step into different nifti files, yielding 2-4 nifti files, each corresponding to a part of the whole brain. By using a newer version of dcm2nii, or dcm2niix with the -m (merge) flag, or the DICOM importer in SPM, one can export a combined nifti file. 5b. In a specific CT scan performed on a TOSHIBA Aquilion One, the volume was oversampled along the Z axis, which resulted in a run failure during registration. (screenshot) It no longer throws an error after I resampled the volume along the Z axis.
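Resampling an oversampled axis can be as simple as averaging consecutive slices, halving the Z sampling rate. A stdlib-only Python sketch of the idea (a real pipeline, e.g. 3D Slicer or SPM reslicing, would also update the nifti affine so world coordinates stay consistent):

```python
def downsample_z(volume):
    """Average consecutive slice pairs along Z.

    `volume` is a list of 2-D slices (lists of row lists); returns a volume
    with half as many slices, each the mean of a neighbouring pair.
    """
    out = []
    for a, b in zip(volume[0::2], volume[1::2]):
        out.append([[(x + y) / 2 for x, y in zip(ra, rb)]
                    for ra, rb in zip(a, b)])
    return out
```

For a 0.5 mm thickness / 0.25 mm interval acquisition, one such halving brings the slice spacing back in line with the slice thickness.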

Despite all the above-mentioned points, this code is still one of the larger advances in normalising and segmenting brain CT scans. Performing segmentation and quantification on brain CT scans is an important research topic: in our institute, the number of brain CT scans is at least 4 times that of brain MRI scans, and I believe this is also the case worldwide. From the above observations, I have several thoughts:

  1. A preprocessing step might be needed to eliminate bias across different CT vendors/CT recon parameters, which might include but is not limited to: (1) different scatter-correction algorithms that result in different HU bias in regions near bone; (2) different IR levels (different noise levels). Recent advances in deep learning performing kernel conversion might have this potential, e.g. this article and this one.
  2. Atlases built for specific ethnic groups/disease groups/patient demographics (age, gender) are required. Also, specifying the atlas used might be an important part of research using this method. I recall that you mentioned the origin of the current atlas in an earlier version of the code, but I cannot find it now.
  3. If grey/white matter quantification is needed, an algorithm dealing with the junction zone between cortex and CSF is required to fix the segmentation error.

The next post will be the part with current run results on different vendors and pathological conditions.

squll1peter commented 4 years ago

Cont'd from the last post: this post contains the current run results on different vendors and pathological conditions. A. CT vendor testing: A1. TOSHIBA (Canon) Aquilion One: SUCCESS after additional preprocessing. Acquired in step-and-shoot mode, 4 segments, no gantry tilting, 0.5mm thickness/0.25mm slice interval (oversampled on the Z axis). Orig: (screenshot) Failed on the initial run with reslice=ON (screenshot); the resliced volume r does not correct for the oversampling either. Succeeded after resampling with 3D Slicer. (screenshot) Segmentation result: (screenshot)

A2. Siemens SOMATOM Definition AS
step-and-shoot mode, 4 segments, 0.6mm/0.6mm, with gantry tilting. Orig: (screenshot) Segmentation result: (screenshot)

A3. Philips Ingenuity CT step-and-shoot mode, 4 segments, 0.625mm/0.625mm, no gantry tilting. Orig: (screenshot) Segmented: (screenshot) The transformation is somewhat suboptimal at the skull base. (screenshot) (testing nifti)

A4. GE Revolution HD step-and-shoot mode, 8 segments, 0.625mm/0.625mm. Orig: (screenshot) Segmented: (screenshot) The transformation is somewhat suboptimal at the skull base and vertex. (Testing nifti)

B. Pathological conditions: The atlas you provided is obviously not built for pathological conditions, but I'll give it a try anyway. B1. Subdural hemorrhage Orig: (screenshot) Segmented: (screenshot) (Gosh, now I suddenly realize that left and right are inverted in 3D Slicer and MRIcron.)

B2. Contusion/SAH Orig: (screenshot) Segmented: (screenshot)

B3. Old infarction Orig: (screenshot) Segmented: (screenshot) (see also the skull base and posterior limb of the internal capsule) (testing file)

B4. Diffuse atrophy Orig: (screenshot) Segmented: (screenshot)

B5. Recent stroke, left MCA Orig: (screenshot) Segmented: (screenshot)

B6. Hydrocephalus Orig: (screenshot) Segmented: (screenshot)

B7. Brain lesion with perifocal edema Orig: (screenshot) Segmented: (screenshot)

B8. Leukoaraiosis/leukodystrophy Orig: (screenshot) Segmented: (screenshot)

B9. Hypoxic ischemic encephalopathy (HIE) Orig: (screenshot) Segmented: (screenshot)

===== Is there any other testing that I can do to help?

brudfors commented 4 years ago

Thank you @squll1peter ! This is an amazing report! I will digest it and get back to you soon.

Best,

Mikael

squll1peter commented 4 years ago

Hello @brudfors Mikael: I've done more tests recently, focusing on the effect of different CT reconstruction parameters on the segmentation results. Recon parameters include: iterative reconstruction (IR) level / display field-of-view (DFOV) / kernel. Brief background: CT vendors offer a wide variety of reconstruction options to choose from, tailored to the examined body part and purpose. Iterative reconstruction (IR) refers to the iterative statistical image post-processing algorithm that reduces image noise; the higher the level, the less the noise. Display field-of-view (DFOV) refers to the size of the image and directly affects the voxel size and noise: a larger DFOV means lower resolution but less noise. The kernel refers to the filter used when reconstructing images from the projection domain to the image domain. Generally this parameter is given as a string, for example "SOFT", "BONE", "FC41", "D". The naming of kernels is poorly organized, so vendors offer a suggestion table for different examined body parts and purposes. Brain CT scans generally use a kernel between "SOFT" and "STANDARD".

Testing results: Generally, the less the noise, the better the registration and segmentation. (screenshots) Blue: smooth kernel, highest IR, small DFOV (small voxel size, i.e. higher resolution with more noise). Red: smooth kernel, highest IR, regular DFOV (the normal recon FOV used clinically; smoother than typical clinical use due to the soft kernel and higher IR level). Green: smooth kernel, highest IR, large DFOV (larger voxel size, i.e. lower resolution with less noise). (The 3D surface mesh is smoothed by 3D Slicer.) I would say that the distortion in the blue model is obvious, but the green model improves on segmentation detail compared with the red model, picking up more gyri and sulci in the posterior parietal lobe and suffering less segmentation error near bone.

Registration failed in this particular case with the STANDARD kernel, DFOV and low IR level that are used clinically at our site.

Other attempts: Registration failed in this case for causes other than the oversampling issue. In the above case, registration always failed with the standard recon parameters used in the clinical scenario. It also failed with other recon parameters that produce smoother but still noisy images (higher IR level, kernel unchanged). Registration mostly failed at a lower (i.e. later) zoom-level stage, with an abrupt rise in the error metric. Below are some of the error logs: (screenshots)

Later, on these failed cases, I tried setting the origin to the anterior commissure and resampling the volume to isotropic (about 0.48mm each side), but both methods failed.

I then resampled them to a 1x1x1mm voxel size (i.e. less noisy), and registration succeeded: (screenshot) (BLUE) recon parameters used clinically, then resampled to 1x1x1mm per voxel. (GREEN) with a higher IR level, then resampled to 1x1x1mm per voxel.

Several thoughts after digestion: Combining the above testing results, I guess that as noise increases, the current algorithm first shows increased segmentation error, then the registration starts to get distorted, and finally, when noise rises above a certain threshold, registration fails.
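The noise-reduction effect of the resampling described above is easy to quantify: averaging n independent voxels divides the noise standard deviation by roughly sqrt(n). A quick stdlib-only sanity check with synthetic Gaussian noise (the HU scale of 50 is arbitrary):

```python
import random
import statistics

random.seed(0)
noise = [random.gauss(0, 50) for _ in range(8000)]   # synthetic HU-scale noise
sd_native = statistics.stdev(noise)

# "Resample": average blocks of 8 voxels (e.g. 0.5 mm -> 1 mm isotropic
# halves each axis, so 2*2*2 = 8 voxels merge into one).
blocks = [sum(noise[i:i + 8]) / 8 for i in range(0, len(noise), 8)]
sd_resampled = statistics.stdev(blocks)

ratio = sd_native / sd_resampled   # expect about sqrt(8), i.e. ~2.8
```

This is consistent with the observation that going from ~0.5 mm to 1 mm voxels can push a noisy scan back under the registration failure threshold.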

I've noticed that most distortion occurs at the skull base and the base of the frontal lobe, and it is always inward distortion. (Reviewing the previous test results in the last post, I noted distortion of varying degrees near the skull base and frontal base in a portion of cases.) So the error might come from a systematic error in the algorithm, and I'm wondering if it is related to the bias field correction. (screenshot) Above is the output bias-field distribution image. I expect that the bias of measured HU in CT behaves differently from MRI. To my knowledge, CT measurement bias is mostly caused by dense material (like bone, which causes scatter and beam-hardening artifacts), and the bias is often located in the regions adjacent to it. See below for an example: (screenshot) (Left: corrected for scatter artifact; Right: uncorrected)

Vendors have implemented different algorithms to overcome this issue, but there is still observable bias across different vendors (and across examinees with different skull bone mineral density).

Another issue is that there is remarkable deformation as the noise level increases, which supposedly happens less frequently in registration algorithms using a multi-resolution technique. I'm not sure if adding smoothing (e.g. recursive Gaussian smoothing) before resampling for each zoom level would help. (Smoothing might help in the general multi-resolution registration scenario, to overcome issues with noisy-image registration and images with different voxel sizes, as implemented in ITK.)

Grey matter/CSF and CSF/bone junction segmentation error: As noted previously, segmentation error occurs at tissue junctions. (screenshot) I think the main issue is that the algorithm refines segmentation according to the voxel values, but unlike an MRI anatomical scan (T1-weighted), which has a fine gradual descent of values from the inside out (white matter (highest value) => grey matter => CSF => bone (lowest value)), CT has a zig-zag-like HU change from the inside out: white matter (lower value) => grey matter (higher) => CSF (lower) => bone (higher). (MRI screenshot) (CT screenshot)
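The difference can be stated programmatically: an inside-out T1 intensity profile is monotonically decreasing, while the CT HU profile is not, so any rank-based tissue ordering that works for T1 breaks for CT. The values below are rough, representative numbers for illustration only (arbitrary T1 units; approximate HU):

```python
# Approximate inside-out intensity profiles (WM -> GM -> CSF -> bone).
mri_t1 = [110, 80, 30, 10]   # illustrative T1 units: gradual monotone descent
ct_hu = [25, 38, 8, 1000]    # approximate HU: zig-zag (GM > WM, bone >> all)

def monotone_decreasing(profile):
    """True if every step from the inside out decreases in intensity."""
    return all(a > b for a, b in zip(profile, profile[1:]))

# A tissue model assuming a monotone gradient fits T1 but not CT.
assert monotone_decreasing(mri_t1)
assert not monotone_decreasing(ct_hu)
```

This is why the cortex/CSF junction is fragile on CT: a partial-volume voxel between GM (~38 HU) and bone (~1000 HU) can land right back in the GM intensity range.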

Summary

  1. To reduce possible failures in registration: for prospective cases, I would suggest reconstructing an additional series with parameters that reduce noise (higher IR level, smoother kernel). For retrospective cases, I would suggest resampling the volume to reduce noise.
  2. I'm not so sure about this point, but a CT-specific bias field correction, more segmentation-refinement algorithms, and adding a smoothing step before resampling at different zoom levels might help further improve the current segmentation algorithm.
brudfors commented 3 years ago

Hi @squll1peter

Thanks again! Will get back to you soon. I am at the moment training a, hopefully, better model. Should be ready soon.

Best wishes,

Mikael

brudfors commented 3 years ago

Hi again @squll1peter

Here, finally, comes my reply.

The back-end to the model:

https://github.com/WTCN-computational-anatomy-group/diffeo-segment

has seen a quite significant update -- thanks a lot to @JohnAshburner -- to be more accurate; but maybe more importantly, faster. Therefore, I hope that your reported run-times will substantially drop.

Furthermore, the CTseg model has been retrained to hopefully address some of the issues you faced. The updates are reflected in the repository README, but to summarise: the model now has seven classes where the four brain classes are GM, WM, CSF and sinuses, and the three classes outside the brain are bone, soft tissue and background. Note that the bone class also models calcifications and hyper-intensities. All these classes can be written after the algorithm has finished; in native, warped and modulated space.

The combination of the updated back-end and the retrained model should improve on some of the segmentation misclassification that you faced, e.g., bad details in the gyrus, differentiating between GM and WM in the cortex region, sinus and vessels segmented as GM, GM voxels appearing between CSF and skull, etc. To use the newest iteration of the code, please pull both CTseg and diffeo-segment, and also delete your model files (so the new ones are downloaded).

Regarding DICOM conversion, I would recommend using spm_dicom_convert. It additionally supports variable slice-thickness reconstruction, which is common in CT data (I have added this information to the README file).

Regarding using the forward deformation to warp template, etc. I have added an example on how to do this with SPM to the README file. Please let me know if it helps.

Pathology is an interesting avenue of further development. We have some thoughts on how to do this, but the initial focus is to get a model that works well for normal CTs, but with a large variability (in age, noise, morphology, scanner, etc.). Any ideas on how to better deal with pathology are welcomed!

Regarding your thoughts:

1.1. There is a bias field correction step in the algorithm that should deal with this; it will hopefully work better with the new code.

1.2. Agree, I will need to specify the details regarding the atlas. I was thinking of doing this in a future publication, as this method is not yet published.

1.3. My hope is that the new sinus class will help with this issue.

2.1. I agree, some denoising at the start would probably be helpful. The problem is just to have such a denoising method that is robust to many different CTs. I have been developing a denoising method for MRI that also works for CT (https://github.com/brudfors/spm_superres). It is unfortunately not yet as robust for CT as for MRI. I have some ideas on how to make it work better though and I am working on an updated, GPU based version.

2.2. Hopefully the bias field correction in the latest code+model will improve things.

Finally, I mentioned writing something on this method. As you seem to have an in-depth knowledge on CT imaging, as well as good ideas for validating the method; is this something you would be interested in getting involved in?

Thank you!

Mikael

squll1peter commented 3 years ago

Hello Mikael @brudfors: Thanks for your reply and the solid rework of the code & model. Remarkable improvements are noted both in processing speed and in segmentation accuracy. I'm currently re-running all the above cases; results will be available within 2-3 days, and I'll post them here. I do have some thoughts about utilizing this method in several scenarios, but I'm not confident about them yet, so let me think for a while and contact you by email with the details.

Brief on current running initial status:

  1. After pulling the newest commits of CTSeg and diffeo-segment, the new model downloaded successfully. All cases were converted to nifti with the SPM DICOM import tools. Running threw an error at first, but ran smoothly after removing auxiliary-functions from the path.
  2. There are visually observable, better segmentation results. (screenshot) Segmentation errors are still noted in some regions (regions that do not belong to any of the 7 segments either). (screenshot) I'll see if this is reproducible in other cases. I've also noted that the dural venous sinus segment (light blue) yields a thin layer surrounding the brain; I'm not sure if this is as you expected, because it might include the dura meninges in addition to the dural sinus. I think it is not reliable to segment the dura on CT in the normal population, given that the dura is heavily obscured by the partial-volume effect of the skull on CT.
  3. Processing time is remarkably shorter, about 30 min/case on an i9-9980XE with 64GB RAM. It seems that the current version of the code better utilizes more cores and runs with less idle time between stages. Output logging is less comprehensive (thus less reassuring), but I think this is only a minor issue. In the oversampled case I mentioned in the first testing report, Matlab ended unexpectedly with a "Killed" message in the command line. After investigation, I found that Matlab was killed by the OS (Ubuntu) due to an Out-Of-Memory error. (screenshot) To further investigate the memory usage, I have now started to log memory changes during registration. I'll skip this case for now and go back to it later.
  4. After the diffeo registration is done, I noted that there is a 2-4 min period during which the algorithm utilizes only one core. (screenshot) I think this is worth mentioning since it implies a possible ~2-10% time saving if all cores can be utilized.
  5. I will test the forward transformation after the cases are done. (Is the current model in MNI space?)
brudfors commented 3 years ago

Hi @squll1peter

I am glad to hear that the code seems to work better!

I changed the sinus class to instead be called just dura, as it indeed also models the dura mater (thanks for the pointer, and sorry for being unclear). As the model is trained on combined MR and CT data, it can pick up the dura; but I agree that segmenting it in CT is challenging.

I also pushed a fix to diffeo-segment that should help with the drop in cores used after normalisation has finished. Thanks for finding that issue!

Please have a think and drop me an email when you have time, no rush.

Thank you,

Mikael

brudfors commented 3 years ago

Hi @squll1peter

Just noticed that I forgot to answer your question about whether the template is in MNI space. It is not, as it was built in a group-wise fashion, resulting in an optimal mean for that particular group. However, the CTseg code does provide the normalised segmentations in MNI space, as the model.zip file includes an affine transformation that maps the template into MNI. This transformation is applied, by default, to the resulting normalised tissues after registration is completed. Note that this is a feature I just added.
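Concretely, mapping a template voxel into MNI millimetre coordinates is just a composition of two 4x4 affines: the template's voxel-to-world matrix followed by the template-to-MNI transform. A stdlib-only sketch with made-up example matrices (the real matrices live in the nifti header and model.zip; only the composition pattern is the point here):

```python
def matmul4(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply_affine(M, vox):
    """Apply a 4x4 affine to a voxel index via homogeneous coordinates."""
    x, y, z = vox
    return tuple(sum(M[i][j] * v for j, v in enumerate((x, y, z, 1.0)))
                 for i in range(3))

# Hypothetical example: 1 mm isotropic voxel-to-world matrix, and a pure
# translation standing in for the template-to-MNI affine from model.zip.
vox2world = [[1, 0, 0, -90], [0, 1, 0, -126], [0, 0, 1, -72], [0, 0, 0, 1]]
template2mni = [[1, 0, 0, 2], [0, 1, 0, -3], [0, 0, 1, 1], [0, 0, 0, 1]]

# Compose once, then map any voxel straight to MNI mm: MNI = T2M * V2W.
vox2mni = matmul4(template2mni, vox2world)
```

Applying `vox2mni` to a voxel index yields MNI mm coordinates directly, which is effectively what happens when the affine is folded into the normalised outputs.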

Best wishes,

Mikael

squll1peter commented 3 years ago

Hi Mikael @brudfors: Sorry to keep you waiting; the testing and visual analysis were more time-consuming than I expected, and the intermediate results raised some points that made me modify the way I do the testing. Testing of the CT reconstruction parameters is still pending, but I'll post the current results about running status here first, and the segmentation results in various conditions in the next post.

  1. Running status: Currently all tests are run on a machine with the following setup: Ubuntu 18.04, Matlab 2020a, Intel i9-9980XE with 64GB RAM, SPM12 v7771 compiled with OpenMP; CTSeg & model at the July 22, 2020 version. Niftis were converted with the DICOM import tool of SPM, except for some special cases. Nifti files, temporary files and outputs are stored on an NVMe SSD (I forgot to mention this important point previously).

With the July 22 version of the code, plotting memory usage over time shows: (screenshot), with each "step" indicating a case. From the plot, the mean processing time for each case is slightly less than 30 min. Memory usage is about 20-25GB during processing (typical size 512x512x[256-280]; larger if resliced). Zooming into the plot, you can see that several grouped peaks form one rigid-registration peak + four diffeo-registration groups. (screenshot) Memory usage is stable and predictable.

  2. Common segmentation errors in this version: 2a. Some cortex is not segmented, often at the temporal tip, parietal lobe, occipital lobe and cerebellum. "Not segmented" here means not belonging to any of the 7 output segments. 2b. Hypodense regions around the ventricles are more prone to being segmented as CSF. These two issues are seen in many cases; examples: (screenshots) (discussion in 3c)

2c. I cannot tell if the segmented dura is valid on CT, because there is no way to verify this with CT alone. Meanwhile, segmentation of the dural venous sinus might fail in some cases. I think this is partly because (1) these regions are affected by scatter artifact, (2) there is larger anatomical variation in the venous sinuses (see a quick example in ref1), and (3) the HU value of the venous sinus largely depends on the hemoglobin level of the patient (see quick examples in ref2). Nevertheless, the venous sinus is not the main point of brain normalization.

2d. Vessel segmentation improved; most vessels are segmented as CSF, some are still segmented as grey matter. But I think this is only a minor issue.

  3. Some observations about the current model: 3a. I expected the model to be smooth, but the current model has zebra stripes along the Z axis: (screenshots) UPDATE: this is already fixed in the latest version of the model.

3b. The current model is not in MNI space (I tested it myself these days & thanks for your confirmation :) ); the model is slightly longer in the AP dimension than the overlaid AAL3 segments. (screenshot) I think I'll test your newly updated code soon.

3c. Lack of sulci in the occipital and temporal lobes; the sulci in the parietal lobes (the postcentral and intraparietal sulci, to be specific) seem too wide when compared with the occipital lobe. (screenshot) I think this might be the cause of some cortex not being segmented, since it's hard for the algorithm to register two brains with such different characteristics. This might also apply to the periventricular white matter segmentation issue. This point is important because the current cases that I picked for testing are not actually the best fit for the model, so the correctness of the algorithm might be confounded by segmentation error resulting from trying to segment an improper group. If you can provide information (age range, gender) about the group used to build the model, I think I can find similar cases for testing the algorithm with less bias (though definitely a different ethnic group :P).

  4. Deformation works nicely with your example. (Softmaxed cortex in native CT space) image

=> The next post will cover the current segmentation results

squll1peter commented 3 years ago

In this part, I'm re-testing the cases from the previous test. But just as I mentioned in point 3c of my last post, these cases were picked randomly at the beginning, so they might not reflect the real performance of the algorithm.

A. Vendor testing A1. TOSHIBA (CANON) Aquilion One Failed initially due to an OOM error; succeeded after resampling with 3D Slicer.

Orig image Segmentation image Seg error (Periventricular white matter as CSF) image image (Cortex at parietal and occipital lobe are not segmented) image image

A2 SIEMENS SOMATOM Definition AS Orig image Segment image Seg error image

A3 Philips Ingenuity CT Orig image Segment image Seg error image Obviously better in this case (maybe a younger patient).

A4 GE Revolution HD Orig image Segment image seg error image

B Pathological condition B1 Subdural hemorrhage Orig image Segment image Seg error image image

B2 SAH, IVH and secondary hydrocephalus Orig image Segment image Seg error image SAH is segmented as dura.

B3 Old insults Orig image Segment image image (The insulted regions lack grey matter.)

B4 Diffuse atrophy Orig image Segment image image

Error at cortex and periventricular white matter.

B5 Recent left MCA stroke Orig image Segment image Interesting behaviour: most of the hypodense area in the infarcted region is segmented as CSF, while the actual CSF is not segmented.

B6 Hydrocephalus Orig image

Segment image Some holes in dilated ventricles.

B7 Brain lesion with focal edema Orig image

Segment image image

B8 MCA territory infarction with hemorrhagic transformation Orig image Segment image

B9 Hypoxic ischemic encephalopathy Orig image

Segment image

Phew! There was supposed to be a Part 3 on recon parameters after this post, but I need some more time... Thank you (and your coworkers) again for your continuous input into the algorithm. It's getting more and more mature!

squll1peter commented 3 years ago

Hello Mikael @brudfors : A quick update on the atlas part of the testing: the atlas rendered in the last post turned out to be the version from before 22 July; the current version is much smoother. image (Right: old atlas; Left: new atlas) I recall that the above testing results used the latest code and model, but I'll reconfirm, to be on the safe side.

By using the affine matrix provided, I'm now able to map your atlas to MNI space image_00008 image_00014 image_00021 The gyri fit fairly well with the parcellations in other MNI-space atlases (the AAL3 parcellation in the example)
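For anyone who wants to reproduce this mapping: one way is to compose the provided affine with the atlas header via SPM's `spm_get_space`. A minimal sketch, where the variable `A` (a 4x4 atlas-to-MNI affine) and the atlas filename are assumptions, not the actual names shipped with CTseg:

```matlab
% Hedged sketch: map the atlas into MNI space by updating its
% voxel-to-world matrix. 'A' and 'atlas.nii' are placeholders.
P = 'atlas.nii';          % hypothetical atlas filename
M = spm_get_space(P);     % current voxel-to-world mapping
spm_get_space(P, A * M);  % compose with the affine and write back
```

This only rewrites the NIfTI header, so the voxel data are untouched and the change is easy to undo by storing the original `M`.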

squll1peter commented 3 years ago

Hello Mikael @brudfors: Sorry for taking so long to reply. As I mentioned in the last post, I had some ideas and changed the testing plan halfway through. During the process, it was really amazing to realise that you have already addressed, in your thesis, most of the issues that I'm currently facing. It's really good work.

Brief I've extended the testing of recon parameters and collected cases with MRI (both modalities within several days of each other) as a baseline comparison. For clinical CT, I think it's a good idea to spend some time finding good recon parameters for segmentation, since a recon with different parameters is essentially free: it requires no additional radiation or scan time, the extra recon time is minor, and it can be set up at the console to start and transfer automatically. So there are now 36 different combinations of reconstruction parameters. (For your reference, I've also watched some YouTube tutorials on SPM basics during this period.)

Testing parameter options IR level: 20%, 60%, 100% Recon kernel: SOFT, STD#, DETAIL, BONE (the BONE kernel is added here to test the effect of a very noisy image on segmentation) FOV: Small (20 cm), Standard (24 cm), Large (32 cm)

Result I currently cannot find a better way to show all the results, so I'll just use SPM's Check Reg 圖片 (24 of 36 testing sets, wc[n]) 圖片 (zoomed in at the basal ganglia) 圖片 (zoomed in at the frontal cortex) Generally, cortex deep within the sulci is poorly segmented. The differences in segmentation results between recon parameters have become minor since the 22 July code and model update, but there are still observable differences in the robustness of segmentation between recon parameters: segmentation20200808_00001 segmentation20200808_00005 segmentation20200808_00008 segmentation20200808_00010 segmentation20200808_00012 segmentation20200808_00014 segmentation20200808_00016 Green: cortex segmentation built from clinical 3D T1 MRI with the built-in TPM in SPM Yellow: regular FOV, STD# kernel, 20% IR Red lines: regular FOV, STD# kernel, 100% IR
Purple lines: large FOV, STD# kernel, 20% IR. This parameter setting catches more deeply located cortex.

I think it's getting more important to compare the segmentation results against those from the MRI scan using objective metrics like the Dice similarity score. So I've collected some clinical subjects with both CT (all 36 recon parameter sets) and MRI. Maybe we can work together to automate the testing pipeline.
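As a starting point for such a pipeline, the Dice score between two segmentations could be computed along these lines; a minimal sketch that assumes the two probability maps are already co-registered and resliced to the same grid, with placeholder filenames and an assumed 0.5 binarisation threshold:

```matlab
% Hedged sketch: Dice similarity between two same-grid probability
% maps, binarised at 0.5. Filenames are placeholders.
A = spm_read_vols(spm_vol('c1_CT.nii'))  > 0.5;  % CT-derived cortex
B = spm_read_vols(spm_vol('c1_MRI.nii')) > 0.5;  % MRI-derived cortex
dsc = 2 * nnz(A & B) / (nnz(A) + nnz(B));
fprintf('Dice = %.3f\n', dsc);
```

Looping this over the 36 recon settings would give one Dice value per parameter combination, which could then be tabulated against the MRI baseline.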

Notes and issues found during the process I faced several issues during my attempts at further testing:

  1. If the CT volumes are reconstructed with different FOVs, the coordinates of the CT change slightly after reslicing. This hinders direct comparison between CT segments reconstructed with different FOVs. I overcame this by creating an additional rigid transformation with Elastix and applying it accordingly.
  2. I currently cannot find a robust automatic way to align CT and MRI; for now, I align them manually. But an automatic registration method might be mandatory for larger-scale testing. I've tried several options with suboptimal results: 2a. Registering with the BRAINS module in 3D Slicer with 6 DOF. 2b. Registering with the Elastix module in 3D Slicer using the "Rigid" profile (briefly, advanced Mattes mutual information + 4 resolution levels + AGD); I then increased the bin count to 64, but the result is still not good. 2c. The NJTV method from your thesis (pulled from the GitHub repository). In the following days, I might try: an increased sampling percentage; transforming the CT values to reduce the difference between bone, air and brain tissue; and skull stripping. But again, any suggestions would be helpful since I'm not an expert :).
  3. I would also like to test the denoising algorithm from your thesis on clinical CT and observe its effect on segmentation, but I currently cannot find an optimal setting. The code is pulled from here. The default step size is too large and quickly smooths out CT details, and the iteration does not stop once the threshold has been reached (I think this line forces the loop to run for at least 20 iterations). I also tried modifying lambascale to bring the resulting lambda near 0.05, as you suggested in the thesis, but without finding an optimal setting.
  4. Minor distortions/misalignments in the primary volumes (MRI and CT in our case) result in a large DSC loss for small, thin structures like cortex, but I cannot find a way to objectively evaluate the degree of misalignment/distortion of the MRI.
  5. Cortex segmentation is suboptimal in deep sulci.

Other minor issues:

  1. The image contrast of the clinical MRI 3D T1 sequence is not designed for research use, so its segmentation result is not optimal.

Summary Again, it's amazing to find that you've already addressed most of the issues that I'm facing now. More testing and validation are needed; if you're interested in further testing and validation on clinical datasets, please let me know (maybe drop me an email). This issue thread has become soooo long and a bit off-topic (thanks for your patience!). I suggest we close this issue and open a new one to discuss further testing and validation, if needed.

brudfors commented 3 years ago

Hi @squll1peter ,

Thanks for more great and interesting results! I agree, let's close this topic and move to our email thread (I will reply to your other emails soon, and answer those questions too). Regarding your above issues:

  1. Hmm, I wonder why this is? I would expect the reslicing in the CTseg code not to do that. Maybe you could share a problematic sample?

  2. Have you tried using the co-registration method in SPM12? I, and many other people, have found it very robust. Just use default settings:

    matlabbatch{1}.spm.spatial.coreg.estimate.ref = {'MRI.nii'};
    matlabbatch{1}.spm.spatial.coreg.estimate.source = {'CT.nii'};
    matlabbatch{1}.spm.spatial.coreg.estimate.other = {''};
    matlabbatch{1}.spm.spatial.coreg.estimate.eoptions.cost_fun = 'nmi';
    matlabbatch{1}.spm.spatial.coreg.estimate.eoptions.sep = [4 2];
    matlabbatch{1}.spm.spatial.coreg.estimate.eoptions.tol = [0.02 0.02 0.02 0.001 0.001 0.001 0.01 0.01 0.01 0.001 0.001 0.001];
    matlabbatch{1}.spm.spatial.coreg.estimate.eoptions.fwhm = [7 7];
    spm_jobman('run',matlabbatch);
  3. The CT denoising method in that code is unfortunately not that robust. The issue is that it requires estimating the noise level in the CT scan, which is quite tricky; if this estimate is not accurate, the method will not generalise. If you know of a robust method for estimating noise in CT images, then it would work well. I am actually currently working on a GPU implementation of the same code (in Python), in which I have a more robust method for estimating the noise (I think). I will hopefully make it public soon, and will let you know when that happens.

  4. Maybe using the SPM12 coreg will help with the alignment issue? If not, it is a bit tricky. Manual inspection is non-optimal, but maybe our only resort?

  5. There are still some minor issues with the model currently used by CTseg. I hope to soon update it to a final version that is trained on more data and includes some corrections to the underlying diffeo-segment code. Hopefully this will help.

Please feel free to reply by email and I'd be happy to work together to automate things!

Best wishes,

Mikael

JohnAshburner commented 3 years ago

CT files often have the origins set a bit strangely in their DICOM headers, which propagates into the NIfTI headers. Maybe if the origins could be re-centred first, then any subsequent MR-CT registration would have more chance of working. All the best, John
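One possible fix along these lines is to reset the origin to the centre of the volume before registration; a minimal sketch of how that could be done with SPM12, where the filename is a placeholder and the volume centre is only a rough initialisation (it will not suit every scan):

```matlab
% Hedged sketch: move the NIfTI origin to the centre of the volume,
% to give MR-CT coregistration a sane starting estimate.
V = spm_vol('CT.nii');          % placeholder filename
M = V.mat;
c = (V.dim(1:3) + 1) / 2;       % centre voxel (1-based indexing)
M(1:3,4) = -M(1:3,1:3) * c(:);  % put the world origin at that voxel
spm_get_space(V.fname, M);      % write the updated header
```

Since this edits the header in place, it is worth keeping a copy of the original `V.mat` if the DICOM-derived origin might be needed later.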


brudfors commented 3 years ago

Great point, thanks @JohnAshburner!

@squll1peter, if you first use the nm_reorient(pth,odir,vx,prefix,deg) function from the CTseg code (on the CT only), and then apply the registration demo code above, it might work much better. We then just need to remember to set the reslicing option to false when segmenting with CTseg.
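Putting the two steps together, the workflow might look like the sketch below; the file names, the nm_reorient argument values (output directory, voxel size, prefix, interpolation degree) and the assumption that it writes a prefixed output file are all placeholders to be checked against the CTseg source:

```matlab
% Hedged sketch: re-orient the CT with CTseg's nm_reorient, then
% rigidly co-register it to the MRI with SPM12 coreg (default options).
nm_reorient('CT.nii', '.', 1, 'ro_', 1);   % assumed to write ro_CT.nii

matlabbatch{1}.spm.spatial.coreg.estimate.ref    = {'MRI.nii'};
matlabbatch{1}.spm.spatial.coreg.estimate.source = {'ro_CT.nii'};
spm_jobman('run', matlabbatch);            % remaining options at defaults
```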