Open sriosq opened 5 days ago
Have you tried calculating the kernel position for the old "hardcoded" line (which I did just as a test) versus the current one? Also, an easy check would be to verify that the position we set to 1/3 is the NaN (or 1, if the float precision didn't round to 0 in the k-space frequency calculation).
No, I haven't tried either of these. I have only tried to replicate the image to confirm this was the commit where we got the results on Wednesday. If you could instruct me on how to test with breakpoints, I'll be able to verify this and confirm which version of the code we should be using.
It's not super obvious to me where the null frequency was being set before vs after, so this might be incorrect, but I think that the position being set to 1/3 "now" is:
int(dimensions[0]/2-1/2*(dimensions[0]%2)), int(dimensions[1]/2-1/2*(dimensions[1]%2)), int(dimensions[2]/2-1/2*(dimensions[2]%2))
whereas before it was:
(dimensions[0]%2)*(kernel_dims[0]-1),(dimensions[1]%2)*(kernel_dims[1]-1),(dimensions[2]%2)*(kernel_dims[2]-1)
So, what Mathieu suggested is that you check if the kernel is equal to NaN (or 1 like Mathieu said) at those positions, before setting them to 1/3. Also, to check what index this corresponds to.
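A minimal numpy sketch of that check (toy dimensions and a generic dipole-kernel expression, not the repo's actual code): it computes the k-space center index with the same formula as above, confirms the DC bin is there, and confirms the kernel is NaN at that position before it gets overwritten with 1/3.

```python
import numpy as np

def center_index(dimensions):
    # After fftshift, the null (DC) frequency sits at N//2 for even N
    # and (N-1)//2 for odd N, which is what int(N/2 - (N % 2)/2) gives.
    return tuple(int(d / 2 - (d % 2) / 2) for d in dimensions)

dims = (4, 4, 5)  # toy [even, even, odd] volume
axes = [np.fft.fftshift(np.fft.fftfreq(d)) for d in dims]
kx, ky, kz = np.meshgrid(*axes, indexing="ij")
k2 = kx**2 + ky**2 + kz**2

idx = center_index(dims)
assert k2[idx] == 0  # the DC bin really is at that index

with np.errstate(divide="ignore", invalid="ignore"):
    kernel = 1 / 3 - kz**2 / k2  # generic dipole kernel; 0/0 -> NaN at the DC bin

print(np.isnan(kernel[idx]))  # True -> safe to overwrite with 1/3 there
```

Running the same check at a breakpoint inside the real code (with the real `dimensions`) would tell you whether both commits are touching the same voxel.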
Hi Sebastien - I've run a thorough debug analysis of both https://github.com/shimming-toolbox/susceptibility-to-fieldmap-fft/commit/40acdc9ceac7eb3da1061f6f389d25f0eb62feb7 and https://github.com/shimming-toolbox/susceptibility-to-fieldmap-fft/commit/d9f785b082fb145d547ff03ae53f23f1564ccc38, and found no difference in the underlying implementation (the code does exactly the same thing; one replaces the NaN when it is at the center of k-space while the other does it after the fftshift). I also ran both implementations on your image, and found the outputs to be identical. Thus, I could not reproduce your "good" result with https://github.com/shimming-toolbox/susceptibility-to-fieldmap-fft/commit/40acdc9ceac7eb3da1061f6f389d25f0eb62feb7.
Output using 40acdc9ce:
Output using latest version:
Did you actually run your image on https://github.com/shimming-toolbox/susceptibility-to-fieldmap-fft/commit/40acdc9ceac7eb3da1061f6f389d25f0eb62feb7 to reproduce your "good" result? If not, how did you determine that this commit was the "good" version?
As for the question raised by the title of this post, I see no convincing evidence that the code should be rolled back, it appears to be working as intended.
Here is a screen recording of everything I did tonight to investigate this (including a demonstration that 1/3 is being set to the middle of k-space / where the nan is after the division by 0), to help you understand how to debug when you suspect something is wrong in code, so that you can further look into this difference in outputs if you wish to do so: https://www.youtube.com/watch?v=GLgX5cqQW8s
Here are additional resources to learn about debugging:
https://docs.python.org/3/library/functions.html#breakpoint
https://docs.python.org/3/library/pdb.html
https://code.visualstudio.com/docs/python/debugging
https://stackoverflow.com/questions/4929251/how-to-step-through-python-code-to-help-debug-issues
https://packaging.python.org/en/latest/guides/distributing-packages-using-setuptools/#working-in-development-mode
https://stackoverflow.com/a/60167509
Okay, perhaps there's an underlying problem with my GitHub knowledge, but what I did was git checkout [https://github.com/shimming-toolbox/susceptibility-to-fieldmap-fft/commit/40acdc9ceac7eb3da1061f6f389d25f0eb62feb7], then pip install . in the repo, and ran the code. I will study the video and check if there is something I may have missed that affects the outcomes, because I am 100% sure I got different results. Thanks for the references. I will post my conclusion here and use the decided version for the chi-optimization code.
Okay, perhaps there's an underlying problem with my GitHub knowledge, but what I did was git checkout [https://github.com/shimming-toolbox/susceptibility-to-fieldmap-fft/commit/40acdc9ceac7eb3da1061f6f389d25f0eb62feb7], then pip install . in the repo, and ran the code.
That is odd. I would suggest you use pip install -e . instead of what the repo suggests (pip install .); since you're developing with the code, changes in there will only be reflected in your command-line calls if it's installed in editable mode. You could check which version your command line "sees" by doing pip freeze:
stack-data==0.6.3
-e git+https://github.com/shimming-toolbox/susceptibility-to-fieldmap-fft.git@40acdc9ceac7eb3da1061f6f389d25f0eb62feb7#egg=susceptibility_to_fieldmap_fft
sympy==1.13.3
As you can see, I was definitely running the 40acdc9cea commit in my test
How can I use pip freeze to verify which version of the code I used to run my simulations? I want to verify this before doing pip install -e . Also, to check where my susceptibility-to-fieldmap repo's HEAD is, I am using git rev-parse HEAD. Doing this I got the hash 40acdc...
Rolling back through different commits, the first one where I see a difference is 100ac14,
Mathieu, this might be a silly question but, when you refer to a certain "hash", do you mean before that commit was made or after that commit was made? Maybe this is the misunderstanding that leads to me getting different results
Hello. I have run the pipeline using the "current" version - hash: d9f785b082fb145d547ff03ae53f23f1564ccc38. I am attaching the image of the new simulated FMs. We can see how the trend and the values are different, but this simulation is missing the skull and brain. I have tried FSL BET, but the mask needs too much manual correction. I have installed FreeSurfer (to use SamSeg) but I am not familiar with this software. I will add them to the segmentations + Gaussian-smooth the body, and come back here with an update on whether the brain and the skull generate a big change.
I was curious about how all these changes would affect the FM from AMU VC (in terms of the metrics). Here is the demodulated FM using the current version of the code:
I used this mask for the average, with a value of -112.47347707615909 Hz
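For reference, a minimal sketch of the masked-average demodulation step (toy arrays and a hypothetical helper name; the actual pipeline code may differ): the mean field inside the mask is subtracted so the simulated and measured fieldmaps can be compared on the same baseline.

```python
import numpy as np

def demodulation_offset(fieldmap, mask):
    """Mean field (Hz) within a binary mask; subtracting it 'demodulates'
    the fieldmap so simulated and measured maps share a baseline."""
    return fieldmap[mask.astype(bool)].mean()

# Toy example with made-up values
fm = np.array([[100.0, -50.0], [-150.0, 20.0]])
mask = np.array([[0, 1], [1, 0]])

offset = demodulation_offset(fm, mask)
demodulated = fm - offset
print(offset)  # -100.0 (mean of -50 and -150)
```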
And finally, this is what the fieldmap looks like (display range [-400, 400]) after demodulation:
With this I am prompted to believe that the missing masks are crucial. Working on it and will come back with an update ASAP. Thanks!
Mathieu, this might be a silly question but, when you refer to a certain "hash", do you mean before that commit was made or after that commit was made? Maybe this is the misunderstanding that leads to me getting different results
I mean I'm using that specific commit
With this I am prompted to believe that the missing masks are crucial.
Interesting - yes this is very plausible I think! Very interested in seeing how your curve will change with brain/skulls.
Also, just out of interest, maybe you could compare the quality of the existing segmentations for a few other relevant regions, i.e. how good and/or large is your trachea segmentation vs the AMU, sinus, lungs?
Oh okay, it's just that I got a bit confused about what the timeline looked like for the code when addressing the commands. On the other hand, do you have any tutorials for using SamSeg for the brain + skull segmentation? I just got it installed, plus the license. Thanks!
On the other hand, do you have any tutorials for using SamSeg for the brain + skull segmentation? I just got it installed, plus the license
I think you just call samseg --i (MPRAGE).nii.gz --o outputfolder/ - I ran this last week just for fun on an old MRI of my head and it worked fairly well with no customization of the command options. The output is an *.mgz image, but it can be loaded by most viewers that can open niftis (e.g. I opened it using FSLeyes)
But - maybe @NathanMolinier or @CharlesPageot would know if some other options worked better for them (I don't recall who exactly did the samseg's for the whole-spine dataset)
Also, just out of interest, maybe you could compare the quality of the existing segmentations for a few other relevant regions, i.e. how good and/or large is your trachea segmentation vs the AMU, sinus, lungs?

Yes, here are some screenshots: I manually completed the trachea, but it's not smooth, and it is also smaller.
I will work on adding + smoothing the trachea and body when adding the brain and skull segmentations, thanks. I hope the command doesn't need much tweaking hehe
The lungs seem to be very similar
I wonder if this method used for CT segmentation of the trachea could work with MR images: https://www.youtube.com/watch?v=tJMGe3FMTk0
Maybe the image would need to be inverted first, but just thought I'd share
I've got a suggestion; if the hypothesis is that the fat/hump in the back of the neck is causing the field in the spinal cord to deviate from the expected trend, why not manually remove it from the segmentations and resimulate to see if this recovers the expected trend?
Hello team, I fine-tuned the missing skull and brain segmentations.
I will now combine these with the segmentations, but I can see that when I overlay these new segmentations I will lose a lot of the sinuses. Did this happen for you as well @mathieuboudreau?
For now I will basically force the segmentations to keep the air cavities, but I would like your feedback.
I will now combine these with the segmentations, but I can see that when I overlay these new segmentations I will lose a lot of the sinuses. Did this happen for you as well @mathieuboudreau?
You should add the airways last when combining them; here's the order I went with:
i.e. -> (first->last): background (air), body, brain, spine, lungs, skull, trachea, sinus, ear canal, and eyes.
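The layering idea can be sketched in a few lines of numpy (toy 1D labels and a hypothetical helper, not the actual script): each label volume painted later overwrites the earlier ones wherever it is nonzero, so air cavities added last are never erased.

```python
import numpy as np

def combine_labels(volumes_in_order):
    """Overlay integer label volumes; later volumes overwrite earlier ones
    wherever they are nonzero (so airways added last always survive)."""
    out = np.zeros_like(volumes_in_order[0])
    for vol in volumes_in_order:
        out = np.where(vol != 0, vol, out)
    return out

# Toy 1D example: "skull" (label 2) painted first, "sinus" (label 5) last
skull = np.array([0, 2, 2, 2, 0])
sinus = np.array([0, 0, 5, 0, 0])
print(combine_labels([skull, sinus]))  # [0 2 5 2 0]
```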
Also, as you can see the sinus is pretty close to the edge of the skull; I'd suggest to grow it by a few mm to ensure this doesn't happen (I think I used 2 or 3 mm, see Margin option in the Segment Editor in Slicer 3D)
Oh I see, this is the way you combined them. But the way I created them before means I must add the skull and brain last. I wrote a Python script for this and the final output is this:
I will retouch the trachea, maybe increase its volume a bit and make it more continuous, and take out the hump in the back as @evaalonsoortiz suggested. Then re-run the pipeline.
Make sure you have a quick way to compare all of the new segmentations with and without the hump in the back. This way we can see if the spinal cord trend that doesn't match the measured B0 trend is due to presence of the hump.
Okay, I'll keep this in mind when plotting. - Update: I have smoothed the trachea and extended it to be more realistic, guided by the anatomical image. But the process of smoothing the body contour is proving more difficult than expected. While Gaussian smoothing, I realized that at the edges of the head, the head is just skull in some areas, so I am manually correcting this. I was wondering if you encountered this too @mathieuboudreau. What I am doing now is, after each smoothing, a fill-holes; then I check for axial slices that might end up empty. Will follow up soon with a finished version, and then run the simulation.
This is an example of what I saw:
Also, for some reason, when using "fill-holes" my segmentations lose important information:
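One plausible explanation (a sketch under the assumption that "fill-holes" behaves like scipy's `binary_fill_holes`, which may differ from the Slicer tool actually used): a fully 3D hole fill closes any air cavity that is completely enclosed by tissue, which is exactly what the sinuses and ear canals are. A workaround is to fill holes first and then paint the known air cavities back on top.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# Toy "head": a solid 5x5x5 cube with a 1-voxel internal air cavity ("sinus")
head = np.ones((5, 5, 5), dtype=bool)
head[2, 2, 2] = False

filled = binary_fill_holes(head)
print(filled[2, 2, 2])  # True: the enclosed air cavity was filled in (lost)

# Workaround: fill holes, then re-apply the known air-cavity masks last
cavity_mask = np.zeros_like(head)
cavity_mask[2, 2, 2] = True
repaired = filled & ~cavity_mask
print(repaired[2, 2, 2])  # False: cavity preserved
```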
This is the final segmentation:
Update - @evaalonsoortiz @mathieuboudreau I have just finished demodulating (factor = -300 Hz). This is with the skull and brain added and the hump taken out. It looks very different. I will look at the segmentation from AMU VC and compare how it could be different and what's affecting the final B0 values. Open to feedback! I can send the chi maps - FMs - segmentations. I have chi maps for chi = [0.4, -2.65, -4.2]
Here is a small gif comparing the segmentations. I have already added more volume to the trachea as well as making it smooth.
Something else that is different is that the "center" of the image is not the same for both; could this explain the difference in results?
No, the FOV should not impact the results.
Do you have a curve for with vs without the "hump" + with the measured field? The demodulated values look like they're really different from what you measured in-vivo...
I don't have a segmentation with the hump that includes the skull and brain. I can plot with the "hump" (no skull or brain) + measured. I quickly plotted:
The simulation with skull + brain and no hump is using chi = -4.2
I'm trying to think about what could be different such that the FM values differ between S&B (skull and brain) and AMU's. Note how AMU is also oddly similar in shape to the acquired one.
This is strange ....
Do you have additional AMU subjects to compare with? Did you simulate the AMU field map yourself? Are there any differences between the way the AMU field map and your own simulated field map are generated?
Do you have additional AMU subjects to compare with? Did you simulate the AMU field map yourself? Are there any differences between the way the AMU field map and your own simulated field map are generated?
The current versions of the subjects I've simulated are all on data.neuro.polymtl.ca
in the whole-spine
data repository, in the mb/air-tissue-labels
branch (also see the pull request: https://data.neuro.polymtl.ca/datasets/whole-spine/pulls/5). I'm currently running the simulation on some manually corrected body masks re: https://github.com/shimming-toolbox/b0-fieldmap-realistic-simulation/issues/14, and they'll be uploaded once I'm done with all the manual corrections and all the simulations (currently finished 20 of the 31). But 29 subjects weren't in need of manual corrections, so Sebastien could pull those to compare right now
Maybe other things to check:
This is strange ....
Do you have additional AMU subjects to compare with? Did you simulate the AMU field map yourself? Are there any differences between the way the AMU field map and your own simulated field map are generated?
I did simulate the AMU field myself, recently using the same command that I used for the other simulations.
The current versions of the subjects I've simulated are all on data.neuro.polymtl.ca in the whole-spine data repository, in the mb/air-tissue-labels branch (also see the pull request: https://data.neuro.polymtl.ca/datasets/whole-spine/pulls/5). I'm currently running the simulation on some manually corrected body masks re: https://github.com/shimming-toolbox/b0-fieldmap-realistic-simulation/issues/14, and they'll be uploaded once I'm done with all the manual corrections and all the simulations (currently finished 20 of the 31). But 29 subjects weren't in need of manual corrections, so Sebastien could pull those to compare right now
I don't know if I have access; still awaiting confirmation. This would take some time though, as I have to create the masks for the demodulation and metric extraction...
Maybe other things to check:
The label vs anatomical overlaid, to ensure no major differences between the size/position of the labels and the actual anatomy
Just from a more abstract point of view, maybe simulate a very simple pair of geometries in a volume (e.g. two cylinders to simulate the trachea vs the spinal canal) and play with the relative distance, diameter, and chi value to see what it takes for one to induce the change in the other that you're expecting around C6-T1.

I'm not sure if this is related to the anatomy, because these are the same segmentations that I used before and I got the expected results. I could try looking for differences in the anatomy of the spine, but this would need to be something to upgrade in the Total Spine Seg model.
I am going to go back to the commit where I got the expected results and see if I get them again with brain, skull and no hump, and follow up here. Do you think this is okay @mathieuboudreau? Because as you mentioned before, the changes should not affect the results.
On a side note, can we use these AMU B0 values to do the optimization @evaalonsoortiz? They are different anatomicals, but it would be better than not being able to do the comparison.
Hello @mathieuboudreau, I don't know what happened with my comment. It says "edited"? Update - I can access this dataset with the VPN, thanks for the link. I somehow thought the access was related to Duke - sorry about that!
Ahhh whoops haha, I thought I had clicked reply but I must have clicked edit instead on my phone - let me restore it - sorry about that!
Here's my reply again,
I don't know If I have access,
Ahhh whoops haha, I thought I had clicked reply but I must have clicked edit instead on my phone - let me restore it - sorry about that!
Hehe no worries! Going to start testing what I mentioned above!
@evaalonsoortiz @mathieuboudreau Here I selected 3 random subjects to plot their FMs along the spine:
Some features of the segmentations that I noted, compared with our db0_30 segmentations:
This is a screenshot of the FMs before demodulation, in ppm; maybe here it's possible to see something that might be giving us such different results?
@sriosq Would the dataset that Julien shared with you (acquired for the Verma paper) allow you to create a subject-averaged along-the-spinal-cord measured B0 curve? If so, then you could use the average measured B0 curve and an average simulated B0 curve (for the AMU dataset) to perform the optimization.
From the Verma paper dataset, I have 6 patients for whom I have an expiration fieldmap with similar anatomy and FOV:
I quickly checked the data available; there is no anatomical (that I could find). But, the way the code is set up for chi optimization, we would need a mask that works on both the measured and the simulated.
Without an anatomical I cannot create a segmentation to perform segmentation -> chi -> B0.
I am stuck trying to figure out what could be different between our simulations and the AMU FM. Something I see that is different is that for the AMU dataset the spinal cord is -9.055 ppm whereas for us it is -9.05.
Then, the lungs and trachea are very similar. The last thing I will check is the spine segmentation, because Mathieu used some tools from Slicer to smooth the segmentations. I used these for the skull, brain and lungs. I will give this a try and hope it works.
The last thing I will check is the spine segmentation, because Mathieu used some tools from slicer to smooth the segmentations.
I didn’t smooth the segmentations for the spine.
It would be a good idea though to compare the quality of those segmentations between a few subjects, for a few key vertebrae
Something I see different is that for AMU data set the Spinal Cord is -9.055 ppm whereas for us is -9.05.
The disks are all set to 9.055 for me.
I only set these values because those are the ones that you defined in your tissue-to-mr repo, see:
Any reason why you went with 9.05 instead of those?
One other thing I noticed tonight. In my mosaic of b0 maps,
I noticed one of the subject's field map in the brain was kinda "split in half" which I thought odd,
I checked the segmentations and recalled that this is one of the subjects where the auto-skull segmentation that Nathan or Charles did was kinda cut off and I had flagged to try and rerun later,
Here's an added overlay between the two,
Maybe reflecting on what caused this B0 change could add insights for your situation (or maybe not!), just thought I'd share.
From the Verma paper dataset, I have 6 patients for whom I have an expiration fieldmap with similar anatomy and FOV:
I quickly checked the data available; there is no anatomical (that I could find). But, the way the code is set up for chi optimization, we would need a mask that works on both the measured and the simulated.
Without an anatomical I cannot create a segmentation to perform segmentation -> chi -> B0.
Assuming these fieldmaps come from Siemens' dual-echo GRE fieldmapping sequence, the output should be one phase difference image (Phase_TE2 - Phase_TE1) and one magnitude image. The magnitude image could be used to segment the spinal cord -> generate a mask.
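For reference, converting such a phase-difference image to a fieldmap in Hz uses the standard relation B0 [Hz] = Δφ / (2π · ΔTE) (a sketch with made-up toy numbers; it assumes the phase difference is already unwrapped and in radians):

```python
import numpy as np

def fieldmap_hz(phase_diff_rad, delta_te_s):
    """Convert an (unwrapped) phase-difference image to a B0 fieldmap in Hz:
    B0 [Hz] = delta_phi / (2 * pi * delta_TE)."""
    return phase_diff_rad / (2 * np.pi * delta_te_s)

# Toy numbers: pi radians accumulated over a hypothetical delta TE of 2.46 ms
dphi = np.array([np.pi])
print(fieldmap_hz(dphi, 2.46e-3))  # ~203.25 Hz
```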
So out of the 3 AMU subjects, only one has the increased trend around C7 like you see in your measured data?
So out of the 3 AMU subjects, only one has the increased trend around C7 like you see in your measured data?
Hello Eva, yes. I could try plotting more subjects to see if the trend happens in more of them.
In terms of the data from the Verma paper: the subjects have 2 subfolders, expiration & inspiration. Inside there are 2 files, TE1-TE2 and TE2-TE3. Inside these files I found magnitude and phase (there are around 20 files).
Hello team, @evaalonsoortiz @mathieuboudreau It is with great joy that I can say the minimization algorithm is up and running. Hope it's not too late!
It's been 10 minutes and it recognized that increasing the chi value above 0.4 is not useful. Initially, it started to decrease the values:
It is still running; it went as far as chi = -7.3 but quickly converged to a value closer to -4:
Still to verify with the plot and FM once it finishes running, but it's exciting to finally get it working properly!! (p.s. I don't know why the iteration number is not updating - will fix soon)
Update - apparently the minimize function does not keep track of the previous values used? Luckily I found a way to save them.
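One common way to record the iterates (a sketch, not necessarily the workaround actually used; the cost function here is a made-up stand-in for the real simulation-vs-measurement comparison) is scipy's `callback` argument, which is invoked once per iteration with the current parameter vector:

```python
from scipy.optimize import minimize

history = []  # one parameter vector per iteration

def cost(chi):
    # Stand-in quadratic cost; the real one would run the simulation
    # and compare the simulated B0 curve against the measured one
    return (chi[0] + 4.0) ** 2

res = minimize(cost, x0=[0.4], method="Nelder-Mead",
               callback=lambda xk: history.append(xk.copy()))

print(res.x)         # converges close to -4
print(len(history))  # number of recorded iterations
```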
Heres the plot with the final values:
And a comparison between this final value vs different chi values close to the literature we've seen before:
I think these preliminary results are interesting. I am personally so glad to see it worked :) I will now further refine the trachea and lung segmentations and optimize both values, because with different values they will perhaps be an even better fit :)
Note that the difference starts at 177 when chi is 0.4 and goes down to 132 after all 46 iterations. This also makes me think of other metrics or other algorithms to test. Fun!
Update - running the optimization with 2 variables i.e. trachea and lungs:
Update - finished running.
As expected, the algorithm is using unrealistic values to try to minimize as best as it can:
FM with range [-800 to 800] bwr
For a realistic result, I will switch to optimize.minimize, since it has optional bounds arguments, without waiting to finish the smoother and more realistic segmentations for the trachea and the lungs.
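A bounded two-variable fit can be sketched like this (made-up stand-in cost and hypothetical bounds, not physically vetted values): with `method="L-BFGS-B"` and `bounds`, the optimizer can no longer wander off to unrealistic chi values.

```python
from scipy.optimize import minimize

def cost(params):
    chi_trachea, chi_lungs = params
    # Stand-in quadratic cost whose unconstrained optimum (-10, -10)
    # deliberately lies outside the "physically plausible" range below
    return (chi_trachea + 10.0) ** 2 + (chi_lungs + 10.0) ** 2

res = minimize(cost, x0=[0.4, 0.4], method="L-BFGS-B",
               bounds=[(-9.0, 1.0), (-9.0, 1.0)])

print(res.x)  # pinned at the lower bounds (-9, -9)
```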
Plot along spine
After finding that the fieldmap generated on Wednesday the 30th was different from the fieldmaps generated as of today, Friday the 1st, I have found the exact commit at which the "before" fieldmap was created - hash: 40acdc9ceac7eb3da1061f6f389d25f0eb62feb7.
As I remember, the "before" fieldmap was calculated with a hard-coded version that worked only for [even, even, odd] dimensions.
Screenshot of output from git diff 40acdc9ceac7eb3da1061f6f389d25f0eb62feb7..d9f785b082fb145d547ff03ae53f23f1564ccc38
With this, to proceed with the chi optimization code: the implemented fix should only make the dimensions work properly, so I don't understand why there's such a big difference in the fieldmap. Maybe, like Mathieu said, it could be due to the hump of the back.
I'd appreciate feedback and a final decision for moving on with the project - @evaalonsoortiz @mathieuboudreau