Cardiac-MR-Group-Lund / segment-open

Segment Source Distribution

Siemens 4D Flow (latest development version - WIP785A) does not import correctly #2

Closed: fyrdahl closed this issue 6 years ago

fyrdahl commented 6 years ago

Steps to reproduce

4D flow data acquired with Siemens WIP785A does not import into Segment if "Average Magnitude Images" is selected under the Special card.

Expected behavior

Each image should have a unique spacetime position.

Actual behavior

Averaging of magnitude datasets seems to produce duplicate spacetime positions, possibly due to rounding (presumably during recon).
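
For reference, the duplicates can be seen directly in the DICOM headers. A minimal sketch of how one might count them, assuming MATLAB's dicominfo (Image Processing Toolbox) and a folder dcmdir holding one series (both assumptions, not part of the original report):

% Collect (position, trigger time) for each file and count duplicates.
files = dir(fullfile('dcmdir','*.dcm'));
key = zeros(numel(files),4); % [x y z triggertime] per image
for i = 1:numel(files)
  info = dicominfo(fullfile('dcmdir',files(i).name));
  key(i,:) = [info.ImagePositionPatient(:)' info.TriggerTime];
end
fprintf('%d duplicate spacetime positions\n', size(key,1) - size(unique(key,'rows'),1));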

Details

Version/commit: R6069 (7025568ebe87bf18f5e7f0e6874afc02e8562fdd)

Link to data set used: https://www.dropbox.com/s/6p8co63abpamb9z/4D_785A.zip

johannestoger commented 6 years ago

Hi, thanks for adding this as an issue! Happy to see that the issue system is being used.

A few questions/thoughts:

Please let me know what you think.

johannestoger commented 6 years ago

Another question: what is the expected output from the "average magnitude images" setting?

fyrdahl commented 6 years ago

In the product flow sequence and the previous WIP versions, the magnitude images were reconstructed from the velocity-encoded dataset. This version adds the option to output the averaged magnitude images from all velocity-encoding sets to improve the image SNR.
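
To make the averaging concrete: a minimal sketch, assuming the magnitude stacks from the N velocity-encoding sets are available as a cell array magSets of equally sized arrays (an assumption for illustration); for uncorrelated noise, averaging N sets improves SNR by roughly sqrt(N):

% Average the magnitude images over the velocity-encoding dimension.
magAvg = mean(cat(5,magSets{:}),5); % stack along a new 5th dim, then average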

johannestoger commented 6 years ago

OK, will have a look at this in January when I'm back from holidays.

johannestoger commented 6 years ago

Does the error disappear when you turn off "Average Magnitude Images"? Then you can compute the average as a post-processing step.

fyrdahl commented 6 years ago

Yes, this is our current workaround.

johannestoger commented 6 years ago

OK, I think I get the picture now. I don't think it's feasible to patch the standard openfile code to accept datasets with inconsistent spacetime positions, since they're technically not "correct" so to speak. Changing this would also be quite a lot of work and would influence all the clinical versions of Segment as well. @EinarHeiberg - what do you think?

An alternative is to build a loading routine for your files as a Segment plugin. I could write a "bare-bones" version to show you the basics, but making it feature-complete and bug-free would be up to you then. Would this be an OK way forward?
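
For orientation, a very rough sketch of what such a plugin entry point could look like; the plugin_*.m naming and the string-dispatch convention are assumptions modeled on the plugins shipped in this repository, and loadwip785a is a hypothetical helper:

% Hypothetical skeleton of a Segment plugin entry point.
function varargout = plugin_wip785aloader(fcn,varargin)
switch fcn
  case 'getname'
    varargout{1} = 'WIP785A 4D flow loader'; % name shown in the plugin menu
  otherwise
    loadwip785a(varargin{:}); % hypothetical: parse the files, hand the stacks to Segment
end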

johannestoger commented 6 years ago

Just checked the data set linked in the issue description. Verified that it has the same issue as the data set shared privately before.

fyrdahl commented 6 years ago

Your alternative approach sounds like a plausible way forward. I will also raise this issue with Siemens since it does not make sense to have duplicate time positions in an image series, even if it's averaged from acquisitions with slightly different heart rates.

I believe we can close this issue for now?

johannestoger commented 6 years ago

OK.

I think we should close it when we have the workaround implemented and working for your data.

EinarHeiberg commented 6 years ago

I have not yet taken much time to reflect on the issue, but here are my initial thoughts.

I have not looked into the details, but I presume that the duplication occurs in the temporal domain, and as Johannes writes it is thus likely not easy to make a simple fix.

The DICOM reader is likely one of the most complicated parts of Segment, and the one I am least inclined to tinker with, since anything you change can have unexpected side effects given how many variants of DICOM there are. Segment's DICOM reader is likely one of the pickiest readers around, because I want to be able to trust the geometry of the loaded data.

I have previously thought that it would be good to write a dedicated reader for 4D data sets, as it could drop many of the assumptions that need to be part of the current reader (for instance, the current reader does not assume that all files thrown at it actually form one image stack). A new reader could also be written to be less picky. The question is what level of ambition one should have for the new reader. For instance, there are now at least three flavors of Siemens images ("North Western", "standard", "averaged magnitudes"). A new reader should presumably also read GE 4D data sets, and of course Philips (standard and enhanced multiframe MR, for which the current reader is too slow). The good thing is that there are data sets and previous code to re-use for this project.

The question is who has the competence, time, and stamina to perform the work? It is likely rather thankless work, as it can neither be published nor be part of reasonable grants.

johannestoger commented 6 years ago

I agree with you, Einar. I don't want to change anything in the DICOM reader unless I have to. The ideal outcome would be for Siemens to just fix their export, but I suspect that this will be slow. So we should find another solution, I think.

I don't think we should write a general 4D flow loader tool; I think the scope of that tool would explode quickly (as you say, competence, time, and stamina are needed).

What I was suggesting was to just write a simple one-off loader for these specific Siemens datasets. I would do the basics to show how it's done in Segment and then @fyrdahl would do the heavy lifting. Here I'm assuming that @fyrdahl et al. have the programming competence and the motivation to get this done. =)

johannestoger commented 6 years ago

We could also write a tool to patch the DICOM files maybe? Then the files can be loaded by the standard DICOM reader.
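
As a sketch of that idea (not the actual tool): read each file, write corrected timing back, and save a patched copy. This assumes MATLAB's dicominfo/dicomread/dicomwrite, and newTriggerTime is a hypothetical corrected value:

% Patch each DICOM file with a corrected, unique trigger time.
files = dir(fullfile(srcdir,'*.dcm'));
for i = 1:numel(files)
  src = fullfile(srcdir,files(i).name);
  info = dicominfo(src);
  info.TriggerTime = newTriggerTime(i); % hypothetical corrected value
  dicomwrite(dicomread(src),fullfile(dstdir,files(i).name),info,'CreateMode','copy');
end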

EinarHeiberg commented 6 years ago

Yes, that is certainly a possibility (and maybe even the easiest option). However, then there will be two different tools to patch Siemens files (one for the North Western style and one for the averaged data sets). If you are processing a lot of 4D flow, it all adds up in time. Ideally one would like a tool that can batch-load 4D data sets and also perform background correction etc. It all comes down to how much time it will take to write versus how much time will be saved in the end, I guess.

Sincerely

Einar


johannestoger commented 6 years ago

https://xkcd.com/1205/

fyrdahl commented 6 years ago

I'm open to pursuing whichever solution you deem most suitable. I do believe I have the necessary competence and perseverance; the limitation would be finding the time.

fyrdahl commented 6 years ago

Importing works fine with the workaround I discussed with @johannestoger in Barcelona; see the pseudo-code below:

% pos is a 3-by-N matrix of image positions, one column per file (see below)
firstInSlab = find([true diff(pos(1,:))~=0 true]); % slab start indices (plus end sentinel)
tmpPos = pos(:,1:firstInSlab(2)-1);                % positions of the first slab
tmpPos = repmat(tmpPos,[1 numel(firstInSlab)-1]);  % replicate for every slab
pos_fix = reshape(tmpPos,[size(tmpPos,1),1,size(tmpPos,2)]); % insert singleton dimension
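
For completeness, the assumed setup for the snippet above (not part of the original): pos is a 3-by-N matrix of ImagePositionPatient vectors, one column per file, in acquisition order. It could be assembled along these lines:

% Build the 3-by-N position matrix from the DICOM headers.
pos = zeros(3,numel(files));
for i = 1:numel(files)
  info = dicominfo(fullfile(dcmdir,files(i).name));
  pos(:,i) = info.ImagePositionPatient(:);
end
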
johannestoger commented 6 years ago

Great to hear that it works now. Is this part of a script you have written to pre-process the DICOM files? Would you be OK with sharing the full script?

fyrdahl commented 6 years ago

It's not as robust as I would want, but it does the job: https://github.com/fyrdahl/WIP785A-DICOM-Fix

johannestoger commented 6 years ago

OK, thanks. I'll add it to the repository. Do you think this resolves the issue, i.e. can we close this ticket?

fyrdahl commented 6 years ago

Let me verify with the people who are using it.

fyrdahl commented 6 years ago

We have tried the fix on real data, and it seems to work.

johannestoger commented 6 years ago

Happy to hear that it seems to be working. Please let me know if there is anything else we can help with.

EinarHeiberg commented 6 years ago

Great!

Sincerely

Einar Heiberg, founder Medviso AB
