slimnsour closed this pull request 3 years ago.
Hello @slimnsour, Thank you for updating!
Cheers! There are no style issues detected in this Pull Request. :beers: To test for issues locally, `pip install flake8` and then run `flake8 dmriprep`.
Okay, I've sent some more concrete suggestions to slimnsour/dmriprep#1 - please review and merge if you agree.
EDIT: BTW, after you merge my PR, I'd still like to check on a couple of things. The most relevant is whether Eddy writes out the correction parameters. If so, this PR should also include the post-processing (namely, decomposing the affines and getting separate HMC and ECC parameters).
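If eddy does write out per-volume affines, one way to separate a head-motion (HMC) component from an eddy-current-like (ECC) residual is a polar decomposition of the 3x3 block. The helper below is a hypothetical sketch of that idea, not dmriprep code:

```python
import numpy as np

def split_hmc_ecc(affine):
    """Split a 4x4 affine into a rigid (head-motion) part and a
    residual (eddy-current-like) part, so that affine = rigid @ residual.

    Illustrative only: uses a polar decomposition (via SVD) of the
    3x3 block to extract the closest rotation.
    """
    A = np.asarray(affine, dtype=float)
    U, _, Vt = np.linalg.svd(A[:3, :3])
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    rigid = np.eye(4)
    rigid[:3, :3] = R
    rigid[:3, 3] = A[:3, 3]  # assign the translation to the motion part
    residual = np.linalg.inv(rigid) @ A  # what remains after removing rigid
    return rigid, residual
```

How translation and shear should actually be attributed between HMC and ECC is a modeling decision; this just shows that the factorization is mechanical once the affines exist.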
BTW - we will also need to tinker with the parameters so that we can run a fast (and inaccurate) eddy process on Circle (i.e., with the `--sloppy` flag).
Sorry for the delay on this. I figured out that the issue with the tests was that there were outdated Python 2 print statements in code used by `eddy_correct`. I created an updated file that lets `eddy_correct` run correctly, and it is used to overwrite the base FSL one through our `.docker` folder.
As for the `--sloppy` implementation, I'm not sure what else can be done with the limited `eddy_correct` parameters to make it faster. I'd be happy to implement any solutions if you have suggestions (maybe a parameter that skips `eddy_correct` altogether?).
I believe the current build failures are related to svgutils' new version interfering with niworkflows (see https://github.com/nipreps/niworkflows/issues/595); it produces the same `AttributeError: 'numpy.int64' object has no attribute 'value'` message. I'm investigating further for potential fixes.
Yup, can you check whether the version of svgutils being installed in the Docker build step of CircleCI has changed here w.r.t. the latest successful master?
From SSHing into the CircleCI build I can confirm that it has the problematic svgutils, 0.3.2. Also, going back to 0.3.1 no longer causes the int64 issues. Would the best move here be to block 0.3.2 from being installed until a fix is out?
If you want to be a good citizen, you probably want to report this in their repo, pin `svgutils < 0.3.2`, and then undo the pinning if it turns out not to be a bug but rather a misuse no longer allowed after 0.3.1.
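The pin itself is a one-line requirement; either spelling below is valid pip requirement-specifier syntax (where exactly dmriprep declares its dependencies, and which spelling the maintainers prefer, is left open):

```
# requirements fragment
svgutils < 0.3.2    # conservative pin until the upstream fix is released
# or, once 0.3.1-compatible usage is confirmed upstream:
svgutils != 0.3.2   # exclude only the broken release
```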
Thanks for mentioning the issue. It seems it was fixed in https://github.com/btel/svg_utils/issues/57#issuecomment-757552573 (which I have also tested and can confirm it works) but a new release isn't out yet. Either way I'll exclude the problematic release.
Thanks @slimnsour! I just checked the QC reports. ds000206 looks good but the mask in ds001771 looks wonky. Should we try rerunning that workflow?
Good call, not sure what happened there. I'll try rerunning. Edit: I just checked my own test output and it seemed fine too.
Rerun finished and mask looks good! Let me know if there's anything else to do, otherwise I think we can merge this.
It seems the ds000206 dataset doesn't have the necessary info for running `get_trt` (it reaches the end of the function here https://github.com/nipreps/sdcflows/blob/4ce58c0907accc1b92770caa602e2a11a0962825/sdcflows/utils/epimanip.py#L206 without returning anything).
Do you have any suggestions on how we can go about this? Perhaps a parameter that allows a default value to be used when no TRT can be calculated would work here.
For reference, the `in_meta` for ds000206 is:

```python
in_meta = {'EchoTime': 0.092, 'FlipAngle': 90, 'MagneticFieldStrength': 3,
           'Manufacturer': 'Philips', 'ManufacturersModelName': 'Achieva',
           'PhaseEncodingDirection': 'j', 'RepetitionTime': 6.96372}
```
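A quick check makes it clear why `get_trt` falls through: none of the BIDS fields from which a total readout time can usually be derived are present (the exact field set below is an approximation of what sdcflows looks for):

```python
in_meta = {'EchoTime': 0.092, 'FlipAngle': 90, 'MagneticFieldStrength': 3,
           'Manufacturer': 'Philips', 'ManufacturersModelName': 'Achieva',
           'PhaseEncodingDirection': 'j', 'RepetitionTime': 6.96372}

# BIDS metadata fields that typically allow deriving a total readout time
trt_fields = {'TotalReadoutTime', 'EffectiveEchoSpacing',
              'EstimatedTotalReadoutTime', 'EstimatedEffectiveEchoSpacing'}

missing_all = trt_fields.isdisjoint(in_meta)
print(missing_all)  # True: none of the usable fields are present
```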
I wonder whether we could drop in a dummy value and raise a warning in these cases.
Reading the "Does it matter what I put into my `--acqp` file?" section here, it seems like it might work in many cases.
Could you give it a try with this dataset? Maybe we can look at the outputs to see whether it does something crazy?
Thanks for the suggestion @arokem, I think that's a fair addition. I have implemented this now with a backup value of `0.05`, and the outputs look reasonable enough, as FSL's eddy page suggests they should. Let me know what you think, but otherwise everything looks to be finished here.
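For reference, each row of eddy's `--acqp` file encodes a phase-encoding vector plus the total readout time. A hypothetical helper (not the actual dmriprep code) showing what the fallback row would look like for this dataset's `PhaseEncodingDirection` of `j`:

```python
def acqp_row(pe_dir, trt=0.05):
    """Build one line of an FSL eddy --acqp file.

    pe_dir is a BIDS PhaseEncodingDirection ('i', 'j', or 'k', optionally
    with a '-' suffix); trt is the total readout time in seconds (here
    defaulting to the 0.05 backup value).
    """
    vec = {'i': [1, 0, 0], 'j': [0, 1, 0], 'k': [0, 0, 1]}[pe_dir[0]]
    if pe_dir.endswith('-'):
        vec = [-v for v in vec]
    return ' '.join(str(v) for v in vec) + f' {trt}'

print(acqp_row('j'))   # "0 1 0 0.05"
print(acqp_row('j-'))  # "0 -1 0 0.05"
```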
If the parameter has no influence over the outcome, then I think dropping a dummy value is fine.
However, if it does, then I would fix our test dataset (writing the dummy value there) and let dMRIPrep break when people do not specify their dataset in full. That will force them to write the dummy value themselves, which hopefully will lead them to consider two things:
Let's take advantage of BIDS here!
Please rebase on upstream/master when #151 is merged :)
> If the parameter has no influence over the outcome, then I think dropping a dummy value is fine.
Just chiming in to confirm that eddy does not use total readout time if we don't provide the fieldmap or topup arguments. I've tested this out with a few different dummy values in the past and would get the same result.
We may face an issue when attempting to apply a true fieldmap to the dwi data and the total readout time can't be calculated. But that can be resolved by emitting a warning or erroring out.
Porting over from our old dmriprep project. I also had to install a full version of FSL in the Dockerfile in order for the pipeline to have access to `eddy`.