scott-huberty opened this issue 3 months ago
The issue is more with measuring coverage during workflow runs. Instrumenting nipype to enable this during multiprocessed runs would be extremely difficult.
The approach we've taken in fmriprep is to flip switches to make sure the workflows can be built: https://github.com/nipreps/fmriprep/pull/3155
Fully on board with this - we need to regularly exercise each workflow branch to avoid introducing problems on patches. I like the fmriprep approach, as it is a relatively quick and easy way to at least test for workflow structure errors (syntax errors, cyclic graphs, invalid connections, etc.). @scott-huberty does this sound like something you would want to take a crack at?
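For reference, here is a minimal sketch of what such a workflow-construction smoke test could look like, modeled on the fmriprep tests linked above. The `config` module layout, the `init_nibabies_wf` builder, and the `minimal_bids_dataset` fixture are assumptions and would need to be adapted to the actual NiBabies API:

```python
# Sketch only: config attributes, init_nibabies_wf, and the
# minimal_bids_dataset fixture are assumed to mirror fmriprep's layout.
import pytest

from nibabies import config
from nibabies.workflows.base import init_nibabies_wf


@pytest.mark.parametrize("anat_only", [False, True])
@pytest.mark.parametrize("cifti_output", [False, "91k"])
def test_build_workflow(tmp_path, minimal_bids_dataset, anat_only, cifti_output):
    """Flip config switches and check that the workflow graph can be built."""
    config.execution.bids_dir = minimal_bids_dataset
    config.execution.output_dir = tmp_path / "out"
    config.execution.work_dir = tmp_path / "work"
    config.workflow.anat_only = anat_only
    config.workflow.cifti_output = cifti_output

    # Constructing the graph (without running it) already surfaces
    # syntax errors, invalid connections, and cyclic graphs.
    wf = init_nibabies_wf()
    assert wf.list_node_names()
```

Since nothing is actually executed, building the graph for each parameter combination should stay well within standard CI runner limits.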
Thanks, both! I agree that the fmriprep approach sounds reasonable.
Definitely interested in helping implement this. The Ni[prep/pype] internals are still abstract to me, so I might have a lot of questions along the way, if that's okay. 🙂
A short summary of what you would like to see in NiBabies.
If the Codecov report is accurate, NiBabies test coverage is low (33%).
As a start, maybe we can implement some additional smoke tests to make sure that obvious code regressions aren't introduced, e.g. that the pipeline continues to run with the various parameters (#375), plus other low-hanging fruit like checking that modules import properly (#373, #365); a sketch of an import check follows below.
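For the import check specifically, a small parametrized test that walks the package would be enough (a sketch, assuming pytest is the test runner):

```python
# Sketch of an import smoke test: walk the nibabies package and import
# every module so broken imports fail fast in CI.
import importlib
import pkgutil

import pytest

import nibabies


def _iter_module_names():
    for mod in pkgutil.walk_packages(nibabies.__path__, prefix="nibabies."):
        yield mod.name


@pytest.mark.parametrize("module_name", sorted(_iter_module_names()))
def test_module_imports(module_name):
    """Importing each module catches syntax errors and missing dependencies."""
    importlib.import_module(module_name)
```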
Do you have any interest in helping implement the feature?
Yes!
Add any additional information or context about the request here.
I understand that there are likely computational challenges when it comes to testing this pipeline in CI. I'm interested in hearing what the current challenges are and whether there are alternative ideas.