Closed. JakobAsslaender closed this pull request 2 years ago.
Merging #34 (760805b) into master (fc99c35) will increase coverage by 0.14%. The diff coverage is 74.07%.
```diff
@@            Coverage Diff             @@
##           master      #34      +/-   ##
==========================================
+ Coverage   65.00%   65.14%   +0.14%
==========================================
  Files          63       63
  Lines        3080     3073       -7
==========================================
  Hits         2002     2002
+ Misses       1078     1071       -7
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/Tools/CoilSensitivity.jl | 65.44% <74.07%> (+3.46%) | :arrow_up: |
| src/MRIReco.jl | 50.00% <0.00%> (-7.15%) | :arrow_down: |
> This seems to work well; my only concern is: if the code crashes or is interrupted halfway, a global variable remains changed.
That can be easily achieved like this:
```julia
using LinearAlgebra: BLAS

nblas = BLAS.get_num_threads()   # remember the current setting
try
    BLAS.set_num_threads(1)
    error()                      # stand-in for the actual work, which might throw
finally
    BLAS.set_num_threads(nblas)  # always restored, even after an error
end
```
The `finally` block will also be executed in case of an error.
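For illustration only, here is a self-contained sketch of how this pattern could wrap both the big SVD and the threaded per-voxel loop; the function name and data layout are made up for this example and are not MRIReco's actual API:

```julia
using LinearAlgebra
using LinearAlgebra: BLAS

# Sketch only: `A` stands in for the calibration matrix, `voxels` for the
# per-voxel sub-problems. Neither matches MRIReco's actual data structures.
function espirit_like_sketch(A::AbstractMatrix, voxels::Vector{<:AbstractMatrix})
    nblas = BLAS.get_num_threads()              # remember the caller's setting
    maps  = Vector{Matrix{ComplexF32}}(undef, length(voxels))
    try
        # one big SVD: let BLAS use all available Julia threads
        BLAS.set_num_threads(Threads.nthreads())
        U = svd(A).U

        # the per-voxel step runs on Julia threads, so keep BLAS single-threaded
        # to avoid oversubscription
        BLAS.set_num_threads(1)
        Threads.@threads for i in eachindex(voxels)
            maps[i] = svd(voxels[i]).U          # stand-in for the per-voxel work
        end
        return U, maps
    finally
        BLAS.set_num_threads(nblas)             # restored even if an error is thrown
    end
end
```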
I added the `try`/`finally` statements. Further, I bumped the compat to Julia 1.6, as Polyester and some underlying packages require it (as will the NFFT package upon the next release).
Let's rerun CI. For some reason I need to approve the runs, probably because our tests are so heavy.
The Windows tests fail with an `OutOfMemoryError()`. Not sure if this is actually related to this PR.
I don't think it is. It occurs during `testReconstruction.jl`, not in the ESPIRiT test. Further, it already failed in the CompatHelper PRs, though I somehow cannot access the log files of those runs.
Thanks Jakob for this contribution! Much appreciated!
Hi!
I did a bunch of modifications to the ESPIRiT code, mainly the following:

- The code now supports computing multiple sets of sensitivity maps (`nmaps` sets, for whatever that might be useful...). The low-level function call now always has an additional dimension; in the high-level call, I remove it if `nmaps=1`. My trade-off was not to break anything that uses the high-level function, but to make the output consistent in the low-level call. Happy to change that, though, if you have a different opinion. (See the small sketch at the end of this comment.)
- I set the number of BLAS threads to `Threads.nthreads()` before doing the big SVD and back to its original value thereafter. Similarly, I set it to `1` before the parallel loop over all voxels and back to its original value thereafter. This seems to work well; my only concern is: if the code crashes or is interrupted halfway, a global variable remains changed. But I don't really know how else to do this, given that SVD calls don't allow for setting the number of threads...
- The biggest performance bottleneck right now is, IMHO, the memory of `kern2`. In my 3D use case, this is 140 GB using `ComplexF32`, and filling this array with zeros takes about a minute on our cluster. But a) that seems to be comparable to BART, and b) I don't really see a way out, unless we were to calculate the DFT for each voxel, one at a time. With calibration regions on the order of 6^3 in 3D, this does not seem completely nuts to me, but still...

Let me know what you think about those changes!