Closed: alexjaffray closed this issue 1 year ago
I would expect composition to make shallow copies, i.e., to reuse existing object memory as much as possible. I haven't actually tested that with Julia's base ∘ though...
Thanks @JeffFessler, that makes sense to me in the interest of minimizing allocations. You've convinced me that having a shallow copy isn't cause for concern, but after some more digging this morning I've come across something interesting (and still related to copying FieldmapNFFTOp, I've changed the title to reflect this).
Whether we shallow copy or deep copy, we should end up with a copy of the operator that uses at most the same amount of memory as the original, and ideally less if we are clever. At the moment, though, copied FieldmapNFFTOp structs grow with each subsequent copy.
Evaluating an extension to the above example (with N = 256):
F_fmap_nfft = FieldmapNFFTOp((N,N),tr,cmap,symmetrize=false)
F_fmap_nfft_copy = copy(F_fmap_nfft)
and running varinfo() in the REPL shows that the copy is larger than the original operator:
julia> varinfo()
name              size        summary
––––––––––––––––– ––––––––––– –––––––––––––––––––––––––––––––––––––––––––––
Base                          Module
Core                          Module
F_fmap_nfft       81.609 MiB  FieldmapNFFTOp{Float64, Nothing, Function, 2}
F_fmap_nfft_copy  116.675 MiB FieldmapNFFTOp{Float64, Nothing, Function, 2}
If this behaviour is unexpected (and feel free to let me know if it isn't) then I will investigate a fix.
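For what it's worth, one generic way a "copy" can keep getting larger is a copy path that allocates fresh buffers while still sharing (and extending) a mutable field of the original. This is a purely hypothetical sketch with made-up names, not MRIReco's actual implementation:

```julia
# Hypothetical illustration (not the real FieldmapNFFTOp code): a copy
# method that appends a fresh buffer to a *shared* field instead of
# duplicating the field, so the object and all its copies grow over time.
mutable struct Op
    scratch::Vector{Vector{Float64}}   # stand-in for cached plan state
end

function Base.copy(op::Op)
    push!(op.scratch, zeros(1000))     # bug: mutates the shared list
    return Op(op.scratch)              # bug: copy aliases the same list
end

op = Op([zeros(1000)])
c1 = copy(op)
c2 = copy(c1)
length(op.scratch)   # 3 — the original grew too, since the list is shared
```

A pattern like this would explain both symptoms at once: each copy reports a larger size, and the original is not independent of its copies.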
Interesting. Normally I would not expect a copy to take more memory, though I see that deepcopy is used often in this package, so probably one of those calls does it:
https://github.com/search?q=repo%3AMagneticResonanceImaging%2FMRIReco.jl%20deepcopy&type=code
I have not had a deeper look into this issue so far. If I recall correctly, our copy is deep, which might sound counterintuitive at first sight, but the point is that we often copy these operators in order to execute them in parallel (multithreaded). If the state of the operator is not copied deeply, one will compute garbage.
Having said that, we might need to rename all of our copy calls into deepcopy. But for that we would first need to understand the exact semantics of both functions.
julia> A = rand(3)
3-element Vector{Float64}:
0.996877573163787
0.6833890929190753
0.7392402674660837
julia> B = copy(A)
3-element Vector{Float64}:
0.996877573163787
0.6833890929190753
0.7392402674660837
julia> A .= 0
3-element Vector{Float64}:
0.0
0.0
0.0
julia> B
3-element Vector{Float64}:
0.996877573163787
0.6833890929190753
0.7392402674660837
This looks deep. But it's a call to copy.
Here's an example that is more surprising (to me). I did not expect copy to duplicate the arrays pointed to by an Any vector like this.
julia> x = [zeros(2), "hi"]
2-element Vector{Any}:
[0.0, 0.0]
"hi"
julia> y = copy(x)
2-element Vector{Any}:
[0.0, 0.0]
"hi"
julia> y[1] = 7;
julia> y
2-element Vector{Any}:
7
"hi"
julia> x
2-element Vector{Any}:
[0.0, 0.0]
"hi"
@tknopp @JeffFessler I pushed a branch named fixFieldmapNFFTCopy that contains a possible fix for this issue (at least the copies returned are the same size as the original op). Also verified with repeated calls to copy (nothing starts to grow).
Great, thanks for fixing @alexjaffray. So I am closing this issue for now. Please reopen if there are still issues.
@nbwuzhe, @mrikasper and I are debugging some intermittent segmentation faults (or a memory leak, we are not sure) in our (GIRF+B0)-aware spiral reconstruction workflow, and in the process have stumbled across what is likely a bug. It seems as though composition of operators with FieldmapNFFTOp results in a shallow copy of at least the plans field of the FieldmapNFFTOp. This is illustrated in the following MWE; the example fails on the last assert.
@tknopp would it be possible to do a true deepcopy of the FieldmapNFFTOp?