JuliaMath / NFFT.jl

Julia implementation of the Non-equidistant Fast Fourier Transform (NFFT)

sdc not working for specific combinations of kernelSize and oversampling in the plan_nfft function #60

Closed. aTrotier closed this issue 2 years ago.

aTrotier commented 2 years ago

Hi, I am playing a little bit with the module in order to reconstruct the brain dataset from Reproducibility Challenge 1 - SENSE with arbitrary k-space trajectories.

When I use a kernelSize >= 4 for plan_nfft, the result of sdc is all zeros:

using HDF5, NFFT

##
N = 300      # image matrix size
CH = 12      # number of receive channels (unused below)

rawdata = HDF5.h5read("../data/rawdata_brain_radial_96proj_12ch.h5", "rawdata")
trajectory = HDF5.h5read("../data/rawdata_brain_radial_96proj_12ch.h5", "trajectory")

# scale the trajectory to [-0.5, 0.5] as expected by plan_nfft
scale_traj = trajectory ./ (2 * maximum(abs.(trajectory)))
# keep kx/ky and flatten readout x projections into one list of nodes
scale_traj = reshape(scale_traj[:, :, 1:2], size(scale_traj, 1) * size(scale_traj, 2), 2)
# plan_nfft expects the nodes as a (dimensions x samples) matrix
scale_traj = permutedims(scale_traj, (2, 1))

kernelSize = 4
oversampling = 2
p = NFFT.plan_nfft(scale_traj, (N, N), kernelSize, oversampling)
dcf = NFFT.sdc(p, iters = 10)

This returns:

49152-element Vector{Float32}:
 0.0
 0.0
 0.0
 0.0
 0.0
 ⋮
 0.0
 0.0
 0.0
 0.0

With kernelSize = 3 and oversampling = 2 it works, and the resulting image after sum of squares is good:

49152-element Vector{Float32}:
 4.75889f-6
 5.3684266f-6
 8.760909f-6
 1.4971239f-5
 1.5593725f-5
 ⋮
 1.5593723f-5
 1.4971239f-5
 8.760909f-6
 5.368427f-6

It also works with kernelSize = 4 and oversampling = 1.25.
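For reference, the behaviour can be checked with a small sweep over kernel sizes (a hypothetical snippet, reusing scale_traj and N from above; with oversampling = 2 the weights collapse to zero from kernelSize = 4 on):

for kernelSize in 2:6
    p = NFFT.plan_nfft(scale_traj, (N, N), kernelSize, 2)
    dcf = NFFT.sdc(p, iters = 10)
    println("kernelSize = $kernelSize: extrema(dcf) = ", extrema(dcf))
end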

I am not sure about this, but weirdly it seems to work if I use the NFFTPlan created by the MIRT module, and that one uses kernelSize = 4 and oversampling = 2.

I don't know if it is related to the trajectory in this dataset.

tknopp commented 2 years ago

Thanks for the report. We need to investigate; I did not write that code but just merged it. One question is whether you have seen

https://github.com/MagneticResonanceImaging/ISMRM_RRSG

and why it worked there?

aTrotier commented 2 years ago

Actually, for MIRT I am not sure it is working; I have to do more tests.

For your example I have checked the implementation: in MRIReco the parameters for plan_nfft are not the default ones, but kernelSize = 3 and oversampling = 1.25:

##################
# sampling weights
##################
"""
    samplingDensity(acqData::AcquisitionData,shape::Tuple)
returns the sampling density for all trajectories contained in `acqData`.
"""
function samplingDensity(acqData::AcquisitionData,shape::Tuple)
  numContr = numContrasts(acqData)
  weights = Array{Vector{ComplexF64}}(undef,numContr)
  for echo=1:numContr
    tr = trajectory(acqData,echo)
    if isCartesian(tr)
      nodes = kspaceNodes(tr)[:,acqData.subsampleIndices[echo]]
    else
      nodes = kspaceNodes(tr)
    end
    plan = plan_nfft(nodes, shape,3, 1.25)
    weights[echo] = sqrt.(sdc(plan, iters=3))
  end
  return weights
end
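For context, these weights are typically applied to the k-space samples before the adjoint NFFT. A minimal sketch of a density-compensated adjoint reconstruction (assuming some nodes matrix, image size N, and a vector of k-space samples kdata, none of which come from this thread):

p = NFFT.plan_nfft(nodes, (N, N), 3, 1.25)
w = NFFT.sdc(p, iters = 3)
img = nfft_adjoint(p, w .* kdata)   # density-compensate, then adjoint NFFT

MRIReco presumably takes the square root above because the weights enter its weighted least-squares formulation twice, once on the data and once on the encoding operator.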
tknopp commented 2 years ago

Ah okay, thanks! I was just curious about that, and whether you had seen that repository.

aTrotier commented 2 years ago

Ok, after more tests it also fails with MIRT. I have to change the nfft_sigma value (which corresponds to the oversampling).

tknopp commented 2 years ago

I can confirm the bug.

tknopp commented 2 years ago

It happens for kernel sizes > 3. It seems that there is some instability in the sdc algorithm. It's quite strange that it depends on the kernelSize.

tknopp commented 2 years ago

This seems to "fix" it: p = NFFT.plan_nfft(Float64.(scale_traj), (N, N), 5, oversampling). But the result still depends strongly on the kernel size, which does not seem good.

tknopp commented 2 years ago

@JeffFessler: I need to ping you on this issue, since I never looked deeper into the question of how to find the density compensation weights.

I don't want to look deeper into the method/implementation if the algorithm presented there is actually outdated. It fits nicely here because it has no dependencies, i.e. it needs no Voronoi diagrams.

tknopp commented 2 years ago

@aTrotier: I added a small workaround that seems to allow for kernel sizes up to about 6. Could you test this?

Edit: master is right now a little difficult to install, so this can probably wait a few days until AbstractNFFTs is released (should be tomorrow).

tknopp commented 2 years ago

Ok, I think I now understand what the algorithm does, and it makes sense. I added a test that validates that the algorithm does what it is supposed to do, so I am closing this issue.
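The test is roughly of the following form (a sketch only, not necessarily the exact test that was added): for uniformly spaced nodes the sampling density is constant, so sdc should return nearly equal weights.

using NFFT, Test

M = 64
# uniform 1D nodes in [-0.5, 0.5), i.e. constant sampling density
nodes = collect(reshape(range(-0.5, 0.5 - 1/M, length = M), 1, M))
p = NFFT.plan_nfft(nodes, (M,), 4, 2.0)
w = NFFT.sdc(p, iters = 20)
@test maximum(w) / minimum(w) ≈ 1 atol = 0.01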

aTrotier commented 2 years ago

> @JeffFessler: I need to ping you on this issue, since I never looked deeper into the question of how to find the density compensation weights.
>
> I don't want to look deeper into the method/implementation if the algorithm presented there is actually outdated. It fits nicely here because it has no dependencies, i.e. it needs no Voronoi diagrams.

I think the latest paper on density compensation functions is https://onlinelibrary.wiley.com/doi/10.1002/mrm.23041, which comes with a C implementation (+ Matlab wrapper).

The DCF does not seem really useful for 2D acquisitions or stack-based 3D acquisitions, but it is mandatory for true 3D acquisitions like UTE (at least in BART). Without the DCF I was not able to converge to a good solution.

tknopp commented 2 years ago

Ok thanks for that. Will you require DCF in the future?

aTrotier commented 2 years ago

I am just starting to experiment with Julia, but I was using the DCF in my reconstruction pipeline in Matlab / BART. If I switch to Julia, I am pretty sure I will need it.

Anyway, it is still a good idea to keep that function: I don't think it requires a lot of maintenance, the function is short, and your test might be enough to detect most of the issues.

tknopp commented 2 years ago

Great. If you want to come up with an implementation, that would be appreciated, since I am currently focused on other things. But this can also be done once you have decided to switch.

tknopp commented 2 years ago

And thank you very much for the bug report! Much appreciated.

aTrotier commented 2 years ago

Its implementation seems to be like yours (except for the scaling you just added).

I have to dig a little into the paper to see the differences between your implementation (from the Pipe paper) and the one from @nckz. I think it is mostly related to the kernel size / design.

/* ITERATION LOOP
 *
 *  1) grid: I = ( W * C ) G
 *
 *  2) degrid: D = I * C
 *
 *  3) invert density to get weights: W = 1/D
 */
for (j = 0; j < numIter; j++)
{
    if (verbose) printf("iteration: %d\n", j);

    /* grid using default kernel, out -> grid_tmp */
    if (verbose) printf("\t\tgrid\n");
    grid3(grid_tmp, coords_in, out, kernelTable, norm_rfp, winLen);

    /* degrid using default kernel, grid_tmp -> weights */
    if (verbose) printf("\t\tdegrid\n");
    degrid3(weights_tmp, grid_tmp, coords_in, kernelTable, norm_rfp, winLen);

    /* invert the density to get weights, weights -> out */
    if (verbose) printf("\t\tinvert\n");
    for (i = 0; i < out->num_elem; i++)
        if (weights_tmp->data[i] == 0.) out->data[i] = 0.;
        else out->data[i] /= weights_tmp->data[i];
}
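In Julia the same loop would look roughly like this (a sketch with hypothetical gridding / degridding helpers standing in for grid3 / degrid3; this is not the NFFT.jl source):

# gridding(w):   spread the weighted samples onto the grid with the kernel C
# degridding(g): sample the gridded data back at the nonuniform nodes with C
function pipe_menon_weights(gridding, degridding, M::Int, numIter::Int)
    w = ones(M)                         # start from uniform weights
    for _ in 1:numIter
        g = gridding(w)                 # 1) grid:   I = (W * C) on G
        d = degridding(g)               # 2) degrid: D = I * C
        # 3) invert the density to get weights, guarding against division by zero
        w = map((wi, di) -> di == 0 ? zero(wi) : wi / di, w, d)
    end
    return w
end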
tknopp commented 2 years ago

Note that the scaling does not change what the algorithm does mathematically, assuming infinite precision: scaling the weights by a constant scales the convolved density by the same constant, so the factor cancels in the division. The scaling is corrected in a later step anyway.