Closed: aTrotier closed this issue 2 years ago.
Thanks for the report. We need to investigate; I did not write that code but just merged it. One question: have you seen
https://github.com/MagneticResonanceImaging/ISMRM_RRSG
and why did it work there?
Actually, for MIRT I am not sure it is working. I have to do more tests.
For your example I have checked the implementation: in MRIReco the parameters for plan_nfft are not the default ones: kernelSize = 3 and oversampling = 1.25.
##################
# sampling weights
##################
"""
    samplingDensity(acqData::AcquisitionData, shape::Tuple)

returns the sampling density for all trajectories contained in `acqData`.
"""
function samplingDensity(acqData::AcquisitionData, shape::Tuple)
  numContr = numContrasts(acqData)
  weights = Array{Vector{ComplexF64}}(undef, numContr)
  for echo = 1:numContr
    tr = trajectory(acqData, echo)
    # for Cartesian trajectories restrict to the sampled nodes
    if isCartesian(tr)
      nodes = kspaceNodes(tr)[:, acqData.subsampleIndices[echo]]
    else
      nodes = kspaceNodes(tr)
    end
    # kernel size 3 and oversampling 1.25 (not the NFFT.jl defaults)
    plan = plan_nfft(nodes, shape, 3, 1.25)
    weights[echo] = sqrt.(sdc(plan, iters=3))
  end
  return weights
end
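One possible reading of the `sqrt.` in the last step (my assumption, not something stated in the code): in a density-weighted least-squares reconstruction min ‖√W (A x − y)‖², the square root of the density weights is multiplied onto both the data and the encoding operator, and the two √W factors combine into a single W in the normal equations. A quick NumPy check of that identity, with a purely illustrative random operator `A`:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))  # toy encoding operator
y = rng.normal(size=8) + 1j * rng.normal(size=8)            # toy data
w = rng.uniform(0.1, 1.0, size=8)                           # density weights
sw = np.sqrt(w)

# weighted normal equations: (A^H W A) x = A^H W y
lhs = A.conj().T @ (w[:, None] * A)
rhs = A.conj().T @ (w * y)

# the same equations, obtained by absorbing sqrt(W) into A and y
Aw = sw[:, None] * A
yw = sw * y
assert np.allclose(Aw.conj().T @ Aw, lhs)
assert np.allclose(Aw.conj().T @ yw, rhs)
```

So returning the square root lets the caller apply the weights symmetrically to data and operator.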
Ah okay, thanks! I was just curious about that, and about whether you had seen that repository.
Ok, after more tests it also fails with MIRT. I have to change the nfft_sigma value (which corresponds to the oversampling).
I can confirm the bug.
It happens for kernel sizes > 3. It seems there is some instability in the sdc
algorithm. It's quite strange that it depends on the kernelSize.
This seems to "fix" it:
p = NFFT.plan_nfft(Float64.(scale_traj), (N,N), 5, oversampling)
But the result still depends strongly on the kernel size. That does not seem good.
@aTrotier: I added a small workaround that seems to allow kernel sizes up to about 6. Could you test this?
Edit: master is right now a little bit difficult to install, so this can probably wait some days until AbstractNFFTs
is released (should be tomorrow).
Ok, I think I now understand what the algorithm does, and it makes sense. I added a test that actually validates that the algorithm does what it is supposed to do. I therefore close this issue.
@JeffFessler: I need to ping you on this issue, since I never looked deeper into the question of how to find the density compensation weights.
- is the method implemented here https://github.com/tknopp/NFFT.jl/blob/master/src/samplingDensity.jl#L3 actually state of the art?
- or do you have an alternative implementation that is numerically stable?
I don't want to look deeper into the method/implementation if the algorithm presented there is actually outdated. It fits nicely here because it has no dependencies, i.e. it needs no Voronoi diagrams.
I think the most recent paper on density compensation functions is https://onlinelibrary.wiley.com/doi/10.1002/mrm.23041, with a C implementation (+ a Matlab wrapper).
DCF does not seem really useful for 2D acquisitions or 3D stacks, but it is mandatory for 3D acquisitions like UTE (at least in BART). Without the DCF I wasn't able to converge to a good solution.
Ok thanks for that. Will you require DCF in the future?
I am just starting to experiment with Julia, but I was using the DCF in my reconstruction pipeline in Matlab / BART. If I switch to Julia I am pretty sure I will need it.
Anyway, it is still a good idea to keep that function:
I don't think it requires a lot of maintenance; the function is short, and your test might be enough to detect most of the issues.
Great. If you want to come up with an implementation, that would be appreciated, since I currently focus on other things. But this can also be done once you have decided to switch.
And thank you very much for the bug report! Much appreciated.
Its implementation seems to be like yours (except for the scaling you just added).
I have to dig a little bit into the paper to see the differences between your implementation (from the Pipe paper) and the one from @nckz. I think it is mostly related to the kernel size / design.
/* ITERATION LOOP
 *
 * 1) grid:   I = ( W * C ) G
 *
 * 2) degrid: D = I * C
 *
 * 3) invert density to get weights: W = 1/D
 */
for (j = 0; j < numIter; j++)
{
    if (verbose) printf("iteration: %d\n", j);

    /* grid using default kernel, out -> grid_tmp */
    if (verbose) printf("\t\tgrid\n");
    grid3(grid_tmp, coords_in, out, kernelTable, norm_rfp, winLen);

    /* degrid using default kernel, grid_tmp -> weights */
    if (verbose) printf("\t\tdegrid\n");
    degrid3(weights_tmp, grid_tmp, coords_in, kernelTable, norm_rfp, winLen);

    /* invert the density to get weights, weights -> out */
    if (verbose) printf("\t\tinvert\n");
    for (i = 0; i < out->num_elem; i++)
        if (weights_tmp->data[i] == 0.) out->data[i] = 0.;
        else out->data[i] /= weights_tmp->data[i];
}
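The loop above can be sketched as a toy 1-D version in Python (all names and the triangle kernel are illustrative, not taken from the C source; gridding followed by degridding at the same sample positions is collapsed into a single kernel matrix `C`):

```python
import numpy as np

def kernel_matrix(coords, width=0.05):
    # C[i, j] = kernel(|x_i - x_j|); grid + degrid at the sample
    # positions acts like one convolution with this matrix.
    d = np.abs(coords[:, None] - coords[None, :])
    return np.clip(1.0 - d / width, 0.0, None)   # triangle kernel

def density_weights(coords, num_iter=20, width=0.05):
    C = kernel_matrix(coords, width)
    w = np.ones(len(coords))
    for _ in range(num_iter):
        d = C @ w                                # 1) grid, 2) degrid
        # 3) invert density to get weights (0 where the density is 0)
        w = np.divide(w, d, out=np.zeros_like(w), where=(d != 0.0))
    return w

# radial-like 1-D sampling, much denser near the k-space center
coords = np.sign(np.linspace(-1, 1, 101)) * np.linspace(-1, 1, 101) ** 2
w = density_weights(coords)
# densely sampled center gets smaller weights than the sparse edges
assert w[50] < w[0]
```

At the fixed point the "gridded" density `C @ w` is flat (about 1 at every sample), which is exactly what the `W = 1/D` step drives the iteration towards.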
Note that the scaling does not change what the algorithm does mathematically if we had infinite precision; the scaling is corrected in a later step anyway.
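That invariance is easy to check on a toy version of the iteration (an illustrative 1-D triangle kernel, not the actual C code): scaling the kernel by `s` only scales every weight by `1/s`, which a final normalization removes.

```python
import numpy as np

def density_weights(coords, s=1.0, num_iter=30, width=0.05):
    # s scales the (illustrative) triangle kernel, mimicking a
    # differently scaled kernel table
    d = np.abs(coords[:, None] - coords[None, :])
    C = s * np.clip(1.0 - d / width, 0.0, None)
    w = np.ones(len(coords))
    for _ in range(num_iter):
        w = w / (C @ w)   # C has a positive diagonal, so C @ w > 0
    return w

coords = np.linspace(-1, 1, 51) ** 3   # nonuniform 1-D samples
w1  = density_weights(coords, s=1.0)
w10 = density_weights(coords, s=10.0)
# the scaled run differs only by the constant factor 1/s
assert np.allclose(w10 * 10.0, w1)
```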
Hi, I am playing a little bit with the module in order to reconstruct the brain datasets from Reproducibility Challenge 1 - SENSE with arbitrary k-space trajectories.
When I use a kernelSize >= 4 for plan_nfft, the result of sdc is 0.00.
With kernelSize = 3 and oversampling = 2 it works, and the resulting image after sum of squares is good.
It also works with kernelSize = 4 and oversampling = 1.25.
Not sure about that
Weirdly, it seems to work if I use the NFFTPlan created by the MIRT module, which uses kernelSize = 4 and oversampling = 2. I don't know if it is related to the trajectory in this dataset.