mind-inria / mri-nufft

Doing non-Cartesian MR Imaging has never been so easy.
https://mind-inria.github.io/mri-nufft/
BSD 3-Clause "New" or "Revised" License

Distortion correction with a known fieldmap using mri-nufft #203

Open wenshangwang opened 1 month ago

wenshangwang commented 1 month ago

Hello

I am wondering if I can discuss a problem I am encountering by going through with you what I have and what I did. I would like to see if there are any steps I have done incorrectly.

I have SE-EPI k-space data where N/2 ghosting correction and regridding have been performed, and the reverse-readout lines are time-reversed. So my k-space is currently a uniformly sampled 64x64 Cartesian grid for each slice, and the phase error caused by field inhomogeneity is the only thing making the k-space non-uniform.

Since I acquired both PE directions, I used TOPUP to generate a fieldmap (unit: Hz) to use in this reconstruction. I had to generate the k-space trajectory and sample-time array on my own because there isn't a built-in choice for Cartesian. I generated a k-trajectory that starts at [-kmax, -kmax] and ends at [kmax, kmax], i.e. from the bottom line to the top line. Am I building this k-trajectory correctly?

I then generate the NUFFT operator using finufft, and the off-resonance-corrected NUFFT operator from that operator, the fieldmap and the sample-time matrix. Since I already have the k-space data, I apply adjoint_op of the off-resonance-corrected NUFFT operator. I perform the correction on each channel's data and combine the channels using root-sum-of-squares (same as the scanner). Is it fine to directly use adjoint_op on the k-space? I shouldn't need to first apply orc_nufft.op to the distorted image data to get k-space, correct?
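For reference, here is a minimal sketch of what I mean by the per-channel adjoint followed by root-sum-of-squares combination (assuming the off-resonance-corrected operator `orc_nufft` and the multi-channel k-space `kspace` of shape `(n_coils, n_samples)` are already built; the names are only for illustration):

```python
import numpy as np

# Per-channel off-resonance-corrected adjoint, then root-sum-of-squares combine.
# `orc_nufft` and `kspace` are placeholders for my actual operator and data.
coil_images = np.stack(
    [orc_nufft.adjoint_op(kspace[c]) for c in range(kspace.shape[0])]
)
rss_image = np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
```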

The correction looks good on the slices where there isn't a large field in the temporal region: [image] It can be seen that the distortion correction follows the field effect.

However, the correction looks quite strange for the slices where a large field is present in the temporal region: [image]

It does feel like the correction is being driven by the phase induced by the fieldmap. Do you think the fieldmap from TOPUP is not a suitable choice? What are the rules for building the k-trajectory and the sample-time array? Those are the two things I generated myself, so there is a chance I made mistakes there. Or maybe, for a more complete image reconstruction, I should formulate the problem as an inverse problem and solve it, instead of relying on the NUFFT operator alone.

Sorry for the long paragraphs and multiple questions. I just started my graduate research on MRI and image processing and am excited to learn from anybody. Please let me know if any clarification is needed for anything above.

Best Wenshang

paquiteau commented 1 month ago

Hi Wenshang, Thanks for opening the issue. Could you share an MWE? Also, the generation of your EPI trajectory could indeed be the culprit. In MRI-NUFFT we do have an EPI-like trajectory (see https://mind-inria.github.io/mri-nufft/generated/autoexamples/example_3D_trajectories.html#turbine) that can be adapted (or at least check its source code to adapt it to your use case). MRI-NUFFT uses the [-0.5, 0.5) convention (and adapts it for each backend internally).
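For illustration (variable names here are hypothetical), a trajectory defined in physical units spanning [-kmax_physical, kmax_physical] can be rescaled into that convention like so:

```python
# Rescale a trajectory from [-kmax_physical, kmax_physical] into MRI-NUFFT's
# [-0.5, 0.5) convention. Note the interval is half-open, so the +0.5
# endpoint itself should normally not be sampled.
k_traj_normalized = k_traj_physical / (2 * kmax_physical)
```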

Your B0 map seems quite correct. As a first step, I would check that the labels of your bottom row are correct; silly mistakes happen to everyone.

On another note, since you are using Topup, why do you not use the corrected image provided by Topup directly? What does it look like compared to your manual correction?

chaithyagr commented 1 month ago

Thank you for your interest in our work @wenshangwang. I strongly suspect that the time array you have generated isn't on the right scale. Did you provide the readout time in seconds? Can you point to a short example of your code so we can help you better?

wenshangwang commented 1 month ago

Thanks to you both for your replies.

@paquiteau What is an MWE? Sorry, this is the first time I have heard of it. And what do you mean by the labels of my bottom row?

Yeah, so the reason for not using TOPUP is that we have a time series of images (similar to fMRI) where the signal-decay pattern is the most important thing. When applying TOPUP to all of the time frames, there is an unexpected change of the decay pattern across time in the areas where the field is large. We would like to see if doing the correction within the reconstruction makes a difference. So I am not totally ruling out TOPUP yet; I just need to do more validation and comparison.

@chaithyagr Thank you for your reply. Given how the correction is behaving, I am also guessing that the time array was not generated correctly or is not in the correct format. But I do have all the times in units of seconds. Here is the snippet of my time-array generation: [image] And I used it as: [image]

Looking forward to your reply,

Wenshang

wenshangwang commented 1 month ago

For more information, I have also included my trajectory-generation code here. As I mentioned, the odd lines are already time-reversed. [image]

paquiteau commented 1 month ago

Please insert your code as text (in a ```python ``` block) so that we can copy it easily. MWE = Minimal Working Example (you provided your code, which is what we wanted).

After more thought, I don't quite understand your need for a NUFFT operator here. You could still use the decomposition method, but with a regular FFT, since your sampling pattern is Cartesian. In that case, you should not time-reverse the k-space lines.
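To make this concrete, here is a rough sketch (not MRI-NUFFT code; the sign and FFT-shift conventions are assumptions to be checked against the references) of the time-segmented idea with a plain FFT, using one segment per phase-encode line on a Cartesian grid:

```python
import numpy as np

def time_segmented_ifft(kspace, fieldmap_hz, line_times):
    """Rough sketch of conjugate-phase / time-segmented correction with an FFT.

    kspace      : (N, N) Cartesian k-space, phase encoding along axis 0
    fieldmap_hz : (N, N) off-resonance map in Hz
    line_times  : (N,) acquisition time of each phase-encode line in seconds
    """
    image = np.zeros(kspace.shape, dtype=complex)
    for l, t_l in enumerate(line_times):
        seg = np.zeros_like(kspace)
        seg[l, :] = kspace[l, :]  # keep only the l-th phase-encode line
        partial = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(seg)))
        # undo the phase accumulated at time t_l (sign convention may differ)
        image += np.exp(2j * np.pi * fieldmap_hz * t_l) * partial
    return image
```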

PS: Shameless plug, but for reconstruction of 4D volumes you have the reconstruction methods in pysap-fmri

wenshangwang commented 1 month ago

Sorry, and here is the code:

```python
import numpy as np


def initialize_cartisian_traj(num_samples, kmax):
    """
    Generate k-space locations for a single-shot EPI sampling.

    Parameters:
    - num_samples (int): Number of samples in k-space for both kx and ky directions.
    - kmax (float): Maximum k-space value in both kx and ky directions.

    Returns:
    - kspace_locations (ndarray): A numpy array of shape [1, num_samples*num_samples, 2]
      representing the k-space coordinates for each sample.
    """
    # Create kx linearly spaced samples from -kmax to kmax
    kx = np.linspace(-kmax, kmax, num_samples)

    # Create ky linearly spaced samples from -kmax to kmax
    ky = np.linspace(-kmax, kmax, num_samples)

    # Create a 2D meshgrid of kx and ky coordinates
    kx_grid, ky_grid = np.meshgrid(kx, ky)

    # Stack the kx and ky grids along a new axis to get (num_samples, num_samples, 2)
    k_traj = np.stack([kx_grid, ky_grid], axis=-1)
    # k_traj = np.flipud(k_traj)

    # Reshape the grid to have the desired shape [1, num_samples*num_samples, 2]
    k_traj = k_traj.reshape(1, num_samples * num_samples, 2)

    return k_traj


def create_time_array(N, start_time, adc_duration, esp):
    # Step 1: Create the bottom row (sample times within one readout line)
    bottom_row = np.linspace(start_time, start_time + adc_duration, N)

    # Step 2: Create the first NxN matrix by repeating the bottom row
    first_matrix = np.tile(bottom_row, (N, 1))

    # Step 3: Create the second NxN matrix with values to add
    # Values to add should increase by esp for each row above
    values_to_add = np.arange(0, esp * N, esp)[:, None]  # Column vector added to each row

    # Step 4: Create the final matrix by adding both matrices
    final_matrix = first_matrix + values_to_add
    final_matrix = np.flipud(final_matrix)

    return final_matrix


kmax = 0.5
num_samples = 64
echoSpacing = 0.54e-3
adc_startTime = 33e-6
adc_duration = 473.6e-6

k_traj = initialize_cartisian_traj(num_samples, kmax)
t_ro = create_time_array(num_samples, adc_startTime, adc_duration, echoSpacing)
```

The main reason I am using a NUFFT operator is the off-resonance correction ability included in tools like yours, so that the distortion from the off-resonance field can be corrected on the fly during reconstruction. Yes, I don't have to use a NUFFT, but it is nice that these things are already available. I would be happy to hear about other available methods.

Thank you very much! Wenshang

chaithyagr commented 1 month ago

Sorry, I'm on my phone so I might be really brief here. Looking at your code, I think the readout time is wrong. What mri-nufft expects is the time since each RF pulse, so you don't need to add anything; the first part of your code should be enough.

This is also why @paquiteau's point stands that you don't need mri-nufft; ideally, just using the TOPUP method should be enough for you.

chaithyagr commented 1 month ago

I mean just the first_matrix is enough. Basically, you will not have an increasing phase when you go from one sampling shot to the other. In other words, the phase varies roughly the same way within each echo.

wenshangwang commented 1 month ago

@chaithyagr Thank you! But my sequence is a single-shot EPI, where after each RF excitation we sample the whole 2D slice. That is why I increment the sample time for each line.

chaithyagr commented 1 month ago

Ahh, my bad, I missed that. In that case, in step 3, shouldn't the constant be added in a flipped manner for each opposite (time-reversed) line?

wenshangwang commented 1 month ago

@chaithyagr

The N/2 ghosting correction was performed on the k-space signal before reconstruction. What was done is this: using a non-phase-encoded even/odd line pair from a reference scan, first reverse the opposite-direction odd line, then find the phase difference between the two lines. Then, for the actual data, time-reverse all the odd lines and subtract this phase difference from all the even lines, so that all lines are left without that phase difference (only the phase-encoding-induced phase remains). In this way we eliminate the N/2 ghosting (a rough sketch of the procedure is shown below, after the trajectory figure). Since the time reversal is done for all the odd lines, the k-space trajectory is no longer the zig-zag pattern but a regular Cartesian sampling pattern. Here is how the trajectory I mean looks:

[image]
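In code, the phase-correction procedure described above is roughly the following (a simplified sketch with placeholder data; the exact shifts and any linear phase fitting are omitted):

```python
import numpy as np

# Placeholder data for the sketch
N = 64
ref_even = np.zeros(N, dtype=complex)   # non-phase-encoded even reference line
ref_odd = np.zeros(N, dtype=complex)    # non-phase-encoded odd reference line
ksp = np.zeros((N, N), dtype=complex)   # raw EPI k-space (PE axis 0, readout axis 1)

# 1) time-reverse the odd reference line and estimate the even/odd phase
#    difference in hybrid (x, ky) space
phase_diff = np.angle(np.fft.fft(ref_even) * np.conj(np.fft.fft(ref_odd[::-1])))

# 2) time-reverse every odd readout line of the data
ksp[1::2, :] = ksp[1::2, ::-1]

# 3) subtract the phase difference from the even lines along the readout axis
hybrid = np.fft.fft(ksp, axis=1)
hybrid[0::2, :] *= np.exp(-1j * phase_diff)
ksp_corrected = np.fft.ifft(hybrid, axis=1)
```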

wenshangwang commented 1 month ago

Dear @chaithyagr and @paquiteau

I feel like the time array I create is causing the issue; it may not be correct. This result looks like too much correction has been applied. When I reduce the additional amount of time added to each line, I get a reasonable correction. The following result is when I cut the echo spacing to half of its original value: [image] I would like to know a bit more about how to create this time array. I understand it as an array recording the time at which each k-space point is sampled. What is the t=0 moment? Is it right after the RF pulse, or the start of the ADC sampling? I was originally treating it as the start of the ADC sampling: I take the sample times of the first line, then add the echo spacing to those times for each following line. All these values come from my sequence protocol. But the phase accumulation from the field is there from the moment the protons start precessing, so I would like to know what works for this program. Thanks!
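As a sanity check (assuming the standard off-resonance signal model), each k-space sample picks up a phase that scales linearly with its sample time,

$$ s(t_m) = \int \rho(\mathbf{r})\, e^{-2\pi i\,\Delta f(\mathbf{r})\, t_m}\, e^{-2\pi i\,\mathbf{k}(t_m)\cdot\mathbf{r}}\, \mathrm{d}\mathbf{r}, $$

so scaling the sample times scales the correction the operator applies, which is consistent with the over-correction shrinking when I halve the echo spacing.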

I think that if this can be solved and I create the appropriate time array, my issue will be resolved.

Wenshang

wenshangwang commented 1 month ago

After going through the source code of the off-resonance operator, I have a better understanding of how I should do things. I should still build the sample times exactly as the data were acquired, but change n_time_segments to the number of PE lines, so that the time of each segment generated in get_interpolators_from_fieldmap represents the sample time of that PE line. This makes sense, especially since the distortion is mainly along the PE direction. This gives me corrected images that make a lot of sense. I think I should read the papers you provide in the references for off-resonance correction for a deeper understanding.
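For reference, this is roughly how I set it up (a sketch only: `k_traj` and `t_ro` come from my code above, `fieldmap_hz` and `kspace_slice` are placeholders, and the keyword names for the fieldmap and readout time are what I inferred from the source and may not match the released API exactly):

```python
from mrinufft import get_operator
from mrinufft.operators.off_resonance import MRIFourierCorrected

# Plain NUFFT operator on my Cartesian EPI trajectory (one 64x64 slice)
nufft = get_operator("finufft")(k_traj.reshape(-1, 2), shape=(64, 64))

# Off-resonance-corrected operator: one time segment per phase-encode line,
# so each segment time matches the sample time of that PE line.
orc_nufft = MRIFourierCorrected(
    nufft,
    b0_map=fieldmap_hz,          # TOPUP fieldmap in Hz (keyword name assumed)
    readout_time=t_ro.ravel(),   # sample times in seconds (keyword name assumed)
    n_time_segments=64,          # number of PE lines
)
image = orc_nufft.adjoint_op(kspace_slice)
```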

Thanks Wenshang

chaithyagr commented 1 month ago

Sorry, I was a bit busy and couldn't answer you earlier, but I guess you have figured it out. We shall close this issue for now, but feel free to reopen it if you still face issues.

wenshangwang commented 5 days ago

Dear Chaithyagr and Pierre

I am wondering if we can discuss a further issue I currently have.

I got things working with the fieldmap correction on my EPI data. Now I need to bring the forward off-resonance-corrected NUFFT operator into an appropriate inverse problem to complete the reconstruction. Because of the wide availability of optimization solvers and the deep-learning potential of PyTorch, I would love to build my NUFFT operator so that it takes in and outputs arrays compatible with PyTorch.

However, when I select "torch" as the backend for MRIFourierCorrected, I encounter an unsupported-backend error: [image] [image]

Could you please provide some insight on what I might have done wrong?

Wenshang

paquiteau commented 5 days ago

Hello Wenshang, We are happy that you got things working on your side.

What you describe is a bug, and luckily I have a fix for it; see #212. Can you check whether this branch fixes your bug? (There might be some more issues down the road; we will address them as well in this PR.)