CERN / TIGRE

TIGRE: Tomographic Iterative GPU-based Reconstruction Toolbox

About optional parameters <init = "multigrid"> #441

Closed Kwong-Yam-Lie closed 1 year ago

Kwong-Yam-Lie commented 1 year ago

Hi, what does this initialization (init = "multigrid") do? I tried a case with this parameter, but it is very slow...

imgsart = algs.ossart(prosart, x.geo, x.angles, niter=1, init="multigrid", verbose=False)

I have read the related source code: https://github.com/CERN/TIGRE/blob/c55a6bb3dc215e33d2b64e970c15ea0b71a115df/Python/tigre/utilities/init_multigrid.py#L7-L52

Is the number of iterations set to 100 (niter = 100) too large? https://github.com/CERN/TIGRE/blob/c55a6bb3dc215e33d2b64e970c15ea0b71a115df/Python/tigre/utilities/init_multigrid.py#L23

I'm curious what the difference is between init = "multigrid" and init = "FDK". In theory, do these two initialization methods have any effect on iteration speed and accuracy?

Can you give me a brief explanation or recommend related articles? Thanks!

AnderBiguri commented 1 year ago

Hi @Kwong-Yam-Lie Admittedly I have not used multigrid much; it was an idea from quite a long time ago that has not been explored further. The idea is to reconstruct at a smaller image size (bigger voxel size) in steps until you reach the size of the real problem. I agree, 100 seems too large for the initialization.
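
Roughly, the coarse-to-fine idea could look like the sketch below. This is not the actual init_multigrid.py implementation: the number of levels, the per-level iteration count, and passing a warm-start image through the init argument are illustrative assumptions.

import copy
import numpy as np
from scipy.ndimage import zoom
import tigre.algorithms as algs

def multigrid_init_sketch(proj, geo, angles, levels=3, niter_per_level=10):
    # Reconstruct on a coarse grid first, then repeatedly double the grid,
    # upsampling the previous result as the starting point for the next level.
    # Assumes geo.nVoxel is divisible by 2**(levels - 1).
    small_geo = copy.deepcopy(geo)
    small_geo.nVoxel = geo.nVoxel // (2 ** (levels - 1))
    small_geo.dVoxel = small_geo.sVoxel / small_geo.nVoxel
    init_img = np.zeros(small_geo.nVoxel, dtype=np.float32)

    for level in range(levels):
        # A few SART iterations at the current resolution.
        # NOTE: passing a warm-start image through "init" is an assumption
        # about the API; adapt to however your TIGRE version accepts one.
        init_img = algs.sart(proj, small_geo, angles, niter_per_level,
                             init=init_img, verbose=False)
        if level == levels - 1:
            break
        # Halve the voxel size and upsample the image to the finer grid.
        small_geo.nVoxel = small_geo.nVoxel * 2
        small_geo.dVoxel = small_geo.sVoxel / small_geo.nVoxel
        init_img = zoom(init_img, 2, order=1).astype(np.float32)

    return init_img  # use as the starting image for the full-size problem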

FDK initializes the algorithm with the FDK reconstruction as the initial image, instead of all zeros. This is useful when you don't have a lot of time for the reconstruction, but the FDK artifacts are often still there after several iterations when initialized like this.
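
For concreteness, the three options discussed here look like this at the call site (using proj, geo and angles as placeholders for the data in the question; the iteration count is arbitrary, and relative speed and quality will depend on the data):

import tigre.algorithms as algs

# Default: start the iterations from an all-zeros volume.
img_zeros = algs.ossart(proj, geo, angles, niter=20, verbose=False)

# Start from the FDK reconstruction: a quick head start, but FDK artifacts
# can survive the first iterations.
img_fdk = algs.ossart(proj, geo, angles, niter=20, init="FDK", verbose=False)

# Start from a coarse-to-fine (multigrid) reconstruction.
img_multigrid = algs.ossart(proj, geo, angles, niter=20, init="multigrid", verbose=False)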

Kwong-Yam-Lie commented 1 year ago

I understand. At first, I thought you were doing this to speed up the reconstruction or improve the accuracy of the results, and that maybe there was some mathematical result I didn't know about. But after a quick test, I did not see any improvement.

I focused on this mainly because I had an idea: we could reconstruct the pixels in the center of an object while keeping the voxel size fixed, and then gradually spread towards the object boundary. Equivalently, we would first use the local data in the center of the detector and then extend to the full detector data.

This is similar to your idea, but it is not clear to me whether you have already done something like this in the backprojection (Atb). If you have tried it, can you tell me whether this scheme is feasible?

AnderBiguri commented 1 year ago

@Kwong-Yam-Lie In theory, it should speed up the reconstruction, in the sense that you'd need fewer iterations to get to the same result, because your initial image should be a good approximation of the solution.

Your idea, if I understand correctly, is to have some sort of mixed-scale mesh to reconstruct on, where pixels are small in the center and bigger outside? Or something else?

Atb is quite simple: it just updates each voxel with the information from the corresponding location on the detector. The pixels used are the same as in the desired image, nothing else.
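
As a minimal illustration (proj, geo and angles again stand for the projection data, geometry and angle array):

import tigre

# One backprojection: every voxel of the output accumulates the detector
# values along the rays that pass through it, on the grid defined by geo.
volume = tigre.Atb(proj, geo, angles)

# The matching forward projection, for comparison.
reproj = tigre.Ax(volume, geo, angles)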

Kwong-Yam-Lie commented 1 year ago

In my idea, the pixels are the same size in the center and outside. It's just that the iterations start with some rays from the center of the detector, and then use more and more rays until all of them are used. The purpose is to try to speed up the convergence of the outer pixels.
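
To make the idea concrete, here is a very rough sketch using only Ax and Atb. This is not something TIGRE implements, and it is not OS-SART: it is an unnormalized gradient-style update with a detector-column mask that grows from the centre outward. The function name, stage counts and step size are all illustrative, and projections are assumed to be shaped (n_angles, detector_rows, detector_columns).

import numpy as np
import tigre

def center_out_sketch(proj, geo, angles, n_stages=4, iters_per_stage=5, step=1e-3):
    # Crude gradient-style loop: only a central band of detector columns is
    # used at first, and the band widens until the full detector is included.
    n_cols = proj.shape[2]
    img = np.zeros(geo.nVoxel, dtype=np.float32)

    for stage in range(1, n_stages + 1):
        # Central band of columns for this stage (full width at the last stage).
        half_width = (n_cols * stage) // (2 * n_stages)
        mask = np.zeros_like(proj, dtype=np.float32)
        mask[:, :, n_cols // 2 - half_width : n_cols // 2 + half_width] = 1.0

        for _ in range(iters_per_stage):
            residual = ((proj - tigre.Ax(img, geo, angles)) * mask).astype(np.float32)
            img += step * tigre.Atb(residual, geo, angles)

    return img

For real use you would need proper SART-style weighting to keep the updates stable, but it shows where the growing detector mask would enter the loop.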

My problem has been solved, thanks!