LLNL / LEAP

comprehensive library of 3D transmission Computed Tomography (CT) algorithms with Python API and fully integrated with PyTorch
https://leapct.readthedocs.io
MIT License

Will LEAP add more iterative algorithms like TIGRE? #43

Closed Starbucksmax closed 3 weeks ago

Starbucksmax commented 1 month ago

Thanks for your work. I was just wondering whether LEAP will support more iterative algorithms, such as OS-SART-TV and the following, in the future.

Iterative algorithms

Gradient-based algorithms (SART, OS-SART, SIRT, ASD-POCS, OS-ASD-POCS, B-ASD-POCS-β, PCSD, AwPCSD, Aw-ASD-POCS) with multiple tuning parameters (Nesterov acceleration, initialization, parameter reduction, ...)

Krylov subspace algorithms (CGLS, LSQR, hybrid LSQR, LSMR, IRN-TV-CGLS, hybrid-fLSQR-TV, AB/BA-GMRES)

Statistical reconstruction (MLEM)

Variational methods (FISTA, OSSART-TV)

kylechampley commented 1 month ago

The short answer to your question is that more iterative algorithms will be added if users give a compelling reason why adding a specific algorithm will improve LEAP. We are interested in implementing the BEST analytic, iterative, and AI/ML/DL algorithms in LEAP. We definitely welcome input and/or requests from our users. Is there a specific algorithm that you have in mind?

LEAP does have an OS-SART+TV reconstruction algorithm; it is called ASDPOCS. Have you tried it?

Note that LEAP already has a large collection of iterative reconstruction algorithms; see here. In addition, the filterSequence class provides a very flexible way to include various regularization functionals. Some examples of what you can do with this are in the following demo script: d29_filter_sequence.py.

It also depends on what you define as separate/distinct algorithms. One can run different reconstruction algorithms by providing different arguments. For example, LEAP can do iterative FBP (IFBP) reconstruction, but you will find no "IFBP" in LEAP. To run this algorithm, you just run RWLS with the argument preconditioner='SARR'. You can also run a PICCS reconstruction by properly defining the filterSequence.
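
Roughly, a minimal sketch of that IFBP call, assuming a leapct object with projection data g and volume f already set up; the exact RWLS argument list may differ, and only the preconditioner='SARR' argument comes from the description above:

    numIter = 50                                         # hypothetical iteration count
    leapct.RWLS(g, f, numIter, preconditioner='SARR')    # RWLS with the SARR preconditioner behaves like iterative FBP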

Most of those algorithms in TIGRE are also in LEAP, but under different names. For example, what they call CGLS is called LS in LEAP. The main iterative algorithms that are in TIGRE, but not in LEAP, are the variations on ASDPOCS (e.g., B-ASD-POCS-β, PCSD, AwPCSD, Aw-ASD-POCS). From my limited knowledge, these are variations on the type of regularization that is used and different methods to automatically choose parameters, but you should ask Ander about the various use cases of these algorithms.

Starbucksmax commented 1 month ago

I didn't realize that ASDPOCS is the same as OS-SART-TV. I've tried using it before, but the results weren't good with the default settings. Now I'm planning to use the latest version, but I've encountered the following issue:

    ModuleNotFoundError: No module named 'leap_filter_sequence'

I was using your compiled DLL on Windows, and I had already copied the three files (leapctype, leaptorch, and the DLL) to the demo_leapctype folder.

kylechampley commented 1 month ago

Oops, I forgot to update the manual install script. You can find a new one here. No need to update the whole repository, just grab this file and re-run it.

Perhaps I misspoke about ASDPOCS. It is not the same thing as OS-SART+TV; it is very similar but much better. ASDPOCS performs an OS-SART step followed by several steps of TV denoising, but it combines these two steps in a clever way such that the TV step does not cause the solution to diverge. This modification is very important. ASDPOCS is an effective method for sparse-view or few-view CT. But if you are more interested in noise reduction (and you have enough projections), then I think RWLS is much better.

A good example of how to run ASDPOCS is in this script: d29_filter_sequence.py. See the section under whichMethod == 3. I would leave out the line that says: "filters.append(MedianFilter(leapct, 0.0, 5))".
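
In spirit, that section of the demo looks roughly like the sketch below; the parameter values are placeholders and the exact ASDPOCS argument order may differ, so follow the demo script for the authoritative version:

    from leap_filter_sequence import *                    # provides filterSequence, TV, MedianFilter, ...

    filters = filterSequence()
    filters.append(TV(leapct, delta=0.02/20.0, p=1.0))    # placeholder delta and p values
    numIter, numSubsets, numTV = 50, 10, 25               # placeholders; larger numTV gives stronger denoising
    leapct.ASDPOCS(g, f, numIter, numSubsets, numTV, filters)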

Starbucksmax commented 1 month ago

Thanks, it works now. Actually, I'm more interested in limited-angle CT reconstruction, and I want to combine the iterative reconstruction with a diffusion model. I'll study your d29 code and let you know how ASDPOCS performs in my case. Thanks!

Starbucksmax commented 1 month ago

Hi, I tried ASDPOCS in my case and it indeed performs well. But I found one bug in the manual install script. It works on my Windows system, but on my Linux server the following line (https://github.com/LLNL/LEAP/blob/14be4672dd111c7ee3fa7eaf57d3087bdb9c9795/manual_install.py#L37C42-L37C43) should be dst_folder = site.getsitepackages()[0]. And I have two more questions:

  1. Is there any way to disable the iteration progress updates? As mentioned, if I incorporate this iteration into my neural network, I don't want log messages like the ones in the attached screenshot.
  2. You mentioned that LEAP can avoid costly CPU-to-GPU data transfers by performing operations on data already on a GPU. But when I run the following code, it raises the error TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

    CT_img = np.load("./test_data/patient106.npy")  # shape (256, 256)
    img = CT_img[None, ...]
    tensor_cpu = torch.from_numpy(img)
    tensor_gpu = tensor_cpu.to('cuda')
    proj = leapct.allocateProjections()
    leapct.project(proj, tensor_gpu)

kylechampley commented 1 month ago

I'm working on disabling those messages for the next release. This will be done with the logging utility. The next release will come out this coming weekend.

Thanks for the warning about the site-packages folder. I looked into it and for Linux, it might be best to use site.getusersitepackages(). What do you think?
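
For reference, the two options being weighed behave like this (a standalone sketch, not the actual manual_install.py code):

    import site

    # Per-user site-packages directory: a single string on both Linux and Windows
    dst_user = site.getusersitepackages()

    # System site-packages: a list of paths, so an index such as [0] is needed to get a string
    dst_system = site.getsitepackages()[0]

    print(dst_user)
    print(dst_system)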

For your last issue, I think you just forgot to copy "proj" to the GPU. You can either do this yourself, or use proj = leapct.copy_to_device(proj)
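
In other words, the snippet from your question would become something like this (a sketch; only the copy_to_device call is the actual fix being suggested):

    import numpy as np
    import torch

    CT_img = np.load("./test_data/patient106.npy")   # shape (256, 256)
    img = CT_img[None, ...]                           # add a slice axis -> (1, 256, 256)
    tensor_gpu = torch.from_numpy(img).to('cuda')

    proj = leapct.allocateProjections()               # allocated on the CPU by default
    proj = leapct.copy_to_device(proj)                # move the projection array to the GPU
    leapct.project(proj, tensor_gpu)                  # now both arguments live on the same device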

Starbucksmax commented 1 month ago

Regarding the site-packages folder: maybe it is not a good idea to remove the [0], because then I run into the following issue:

    TypeError: stat: path should be string, bytes, os.PathLike or integer, not list

For the GPU, yes, thanks for your advice; it now runs smoothly. Looking forward to your next release to disable those messages.

kylechampley commented 4 weeks ago

@Starbucksmax, a new version has been released, v1.13.

I updated the manual_install.py. Please let me know if this works for you.

To disable all print statements except those that are fatal errors, run the following command: leapct.set_log_error()

Starbucksmax commented 4 weeks ago

Yes, the new manual_install.py works for me on both Windows and Linux. And set_log_error works too. Thank you very much!

Starbucksmax commented 3 weeks ago

About ASDPOCS, could you please help me figure out which parameter controls the TV strength? Usually that would be a lambda, for example. I tried changing delta and p, but it makes no difference in the final image.

filters.append(TV(leapct, delta=0.01/100.0, p=1.0))

(screenshot of a regularized least-squares cost function)

kylechampley commented 3 weeks ago

There is no weight (lambda) on the regularization term in ASDPOCS. The cost function you wrote above is RLS (Regularized Least Squares), and for that cost function this is how you set lambda and run the RLS reconstruction:

    filters = filterSequence(lambda)
    filters.append(TV(leapct, ...))
    RLS(g, f, numIter, filters, ...)

The ... is just there to say you put your own parameters there.
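
As a concrete (hypothetical) instance of that pattern, with placeholder values filled in and assuming RLS is invoked on the leapct object like the other algorithms:

    regWeight = 1.0e-1                                    # the lambda above; placeholder value
    filters = filterSequence(regWeight)
    filters.append(TV(leapct, delta=0.01/100.0, p=1.0))   # same TV settings as in your question
    leapct.RLS(g, f, 50, filters)                         # 50 iterations, also a placeholder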

Starbucksmax commented 3 weeks ago

I remember you mentioned that ASDPOCS is somewhat like SART combined with TV. Why isn't there a lambda parameter to control the strength of the TV regularization, considering how important this parameter is?

Also, regarding RLS, the typical range for lambda values should be between 0 and 1.5, right? However, I noticed that using values like 0.1 and 1.2 in RLS reconstruction seems to make no noticeable difference. Could you explain why this might be?

kylechampley commented 3 weeks ago

I'm working on a new LEAP release where I expand the documentation of the algorithms, so hopefully this will make things more clear. This will likely get released tomorrow.

ASDPOCS is a different algorithm than most others. The level of denoising is controlled by the balance of the numSubsets and numTV parameters. If you want stronger regularization with ASDPOCS, make numTV bigger.

The regularization strength parameter is quite insensitive and for some applications can be a very large number. My suggestion is to start by testing powers of 10 like this: 1e-3, 1e-2, ... 1e3.
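
For example, a simple sweep using the RLS pattern from above (a sketch; values and argument order are placeholders):

    for regWeight in [1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3]:
        filters = filterSequence(regWeight)
        filters.append(TV(leapct, delta=0.01/100.0, p=1.0))
        f[:] = 0.0                                        # restart from the same initial volume each time
        leapct.RLS(g, f, 50, filters)
        # save or display f here and compare the results across regWeight values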

Starbucksmax commented 3 weeks ago

Thanks, that helps a lot!

kylechampley commented 3 weeks ago

A new LEAP release is available which includes documentation improvements. Specifically, see the documentation pages:

Iterative Reconstruction

and Filter Sequence

Let me know if you have any further questions.