Closed: Starbucksmax closed this issue 3 weeks ago
The short answer to your question is that more iterative algorithms will be added if users give a compelling reason why adding a specific algorithm will improve LEAP. We are interested in implementing the BEST analytic, iterative, and AI/ML/DL algorithms in LEAP. We definitely welcome input and/or requests from our users. Is there a specific algorithm that you have in mind?
LEAP does have an OS-SART+TV reconstruction algorithm; it is called ASDPOCS. Have you tried it?
Note that LEAP does already have a large collection of iterative reconstruction algorithms; see here. In addition, the usage of the filterSequence provides a very flexible format to include various regularization functionals. Some examples of what you can do with this are in the following demo script: d29_filter_sequence.py.
It also depends on what you define as separate/distinct algorithms. One can run different reconstruction algorithms by providing different arguments. For example, LEAP can do iterative FBP (IFBP) reconstruction, but you will find no "IFBP" in LEAP; to run this algorithm, you just run RWLS with the argument `preconditioner='SARR'`. You can also run a PICCS reconstruction by properly defining the filterSequence.
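As a non-runnable sketch of that IFBP recipe (only `preconditioner='SARR'` comes from this thread; the class name `tomographicModels` and the remaining argument names are my assumptions, so check the LEAP docs for the current signature):

```python
# Hypothetical sketch: IFBP in LEAP is just RWLS with the SARR preconditioner.
from leapctype import *
leapct = tomographicModels()
# ... set the CT geometry, load projections g, allocate the volume f ...
leapct.RWLS(g, f, numIter=20, preconditioner='SARR')  # assumed argument names
```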
Most of those algorithms in TIGRE are also in LEAP, but have different names. For example, what they call CGLS is called LS in LEAP. The main iterative algorithms that are in TIGRE, but not in LEAP, are the variations on ASDPOCS (e.g., B-ASD-POCS-β, PCSD, AwPCSD, Aw-ASD-POCS). From my limited knowledge, these are variations on the type of regularization that is used and different methods to automatically choose parameters, but you should ask Ander about the various use cases of these algorithms.
I didn't realize that ASDPOCS is the same as OS-SART+TV. I've tried using it before, but the results weren't good with the default settings. Now I'm planning to use the latest version, but I've encountered the following issue: `ModuleNotFoundError: No module named 'leap_filter_sequence'`. I was using your compiled DLL on my Windows machine, and I already copied the three files (leapctype, leaptorch, and the DLL) to the demo_leapctype folder.
Oops, I forgot to update the manual install script. You can find a new one here. No need to update the whole repository, just grab this file and re-run it.
Perhaps I misspoke about ASDPOCS. It is not the same thing as OS-SART+TV; it is very similar but much better. ASDPOCS performs an OS-SART step followed by several steps of TV denoising, but it combines these two steps in a clever way such that the TV step does not cause the solution to diverge. This modification is very important. ASDPOCS is an effective method for sparse-view or few-view CT. But if you are more interested in noise reduction (and you have enough projections), then I think RWLS is much better.
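To make the "data step, then safeguarded TV steps" structure concrete, here is a toy 1D sketch of that idea in plain NumPy. This is my own illustration, not LEAP's implementation: LEAP uses OS-SART for the data step and its own TV machinery, while this sketch uses a simple gradient step and an absolute misfit tolerance I chose for the demo.

```python
import numpy as np

# Toy illustration of the ASDPOCS idea: alternate a data-fidelity step with a
# few TV denoising steps, and shrink the TV step size whenever denoising would
# push the data misfit up, so the TV part cannot make the iteration diverge.

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((32, n)) / np.sqrt(n)  # underdetermined: "few views"
x_true = np.zeros(n)
x_true[20:40] = 1.0                            # piecewise-constant phantom
g = A @ x_true                                 # simulated projections
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the data term

def tv_step(x, dt, eps=1e-6):
    """One gradient-descent step on a smoothed 1D total-variation functional."""
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)
    grad = np.zeros_like(x)
    grad[:-1] -= w
    grad[1:] += w
    return x - dt * grad

x = np.zeros(n)
dt = 0.1
for _ in range(200):
    # data-fidelity step (gradient step on 0.5*||Ax - g||^2; LEAP uses OS-SART)
    x_data = x - (0.5 / L) * (A.T @ (A @ x - g))
    misfit = np.linalg.norm(A @ x_data - g)
    # several TV denoising steps
    x_tv = x_data
    for _ in range(5):
        x_tv = tv_step(x_tv, dt)
    # safeguard: keep the TV result only if the data misfit stays controlled
    if np.linalg.norm(A @ x_tv - g) <= misfit + 0.05:
        x = x_tv
    else:
        dt *= 0.5     # TV was too aggressive; reduce its step size
        x = x_data

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.3f}")
```

The safeguard in the `if` branch is the point: without it, aggressive TV steps can undo the data-consistency progress, which is exactly the divergence the real algorithm avoids.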
A good example of how to run ASDPOCS is in this script: d29_filter_sequence.py. See the section under `whichMethod == 3`. I would leave out the line that says `filters.append(MedianFilter(leapct, 0.0, 5))`.
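For orientation, the call in that demo has roughly this shape (a non-runnable sketch from memory, not verbatim from the script; argument names such as `numSubsets` and `numTV` are assumptions here, so trust d29_filter_sequence.py over this):

```python
# Hypothetical sketch of the whichMethod == 3 branch; see the d29 demo
# for the authoritative, working version.
filters = filterSequence()
filters.append(TV(leapct, delta=0.02/20.0, p=1.0))
leapct.ASDPOCS(g, f, numIter, numSubsets, numTV, filters)
```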
Thanks, it works now! Actually, I'm more interested in limited-angle CT reconstruction, and I want to combine the iterative reconstruction with a diffusion model. I'll study your d29 code and let you know how ASDPOCS performs in my case. Thanks!
Hi, I tried ASDPOCS in my case, and it indeed performs well. But I found one bug in the manual install script. It works on my Windows system, but on my Linux server the following line should be `dst_folder = site.getsitepackages()[0]`: https://github.com/LLNL/LEAP/blob/14be4672dd111c7ee3fa7eaf57d3087bdb9c9795/manual_install.py#L37C42-L37C43 And I have two more questions:
```python
CT_img = np.load("./test_data/patient106.npy")  # shape (256, 256)
img = CT_img[None, ...]
tensor_cpu = torch.from_numpy(img)
tensor_gpu = tensor_cpu.to('cuda')
proj = leapct.allocateProjections()
leapct.project(proj, tensor_gpu)
```
I'm working on disabling those messages for the next release. This will be done with the logging utility. Next release will come out this coming weekend.
Thanks for the warning about the site-packages folder. I looked into it and for Linux, it might be best to use site.getusersitepackages(). What do you think?
For your last issue, I think you just forgot to copy `proj` to the GPU. You can either do this yourself or use `proj = leapct.copy_to_device(proj)`.
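Applied to the snippet above, the fix might look like this (a sketch, not tested here; only `copy_to_device` comes from this thread, the rest mirrors your code):

```python
proj = leapct.allocateProjections()
proj = leapct.copy_to_device(proj)  # move proj to the GPU to match tensor_gpu
leapct.project(proj, tensor_gpu)
```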
About the site-packages folder: maybe it is not a good idea to remove the `[0]`, because then I run into the following issue: `TypeError: stat: path should be string, bytes, os.PathLike or integer, not list`. For the GPU: yes, thanks for your advice; it now runs smoothly. Looking forward to your next release that disables those messages.
@Starbucksmax, a new version has been released, v1.13.
I updated the manual_install.py. Please let me know if this works for you.
To disable all print statements except those that are fatal errors, run the following command: `leapct.set_log_error()`
Yes, the new manual_install.py works for me on both Windows and Linux, and log_error works too. Thank you very much!
About ASDPOCS: could you please help me figure out which parameter controls the TV strength? Usually it is the lambda. I tried changing delta and p, but the final image shows no difference.
`filters.append(TV(leapct, delta=0.01/100.0, p=1.0))`
There is no weight (lambda) on the regularization term in ASDPOCS. The cost function you wrote above is RLS (Regularized Least Squares), and for this cost function, this is how you set lambda and run the RLS reconstruction:

```python
filters = filterSequence(lam)  # lam is the regularization weight (lambda)
filters.append(TV(leapct, ...))
RLS(g, f, numIter, filters, ...)
```
The ... is just there to say you put your own parameters there.
I remember you mentioned that ASDPOCS is somewhat like SART combined with TV. Why isn't there a lambda parameter to control the strength of the TV regularization, considering how important this parameter is?
Also, regarding RLS, the typical range for lambda values should be between 0 and 1.5, right? However, I noticed that using values like 0.1 and 1.2 in RLS reconstruction seems to make no noticeable difference. Could you explain why this might be?
I'm working on a new LEAP release where I expand the documentation of the algorithms, so hopefully this will make things more clear. This will likely get released tomorrow.
ASDPOCS is a different algorithm than most others. The level of denoising is controlled by the balance of the `numSubsets` and `numTV` parameters. If you want stronger regularization with ASDPOCS, make `numTV` bigger.
The regularization strength parameter is quite insensitive and for some applications can be a very large number. My suggestion is to start by testing powers of 10 like this: 1e-3, 1e-2, ... 1e3.
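That sweep can be scripted generically, for example like this (only the powers-of-10 suggestion comes from the thread; `run_recon` is a hypothetical placeholder for your own RLS call):

```python
# Sweep the regularization strength over powers of 10 (1e-3 ... 1e3),
# as suggested above. run_recon is a hypothetical stand-in for your
# reconstruction call (e.g., RLS with a filterSequence built from lam).
def candidate_weights(low_exp=-3, high_exp=3):
    return [10.0 ** k for k in range(low_exp, high_exp + 1)]

weights = candidate_weights()
print(weights)

for lam in weights:
    pass  # run_recon(lam); save each result and compare image quality
```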
Thanks, that helps a lot!
A new LEAP release is available which includes documentation improvements. Specifically, see these documentation pages:
and Filter Sequence
Let me know if you have any further questions.
Thanks for your work. I was just wondering whether LEAP will support more iterative algorithms, such as OS-SART+TV and the following, in the future.
Iterative algorithms
- Gradient-based algorithms (SART, OS-SART, SIRT, ASD-POCS, OS-ASD-POCS, B-ASD-POCS-β, PCSD, AwPCSD, Aw-ASD-POCS) with multiple tuning parameters (Nesterov acceleration, initialization, parameter reduction, ...)
- Krylov subspace algorithms (CGLS, LSQR, hybrid LSQR, LSMR, IRN-TV-CGLS, hybrid-fLSQR-TV, AB/BA-GMRES)
- Statistical reconstruction (MLEM)
- Variational methods (FISTA, OSSART-TV)