So, I am assuming the following, as it has been the case before for me; correct me if I'm wrong: SART with lambda=1 actually does a good job, even if it converges early. If this is not the case, try SIRT or FBP, as you may have geometric errors in your variables.
With that assumption and that general behavior in mind: lambda=0.01 is very, very low. You are moving towards your solution by only 1% of the amount the gradient asks for, which is tiny. A common value in tomography is 1, and in fact, in my 5 years of experience, I have never needed to change it. Which brings us to SART_TV. In SART_TV, TVlambda and TViter control how much TV you apply relative to SART. But you are doing so little SART that the TV regularization absolutely dominates the algorithm, as if there were almost no SART at all. I suggest increasing lambda to a value close to 1 and then playing with the other two parameters.
Note also that the TV algorithms are very good, but they are generally used for cases where the noise is very high, often due to very noisy projections, but mostly due to limited datasets. You have 576 angular samples. It's rare that you need TV in such cases, and if you do, you need very little of it. Lower TViter a lot. In any case, it's always very hard to choose TV parameters; they are very dataset dependent and there is no general heuristic for doing so. It is an open problem in tomography.
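A minimal sketch of what that suggestion looks like in code, assuming the name-value options used above ('lambda', 'TViter', 'TVlambda'); the iteration counts are placeholders to start tuning from, not recommendations:

niter = 30;                                % placeholder number of iterations
imgSART    = SART(projections, geo, angles, niter, 'lambda', 1);
imgSART_TV = SART_TV(projections, geo, angles, niter, ...
    'lambda', 1, ...                       % take ~100% of each SART step
    'TViter', 20, ...                      % keep the TV sub-iterations low
    'TVlambda', 50);                       % TV hyperparameter, very dataset dependent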
Appreciate your help. Now I'm examining my geometry with your FBP function. It is strange that some streaking artifacts always appear in the FBP results. The code used for testing is basically the same as your demo 16, except that I changed the radial sampling number to 1000. Do you have any clue about it?
Could you post the full code just in case, so I can run it on my machine without assumptions?
Sure, here you go
clear; close all;
geo.DSD = 1536;                    % Distance Source Detector (mm)
geo.DSO = 1000;                    % Distance Source Origin (mm)
geo.nVoxel=[256;256;2];            % number of voxels (vx)
geo.sVoxel=[256;256;2];            % total size of the image (mm)
geo.dVoxel=geo.sVoxel./geo.nVoxel; % size of each voxel (mm)
geo.nDetector=[512; 2]; % number of pixels (px)
geo.dDetector=[0.8; geo.dVoxel(3)]; % size of each pixel (mm)
geo.sDetector=geo.nDetector.*geo.dDetector; % total size of the detector (mm)
geo.offOrigin =[0;0;0]; % Offset of image from origin (mm)
geo.offDetector=[0; 0]; % Offset of Detector (mm)
geo.accuracy=0.5; % Accuracy of FWD proj (vx/sample)
geo.mode='parallel';
angles=linspace(0,2*pi,1000);
phatom=single(phantom('Modified Shepp-Logan',geo.nVoxel(1)));
phatom=cat(3,phatom,phatom);
projections=Ax(phatom,geo,angles,'interpolated');
imgFBP = FBP(projections,geo,angles);
I think you are getting the standard artefacts from FBP. Your images look OK, but it looks like you exaggerated the viewing window so the artefacts show more (which is a very good way of looking at them); nothing out of the ordinary for this type of algorithm (FBP).
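For reference, this kind of inspection can be reproduced in plain MATLAB by narrowing the display window; the window limits below are arbitrary examples, not values from this thread:

sliceFBP = imgFBP(:,:,1);               % one slice of the FBP reconstruction
figure; imshow(sliceFBP, [0 1]);        % "normal" display window for this phantom
figure; imshow(sliceFBP, [-0.05 0.05]); % narrow window: faint streaks become obvious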
Maybe, but if you use a higher-resolution phantom (512-by-512), the streaking shows up very clearly in the image. BTW, what is a reasonable value range for TVlambda? Thanks
I will try it later when I get home and test it. But it makes sense: if you increase the image size but not the detector size, artifacts will appear.
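As an illustration of that point (a sketch only, with hypothetical numbers, not code from this thread): if the phantom grows to 512-by-512, the detector sampling has to be scaled so it still covers the projected object, for example:

geo.nVoxel    = [512; 512; 2];                  % higher-resolution image (vx)
geo.sVoxel    = [512; 512; 2];                  % physical image size also grows (mm)
geo.dVoxel    = geo.sVoxel ./ geo.nVoxel;
geo.nDetector = [1024; 2];                      % more detector pixels
geo.dDetector = [0.8; geo.dVoxel(3)];
geo.sDetector = geo.nDetector .* geo.dDetector; % 819.2 mm, wide enough to cover a 512 mm object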
As said before, TV parameters are very hard to tune. They severely depend on the data. There is no intuitive way to select them...
I changed the detector size accordingly to make sure it covers the image. OK, I see. Thanks
@chenyang9526 how is this going? Did you reduce the errors?
Yes. There are some negative values in my sinogram data. I guess that is why SART_TV cannot handle it. BTW, is your SART_TV related to MBIR_TV (model-based iterative reconstruction)? MBIR takes X-ray physics, geometry, and statistical properties into account. I can tell you have taken geometry into account, but I'm not sure about the others. I'm trying to compare the denoising effect of different low-dose CT reconstruction methods, and most people say MBIR_TV is one of the state-of-the-art methods, which is why I'm asking. Thanks
Hi @chenyang9526. Yes, negative values in the sinogram are physically impossible (you would have detected a negative amount of photons), so they are an artifact in your data that you need to clean up before reconstruction. They often appear due to a wrong linearization of the sinogram, i.e. a bad "max" value in the Beer-Lambert law.
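As an illustration (the variable names I and I0 are hypothetical, standing for the measured intensities and the unattenuated/flat-field intensity), the linearization and a simple clean-up of negative values could look like:

projections = -log(double(I) ./ double(I0)); % Beer-Lambert linearization; if I0 is underestimated, values go negative wherever I > I0
projections(projections < 0) = 0;            % clamp non-physical negative values
projections = single(projections);           % single precision, as in the TIGRE demos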
MBIR is a bit of an ambiguous term; I have seen it used interchangeably with iterative reconstruction in some places too. But to answer your question: no, standard iterative reconstruction assumes ideal, infinitesimal-width, monochromatic, non-scattered photons for reconstruction.
However, most of these effects can be approximated as linear operators that can be either multiplicatively or additively combined with the result of the forward/backward projection, so adding them to the algorithms here should be quite easy. Say, for example, that you have some photon-scattering probability map. That will translate to somewhat higher and lower values detected at the detector. If you know by how much, then you can just go to Ax.m and, after the forward projection, add those deviations to the result before returning. This is of course the easy part; estimating scattering is quite hard and computationally expensive....
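A rough sketch of that idea (the wrapper function and the scatter_estimate array are hypothetical, not part of TIGRE; only Ax itself is the toolbox call):

function proj = Ax_with_scatter(img, geo, angles, scatter_estimate)
% Hypothetical wrapper: add a precomputed scatter estimate, with the same
% size as the projection data, to the ideal line integrals returned by Ax.
proj = Ax(img, geo, angles, 'interpolated'); % ideal forward projection
proj = proj + scatter_estimate;              % additive per-pixel/per-angle deviation
end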
Thanks for answering!
Expected Behavior
Actual Behavior
I'm using SART and SART_TV in MATLAB to reconstruct CT images with a parallel 2D geometry from my own sinogram data. For SART, it converges in about 10 iterations if the default lambda = 1 is used, and if I set lambda = 0.01 it converges in 70 iterations. Both work fine, although the latter setting results in a better image. However, when I use SART_TV, no matter how I change TVlambda and TViter, the reconstruction converges in 2 iterations and gives a very poor result. I'm wondering why this would happen? What is a reasonable choice or range for lambda, TVlambda, and TViter?
Code to reproduce the problem (If applicable)
Specifications