CERN / TIGRE

TIGRE: Tomographic Iterative GPU-based Reconstruction Toolbox
BSD 3-Clause "New" or "Revised" License

Partial recon results do not make sense with SART #133

Closed xliu68 closed 5 years ago

xliu68 commented 5 years ago

Expected Behavior

For acceleration, we can specify only a volume of interest in the geometry config to be reconstructed. This works perfectly fine with: FDK, SIRT, OS-SART, CGLS, OS-ASD-POCS, and FISTA.

Actual Behavior

But it does not work with: SART, ASD-POCS, beta-ASD-POCS, and SART-TV. So essentially it is SART-related. When it does not work, the result is only a few streaks, as in the following image.

[image: sart_partial_recon]

Current Workaround

However, if we specify the recon volume to cover the whole FOV, SART gives the expected results. Of course, the reconstruction takes much longer when covering that much volume.

Question

Could you advise whether this is an implementation issue or inherent to the SART algorithm?

AnderBiguri commented 5 years ago

@xliu68 Hmm, interesting behavior; it is not expected, but I saw something similar in one project, and it was related to numerical edge cases in the computation of V and W. Could you share the geometry settings that lead to this behavior so I can look into it further?

xliu68 commented 5 years ago

Thanks for the quick response. Please refer to the .mat file in the zip package. The geo_full config works fine. The geo_partial config gives streaks.

example_geo.mat.zip

AnderBiguri commented 5 years ago

@xliu68 This is caused by extreme geometries and numerical precision issues with the matrix W.

Adding limit=0.1; W(W>limit)=limit; at line 89 of SART stabilizes this matrix a bit and allows for reconstruction (though the result is still ugly). I suggest either experimenting with reducing that limit or using non-SART-based algorithms.
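For intuition, here is a small NumPy sketch (not TIGRE code; the ray lengths are made up) of why a near-miss ray destabilizes W and what the clamp does:

```python
import numpy as np

# Made-up per-pixel ray lengths through the volume; the last ray only
# grazes a corner of the volume, so its intersection length is tiny.
raylength = np.array([120.0, 95.0, 60.0, 1e-4], dtype=np.float32)

# W stores 1/raylength, so the grazing ray gets an enormous weight and
# its pixel dominates (and destabilizes) the SART update.
W = (1.0 / raylength).astype(np.float32)

# The clamp suggested above bounds that influence:
limit = 0.1
W[W > limit] = limit
```

Rays that pass well through the volume keep their small weights; only the pathological near-miss ray is capped.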

xliu68 commented 5 years ago

@AnderBiguri Thank you for the suggestion; I will test it. BTW, the geometry setting is 1/4 of the whole FOV, and it is hard to imagine that being extreme. Do you have a rule of thumb?

Besides, other than giving an upper limit to W, would increasing the resolution of geoaux (currently only 8 voxels) help?

AnderBiguri commented 5 years ago

@xliu68 You are right, extreme was not the right word. In fact, I realized that I had already tried to fix this issue in the past; I just didn't use the correct method. Change line 82 to: geoaux.sVoxel([1 2])=geo.sDetector([1])*1.1;. This should remove the numerical instabilities.

I will experiment and possibly add it to TIGRE; thanks for pointing this out. Please do not close the issue, I will close it myself after testing whether the new version is robust in my test cases.

xliu68 commented 5 years ago

@AnderBiguri, I tested your method and had more observations.

First, applying geoaux.sVoxel([1 2])=geo.sDetector([1])*1.1; alone, I still get some streaks at the corners. So I also added geoaux.offOrigin = [0;0;0];, which removes the streaks at the corners.

Second, the update iteration over randomly ordered angles seems a bit problematic. Here I summarize the relevant part. Previously, it was

[alphablocks,orig_index]=order_subsets(angles,blocksize,OrderStrategy);
index_angles=cell2mat(orig_index);
angles_reorder=cell2mat(alphablocks);
W=Ax(ones(geoaux.nVoxel','single'),geoaux,angles,'ray-voxel'); 
V=computeV(geo,angles,alphablocks);

for jj=index_angles
   res=res+lambda* bsxfun(@times,1./V(:,:,jj),Atb(W(:,:,jj).*(proj(:,:,index_angles(:,jj))-Ax(res,geo,angles_reorder(:,jj))),geo,angles_reorder(:,jj)));
end

The meaning of jj seems inconsistent here, which makes the slices of V, W, proj, and angles_reorder not correspond to the same angle. I guess one possible fix is

[alphablocks,orig_index]=order_subsets(angles,blocksize,OrderStrategy);
index_angles=cell2mat(orig_index);
angles_reorder=cell2mat(alphablocks);
W=Ax(ones(geoaux.nVoxel','single'),geoaux,angles,'ray-voxel'); 
V=computeV(geo,angles,alphablocks);

for jj = 1:length(index_angles)
   res=res+lambda* bsxfun(@times,1./V(:,:,jj),Atb(W(:,:,index_angles(:,jj)).*(proj(:,:,index_angles(:,jj))-Ax(res,geo,angles_reorder(:,jj))),geo,angles_reorder(:,jj)));
end

This is similar to what is done in OS_SART. With this, even with a smaller geoaux, the partially reconstructed image starts to make sense. Though still heavily corrupted by dark streaks from the four corners, it is no longer pure streaks. However, with any size of geoaux, the best partial reconstructions are still affected by abnormally bright corners and edges, and a global "foggy" effect. I was wondering: ideally, which conditions should a good W satisfy?
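The indexing pitfall above is easy to reproduce outside MATLAB. Here is a toy Python analogue (illustrative only, not TIGRE code) of why per-angle arrays stored in original order must be sliced with the permutation, not with the loop position:

```python
import numpy as np

angles = np.array([0.0, 30.0, 60.0, 90.0])
perm = np.array([2, 0, 3, 1])      # processing order (plays the role of orig_index)
angles_reorder = angles[perm]      # angles in processing order

# Stand-in for per-angle data (proj, W, ...) stored in ORIGINAL angle order:
proj = 10.0 * angles

# Buggy pattern: pairing the reordered angle with data sliced by the loop
# position j, so different angles' data get mixed together.
buggy = [(angles_reorder[j], proj[j]) for j in range(len(perm))]

# Fixed pattern (as in OS_SART): slice original-order data with perm[j],
# so each reordered angle is paired with its own data.
fixed = [(angles_reorder[j], proj[perm[j]]) for j in range(len(perm))]
```

In the fixed version every pair is consistent (the data really belongs to that angle); in the buggy version it is consistent only when perm happens to be the identity.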

This may take a bit more testing but I am really interested because if it works better it will significantly help our research here. Thank you for your help very much!

AnderBiguri commented 5 years ago

@xliu68 About the angles: you are absolutely right, that is a bug, and I will fix it immediately.

About W: its purpose is to normalize the data. Each pixel in the projections should have the same influence on the update, and for that we need to normalize by the ray length. What W does is store 1/raylength for each pixel. The initial problem you saw arises because your image is quite off-center. That makes some rays at the corners have a very tiny ray length, because they only graze the image. That generates instability in the division and causes major errors due to floating-point arithmetic.

The solution I proposed is to compute full lengths: make the auxiliary image geoaux very wide, much wider than the actual image. That ensures no tiny ray lengths occur in the image area.
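To see why widening the auxiliary volume works, here is a small 2-D sketch (plain NumPy with a slab-method ray/box intersection; the geometry numbers are made up) showing how a corner-grazing ray's length jumps from near zero to a healthy value once the box is padded:

```python
import numpy as np

def ray_box_length(origin, unit_dir, lo, hi):
    # Slab method: parametric entry/exit times of an axis-aligned box.
    d = np.where(unit_dir == 0.0, 1e-12, unit_dir)
    t1 = (lo - origin) / d
    t2 = (hi - origin) / d
    t_in = np.minimum(t1, t2).max()
    t_out = np.maximum(t1, t2).min()
    return max(t_out - t_in, 0.0)

u = np.array([1.0, 1.0]) / np.sqrt(2.0)        # unit ray direction
lo, hi = np.array([0.0, 0.0]), np.array([10.0, 10.0])
origin = np.array([-0.05, 9.9])                # ray that just clips a corner

graze = ray_box_length(origin, u, lo, hi)                # tiny: 1/graze blows up
padded = ray_box_length(origin, u, lo - 5.0, hi + 5.0)   # large: well conditioned
```

The grazing length is about 0.07 here, so 1/raylength is about 14; against the padded box the same ray has length about 14, so the weight stays small and the division is stable.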

I therefore think you should not remove the offset from W as you did. When I test my fix together with your change to the index angles in the update, I get a good result (only 30 projections, so the image looks a bit bad, but only as bad as expected):

[image: reconstruction from 30 projections]

In any case, which one is better? No idea. In theory you should normalize the projections, but in truth, if you don't, you will still get a good reconstruction; you can even remove W entirely. Unless there are severe numerical faults like the one you reported at the beginning, it makes very little difference which variation of W you use, so pick whichever works best for you.

xliu68 commented 5 years ago

@AnderBiguri Thank you for the detailed explanation; it makes much more sense to me for future testing. One quick question: do you have suggestions for handling objects in other parts of the FOV in this partial-reconstruction setting? On our site we scan several animals in the bore at once and then need to reconstruct a separate image for each of them. This works perfectly fine with FDK, as expected, but I am still not sure whether it is possible with SART. If we are bound to reconstruct the whole FOV containing objects other than air, each reconstruction takes ~30 min; reconstructing only a single animal takes ~1 min, not to mention the difference in storage size. So a more representative test case would be four brains, one per quadrant, with only one reconstructed; or a single big brain with only a quarter of it reconstructed.

AnderBiguri commented 5 years ago

@xliu68 It seems you are overlooking something very important in the setup you describe: even if you reconstruct a small area of the image, you cannot decouple the different animals' data in the projections!

ROI reconstruction is not that straightforward. You can test this in TIGRE: create projections using no offset and the entire image, then change the reconstruction geometry to something like what you are doing now. You will see severe artifacts in the reconstruction: there is signal from things that are not in the reconstructed volume, but the reconstruction needs to put it somewhere, so it ends up on top of your image. I am surprised it works with FDK, as it really should not, unless I have completely misunderstood what you are doing.
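A toy linear-algebra version of this experiment (plain NumPy, nothing TIGRE-specific, random made-up data) shows the effect: if the projections contain signal from voxels outside the reconstructed ROI, solving for the ROI alone folds that outside signal into the ROI estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))    # toy system matrix (rays x voxels)
x = rng.normal(size=10)          # full volume; call the first 5 voxels the "ROI"
b = A @ x                        # projections see ALL voxels, not just the ROI

# Reconstruct only the ROI, pretending the rest of the volume is empty:
x_roi, *_ = np.linalg.lstsq(A[:, :5], b, rcond=None)
roi_err = np.linalg.norm(x_roi - x[:5]) / np.linalg.norm(x[:5])

# Reconstructing the full volume recovers it, since the system is consistent:
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
full_err = np.linalg.norm(x_full - x)
```

The full-volume solve is essentially exact, while the ROI-only solve carries a substantial error: the contribution of the outside voxels has nowhere else to go.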

xliu68 commented 5 years ago

Hi @AnderBiguri, thank you for confirming this. My sense was that an iterative reconstruction could not work that way, but I wanted to check whether there was a workaround. I think it still works with FDK because FDK is essentially filtered back-projection: it just filters the relevant part of each projection and adds it back into the image. Without a forward projection step, the signal from the other animals does not get mixed into the updates, unlike in iterative reconstruction.

We can actually do this separate-animal reconstruction in our vendor's software: we assign an ROI to a specific animal and reconstruct it at higher resolution or speed. The vendor does not disclose its reconstruction method, but I compared the vendor results with TIGRE's FDK and they look extremely alike, which suggests it is basically FDK.

AnderBiguri commented 5 years ago

Certainly the vendor will just do FDK; almost no commercial software uses any other algorithm.

In any case, are your images that big? In theory you should be able to reconstruct entire volumes with TIGRE. If you try the ROI reconstruction, let me know how it goes; maybe I am being overly pessimistic.


xliu68 commented 5 years ago

Thanks. After some testing I am more confident that ROI reconstruction is hard to achieve with iterative methods. I can reconstruct the whole image, but it takes 20-30 min. That at least works, but it is convenient neither for research nor for our busy scanner. I will try larger accuracy values if time becomes critical, and will let you know if other surprising issues come up. Thank you for your help.