Closed hws203 closed 2 weeks ago
There is a typo in the demo script. Line 214 should be: `leapct.convert_to_modularbeam()`
Anyway, this is just an estimation algorithm, and the authors of the paper from which I got the algorithm say that the cost function is only locally convex, so if your initial guess is not close to the true solution, it may converge only to a local minimum.
Also, for this code to work the projections must not be truncated. It also helps to correct for flux modulation. A flux modulation correction method can be found in the makeAttenuationRadiographs function in leap_preprocessing_algorithms.py; see the "ROI" argument of this function.
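For anyone unfamiliar with the idea, here is a minimal, hedged sketch of ROI-based flux normalization. The function name and ROI layout below are hypothetical stand-ins, not LEAP's actual makeAttenuationRadiographs signature: pick a detector region that stays outside the object's shadow in every view, and shift each attenuation projection so that region averages to zero (equivalently, so the raw flux there matches the unattenuated reference).

```python
# Hypothetical sketch of ROI-based flux (gain) modulation correction.
# Assumption: the input is already attenuation data, i.e. -log(I/I0).

def flux_correct(attenuation_stack, roi):
    """attenuation_stack: list of 2D lists (one per view).
    roi: (row0, row1, col0, col1) of an object-free detector region."""
    r0, r1, c0, c1 = roi
    corrected = []
    for view in attenuation_stack:
        vals = [view[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        offset = sum(vals) / len(vals)  # mean attenuation in the air region
        # subtracting in attenuation space == dividing flux in intensity space
        corrected.append([[v - offset for v in row] for row in view])
    return corrected

# tiny demo: two 2x4 views where the second view's flux drifted by +0.1;
# the ROI (first two columns) sees only air in both views
views = [[[0.0, 0.0, 1.0, 1.2],
          [0.0, 0.0, 0.9, 1.1]],
         [[0.1, 0.1, 1.1, 1.3],
          [0.1, 0.1, 1.0, 1.2]]]
fixed = flux_correct(views, (0, 2, 0, 2))
print(round(fixed[1][0][2], 6))  # -> 1.0, the drift is removed
```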
I found a bug in your d12_geometric_calibration.py for the case where the tilt angle is bigger than 0.1.
Please see the picture below.
I used only the tilt angle to rotate the detector about the optical axis; then both the bottom and center slices look OK to me. So I think your test script needs to be checked for this case too; the main problem is that the demo test script sets some wrong values for the centers.
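To make the "rotate the detector about the optical axis" step concrete, here is a generic geometry sketch. The helper below is hypothetical, not LEAP's modular-beam API: the detector's row and column direction vectors are rotated in-plane by the tilt angle, with the optical axis taken as z.

```python
import math

# Hypothetical helper: in-plane detector tilt as a rotation of the
# detector's row/column direction vectors about the optical (z) axis.

def tilt_detector(row_vec, col_vec, tilt_deg):
    a = math.radians(tilt_deg)
    c, s = math.cos(a), math.sin(a)
    rot = lambda v: (c * v[0] - s * v[1], s * v[0] + c * v[1], v[2])
    return rot(row_vec), rot(col_vec)

# detector columns initially along x, rows along y, beam along z;
# a 90-degree tilt maps the x direction onto y (up to rounding)
r, cvec = tilt_detector((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 90.0)
print(r)
```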
I wonder what those values mean, e.g. res.x[0] = -5.33 or res.x[1] = -49.69.
I commented out the first step of the estimation algorithm (costFcn_rc) and ran only the second minimization, costFcn_rct, for the tilt angle. The first minimization step is currently not effective owing to the res.x[0] and res.x[1] issue.
@kylechampley How about using the result of your leap.find_centerCol() as the initial guess for the optimization of your consistency cost function? Then you could focus on the narrow (local) valley when searching for the minimizing values of res.x[0..2]. Your find_centerCol() function is robust on almost all of my sample cases.
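A toy illustration of why this warm start matters when the cost is only locally convex. Everything below is a stand-in, not LEAP code: coarse_scan plays the role that a robust global estimator such as find_centerCol would play, and local_descent stands in for the consistency-cost minimization.

```python
# Toy cost with two basins: a shallow local minimum near x = -4
# and the global minimum at x = 3.
def cost(x):
    return min((x + 4.0) ** 2 + 1.0, (x - 3.0) ** 2)

def local_descent(x, step=0.5, iters=200):
    """Very simple derivative-free local search with a shrinking step."""
    for _ in range(iters):
        left, right = cost(x - step), cost(x + step)
        if left < cost(x) and left <= right:
            x -= step
        elif right < cost(x):
            x += step
        else:
            step *= 0.5  # no improvement either way: refine the step
    return x

def coarse_scan(lo, hi, n=60):
    """Robust global stage (the role find_centerCol would play)."""
    return min((lo + i * (hi - lo) / n for i in range(n + 1)), key=cost)

bad = local_descent(-4.5)                     # poor start -> local minimum
good = local_descent(coarse_scan(-6.0, 6.0))  # warm start -> global minimum
print(round(bad, 3), round(good, 3))  # -> -4.0 3.0
```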
That sample script is a demonstration of a capability in LEAP, rather than a prescription of how to perform geometric calibration on an arbitrary dataset. All CT systems are unique and may require different methods. Thus, the best I can do is provide users with tools to use, but they will have to find the right way to use those tools to get the best results.
OK, how to use the test script is the user's responsibility, but I wonder: is there no need to check res.x[0] or res.x[1]?
Those are the estimates provided by the optimization algorithm. They might not be correct, but they are values that reduce the cost function. I suggest that these be verified by reconstruction; single-slice reconstructions are often sufficient for this verification. You can also use the parameter_sweep function, which I personally find very useful.
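The parameter-sweep workflow can be sketched generically like this (a toy figure of merit stands in for visually judging the single-slice reconstructions that LEAP's parameter_sweep would produce; nothing below is LEAP's actual API):

```python
# Generic parameter sweep: evaluate a figure of merit over a grid of
# candidate values for one geometry parameter and pick/inspect the best.

def sweep(candidates, figure_of_merit):
    scores = [(c, figure_of_merit(c)) for c in candidates]
    best = min(scores, key=lambda s: s[1])  # lower = less inconsistency
    return best, scores

# toy figure of merit: pretend the true tilt is 1.5 degrees
fom = lambda tilt: (tilt - 1.5) ** 2
candidates = [i * 0.25 for i in range(13)]  # 0.0 .. 3.0 degrees
(best_tilt, best_score), scores = sweep(candidates, fom)
print(best_tilt)  # -> 1.5
```

In practice the "figure of merit" is your own eye on the swept reconstructions; an automated score only works when you trust it to track image quality.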
Regardless, the "proper" way to do CT is to scan some calibration phantom, like the so-called "ball phantom". These scans usually provide good results.
Thanks for your recommendation. I will try it today.
Yes, parameter_sweep is useful, and I found that my cylinder-type sample sits at a special position, below the detector's center row. As you can see below, if I use the centerRow+100 position, your inconsistency reconstruction shows a very clear tilt angle, which is best. But at the center row it shows a broad area of cost values, which may produce incorrect res.x[0] or res.x[1].
@kylechampley I checked deltaCol and deltaRow with my new centerRow position (+100), and I could see that this new row position cannot overcome the weakness of res.x[0] and res.x[1] in the consistency cost. Anyway, your inconsistency reconstruction sharply finds the tilt angle (res.x[2]) of my cylinder sample, so I will use your parameter_sweep for the tilt and find_centerCol for centerCol until a new auto-cal feature arrives.
I have checked your new v1.13 auto-cal function with my 4680 battery projections, and I can see that there is an issue in that case.
I will send my full test script and the location of my sample images to your e-mail. I guess the issue is related to the modular-beam conversion, because the `if np.abs(res.x[2]) > 0.1:` condition is matched and the code goes into the subroutine below it, and then the issue happens.
Best regards.