TDHumphries opened this issue 3 years ago
Hello, Just wondering if anyone can shed some light on this topic -- please let me know if so! Best, TH
I also have a problem with how I should scale my data. I have projection data and the corresponding CT images, but when I use FBP or the adjoint, I get something weird as the result. I set `src_rad` and `det_rad` equal to the values from my real fan-beam implementation, but I am not sure what else I should change.
```python
import numpy as np
import odl

size = 512
reco_space = odl.uniform_discr([-256, -256], [256, 256], [size, size], dtype='float32')
angle_partition = odl.uniform_partition(0, 2 * np.pi, 1152)
detector_partition = odl.uniform_partition(-368, 368, 736)
geometry = odl.tomo.FanBeamGeometry(angle_partition, detector_partition,
                                    src_radius=595, det_radius=490.6)
```
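For what it's worth, one sanity check on a fan-beam setup like the one above is the geometric magnification: a detector pixel of width 1 at the detector corresponds to a smaller effective pixel at the isocenter. This is a plain-arithmetic sketch, not something ODL requires; the assumption that the radii are in the same units as the detector partition is mine:

```python
# Fan-beam magnification check (hypothetical sketch; assumes consistent units).
src_radius = 595.0
det_radius = 490.6

# Magnification of the isocenter plane onto the detector.
magnification = (src_radius + det_radius) / src_radius   # about 1.82

# Detector pixel size implied by the partition [-368, 368] with 736 cells.
det_cell = (368.0 - (-368.0)) / 736                      # = 1.0

# Effective sampling width at the isocenter.
cell_at_iso = det_cell / magnification                   # about 0.55

print(magnification, det_cell, cell_at_iso)
```

If the reconstruction space's pixel size does not roughly match `cell_at_iso`, the image and the data are sampled on mismatched scales, which is one possible source of "weird" FBP results.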
Hi, I am trying to use ODL to simulate some CT data and I am a little unsure of how the pixel values in the sinogram are scaled relative to the pixel values of the phantom. Specifically, I have the following code:
```python
import odl

size = 512
space = odl.uniform_discr([-256, -256], [256, 256], [size, size], dtype='float32')
geometry = odl.tomo.parallel_beam_geometry(space, num_angles)  # num_angles defined elsewhere
operator = odl.tomo.RayTransform(space, geometry)
data = operator(phantom)  # phantom defined elsewhere, see below
```
`phantom` is a 512x512 CT image with units of cm^{-1} (mu values). My intuition is that by discretizing my space from -256 to 256 with 512 pixels, I should have a pixel size of 1, so that the ray transform should essentially sum the values of `phantom` in each pixel along each line. This is obviously not the case, though:
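That "sum the pixel values along each line" intuition can be checked with plain NumPy, independent of ODL. Here is a hypothetical sketch using a uniform disk phantom; the disk radius and mu value are made up for illustration:

```python
import numpy as np

size = 512
extent = 256.0 - (-256.0)           # domain [-256, 256] in each axis
cell_size = extent / size           # = 1.0, matching the intuition above

# Uniform disk phantom: mu = 0.444 cm^-1 inside a radius-100 disk (made-up values).
ii, jj = np.meshgrid(np.arange(size), np.arange(size), indexing='ij')
x = (ii - size / 2 + 0.5) * cell_size
y = (jj - size / 2 + 0.5) * cell_size
mu = 0.444
phantom = np.where(x**2 + y**2 < 100.0**2, mu, 0.0)

# A horizontal ray through the center crosses the disk over a chord of
# length ~200, so the line integral should be close to mu * 200 = 88.8.
row = phantom[size // 2, :]
line_integral = row.sum() * cell_size
print(line_integral)   # ~88.8, two orders of magnitude above max(phantom)
```

Under this model, the sinogram maximum should indeed be a few hundred times the phantom maximum, which is why the ODL output above is surprising.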
```
(Pdb) np.max(data)
0.45102981
(Pdb) np.max(phantom)
0.44404224
```
I would generally expect the max of `data` to be about two orders of magnitude larger than that, since we are summing values over several hundred pixels. This is what ASTRA does if you specify a `det_width` of 1 in the parallel-beam geometry. Out of curiosity, I changed the second line of code to
```python
space = odl.uniform_discr([-1, -1], [1, 1], [size, size], dtype='float32')
```
and found that the max value in data decreased by about a factor of 4, which again was a little confusing to me. Could someone explain how the values in the sinogram are scaled? Apologies if this is mentioned in the documentation, but I was not able to find it.
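To spell out why the factor of 4 is puzzling: under the naive "pixel sum times cell size" model above, shrinking the domain from [-256, 256] to [-1, 1] shrinks the cell size by a factor of 256, so one would expect the sinogram values to drop by roughly 256, not 4. A minimal arithmetic sketch of that prediction:

```python
# Cell-size arithmetic behind the factor-of-4 puzzle.
size = 512
cell_big = (256.0 - (-256.0)) / size    # domain [-256, 256] -> cell size 1.0
cell_small = (1.0 - (-1.0)) / size      # domain [-1, 1]     -> cell size 1/256

# If sinogram values were plain pixel sums times the cell size, shrinking the
# domain would scale them by cell_small / cell_big = 1/256, not the ~1/4
# observed, which is exactly what makes the scaling unclear.
print(cell_big, cell_small, cell_big / cell_small)
```

So whatever weighting ODL applies to the ray transform, it is evidently not this simple model, and an explanation of the actual convention would be very welcome.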