Closed: AnderBiguri closed this issue 3 years ago.
Good idea.
Before going further, I think the files that define mex functions, such as minTV.cpp, tvDenoise.cpp and projections.cpp, should be moved to MATLAB/Utilities or somewhere under the MATLAB folder.
My suggestion: have the main repo gain an extra folder, /Source or /CUDA. I think this makes more sense than having Python build against MATLAB/Source (or vice versa).
Then, I see a couple of options:
- keep the mex cpp files in /MATLAB/Source
- remove the ./MATLAB/Source folder and have a /MATLAB/Utilities/Source folder with the cpp files

Any other proposal that we can consider?
Almost agree. Let me comment on two points:
1. CUDA would be better than Source. Or, if you think the .cu files are the core of this project, or you are going to add another GPGPU technology such as OpenCL, then Core, Common, Common/CUDA or Core/CUDA might be possible.
2. In the MATLAB/Utilities folder, there are IO and GPU folders that have their mex files inside. So the destination of the mex cpp files can be /MATLAB/Utilities instead of /MATLAB/Utilities/Source.

I agree.
We have Python/MATLAB folders, the other one should be CUDA. If TIGRE ends up using some other GPU tech, then we can make another folder.
Similarly, I agree about the name Source there. Maybe something like MATLAB/Utilities/mex_interface or ./cuda_interface.
Python\tigre\Source and MATLAB\Source are compared:
Only in Python => Python/tigre/utilities/cuda_interface
__init__.py
_Atb.pyx
_AwminTV.pyx
_Ax.pyx
_gpuUtils.pxd
_gpuUtils.pyx
_minTV.pyx
_tvdenoising.pyx
_types.pxd
errors.hpp
projection.cpp
projection.hpp
Only in Matlab => MATLAB/Utilities/cuda_interface
Atb_mex.cpp
AwminTV.cpp
Ax_mex.cpp
minTV.cpp (older version is in python)
tvDenoise.cpp (older version is in python)

In both and equivalent => Common/CUDA
GpuIds.cpp
GpuIds.hpp
gpuUtils.cu
gpuUtils.hpp
POCS_TV.cu
POCS_TV.hpp
POCS_TV2.cu
POCS_TV2.hpp
ray_interpolated_projection.cu
ray_interpolated_projection.hpp
ray_interpolated_projection_parallel.cu
ray_interpolated_projection_parallel.hpp
Siddon_projection.cu
Siddon_projection.hpp
Siddon_projection_parallel.cu
Siddon_projection_parallel.hpp
TIGRE_common.cpp
TIGRE_common.hpp
tvdenoising.cu
tvdenoising.hpp
types_TIGRE.hpp
voxel_backprojection.cu
voxel_backprojection.hpp
voxel_backprojection2.cu
voxel_backprojection2.hpp
voxel_backprojection_parallel.cu
voxel_backprojection_parallel.hpp
The reason why I introduced the Common folder is that (Python and MATLAB) and CUDA are not siblings, and I thought it would be better to present the hierarchy of layers 1 and 2 of Fig. 2 of your paper. The word "common" is open to discussion, because English is not my native language.
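Putting the comparison above together, the layout I have in mind would look roughly like this (a sketch only; exact file placement to be settled in the PR):

```
TIGRE/
├── Common/
│   └── CUDA/                    # shared .cu/.hpp kernels (POCS_TV.cu, voxel_backprojection.cu, ...)
├── MATLAB/
│   └── Utilities/
│       └── cuda_interface/      # mex wrappers (Ax_mex.cpp, Atb_mex.cpp, ...)
└── Python/
    └── tigre/
        └── utilities/
            └── cuda_interface/  # Cython wrappers (_Ax.pyx, _Atb.pyx, ...)
```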
What do you think?
@genusn sounds good!
I suspect that projection.*pp in Python are actually redundant; I need to check that. Similarly, those old mex files are just left over from old dev times, as we merged the Python branch when its lead developer moved on to other things.
If the CUDA folder goes outside the MATLAB folder, users will no longer be able to see the CUDA code from the MATLAB IDE. Is that acceptable?
@genusn yes, I think so. The MATLAB IDE is quite terrible at showing CUDA code anyway, and 99% of people who use TIGRE don't even want to look at that code. It's not as if the code is very hidden in Common either. So I think it's good.
I think I can do this. May I tackle this?
That would be amazing!
Just important info for this topic:
It's likely that at some time in the future, a set of separate MATLAB-specific source files will be added. Someone has written code to have TIGRE/MATLAB run fully inside the GPU (as long as it can hold the memory of the entire algorithm). Of course, this source is quite different from current TIGRE, mostly in the .cpp files, but it will necessarily also differ in the CUDA files at some level.
My current intention is to have them selected at compilation time, i.e. you can choose to compile either TIGRE with multi-GPU support, or TIGRE with single-GPU, full-algorithm GPU usage if you think your images fit on your GPU. But it's likely that this source will not go into "common", as it's likely MATLAB-specific.
This is not happening "soon", but FYI, on the topic.
That's interesting. Can I ask why it is likely to be MATLAB specific?
It uses the Parallel Computing Toolbox's ability to put everything on the GPU. I am sure we could do a similar thing with numba etc., but in MATLAB it happens without any change to the MATLAB source: you just define the image and the projections as arrays that reside on the GPU, and then everything that operates on them runs on the GPU without asking.
Of course, this needs the CUDA source to skip the CPU-GPU copies and just use the input as a device array.
Maybe there is a way to do the same thing in Python; the code I have, however, does not do it (it was written by someone else).
Thank you for the explanation. I'm looking forward to seeing that code.
@AnderBiguri
I suspect that projection.*pp in python are redundant actually, need to check that.
Two functions, maxDistanceCubeXY and rollPitchYaw, are defined there. They are also defined in the .cu files, and projection.hpp is not included by any file other than projection.cpp. We can safely remove these files, but I think we will soon have to revive them for the refactoring, under another name.
Up to you then! Happy with anything that will make TIGRE better! Leave them be if they will help with refactoring.
@AnderBiguri Quick question. There are two definitions of the rollPitchYaw function: one in Siddon_projection.cu and one in ray_interpolated_projection.cu:
https://github.com/CERN/TIGRE/blob/d9d619ef1b4da04d3f2f37a42aaca6d81ce9bff2/MATLAB/Source/ray_interpolated_projection.cu#L793-L795
The latter is correct, isn't it?
Hum, I'd say the former is the incorrect one. Yes, I agree: it should be i and not 1.
@AnderBiguri Another quick question, just for confirmation. In voxel_backprojection.cu and voxel_backprojection2.cu, splitCTbackprojection functions are defined. They are almost the same except for the following lines:
https://github.com/CERN/TIGRE/blob/d9d619ef1b4da04d3f2f37a42aaca6d81ce9bff2/MATLAB/Source/voxel_backprojection.cu#L667-L668
and
https://github.com/CERN/TIGRE/blob/d9d619ef1b4da04d3f2f37a42aaca6d81ce9bff2/MATLAB/Source/voxel_backprojection2.cu#L743-L744
In most cases, the definition in voxel_backprojection2.cu is not compiled because of the #ifndef guard, so I think the one in voxel_backprojection.cu should be the correct one....
@genusn that is correct, the first one is the right one, the second one can fail in some edge cases.
Ah, this is why I want to unify all the source! At some point maybe I should put all these functions in some common backprojection file. I seem to remember having issues compiling CUDA code split across separate files, thus the repetition, but that was in 2015....
@AnderBiguri Thank you for your answer. That's right. There are many functions that can be shared between .cu files. For example, POCS_TV.cu and POCS_TV2.cu have several functions in common. A new .cu file would be necessary for them, and the file name that popped into my mind was POCS_TV_common.cu. "Common" again.... Do you have any good ideas?
@genusn let's leave that to a separate issue. I am not sure if gradient is the same for both, but all the rest can be put into a cuda_utils.cu file, as they are operations on arrays (I am also sure that some CUDA library has them implemented and that my implementations are redundant, but well). In the past there was an issue with compiling shared code across several .cu files; CPU code may be fine, but GPU code I think is not. Maybe that has changed, or there are some flags that I did not investigate....
Better to leave it for a different issue; we can refer to these comments there.
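For reference, modern CUDA toolchains do support calling __device__ functions defined in another .cu file, via relocatable device code; the uninvestigated flags mentioned above are likely nvcc's -rdc=true (or -dc). A hedged build sketch, assuming a hypothetical cuda_utils.cu holding the shared array operations:

```
# Compile each .cu with relocatable device code, so device functions
# defined in cuda_utils.cu can be called from kernels in other files.
nvcc -rdc=true -c cuda_utils.cu -o cuda_utils.o
nvcc -rdc=true -c POCS_TV.cu    -o POCS_TV.o

# Linking through nvcc performs the device-link step that resolves
# cross-file device symbols.
nvcc -rdc=true cuda_utils.o POCS_TV.o -o pocs_tv
```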
@genusn lets leave that to a separate issue.
Agree. No problem.
Now that unifying the source folders is completed in my refactor_unify-source branch, I will make a PR in a few days, after some tests.
I am not sure if gradient is the same for both
Yes, they are, and there is gradientTV only in POCS_TV2.cu.
Ah, a third issue would be to rename all these functions to something more sensible, including POCS_TV.cu, as it has nothing to do with POCS itself... Many ghosts from the past, hehe.
We have two folders with source, and it seems that we could work something out to have just a single folder and build Python or MATLAB against it, halving our job when improving it and minimizing future bugs.