Closed dkazanc closed 6 years ago
The main issue is that you assume everyone will have the CUDA-compiled version. That could be true if you distribute binaries, even to machines that lack a CUDA GPU, but it is not true if you compile the software yourself, and that's why I kept them separated. Actually, the GPU parts aren't even compiled (nor wrapped) if there's no CUDA on the machine.
I guess we could compile with CUDA even without a GPU? I need to investigate that.
My view on wrapping has now come down to this:
1) C/C++/CUDA code
2) minimal Python wrapping in the form of Python functions or bare classes
3) a more elaborate Python interface
Step 2 works on NumPy arrays, while step 3 could be more into the PythonFramework spirit, or packaged as a Savu plugin.
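For concreteness, a step-2 wrapper could be little more than a NumPy-facing function. The parameter names and the stubbed-out core call below are illustrative, not the actual Cython signature:

```python
import numpy as np

def CPU_ROF(input_array, regularisation_parameter=0.01, iterations=100):
    # Step-2 wrapper: validate/convert the NumPy input, then hand it to
    # the compiled C core.  The core call is stubbed out here so the
    # sketch runs without the compiled extension.
    data = np.ascontiguousarray(input_array, dtype=np.float32)
    output = data.copy()
    # ... the real wrapper would call the compiled C ROF routine here ...
    return output
```

The only job of this layer is marshalling NumPy arrays in and out of the compiled core; everything user-facing lives in step 3.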
In this case you could unify everything in regularizers.py (step 3), where you could do something along these lines:
from ccpi.filters.cpu_regularizers_cython import CPU_ROF
from ccpi.filters.gpu_regularizers_cython import GPU_ROF

def ROF(input_data, device='cpu'):
    if device == 'cpu':
        return CPU_ROF(input_data)
    else:
        return GPU_ROF(input_data)
Users would then call the regularizers simply by:

from ccpi.filters.regularizers import ROF
ROF(input_DataSet, device='gpu')
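Since the GPU module may not exist at all on a machine without CUDA (as noted above), the step-3 dispatcher could guard the import rather than assume both backends are present. A hedged sketch (the `load_rof` helper is hypothetical; the module and function names are the ones used above):

```python
import importlib

def load_rof(device='cpu'):
    # Resolve the requested backend lazily, so a CPU-only build still
    # imports cleanly and only device='gpu' fails when CUDA is absent.
    module_name = ('ccpi.filters.gpu_regularizers_cython' if device == 'gpu'
                   else 'ccpi.filters.cpu_regularizers_cython')
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        if device == 'gpu':
            # The GPU extension was never built on this machine.
            raise RuntimeError("GPU build not available; "
                               "compile with CUDA or use device='cpu'")
        raise
    return getattr(module, 'GPU_ROF' if device == 'gpu' else 'CPU_ROF')
```

This keeps the single public name while degrading gracefully on machines where the GPU extension was never compiled.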
closed with #40
In order to keep things in one place while Cythonizing, i.e. just have regularizers.pyx.
Pros:
1) Easy to manage the interfaces to the CPU/GPU functions, and also to do parameter checks for both cores.
2) The possibility of adding a 'CPU'/'GPU' switch so there is just one name per regularizer, e.g. _FGPTV(blabla, 'GPU').
3) A more compact representation.
Cons: debugging becomes a bit more challenging.
P.S. As a trial I can do it for ROF or FGP.
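The switch idea from pro 2 might look like the following sketch, with the cores stubbed by placeholders so the example runs anywhere; a real regularizers.pyx would call the compiled CPU/CUDA routines, and the names here are illustrative:

```python
import numpy as np

def _rof_cpu(data, regularisation_parameter, iterations):
    # placeholder standing in for the compiled CPU core
    return np.asarray(data, dtype=np.float32)

def _rof_gpu(data, regularisation_parameter, iterations):
    # placeholder standing in for the compiled CUDA core
    return np.asarray(data, dtype=np.float32)

def ROF_TV(data, regularisation_parameter=0.05, iterations=100, device='CPU'):
    # Shared parameter checks for both cores (pro 1)
    if regularisation_parameter <= 0:
        raise ValueError("regularisation_parameter must be positive")
    if iterations < 1:
        raise ValueError("iterations must be at least 1")
    # Single name with a 'CPU'/'GPU' switch (pro 2)
    if device == 'CPU':
        return _rof_cpu(data, regularisation_parameter, iterations)
    if device == 'GPU':
        return _rof_gpu(data, regularisation_parameter, iterations)
    raise ValueError("device must be 'CPU' or 'GPU'")
```

One design benefit: the parameter validation is written once and applies identically to both cores, which is harder to guarantee when the CPU and GPU wrappers live in separate modules.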