gpufit / Gpufit

GPU-accelerated Levenberg-Marquardt curve fitting in CUDA
MIT License

Compartmental model fitting for medical imaging using Gpufit #27

Closed mscipio closed 6 years ago

mscipio commented 6 years ago

Hi everybody, I tried to do something similar to what you did here, using just PyCUDA to let Python talk to the GPU and avoid C++ coding. I just made the repo public after submitting a paper that uses it, so feel free to have a look.

What I did there was to implement compartmental model fitting for dynamic PET imaging using CUDA parallelization, adding a spatial regularization term to the Levenberg-Marquardt update (I wanted to fit time series of 3D images, so I exploit each voxel's neighborhood after synchronizing the fitting across voxels).

I would like to test, in Gpufit, the CUDA kernel I developed for my tool to compute the model and its derivatives. I've already read the documentation page about extending the library with new models, and I think I could do it.

I have a question, though. Compartment models are, basically, a convolution of two biexponential functions.
The current version of my kernel uses an analytic form to compute the model update and, above all, its derivatives. Do you have any reference on how to implement a numeric convolution in a CUDA kernel, and how to differentiate it to update the gradient?

jkfindeisen commented 6 years ago

That's a good point. This library assumes that each data point can be computed independently and in parallel. With a convolution it's much more beneficial to do that in several steps. If you perform the convolution via an FFT, one could use cuFFT to implement it on the GPU, but the current framework of this library would not allow that. It would need to be considerably changed.

mscipio commented 6 years ago

Thanks for the answer! While I think more about it, I have started importing one of my kernels into Gpufit. As I was saying, for now I can use the analytic solution of the convolution (I already have a function that gives me the closed form of the final model and its derivatives), so I don't need to worry about implementing the convolution. I will still need a few more inputs for the model function, though:

```cuda
__device__ void calculate_myModel(
    float const * parameters,
    int const n_fits,                  // voxel number
    int const n_points,                // time point number
    float * value,                     // output
    float * derivative,                // output
    int const point_index,             // current time point idx
    int const fit_index,               // current voxel idx
    int const chunk_index,
    float const * times,
    float const * IFpar,
    float const * IFvalue,
    char * user_info,                  // I have a non-evenly sampled time vector, maybe I can import it here
    std::size_t const user_info_size)
```

Doing this, however, I would break the standard structure of `__device__ void calculate_model()` defined in models.cuh. Do you think I can somehow use the `user_info` variable to load all three arrays?

Thanks!

superchromix commented 6 years ago

You can pass any type of data via the user_info parameter, so you should not have to change the interface to calculate_model().

mscipio commented 6 years ago

I am not totally confident in C++ (I am just starting now, coming from Python). Can you point me to an example of how to use the `user_info` parameter to load two or more arrays? I understand how to use it to load the time vector, but how do I load an additional one?

superchromix commented 6 years ago

As described in the documentation, user_info is simply a pointer to a block of memory. What is stored in this memory space, and how it is organized, is completely up to the user. The user_info memory is available to the model functions and the estimators on the GPU.

If you want to pass two arrays in the user_info memory, the simplest thing to do is to concatenate them back to back. The first bytes of the user_info memory could also contain, e.g., the sizes of the arrays, so that the model functions know how to access any particular element of either array.

As said, the size, content, and organization of the user_info memory block is completely up to the user and the model function which is designed to make use of it.