serizba / cppflow

Run TensorFlow models in C++ without installation and without Bazel
https://serizba.github.io/cppflow/
MIT License
774 stars 177 forks

Transferring data on the GPU #244

Open Zhaoli2042 opened 1 year ago

Zhaoli2042 commented 1 year ago

Hi!

I find cppflow very useful; however, I have some small questions for now (I may have more in the future :D).

I can use cppflow in a CUDA/C++ program, and cppflow can find my GPUs.

Since the model is making the predictions on the GPU, and all my data is stored on the GPU, is there a way to let the model read data directly from the device without transferring and preparing the data on the host?

I also run into an issue when I put cppflow::model objects in a std::vector. The program runs and makes correct predictions, but it raises a "Segmentation fault" when it finishes. Is there a way to avoid this?

Thanks! I appreciate any advice you can give.

serizba commented 1 year ago

Hi @Zhaoli2042

Can you write here the code you are trying to run?

Zhaoli2042 commented 1 year ago

Hi @serizba ,

Thanks for your reply. Here is a simple example that I tested. cppflow_cuda_example_ASK.tar.gz

I am using the NVIDIA HPC compiler (nvc++), version 22.5.

cjmcclellan commented 6 months ago

Hi @Zhaoli2042, were you able to make this work? I'm also interested in having a TF model sit inside a custom GPU pipeline.

yury-lysogorskiy commented 6 months ago

I have a similar issue right now (feeding input from the GPU) and I'm also very interested.