Closed: sturfee-petrl closed this issue 6 years ago.
Sorry for the very long delay in replying :) This code is provided as a demo for inference, since that had been a gap for a long time. You probably don't care about the answer anymore, but a separate C++ thread pool is probably the solution if anyone else ends up reading this issue :)
@flx42 It's still very relevant for me :simple_smile: Thank you for your opinion!
Ah well :) In this case, I would either not use Go at all, to avoid having tons of threads being created (pure C++ or maybe Rust?).
Or use a separate C++ thread pool that holds the OpenGL contexts.
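A minimal sketch of that idea (my own illustration, not code from this repo), assuming a placeholder `GLContext` type and `createContextForGpu()` / `makeCurrent()` helpers standing in for whatever context API is actually used (EGL, GLX, GLFW, ...): each worker thread creates its context once, keeps it current for the lifetime of the thread, and serves render jobs from a shared queue.

```cpp
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct GLContext { int gpu; };                            // placeholder
GLContext createContextForGpu(int gpu) { return {gpu}; }  // e.g. EGL device enumeration
void makeCurrent(GLContext&) {}                           // e.g. eglMakeCurrent(...)

class GLWorkerPool {
public:
    explicit GLWorkerPool(int numGpus) {
        for (int gpu = 0; gpu < numGpus; ++gpu) {
            workers_.emplace_back([this, gpu] {
                GLContext ctx = createContextForGpu(gpu);
                makeCurrent(ctx);                 // context stays bound to this thread
                for (;;) {
                    std::function<void(GLContext&)> job;
                    {
                        std::unique_lock<std::mutex> lock(mu_);
                        cv_.wait(lock, [this] { return stop_ || !jobs_.empty(); });
                        if (stop_ && jobs_.empty()) return;
                        job = std::move(jobs_.front());
                        jobs_.pop();
                    }
                    job(ctx);   // all GL (and CUDA/GL interop) work for this context runs here
                }
            });
        }
    }

    // Submit a job that will run with some worker's context current.
    std::future<void> submit(std::function<void(GLContext&)> job) {
        auto task = std::make_shared<std::packaged_task<void(GLContext&)>>(std::move(job));
        std::future<void> result = task->get_future();
        {
            std::lock_guard<std::mutex> lock(mu_);
            jobs_.emplace([task](GLContext& ctx) { (*task)(ctx); });
        }
        cv_.notify_one();
        return result;
    }

    ~GLWorkerPool() {
        { std::lock_guard<std::mutex> lock(mu_); stop_ = true; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }

private:
    std::vector<std::thread> workers_;
    std::queue<std::function<void(GLContext&)>> jobs_;
    std::mutex mu_;
    std::condition_variable cv_;
    bool stop_ = false;
};
```

A request handler (whether it arrives from Go through cgo or from a C++ HTTP server) then only calls `submit()` and waits on the returned future, so it never touches a context from its own thread.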
TLDR
Is it possible to use GRE for OpenGL+CUDA scaling?
Should we use Vulkan instead of OpenGL if we want to use multiple GPUs in parallel and consume the result from the same device's GPU memory with CUDA or a CNN framework such as Caffe?
Summary
We are looking for a vertical scaling solution for our OpenGL+CUDA pipeline. Recently I used GRE for our Caffe server. It works great! Thanks!
Now my team requires me to bring GRE to our OpenGL+CUDA server, but I know that it will not work.
If you initialize an OpenGL context in one thread and then call into it from a different thread, it will not work: an OpenGL context can only be current on one thread at a time.
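As a small illustration (a sketch using GLFW purely for brevity, not something from GRE): a context can move between threads, but it has to be released on the old thread before it is made current on the new one, and every GL call has to happen on whichever thread currently owns it.

```cpp
#include <GLFW/glfw3.h>
#include <thread>

int main() {
    glfwInit();
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);         // hidden window, we only need a context
    GLFWwindow* win = glfwCreateWindow(64, 64, "ctx", nullptr, nullptr);

    glfwMakeContextCurrent(win);                      // context is current on the main thread
    glClearColor(0.f, 0.f, 0.f, 1.f);                 // OK: issued from the owning thread

    glfwMakeContextCurrent(nullptr);                  // release it before handing it over:
                                                      // it is never current on two threads at once
    std::thread worker([win] {
        glfwMakeContextCurrent(win);                  // re-bind on this thread; without this,
        glClear(GL_COLOR_BUFFER_BIT);                 // any GL call here would be invalid
        glfwMakeContextCurrent(nullptr);
    });
    worker.join();

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```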
I used OpenGL+CUDA with Go for a long time before I knew about GRE. I had a big problem with asynchronous access, and I solved it with this pattern.
It worked fine from Go, but later we moved our OpenGL module to C++.
I want to keep my implementation, which allows OpenGL to scale, but I can't prove to my team that it's not possible to use the GRE approach for OpenGL+CUDA.
Thanks a lot! Don't hesitate to ask me for details.