Open · nonchip opened this issue 6 years ago
My layman's understanding of CUDA/cuDNN DLL distribution is that it requires click-through consent from the users, or they need to download it from source (which requires NVIDIA developer registration). Also, given that CUDA is NVIDIA-only, relying on it will lock part of your users out of the higher-performance path.
With that in mind, if your model can run inference on the CPU at acceptable speed, the easiest approach is to ship that version. If, on the other hand, you do require GPU-level performance, you'll need to figure out how to properly distribute the CUDA/cuDNN DLLs or, as you suggested, use a cloud backend.
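For reference, reporting which path ended up active is a one-line capability check. This is a sketch assuming the unified TF 2.x wheel (at the time this issue was opened, the separate `tensorflow` vs `tensorflow-gpu` 1.x packages applied instead), where missing CUDA/cuDNN libraries just mean no GPUs get listed and ops run on their CPU kernels:

```python
import tensorflow as tf

# With the unified TF 2.x wheel, missing CUDA/cuDNN libraries simply mean
# no GPUs are listed and every op falls back to its CPU kernel, so a
# capability check is enough to report the active path.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    print("CUDA-capable GPU found, inference will run on", gpus[0].name)
else:
    print("No usable GPU, inference will fall back to CPU")
```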
There are certainly ways to detect available GPUs, cross-reference their capabilities, and ask pip to download the matching TensorFlow package, but that won't solve the distribution issue. This plugin can't help you with that; only clarification or redistribution terms from NVIDIA can.
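To illustrate the detect-then-install idea (and its limits): a rough sketch that uses the presence of `nvidia-smi` as a crude GPU probe and the 1.x-era `tensorflow-gpu` package name. It picks the right wheel, but does nothing about the DLL redistribution terms above:

```python
import subprocess
import sys

def has_nvidia_gpu() -> bool:
    """Crude NVIDIA detection: nvidia-smi ships with the driver, so a
    successful run implies a CUDA-capable GPU with a working driver."""
    try:
        subprocess.run(["nvidia-smi"], check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return True
    except (FileNotFoundError, subprocess.CalledProcessError):
        return False

def install_tensorflow() -> None:
    # `tensorflow-gpu` was the separate GPU wheel in the TF 1.x era this
    # issue dates from; modern TF merged the two packages into one.
    package = "tensorflow-gpu" if has_nvidia_gpu() else "tensorflow"
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

if __name__ == "__main__":
    install_tensorflow()
```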
See https://docs.nvidia.com/deeplearning/sdk/cudnn-sla/index.html#distribution for details.
When making a packaged build for end-user release we can't really predict whether or not they have a compatible GPU, so how would I go about a procedure like:

1. check whether the user has a compatible GPU (and the CUDA/cuDNN dependencies installed), and if not, point them to the installers
2. run the GPU version if possible, otherwise fall back to the CPU version
The first one would be rather easy (check hardware info and show links to, or bundle, the installers), but would I have to package two builds for the different versions, or could the plugin be made to fall back automatically?
EDIT: I guess this would also require multi-platform support, which as far as I can see isn't provided yet? So I'd probably be better off not using this plugin directly, and instead either providing a "cloud" solution or a custom external worker started by the game, which then gets connected to using the normal UE4 TCP socket stuff?
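e.g. something like this for the worker side (just a sketch; the length-prefixed JSON framing, port, and `run_inference` stub are all my own assumptions, not anything the plugin defines):

```python
import json
import socket
import struct

HOST, PORT = "127.0.0.1", 9999  # assumed values the game would also know

def run_inference(payload: dict) -> dict:
    # Stand-in for the real model call (e.g. model.predict); echoes input.
    return {"result": payload}

def recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, or return b'' if the peer disconnected."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            return b""
        buf += chunk
    return buf

def serve() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()  # the game connects after launching us
        with conn:
            while True:
                header = recv_exact(conn, 4)  # 4-byte big-endian length
                if not header:
                    break
                (length,) = struct.unpack("!I", header)
                body = recv_exact(conn, length)
                if not body:
                    break
                reply = json.dumps(run_inference(json.loads(body))).encode()
                conn.sendall(struct.pack("!I", len(reply)) + reply)

if __name__ == "__main__":
    serve()
```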