mdinoulis opened this issue 4 years ago
There are two ways you can work around the embedded Python build and use the latest Python setup of your choice (see the sanity-check sketch after this list):
1) Change `UseThirdPartyPython`
https://github.com/getnamo/UnrealEnginePython/blob/master/Source/UnrealEnginePython/UnrealEnginePython.Build.cs#L13 to false, specify your Python install location in e.g. https://github.com/getnamo/UnrealEnginePython/blob/master/Source/UnrealEnginePython/UnrealEnginePython.Build.cs#L28, and compile. This will use your specified Python installation instead of the one included in the plugin, so it can have whatever setup you can run locally.
2) Use the remote variant of the plugin: https://github.com/getnamo/machine-learning-remote-ue4, which can communicate with a Python installation anywhere (a local folder or a remote server). Your Python installation just has to support the https://github.com/getnamo/ml-remote-server wrapper (e.g. Python 3.7).
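
Whichever route you take, it's worth confirming that the Python install you plan to point the plugin at can actually import TensorFlow and see your GPU before rebuilding. A minimal sanity-check sketch (not part of the plugin; the TF 1.x/2.x branching below is just an assumption about which APIs your install exposes):

```python
# sanity_check.py -- run with the interpreter you intend to point the plugin at
import sys

print("Python:", sys.version)

try:
    import tensorflow as tf
except ImportError as err:
    sys.exit("TensorFlow is not importable in this environment: %s" % err)

print("TensorFlow:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())

# Newer 2.x releases expose tf.config.list_physical_devices;
# 1.x (and 2.0) fall back to tf.test.is_gpu_available().
if hasattr(tf, "config") and hasattr(tf.config, "list_physical_devices"):
    print("GPUs:", tf.config.list_physical_devices("GPU"))
else:
    print("GPU available:", tf.test.is_gpu_available())
```

Run it with the exact python.exe you set in the Build.cs for option 1, or with the interpreter serving ml-remote-server for option 2.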
The most likely reason for the current blocker on the embedded build is the raw embedded Python 3.6 zip, which doesn't appear to support TensorFlow newer than r1.12 (https://github.com/getnamo/tensorflow-ue4/blob/master/Content/Scripts/upymodule.json). Updating this to support e.g. TF 2.0 out of the box would be the ideal fix and would also add support for newer CUDA.
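
Until that update lands, one rough experiment (against a full, non-embedded Python install from option 1, not the raw 3.6 zip) is to stage a newer TensorFlow into a folder of your choice with pip's `--target` option. Everything below is a sketch: the target path is a placeholder and `tensorflow-gpu==2.0.0` is only an example pin, so pick whatever matches your Python/CUDA/cuDNN combination.

```python
# stage_tf.py -- hedged sketch: install a chosen TensorFlow build into a target
# folder using whichever interpreter runs this script. Paths and pins are examples.
import subprocess
import sys

TARGET_DIR = r"C:\path\to\your\site-packages"   # placeholder, not a real plugin path
PIN = "tensorflow-gpu==2.0.0"                   # example pin only

subprocess.check_call([
    sys.executable, "-m", "pip", "install", PIN,
    "--upgrade", "--target", TARGET_DIR,
])
```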
I'm getting the following error loading TensorFlow with the GPU version, but it works OK with the CPU version. I'm running with CUDA 10.0 and cuDNN v7.6.5 on Windows 10 with an RTX Titan GPU. TensorFlow runs fine elsewhere on my machine under Anaconda.
Is CUDA 10.0 the problem? I note that your spec suggests CUDA 9.0. If so, will there be a GPU version available that runs on CUDA 10.x?
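
One quick way to narrow this down is to import the GPU build directly with the plugin's interpreter and look at the failure: on Windows, a CUDA mismatch usually shows up as a missing cudart64_XX.dll in the import traceback, and the DLL suffix tells you which CUDA major version that TensorFlow wheel was built against. Sketch only; it assumes the error surfaces at import time:

```python
# cuda_probe.py -- report what the installed TensorFlow build expects
import traceback

try:
    import tensorflow as tf
    print("TensorFlow", tf.__version__, "imported OK")
    print("Built with CUDA:", tf.test.is_built_with_cuda())
    print("GPU available:", tf.test.is_gpu_available())
except ImportError:
    # A missing cudart64_XX.dll here points at a CUDA runtime that doesn't
    # match the wheel (e.g. cudart64_90.dll means a CUDA 9.0 build).
    traceback.print_exc()
```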