Closed: Echolink50 closed this issue 3 months ago
The difference is that this ONNX model runs on CPU only, so you need to compare it with the main LivePortrait running on CPU with the same RAM, not on GPU. Also, we are preparing a TensorRT release as well; be aware that it will be faster than ONNX.
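For what it's worth, a like-for-like CPU comparison can be forced by pinning onnxruntime to the CPU execution provider. This is only a minimal sketch, not this repo's actual entry point; the model path and input shape are placeholders:

```python
import numpy as np
import onnxruntime as ort

# Force CPU-only inference so the timing is comparable with the main
# LivePortrait repo running on CPU (placeholder model path and input shape).
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 256, 256).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
```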
Sorry, I didn't understand that. What do I need to compare? Do I need to run it with -r for real time or something?
@Echolink50 Please wait for me to update the new model with GPU support for ONNX. If you can't run it, it seems the onnxruntime-gpu you installed doesn't match your CUDA and cuDNN versions.
See issue #2: if onnxruntime can't run on GPU, it will raise an error like the one shown there.
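As a quick check of whether onnxruntime-gpu matches the installed CUDA/cuDNN (a minimal sketch; the model path is a placeholder), ask for the CUDA provider explicitly and see what the session actually gets:

```python
import onnxruntime as ort

# If onnxruntime-gpu was built against a different CUDA/cuDNN than is installed,
# the CUDA provider fails to load (or the session silently falls back to CPU).
print(ort.get_available_providers())
sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # CUDAExecutionProvider should come first if the GPU is usable
```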
@Echolink50 not sure if you had the same problem, but for me:
```
conda activate ELivePortrait
pip uninstall onnxruntime
pip uninstall onnxruntime-gpu
conda install conda-forge::vs2015_runtime
pip install onnxruntime-gpu
python
>>> import onnxruntime as rt
>>> rt.get_device()
```
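As far as I know, `rt.get_device()` printing `GPU` only confirms that the GPU build of onnxruntime is installed; actually creating an `InferenceSession` with `CUDAExecutionProvider` (as in the snippet further up) is the more reliable check that CUDA and cuDNN load correctly.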
I did do pip uninstall onnxruntime and pip install onnxruntime, but I didn't do the other stuff you mentioned. I will give it a try, but from what people have been saying it's still not faster than the main repo. It seems going straight to TensorRT is more worthwhile.
@Echolink50 TensorRT is faster than ONNX. ONNX models have had inference-speed problems for a few years now across many other repos, so use the ONNX model just for fun, or use TensorRT to get faster inference.
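Until the TensorRT release lands, one option worth trying (a sketch, assuming your onnxruntime-gpu build includes TensorRT support; the model path is a placeholder) is onnxruntime's TensorRT execution provider, which falls back to CUDA and then CPU for unsupported ops:

```python
import onnxruntime as ort

# Try TensorRT first, then CUDA, then CPU (placeholder model path).
# Ops TensorRT can't handle fall back to the next provider in the list.
sess = ort.InferenceSession(
    "model.onnx",
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
print(sess.get_providers())
```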
Windows 10, RTX 2060 12GB. The install had an error with torchaudio, so I looked in the issues and deleted torchaudio from requirements.txt. It installed fine and runs fine, but with no GPU usage, and the estimated time is 10 minutes for the 3-second, 78-frame example file. The main LivePortrait takes about 10 seconds for a 3-second video. Not sure what the issue is.
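To narrow down whether the GPU is being picked up at all (a minimal sketch, nothing specific to this repo), it may help to post the output of:

```python
import torch
import onnxruntime as ort

# Does PyTorch see the RTX 2060, and which providers does onnxruntime offer?
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
print(ort.get_device())
print(ort.get_available_providers())
```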