ameen-roayan opened 1 year ago
After much reading, I boosted the initial speed to 24 it/s, but it drops back to 1.5-1.9 it/s almost immediately; GPU utilization sits at around 30% at that point.
How did you manage to increase the iterations? I'm having the same issue on a 3080 12GB.
Yes, I would like to know too. I spent days trying to get my RTX 3080 fully used, but it stays stuck at 20% utilization max with 1.5-2.5 it/s. Very frustrating.
From what I recall of that session, when checking which device was being used it was always opting for the CPU, so I had to remove onnxruntime and install the GPU version. That sent me down a rabbit hole of other packages that needed to be replaced; I can't recall the exact order, it was quite a process. In the end it is still capped at 3 it/s max like everyone else, even if it starts at 20.
Just check out this repo : https://github.com/mike9251/simswap-inference-pytorch
Yeah, did that as well, same results unfortunately.
OK, I managed to make it work :)
I suggest you uninstall everything and start with a fresh, clean install of Anaconda.
Here is what I did to make it work (I am only using this repo: https://github.com/mike9251/simswap-inference-pytorch):
Install the latest graphics drivers
Install CUDA 11.6
Install cuDNN 8.6.0.163 (extract its folders into the CUDA folders)
Add these Windows environment variable paths:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\CUPTI
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\CUPTI\include
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\CUPTI\lib64
conda create -n simswapgpu python=3.9
conda activate simswapgpu
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1+cu116 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
pip install onnxruntime-gpu==1.11.1
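After these installs, it's worth confirming that the CUDA build of PyTorch actually landed before going further. A minimal sanity check, assuming the `torch==1.13.1+cu116` wheel above:

```python
# Verify the CUDA-enabled PyTorch wheel is installed and can see the GPU.
import torch

print(torch.__version__)          # should end in +cu116 with the wheel above
print(torch.cuda.is_available())  # False means a CPU-only wheel was picked up
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```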
Then replace these lines in:
Anaconda3\envs\simswap\Lib\site-packages\insightface\model_zoo\model_zoo.py

class ModelRouter:
    def __init__(self, onnx_file):
        self.onnx_file = onnx_file

    def get_model(self):
        session = onnxruntime.InferenceSession(self.onnx_file, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
        input_cfg = session.get_inputs()[0]
I am not sure what exactly is causing this. I went through some recently written threads, but there was no specific method given to verify that it is actually using the GPU.
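One rough way to verify GPU use from Python, independent of SimSwap itself (a sketch, not the repo's own tooling): allocate a tensor on the GPU and check that device memory is actually consumed.

```python
# Sanity-check that PyTorch computation really lands on the GPU.
import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x  # force a real kernel launch
    print(y.device)                           # cuda:0
    print(torch.cuda.memory_allocated() > 0)  # True: GPU memory is in use
else:
    print("CUDA not available; everything will run on the CPU")
```

Watching utilization in `nvidia-smi` while the script runs tells the same story from outside the process.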
on a 3090ti