Closed AndroYD84 closed 4 years ago
Thank you for checking out the video and code! The entire code is designed to use and run on GPU by default wherever necessary. Please check the following:
After this, it is just overlaying the generated video and audio, and it does not need GPU. Please give me some statistics on the inference speed on your end after making sure of the above.
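This thread doesn't show the repo's exact dependency stack, but assuming a PyTorch-based pipeline, a quick sanity check like the following can confirm the GPU is actually visible before collecting inference timings (`gpu_summary` is just an illustrative helper, not part of the repo):

```python
def gpu_summary():
    """Report whether PyTorch can see a CUDA device."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        # At least one CUDA device is usable; name the first one.
        return "CUDA OK: " + torch.cuda.get_device_name(0)
    return "CUDA not available; inference will run on CPU"

print(gpu_summary())
```

If this prints the CPU fallback message, no amount of configuration elsewhere will make the model run on the GPU.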
Thanks so much for the help! Now it works perfectly! For anyone hitting the same problem, I suggest installing Dlib from source. It turns out that if you install it from pip or conda and it fails to detect or compile CUDA for ANY reason (in my case I was using gcc 7, which was incompatible, and had to switch to gcc 6), it silently falls back to the CPU-only version without telling you anything (it MAY tell you if you have the "-v" option active, I didn't try). If you build from source, it will tell you immediately if there was any problem building the CUDA version. Before, on CPU, it took more than an hour; now the same input (17 seconds of audio, a 720x480 video file of 1 minute) takes just seconds, a little under a minute, on a GTX 1070 with 8GB VRAM. It's lightning fast!
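Building on the fix above: you can check whether an already-installed dlib build has CUDA enabled without reinstalling, because dlib exposes the compile-time flag `dlib.DLIB_USE_CUDA`. A small sketch (`report_dlib_cuda` is just an illustrative helper):

```python
def report_dlib_cuda():
    """Check dlib's compile-time CUDA flag and describe the result."""
    try:
        import dlib
    except ImportError:
        return "dlib not installed"
    if getattr(dlib, "DLIB_USE_CUDA", False):
        # The wheel or source build was compiled against CUDA.
        return "dlib was compiled with CUDA support"
    # This is the silent CPU fallback described above.
    return "dlib is CPU-only; rebuild from source against CUDA"

print(report_dlib_cuda())
```

Running this right after installation catches the silent CPU fallback before you waste an hour-long inference run on it.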
Inference is very slow for me. I can see that only the CPU is being used at the beginning (while the two progress bars are loading); only after that, during the second stage, do things get a lot faster as the GPU kicks in and generates the final results. Would it be possible to use the GPU the entire time (especially at the beginning), or can that first stage only be done on the CPU? I'm pretty sure I installed everything correctly. Thanks for sharing this amazing repo, I'm really impressed with the results!