nomadkaraoke / python-audio-separator

Easy to use vocal separation from CLI or as a python package, using a variety of amazing pre-trained models (primarily from UVR)
MIT License

GPU not being used on Colab #43

Closed · ajayarora1235 closed this issue 6 months ago

ajayarora1235 commented 7 months ago

Seems like my separation pipeline is running in CPU mode on Colab, even after reinstalling torch: a 3-minute track takes 5 minutes to separate using Kim Vocal 2.

Steps used to install:

```
!pip install "audio-separator[gpu]==0.14.5"
!pip uninstall torch onnxruntime-gpu
!pip cache purge
!pip install --force-reinstall torch==2.1.0 torchvision torchaudio
!pip install --force-reinstall onnxruntime-gpu
```

And the info from the relevant run:

```
INFO:audio_separator.separator.separator:Separator version 0.14.5 instantiating with output_dir: None, output_format: FLAC
INFO:audio_separator.separator.separator:Operating System: Linux #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
INFO:audio_separator.separator.separator:System: Linux Node: 47eab71b5093 Release: 6.1.58+ Machine: x86_64 Proc: x86_64
INFO:audio_separator.separator.separator:Python Version: 3.10.12
INFO:audio_separator.separator.separator:FFmpeg installed: ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
DEBUG:audio_separator.separator.separator:Python package: onnxruntime-silicon not installed
DEBUG:audio_separator.separator.separator:Python package: onnxruntime not installed
INFO:audio_separator.separator.separator:ONNX Runtime GPU package installed with version: 1.17.0
INFO:audio_separator.separator.separator:CUDA is available in Torch, setting Torch device to CUDA
INFO:audio_separator.separator.separator:ONNXruntime has CUDAExecutionProvider available, enabling acceleration
INFO:audio_separator.separator.separator:Loading model Kim_Vocal_2.onnx...
```
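For reference, the separation call that produced that log is roughly the following. This is a sketch based on the log output and the README, so exact parameter and method names may differ between audio-separator versions, and `my_track.wav` is just a placeholder filename:

```python
from audio_separator.separator import Separator

# Sketch of the call pattern implied by the log above:
# FLAC output, Kim Vocal 2 (an MDX-Net ONNX model).
separator = Separator(output_format="FLAC")

# Download (if needed) and load the model, then separate the track.
separator.load_model("Kim_Vocal_2.onnx")
output_files = separator.separate("my_track.wav")  # placeholder input file

print(output_files)  # paths to the separated stems
```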

beveradb commented 7 months ago

Hmm, do you know what CUDA version it's using, and what version(s) are installed?

I believe Colab switched to CUDA 12 fairly recently (https://github.com/googlecolab/colabtools/issues/4214), which caused a lot of issues for folks who had been using things which only supported CUDA 11.8.

ONNX Runtime was unfortunately one of those things which wasn't working on CUDA 12 till recently, but the latest version (1.17.0) is supposed to add support ( https://github.com/microsoft/onnxruntime/issues/18850#issuecomment-1892977993 ).

However, when I tried to test it with CUDA 12 on RunPod (https://www.runpod.io), I found the CUDA 11.8 runtime still needed to be installed alongside the CUDA 12 one; for some reason the onnxruntime binaries still seem to link dynamic library versions from CUDA 11 🤷‍♂️. Only once I had both installed was inferencing able to use the GPU.

So, that would be my first guess - but I'm surprised you didn't see any errors! Are you able to test anything else which uses ONNX Runtime?
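If it helps, a quick way to check from a Colab cell which CUDA stack Torch and ONNX Runtime can actually see (plain torch / onnxruntime calls, nothing specific to audio-separator):

```python
import torch
import onnxruntime as ort

print("Torch version:", torch.__version__)
print("Torch built against CUDA:", torch.version.cuda)
print("CUDA available to Torch:", torch.cuda.is_available())

print("ONNX Runtime version:", ort.__version__)
print("Available providers:", ort.get_available_providers())
# Note: get_available_providers() only reports what the onnxruntime build
# supports; a missing CUDA library usually only surfaces as an error once a
# session is actually created with CUDAExecutionProvider.
```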

Oh, and in case it helps, the PTH models don't use ONNX Runtime so you might want to try one of those, e.g. "2_HP-UVR.pth". You can list all supported models by running `audio-separator --list_models` 😄 (see the sketch below)
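As a rough sketch (same assumed call pattern as the Python example above, so parameter names may vary between versions), switching to a PTH model sidesteps ONNX Runtime entirely:

```python
from audio_separator.separator import Separator

# 2_HP-UVR.pth is a VR-architecture model that runs via Torch rather than
# ONNX Runtime, so it avoids the onnxruntime-gpu/CUDA question entirely.
separator = Separator()
separator.load_model("2_HP-UVR.pth")
separator.separate("my_track.wav")  # placeholder input file
```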

beveradb commented 7 months ago

Just tested it myself on Colab and yep, that was the issue! Pretty sure you should have been seeing an error (highlighted in red and yellow) similar to the one below:

[Screenshot: Audio-Separator GPU Testing - Colaboratory, 2024-02-19 23:00:13, showing the ONNX Runtime CUDA error]

Basically, even though the Colab runtime now uses CUDA 12, which you can verify with `nvidia-smi`:

[Screenshot: `nvidia-smi` output showing CUDA version 12]

ONNX Runtime is compiled with references to CUDA 11.8 as well, so some of the CUDA 11.8 libraries also need to be installed for it to work.
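A quick way to check whether those CUDA 11 libraries are actually resolvable from Python; the specific .so names below are just the ones the CUDA 11.8 onnxruntime build commonly complains about, so treat them as examples rather than a definitive list:

```python
import ctypes

# Try to dlopen the CUDA 11 shared libraries that onnxruntime-gpu's
# CUDA 11.8 build typically links against. If any print "missing",
# that matches the error in your own run's output.
for lib in ("libcublasLt.so.11", "libcublas.so.11", "libcudnn.so.8"):
    try:
        ctypes.CDLL(lib)
        print(f"{lib}: found")
    except OSError:
        print(f"{lib}: missing")
```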

I fixed it by installing the CUDA 11.8 runtime with `apt update; apt install nvidia-cuda-toolkit`, and it then ran without errors and inferenced quickly:

[Screenshot: separation running on the GPU without errors]

I've updated the README with instructions addressing this specific issue: https://github.com/karaokenerds/python-audio-separator?tab=readme-ov-file#multiple-cuda-library-versions-may-be-needed

Hope this helps and resolves your issue! If not, feel free to ping me here or on WhatsApp and I'll do my best to get things working for you 😄

beveradb commented 6 months ago

Hey @ajayarora1235 did my comment above resolve this for you? 😄

beveradb commented 6 months ago

Received confirmation from Ajay that this is resolved, closing! 🎉