w-okada / voice-changer

リアルタイムボイスチェンジャー Realtime Voice Changer

[Knowledge] ROCm (AMD GPU) Support on Linux Guide #868

Open LuisArtDavila opened 11 months ago

LuisArtDavila commented 11 months ago

Description

Hello,

I managed to get my GPU to display within the web interface by executing the following commands after setting up my Conda environment. Your mileage may vary, as this was only tested on an RDNA 2/Navi 2 AMD GPU (specifically, my 6700 XT). I would love to know if this works for anyone else, so please let me know so that I may open a pull request.

This was tested on Arch Linux but might work on other distributions. If not, you can always try with distrobox.

Before running any commands, make sure that you are cd'd into the cloned repository and have the environment activated, i.e. via conda activate vcclient-dev:

$ pip install fairseq pyworld
$ export HSA_OVERRIDE_GFX_VERSION=10.3.0
$ pip install torch==2.0.1+rocm5.4.2 torchvision==0.15.2+rocm5.4.2 --index-url https://download.pytorch.org/whl/rocm5.4.2
$ cd server

If you are running a 7000 series GPU, the last pip install command will look like this instead:

pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm5.6

and if you are on the older Navi (5000 series) cards, it will be this:

pip install torch==1.13.1+rocm5.2 torchvision==0.14.1+rocm5.2 --index-url https://download.pytorch.org/whl/rocm5.2

Make sure to only run one of the pip install commands - the one for your particular GPU. Running it will uninstall the previous version of torch (and some other modules) and replace them with the ROCm builds.
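Whichever wheel you install, a quick way to confirm the ROCm build actually sees your card is a short check inside the activated environment (a sketch; `gpu_ok` is just an illustrative name - ROCm builds expose the GPU through the regular torch.cuda API):

```python
# Post-install sanity check: ROCm wheels report the GPU through the
# standard torch.cuda API, so this should print your card if HIP works.
try:
    import torch
    gpu_ok = torch.cuda.is_available()
    print("torch", torch.__version__, "| GPU visible:", gpu_ok)
    if gpu_ok:
        print(torch.cuda.get_device_name(0))
except ImportError:
    gpu_ok = False
    print("torch is not installed in this environment")
```

If this prints False, re-check the HSA_OVERRIDE_GFX_VERSION export above before digging further.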

Then finally, run as normal:

python3 MMVCServerSIO.py -p 18888 --https true \
    --content_vec_500 pretrain/checkpoint_best_legacy_500.pt  \
    --content_vec_500_onnx pretrain/content_vec_500.onnx \
    --content_vec_500_onnx_on true \
    --hubert_base pretrain/hubert_base.pt \
    --hubert_base_jp pretrain/rinna_hubert_base_jp.pt \
    --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt \
    --nsf_hifigan pretrain/nsf_hifigan/model \
    --crepe_onnx_full pretrain/crepe_onnx_full.onnx \
    --crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx \
    --rmvpe pretrain/rmvpe.pt \
    --model_dir model_dir \
    --samples samples.json

And if you want to pipe your audio through a virtual audio device with PipeWire, you can create a "virtual audio cable" with the following command:

pw-loopback \
  --capture-props='media.class=Audio/Sink node.name=al_speaker node.description="Audiolink Speaker"' \
  --playback-props='media.class=Audio/Source node.name=al_mic node.description="Audiolink Mic"' \
  &

This will create "Audiolink Speaker" and "Audiolink Mic" devices. You will pipe the voice-changer audio through the speaker and set the microphone as the input device in your application, e.g. Discord.

Credits

stable-diffusion-webui, for the ROCm pip install commands
audiolink, for the virtual audio device setup

LuisArtDavila commented 11 months ago

I forgot to mention, but make sure that you run:

$ export HSA_OVERRIDE_GFX_VERSION=10.3.0

Before starting the server, every time, or it will crash and become unresponsive.
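If remembering the export is a hassle, one option (a sketch, not part of the original guide) is to set the variable at the top of your own launcher script before torch is imported, since the HIP runtime reads it when the libraries initialize:

```python
import os

# Hypothetical launcher wrapper: set the override before torch (and thus the
# HIP runtime) is loaded. setdefault keeps any value already exported in the shell.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

# Only import torch / start the server after the variable is in place, e.g.:
# import MMVCServerSIO
```

Alternatively, conda can persist the variable per-environment with `conda env config vars set HSA_OVERRIDE_GFX_VERSION=10.3.0`, so it is applied on every `conda activate`.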

w-okada commented 11 months ago

Great work! There were others who also wanted to use ROCm, so I believe this information is very beneficial. I would appreciate it if you could make a pull request.

TheTrustedComputer commented 11 months ago

I can confirm this setup works with my 5500 XT on Linux; however, this card uses an older GFX version (10.1.x), which this build of PyTorch apparently doesn't like.

"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"

LuisArtDavila commented 11 months ago

I can confirm this setup works with my 5500 XT on Linux; however, this card uses an older GFX version (10.1.x), which this build of PyTorch apparently doesn't like.

"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"

Hmm, I'm a little confused about what you mean by this. It works if you run all the commands (e.g. the export and pip install for your 5000 series card, meaning you install torch 1.13.1) but normally it wouldn't?

TheTrustedComputer commented 10 months ago

I can confirm this setup works with my 5500 XT on Linux; however, this card uses an older GFX version (10.1.x), which this build of PyTorch apparently doesn't like. "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"

Hmm, I'm a little confused about what you mean by this. It works if you run all the commands (e.g. the export and pip install for your 5000 series card, meaning you install torch 1.13.1) but normally it wouldn't?

What I mean is that by modifying the environment variable HSA_OVERRIDE_GFX_VERSION to 10.1.0 or 10.1.2, I get this error message when starting the voice changer. Changing it to 10.2.0 removes the error, but I don't see any of my cards listed. 10.3.0 works, and the output from radeontop shows my card is being utilized (I have a dual GPU setup btw). I hope this clears up any confusion you have.

Furthermore, when activating the voice changer from the terminal and starting it for the first time from the web browser, not just simply stopping and restarting it, I get this warning:

MIOpen(HIP): Warning [SQLiteBase] Missing system database file: gfx1030_11.kdb Performance may degrade. Please follow instructions to install: https://github.com/ROCmSoftwarePlatform/MIOpen#installing-miopen-kernels-package

However, my package manager (pacman) says it appears to be installed system-wide. I don't know how to remove the warning, as the instructions target Ubuntu and Ubuntu-based distributions. Arch does have this package, and installing it doesn't remove the warning.

extra/miopen-hip 5.6.1-1 [installed]
    AMD's Machine Intelligence Library (HIP backend)
GatienDoesStuff commented 10 months ago

I've been in touch with TheTrustedComputer, and I might have been misleading due to my lack of prior research. I did some more digging though, and here's the deal:

The issue with AMD's compute stack (which might just be a packaging issue) is that binaries tend to be built for only some cards, and a given GPU can only be used if both PyTorch and the local ROCm libraries were built for it.

As an example, whatever Arch ships for rocBLAS (one of the ROCm libraries) doesn't have that many targets, meaning that for most cards you have to override to the closest target the library was built for. I suppose this is the case for more packages too.

On my setup, with a gfx1035 card, I can't run PyTorch, as neither my local installation of the ROCm libraries nor PyTorch itself was built for it. They were built for gfx1030 though, and since my card is close enough to it, HSA_OVERRIDE_GFX_VERSION=10.3.0 just works.

This is why the override is required: most installations don't ship "fat" binaries that support all targets, unlike CUDA, which has a different mechanism that makes supporting all of its targets easier.

Finding the right override can be a pain though; I'm not sure how to document it well.

Edit: It's weird that the "Unable to find code object" errors only show up in some cases while others just segfault, but when AMD_LOG_LEVEL=1 is set, the logging tells you what the issue is and also gives you the targets the software you are running was built for.

EDIT: It seems like the torch ROCm package is self-sufficient, and the host libraries don't have much to do with it.
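The overrides reported so far in this thread can be summarized in a small lookup; the dict and helper below are illustrative only, compiled from what commenters said worked:

```python
# Illustrative summary of overrides reported in this thread: the override points
# HSA at the closest gfx target the shipped binaries were actually built for.
REPORTED_OVERRIDES = {
    "gfx1031": "10.3.0",  # RX 6700 XT (RDNA 2)
    "gfx1012": "10.3.0",  # RX 5500 XT: 10.1.x builds were missing, 10.3.0 worked
    "gfx1035": "10.3.0",  # close enough to gfx1030 that the override works
}

def suggested_override(gfx_target: str):
    """Return an HSA_OVERRIDE_GFX_VERSION reported to work, or None if unknown."""
    return REPORTED_OVERRIDES.get(gfx_target)
```

For targets not listed here, AMD_LOG_LEVEL=1 (as noted above) is the practical way to discover which gfx targets your binaries were built for.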

Xanderman27 commented 10 months ago

Is this process similar to windows? I would love to get ROCm working on my 7800xt. Would a translation layer be necessary?

GatienDoesStuff commented 10 months ago

Is this process similar to windows? I would love to get ROCm working on my 7800xt. Would a translation layer be necessary?

You need to wait for a PyTorch ROCm backend to land for Windows.

Most of the ROCm libraries are already ported, but a few are still missing (MIOpen, I think) before PyTorch can work there.

ProFFs commented 10 months ago

Where can I find MMVCServerSIO.py?

LuisArtDavila commented 10 months ago

Is this process similar to windows? I would love to get ROCm working on my 7800xt. Would a translation layer be necessary?

You might be able to use WSL to get this to work but I have not tried it myself. Let me know how it goes if you give it a shot - I might be able to help if you run into any issues.

Where can I find MMVCServerSIO.py?

It is located within the "server" folder.

ProFFs commented 10 months ago

Is this process similar to windows? I would love to get ROCm working on my 7800xt. Would a translation layer be necessary?

You might be able to use WSL to get this to work but I have not tried it myself. Let me know how it goes if you give it a shot - I might be able to help if you run into any issues.

Where can I find MMVCServerSIO.py?

It is located within the "server" folder.

Is this folder also present in the Windows version?

ALEX5402 commented 10 months ago

Can you tell me what version I should use for a Vega 8 GPU?

Xanderman27 commented 10 months ago

https://www.phoronix.com/news/RX-7900-XTX-ROCm-PyTorch

EmiliaTheGoddess commented 9 months ago

For Polaris users (RX 580, RX 590, etc.) on Arch Linux, the HIP binaries provided by the official repositories don't work. Here's a small guide for users of older cards:

Note: onnxruntime may give errors like Failed to load library libonnxruntime_providers_cuda.so. Don't worry about that; it still works. Please correct me if I have any mistakes.

YourSandwich commented 9 months ago

Thank you, this worked, although I had to install some Python modules manually since the current requirements.txt file will download the NVIDIA packages.

Also, on Arch Linux I compiled Python 3.10, since 3.11.5 is incompatible with onnx. I am using an RX 7900 XT.

YourSandwich commented 9 months ago

I can't record the mic successfully because of this issue: NotSupportedError: AudioContext.createMediaStreamSource: Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported.

YourSandwich commented 9 months ago

I have an input sample rate of 48k and the models are 40k; am I missing some modules?

YourSandwich commented 9 months ago

It was a Firefox issue.

ChenXingLing commented 1 week ago

It was a Firefox issue.

Thank you god!!!!!! I succeeded in Chrome!!

kerriganx commented 21 hours ago

Radeon RX 6900 XT

    Booting PHASE :__main__
    PYTHON:3.10.14 (main, Aug 18 2024, 03:43:04) [GCC 14.2.1 20240805]
    Activating the Voice Changer.
[Voice Changer] download sample catalog. samples_0004_t.json
[Voice Changer] download sample catalog. samples_0004_o.json
[Voice Changer] download sample catalog. samples_0004_d.json
[Voice Changer] model_dir is already exists. skip download samples.
    Internal_Port:18888
    protocol: HTTP
    -- ---- -- 
    Please open the following URL in your browser.
    http://<IP>:<PORT>/
    In many cases, it will launch when you access any of the following URLs.
    http://localhost:18888/
    Booting PHASE :__mp_main__
    The server process is starting up.
    Booting PHASE :MMVCServerSIO
[Voice Changer] VoiceChangerManager initializing...
[Voice Changer] model slot is changed -1 -> 11
................RVC
Process SpawnProcess-1:1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/uvicorn/server.py", line 59, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/uvicorn/server.py", line 66, in serve
    config.load()
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/uvicorn/config.py", line 471, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/user/Downloads/voicechanger/voice-changer/server/MMVCServerSIO.py", line 142, in <module>
    voiceChangerManager = VoiceChangerManager.get_instance(voiceChangerParams)
  File "/home/user/Downloads/voicechanger/voice-changer/server/voice_changer/VoiceChangerManager.py", line 131, in get_instance
    cls._instance = cls(params)
  File "/home/user/Downloads/voicechanger/voice-changer/server/voice_changer/VoiceChangerManager.py", line 94, in __init__
    self.update_settings("modelSlotIndex", self.stored_setting["modelSlotIndex"])
  File "/home/user/Downloads/voicechanger/voice-changer/server/voice_changer/VoiceChangerManager.py", line 348, in update_settings
    self.generateVoiceChanger(newVal)
  File "/home/user/Downloads/voicechanger/voice-changer/server/voice_changer/VoiceChangerManager.py", line 256, in generateVoiceChanger
    from voice_changer.RVC.RVCr2 import RVCr2
  File "/home/user/Downloads/voicechanger/voice-changer/server/voice_changer/RVC/RVCr2.py", line 21, in <module>
    from voice_changer.RVC.pitchExtractor.PitchExtractorManager import PitchExtractorManager
  File "/home/user/Downloads/voicechanger/voice-changer/server/voice_changer/RVC/pitchExtractor/PitchExtractorManager.py", line 11, in <module>
    from voice_changer.RVC.pitchExtractor.FcpePitchExtractor import FcpePitchExtractor
  File "/home/user/Downloads/voicechanger/voice-changer/server/voice_changer/RVC/pitchExtractor/FcpePitchExtractor.py", line 5, in <module>
    import torchfcpe
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/torchfcpe/__init__.py", line 1, in <module>
    from .tools import (
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/torchfcpe/tools.py", line 2, in <module>
    from .mel_extractor import Wav2Mel
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/torchfcpe/mel_extractor.py", line 5, in <module>
    from torchaudio.transforms import Resample
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/torchaudio/__init__.py", line 1, in <module>
    from torchaudio import (  # noqa: F401
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/torchaudio/_extension/__init__.py", line 43, in <module>
    _load_lib("libtorchaudio")
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/torchaudio/_extension/utils.py", line 61, in _load_lib
    torch.ops.load_library(path)
  File "/home/user/Downloads/voicechanger/myenv/lib/python3.10/site-packages/torch/_ops.py", line 643, in load_library
    ctypes.CDLL(path)
  File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libtorch_cuda.so: cannot open shared object file: No such file or directory
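The `libtorch_cuda.so` error above usually means a CUDA build of torchaudio (or torch) ended up in the environment next to, or instead of, the ROCm build - e.g. pulled in by requirements.txt. A quick way to check which wheel variants are installed (the helper name is made up for illustration; it uses only the standard library):

```python
from importlib.metadata import PackageNotFoundError, version

def wheel_versions(pkgs=("torch", "torchaudio", "torchvision")):
    """Return each package's installed version string, or None if absent.
    Wheels from the PyTorch index typically carry a local version suffix:
    "+rocm..." for ROCm builds, "+cu..." for CUDA builds."""
    out = {}
    for pkg in pkgs:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            out[pkg] = None
    return out

print(wheel_versions())
```

If torch shows a "+rocm" suffix but torchaudio shows "+cu" (or no suffix), reinstalling torchaudio from the matching ROCm index URL should resolve the mismatch.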
Kuuko-fokkusugaru commented 19 hours ago

@kerriganx please open a new issue instead and add all the details requested in the form.