Closed Brodski closed 6 months ago
Please share relevant GPU, OS, and ROCm version information. You may not need the gfx1010
version with newer AMD GPUs.
According to the official PyTorch documentation, this command should work:
pip install torch==1.13.1+rocm5.2 torchvision==0.14.1+rocm5.2 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/rocm5.2
Please note that 1.13.1+rocm5.2 is outdated; I am only using it with ROCm 5.2 due to a regression in ROCm 5.3 and newer for gfx1010.
By the looks of it, from that official PyTorch doc, the pip install
only works on Linux. I'm on Windows and I have not set up a VM to use the host's GPU.
I instead tried to use ZLUDA via this command: .\zluda.exe -- C:\Users\...\insanely-fast\venv\Scripts\insanely-fast-whisper.exe --model openai/whisper-base --file-name myaudio.opus
It partially works, but another problem occurs which I believe is unrelated to this repo's code. This other problem is probably because ROCm and HIP support are not perfect yet: https://github.com/vosen/ZLUDA/issues/128 and https://github.com/vosen/ZLUDA/issues/158
Error message:
File "C:\Users\BrodskiTheGreat\Desktop\desktop\Code\scraper-dl-vids\insanely-fast\venv\lib\site-packages\torch\nn\modules\conv.py", line 306, in _conv_forward
return F.conv1d(input, weight, bias, self.stride,
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can probably close this one if you wish, but if you want more info from me, I can provide it.
Oh yes, PyTorch currently does not support ROCm on Windows. Sorry, I didn't see c:\
in your logs the first time. For Windows users, I would recommend torch-directml, which should be a drop-in replacement for torch, although I'm not sure whether the transformers library recognizes it or not.
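For what it's worth, here is a minimal sketch of how torch-directml is typically used as a device backend. This is an assumption about a reasonable setup, not something verified in this thread; the fallback to CPU is added so the snippet runs even where torch-directml (a Windows-only package) is not installed.

```python
# Sketch: select a DirectML device if torch-directml is installed,
# otherwise fall back to CPU. torch-directml exposes a torch.device
# via torch_directml.device(); tensors and modules are moved to it
# with the usual .to(device) calls.
import torch

try:
    import torch_directml  # pip install torch-directml (Windows only)
    device = torch_directml.device()
except ImportError:
    device = torch.device("cpu")  # fallback for non-Windows / missing package

x = torch.randn(2, 3, device=device)
layer = torch.nn.Linear(3, 4).to(device)
out = layer(x)
print(tuple(out.shape))  # (2, 4)
```

Whether insanely-fast-whisper (via transformers) will actually pick up this device is a separate question, as noted above; you may need to pass the device explicitly to the pipeline.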
When I run this command, I get an error message.
Command:
$ pip install -r requirements-gfx1010.txt --extra-index-url https://download.pytorch.org/whl/rocm5.2
Error:
I don't have an NVIDIA GPU and was hoping to play around with this AMD fix :/