C0untFloyd / roop-unleashed

Evolved Fork of roop with Web Server and lots of additions
GNU Affero General Public License v3.0

AMD GPU Vega 64 #70

Closed SoulCZ closed 1 year ago

SoulCZ commented 1 year ago

Hi, I would like to ask someone who has experience with this so I don't have to bother the project creator :).. Could someone here advise me how to get a Vega 64 (AMD) working? I've tried the tutorials but I still can't get it to work... Is there a video tutorial, or a well-described written one? Thank you in advance for the answer (used a translator)

C0untFloyd commented 1 year ago

I wouldn't know anyway, I don't own an AMD GPU 🤷

SoulCZ commented 1 year ago

Anyway, thank you for being active :) it's rare to see someone actually responding

phineas-pta commented 1 year ago

it depends on which OS; Windows is easier, Linux needs technical skills

SoulCZ commented 1 year ago

Hi, thanks for the reply .. I'm currently using windows :)

phineas-pta commented 1 year ago

then replace onnxruntime-gpu with onnxruntime-directml

add to launch command: --execution-provider dml --execution-threads 1
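Taken together, and assuming a plain pip-based install, the swap described above would look something like the following. This is a sketch of the suggested steps, not an official procedure; package names are the ones mentioned in this thread:

```shell
# remove the CUDA build of ONNX Runtime and install the DirectML one
pip uninstall -y onnxruntime onnxruntime-gpu
pip install onnxruntime-directml

# then launch with the DirectML execution provider
python run.py --execution-provider dml --execution-threads 1
```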

ghost commented 1 year ago

Hi, I was similarly trying to use an AMD card (6700 XT) on Windows but came across the issue below

(screenshot of the error)

I'm like 99.99% sure I've installed onnxruntime-directml and used the right launch flags, so I'm not sure where to go from here (I'm inexperienced with Python)

phineas-pta commented 1 year ago

it's not onnxruntime's fault

it's because the default torch (in requirements) uses the CUDA backend

1st remove it

pip uninstall torch torchvision

then reinstall

pip install torch torchvision

keep in mind that torch will then use the CPU, so gfpgan / codeformer will be very slow
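A quick way to tell whether the installed torch wheel is the CUDA build is to look at its version string: CUDA wheels carry a `+cuXXX` local-version tag (e.g. `2.1.2+cu118`), while CPU-only wheels have a plain version. A minimal sketch (the helper name is mine):

```python
def is_cuda_build(version: str) -> bool:
    """Return True if a torch version string is a CUDA wheel.

    CUDA wheels tag their version with '+cuXXX', e.g. '2.1.2+cu118';
    CPU-only wheels look like plain '2.2.2'.
    """
    return "+cu" in version

# usage, assuming torch is installed:
#   import torch
#   print(is_cuda_build(torch.__version__))
```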

ghost commented 1 year ago

That doesn't seem to have changed the error messages as far as I can tell, and a distribution error warning appears during the torchvision installation (I don't know what it means)

(screenshot of the error)

phineas-pta commented 1 year ago

something's off with your installation, I'm afraid 🤔

my advice is to remove anything Python-related and start again

this time edit requirements.txt: remove the 1st line and anything with +cu118
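The requirements edit above can be scripted. This sketch assumes, as the comment implies, that the first line to drop is a `--extra-index-url` pointing at the CUDA wheel index, and that the CUDA pins end in `+cu118`; the function name is mine:

```python
from pathlib import Path

def strip_cuda_pins(text: str) -> str:
    """Drop the CUDA extra-index-url line and any '+cu118' wheel pins
    from a requirements.txt, keeping every other line intact."""
    kept = []
    for line in text.splitlines():
        if line.strip().startswith("--extra-index-url"):
            continue  # line pointing at the CUDA wheel index
        if "+cu118" in line:
            continue  # CUDA-pinned wheels such as torch==2.0.1+cu118
        kept.append(line)
    return "\n".join(kept) + "\n"

# usage:
#   req = Path("requirements.txt")
#   req.write_text(strip_cuda_pins(req.read_text()))
```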

lysxelapsed commented 1 year ago

when re-installing a component like onnxruntime-gpu or torchvision, I recommend adding the --force-reinstall argument. if you don't, chances are that it will install the very same thing as before from the cache. so with phineas-pta's suggestion, that would be: `pip install --force-reinstall torch torchvision`

I still don't understand why it works that way with Python. In my opinion, it should at least ask whether you want to use the cached copy or download and install anew.

lysxelapsed commented 1 year ago

> That doesn't seem to have changed the error messages as far as I can tell, also a distribution error warning appears during the torchvision installation (I don't know what it means)
>
> (screenshot of the error)

did you install Python from the Windows Store? if so, try uninstalling that one and installing a release from python.org. just a guess, but I've now read more than once that installing Python from the Windows Store is not recommended.

Sidharthiam commented 1 year ago

> then replace onnxruntime-gpu with onnxruntime-directml
>
> add to launch command: --execution-provider dml --execution-threads 1

Hi, sorry for being a noob, I'm also having this issue. Where should I replace "onnxruntime-gpu with onnxruntime-directml"?

Rompope123 commented 1 year ago

> then replace onnxruntime-gpu with onnxruntime-directml; add to the launch command: --execution-provider dml --execution-threads 1
>
> Hi, sorry for being a noob, I'm also having this issue. Where should I replace "onnxruntime-gpu with onnxruntime-directml"?

Hello, how nice it is to see that there are people helping. I have the same question: I can't find a document telling where to add those commands, I hope you can tell us please. In my case I can run roop but it only uses the CPU; I installed onnxruntime-directml and it still does not use the GPU (RX 6700 XT)

brucecolino commented 10 months ago

hi guys, little noob here who wants to use DirectML for an AMD GPU instead of the CPU, because the CPU is too slow. Can somebody explain which file to modify, how, and what to try, in order, to make it work? thank you so much

phineas-pta commented 10 months ago

@brucecolino for peace of mind, use the CPU; AMD GPU support in AI is not easy for a beginner

brucecolino commented 10 months ago

thanks for the advice! it would be a nice thing to have a version with DirectML preinstalled, for people like me :)

idronbes commented 7 months ago

i am looking for a solution too. In the original roop I could use the flag --execution-provider dml, but with roop-unleashed we cannot use it; it writes "No CLI args supported - use Settings Tab instead". And in the settings tab there is only cuda, tensorrt and cpu. I guess we can add directml, but I don't know where to change the code to make it work like the original roop (with the dml flag). Does anybody know which file to edit to remove cuda and add directml? thank you

idronbes commented 7 months ago

i found the solution. you need to do this in a command prompt; type the commands: pip uninstall onnxruntime and pip install onnxruntime-directml==1.15.1

then you will have to edit this file (at the beginning; with the option set to "False", the dml option was there but was not loading): C:\Users\YOURUSERNAME\AppData\Local\Programs\Python\Python39\Lib\site-packages\gradio\components\dropdown.py

change allow_custom_value: bool = False to allow_custom_value: bool = True

After that you should be able to run roop-unleashed and set the provider in the settings tab to dml.

it just worked for me.

riccorohl commented 7 months ago

> i found the solution. you need to do this in a command prompt; type the commands: pip uninstall onnxruntime and pip install onnxruntime-directml==1.15.1
>
> then you will have to edit this file (at the beginning; with the option set to "False", the dml option was there but was not loading): C:\Users\YOURUSERNAME\AppData\Local\Programs\Python\Python39\Lib\site-packages\gradio\components\dropdown.py
>
> change allow_custom_value: bool = False to allow_custom_value: bool = True
>
> After that you should be able to run roop-unleashed and set the provider in the settings tab to dml.
>
> it just worked for me.

dml still didn't show up after following this

riccorohl commented 7 months ago

> then replace onnxruntime-gpu with onnxruntime-directml
>
> add to launch command: --execution-provider dml --execution-threads 1

Replace it where?

idronbes commented 6 months ago

on the original roop you launch it by entering the command
"python run.py --execution-provider dml --execution-threads 1", but I think on unleashed you cannot add options to the command

idronbes commented 6 months ago

oops yes lol. since I updated the app last week, I can't get the dml option back in the list. maybe try to enable the args on the command

python run.py --execution-provider dml --execution-threads 1

the original roop accepted these CLI args but not the unleashed version; maybe it can be changed but idk where

idronbes commented 6 months ago

i think i got it. you need to uninstall torch and reinstall as said above


> phineas-pta commented on Aug 15, 2023:
>
> it's not onnxruntime fault
>
> it's because default torch (in requirements) uses cuda backend
>
> 1st remove it
>
> pip uninstall torch torchvision
>
> then reinstall
>
> pip install torch torchvision
>
> keep in mind that torch will use cpu so gfpgan / codeformer is very slow


cu118 means CUDA, so I think the problem comes from there.

Successfully uninstalled torch-2.1.2+cu118

Successfully installed torch-2.2.2 torchvision-0.17.2

Now I have the dml option in the list, and the cpu option.

KillerBean1206 commented 6 months ago

actually i got it to work thanks very much

riccorohl commented 6 months ago

I managed to install the correct version of torch you mentioned, but it still doesn't show DML... and it also still says 2.1.2 even after I uninstalled it and re-installed 2.2.2

idronbes commented 6 months ago

Did you update roop? Maybe try to clone and reinstall from zero with "windows_run.bat", then re-follow the steps. At the end I ran the command pip install -r requirements.txt (in the roop directory). Maybe there is something missing to get the 2.2.2.