Closed SoulCZ closed 1 year ago
I wouldn't know anyway, I don't own an AMD GPU 🤷
Anyway, thank you for being active :) it's rare to see someone actually responding
it depends on which OS; windows is easier, linux needs technical skills
Hi, thanks for the reply .. I'm currently using windows :)
then replace onnxruntime-gpu
with onnxruntime-directml
add to launch command: --execution-provider dml --execution-threads 1
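For context, a small sketch of what that swap changes at the code level: once onnxruntime-directml is installed, ONNX Runtime exposes DirectML under its standard provider name DmlExecutionProvider, and the caller picks from the available providers. The helper below is hypothetical (not roop's actual code); only the provider name strings are real ONNX Runtime identifiers:

```python
# Hypothetical helper: prefer DirectML when the onnxruntime-directml
# build is installed, otherwise fall back to CPU. The input list is
# what onnxruntime.get_available_providers() would return.
def pick_providers(available):
    if "DmlExecutionProvider" in available:
        return ["DmlExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

# With onnxruntime-directml installed:
print(pick_providers(["DmlExecutionProvider", "CPUExecutionProvider"]))
# With the plain CPU-only build:
print(pick_providers(["CPUExecutionProvider"]))
```

This is roughly what `--execution-provider dml` selects for you at launch.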
Hi I similarly was trying to use an amd card on windows (6700xt) but came across an issue as seen below
I'm like 99.99% sure I've installed onnxruntime-directml and used the right launch flags, so I'm not sure where to go (I'm inexperienced with python)
it's not onnxruntime fault
it's because default torch (in requirements) uses cuda backend
1st remove it
pip uninstall torch torchvision
then reinstall
pip install torch torchvision
keep in mind that torch will use cpu so gfpgan / codeformer is very slow
That doesn't seem to have changed the error messages as far as I can tell; also, a distribution error warning appears during the torchvision installation (I don't know what it means)
something's off with your installation I'm afraid 🤔
my advice is to remove anything python and start again
this time edit requirements: remove the 1st line and anything with +cu118
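As an illustration of that edit, here is a hypothetical sketch that filters a requirements list the way described above: drop the first line (the CUDA extra-index URL) and any entry pinned with +cu118. The sample entries are made up for the example, not roop's actual requirements.txt:

```python
# Sketch: strip the CUDA-specific lines from a requirements file,
# as suggested above. Drops the first line (extra index URL) and
# any package pinned to a +cu118 CUDA build.
def strip_cuda_pins(lines):
    kept = []
    for i, line in enumerate(lines):
        if i == 0:            # 1st line: the CUDA extra-index URL
            continue
        if "+cu118" in line:  # CUDA-built torch/torchvision pins
            continue
        kept.append(line)
    return kept

# Made-up example contents, only to show the effect:
reqs = [
    "--extra-index-url https://download.pytorch.org/whl/cu118",
    "torch==2.0.1+cu118",
    "torchvision==0.15.2+cu118",
    "onnxruntime-gpu==1.15.1",
]
print(strip_cuda_pins(reqs))  # ['onnxruntime-gpu==1.15.1']
```

After this edit, a plain `pip install -r requirements.txt` pulls the default CPU wheels instead of the CUDA ones.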
when re-installing a component like onnxruntime-gpu or torchvision, I recommend adding the --force-reinstall argument. If you don't, chances are it will install the very same thing as before from the cache. So with phineas-pta's suggestion, that would be:
pip install --force-reinstall torch torchvision
I still don't understand why it works that way with python. In my opinion, it should at least ask whether you want to use the one from the cache or download and install anew.
did you install python from the windows store? if so, try uninstalling that one and install a release from python.org. just a guess, but I read recommendations not to install python from the windows store more than once now.
then replace onnxruntime-gpu with onnxruntime-directml
add to launch command:
--execution-provider dml --execution-threads 1
Hi, sorry for being a noob, I'm also having this issue: where should I replace "onnxruntime-gpu with onnxruntime-directml"?
then replace onnxruntime-gpu with onnxruntime-directml, add to launch command:
--execution-provider dml --execution-threads 1
Hi, sorry for being a newbie, I also have this problem. Where should I replace "onnxruntime-gpu with onnxruntime-directml"?
Hello, how nice it is to see that there are people helping. I have the same question: I can't find a document explaining where to add those commands, I hope you can tell us please. In my case I can run Roop but it only uses the CPU; I installed onnxruntime-directml and it still does not use the GPU (RX 6700 XT)
hi guys, little noob here wants to use directml for my AMD gpu instead of cpu because it's too slow. Can somebody explain which file to modify, how, and what to try in order to make it work? thank you so much
@brucecolino for peace of mind, use cpu; amd gpu support in AI is not easy for a beginner
thanks for the advice! It could be a nice thing to have a directml-preinstalled version for people like me :)
i am looking for a solution too. In the original roop, i could use the command --execution-provider dml, but with roop-unleashed we cannot use this command; it writes "No CLI args supported - use Settings Tab instead". But in the settings tab, there is only cuda, tensorrt and cpu. I guess we can add directml but i don't know where to change the code to make it work like original roop (with the dml command). Does anybody know which file to edit to remove cuda and add directml? thank you
i found the solution. You need to do this in command prompt; type the commands: pip uninstall onnxruntime and pip install onnxruntime-directml==1.15.1
then you will have to edit this file (at the beginning, with the option "False", the dml option was there but was not loading): C:\Users\YOURUSERNAME\AppData\Local\Programs\Python\Python39\Lib\site-packages\gradio\components\dropdown.py
change allow_custom_value: bool = False to allow_custom_value: bool = True
After that you should be able to run roop-unleashed and set the provider in the settings tab to dml.
it just worked for me.
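If you prefer to script that one-line edit, here is a hedged sketch: the allow_custom_value lines are quoted from the steps above, while the function name and approach are made up for illustration:

```python
from pathlib import Path

def enable_custom_value(dropdown_py):
    # Flip gradio's Dropdown default allow_custom_value from False to True,
    # as described in the steps above. Returns True if the file was changed.
    path = Path(dropdown_py)
    text = path.read_text(encoding="utf-8")
    patched = text.replace(
        "allow_custom_value: bool = False",
        "allow_custom_value: bool = True",
    )
    path.write_text(patched, encoding="utf-8")
    return patched != text
```

Point it at the dropdown.py path quoted above (adjust for your Python version and install location).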
dml didn't show up after following this
then replace onnxruntime-gpu with onnxruntime-directml
add to launch command:
--execution-provider dml --execution-threads 1
Replace it where?
on original roop, when you launch it, by entering the command:
"python run.py --execution-provider dml --execution-threads 1"
but i think on unleashed you cannot add options to the command
oops yes lol. since i updated the app last week, i can't get the dml option back in the list. maybe try to enable the args on the command
python run.py --execution-provider dml --execution-threads 1
original roop accepted these CLI args but not unleashed version, maybe it can be changed but idk where
i think i got it. you need to uninstall torch and reinstall as said above
phineas-pta commented on Aug 15, 2023
it's not onnxruntime's fault
it's because default torch (in requirements) uses cuda backend
1st remove it
pip uninstall torch torchvision
then reinstall
pip install torch torchvision
keep in mind that torch will use cpu so gfpgan / codeformer is very slow
cu118 means it's for cuda, so i think the problem comes from there.
Successfully uninstalled torch-2.1.2+cu118
Successfully installed torch-2.2.2 torchvision-0.17.2
Now i have dml option in the list, and cpu option.
actually i got it to work thanks very much
I managed to install the correct version of torch you mentioned, but it still doesn't show DML... and it also says 2.1.2 still even after I uninstalled and re-installed 2.2.2
Did you update roop? Maybe try to clone and reinstall with "windows_run.bat" from zero, then refollow the steps. At the end I ran the command pip install -r requirements.txt (in the roop directory). Maybe there is something missing to get the 2.2.2
Hi, I would like to ask someone who has experience with this so I don't have to bother the project creator :) .. Could someone here advise me how to get a Vega 64 (AMD) working? I've tried the tutorials but I still can't get there... Is there any video tutorial, or a well-described written one? Thank you in advance for the answer (used a translator)