Closed fxbeaulieu closed 11 months ago
I got it to work by editing refiner.py and replacing every mention of "cuda" by "mps".
I am not good enough with Python to write the proper fix myself, but I guess you would need to verify which device to use at the start of the script and then use the result of that check when loading the model.
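The device check suggested above could be sketched roughly like this. This is a minimal illustration, not the extension's actual code; the function name `pick_device` and its boolean parameters are hypothetical stand-ins for `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the torch device string to use: prefer CUDA, then Apple's MPS, else CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# On an Apple Silicon Mac (no CUDA, MPS present) this picks "mps".
print(pick_device(False, True))
```

Running the check once at startup and passing the result to every `model.to(...)` call avoids hard-coding `"cuda"` throughout the script.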
I replaced `'cuda'` with `devices.device`, so it should now use the correct device, whatever it is.
Works A1! Thanks again man 💯
Do you know if this should be able to run on Macs with Apple Silicon chips or do I need CUDA ?
WebUI is being launched with the following settings:
export COMMANDLINE_ARGS="--opt-sub-quad-attention --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate --listen --port 64640 --api --enable-insecure-extension-access"
When I activate the Refiner extension in txt2img, I get the following output in the shell at some point during generation: