raisindetre opened this issue 1 year ago
@raisindetre this seems to be a bug in the webui; can you open an issue at https://github.com/Mikubill/sd-webui-controlnet
This happens because the tensor is loaded onto CUDA by default, and CUDA isn't available on a Mac. To fix it, open /stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/clipvision/__init__.py in a text editor or IDE (such as VSCode or PyCharm).

1. Navigate to line 81 and locate the line: clip_vision_h_uc = torch.load(clip_vision_h_uc)['uc']
2. Change it to: clip_vision_h_uc = torch.load(clip_vision_h_uc, map_location=torch.device('cpu'))['uc']
3. Save your changes and exit the editor.
4. Run the webui again.

This should prevent the CUDA-related errors, although you may still run into other errors when generating images even after this change.
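For reference, after the edit the patched line should look roughly like this (a minimal sketch; only the map_location argument is new, everything around it stays as it was):

```python
import torch

# Load the tensor onto the CPU instead of the default (CUDA) device,
# so the load also works on machines without an NVIDIA GPU.
clip_vision_h_uc = torch.load(clip_vision_h_uc, map_location=torch.device('cpu'))['uc']
```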
On Apple Silicon, changing cpu to mps also seems to work.
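If you'd rather not hard-code the device, a more portable variant (my own sketch, not code from the extension) picks MPS when it's available and falls back to CPU otherwise:

```python
import torch

# Use Apple's Metal backend when this torch build supports it, otherwise stay on the CPU.
device = torch.device('mps') if torch.backends.mps.is_available() else torch.device('cpu')
clip_vision_h_uc = torch.load(clip_vision_h_uc, map_location=device)['uc']
```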
Still doesn't work after the map_location change. TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
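That TypeError comes from the MPS backend itself, which has no float64 support; a float64 tensor has to be cast to float32 before it can be moved there. A standalone illustration (not code from the extension):

```python
import torch

x = torch.rand(4, dtype=torch.float64)   # float64 tensor on the CPU
# x.to('mps')                            # raises the TypeError quoted above
y = x.to(torch.float32).to('mps')        # casting to float32 first works
```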
Have you set "--no-half" in your start-up flags? Mine works with that set.
THANK YOU! Works here after this fix! (RX 6650 XT)
I tried --no-half today. It works! Thx ;)
The line change to clip_vision_h_uc = torch.load(clip_vision_h_uc, map_location=torch.device('cpu'))['uc'] together with --no-half fixed the problem for me on a Mac Studio M2 Ultra. All IP-Adapters functional!
Thanks!
Setting --no-half together with the line change does fix the issue. HOWEVER, performance drops by about 25% with --no-half set (1:50 -> 2:26 for the same image, comparing the flag set vs. not set, ControlNet not enabled).
Using the SD1.5 IP-Adapter models in a v1.6.0 Automatic1111 environment (python: 3.10.13, torch: 2.0.1) with the latest ControlNet on Apple ARM architecture generates a random image and produces the console runtime error below. COMMANDLINE_ARGS="--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate".
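(Not part of the original report, but for anyone comparing environments: these standard PyTorch calls show which backends a given install actually exposes.)

```python
import torch

print(torch.__version__)                   # e.g. 2.0.1
print(torch.cuda.is_available())           # False on Apple Silicon
print(torch.backends.mps.is_built())       # True if torch was compiled with MPS support
print(torch.backends.mps.is_available())   # True if the Metal backend is usable right now
```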