-
# The code is as follows:
```
CUDA_VISIBLE_DEVICES=1,2 accelerate launch train_flux_deepspeed_controlnet.py --config "train_configs/test_canny_controlnet.yaml"
```
# ERROR
```
The following values …
-
2024-02-17 00:54:29,657 - ControlNet - INFO - Preview Resolution = 512
Traceback (most recent call last):
File "C:\sd-webui-aki-v4.2\python\lib\site-packages\gradio\routes.py", line 488…
-
**anytest_v3** is the most popular model in Japan.
It can automatically recognize and apply various inputs, such as sketches, OpenPose, depth, and more.
In anime-style illustrations, it has hi…
-
First of all, thank you for building this node pack. This has literally transformed my entire workflow as a retoucher.
That being said, I’d love to tweak some adjustments inside Comfy without havin…
-
Received the following error. It looks like somebody updated something, and now the code doesn't work anymore. Any ideas on how to fix it? This seems to happen every three or four months.
WARNING[XFO…
-
How much VRAM is needed for ControlNet training?
-
SDXL-base works perfectly on Inf2 chips. Different SDXL pipelines (inpaint, img2img) also work perfectly. But as far as I have read and tried, there is no support for ControlNet or IPAdapter. Are these…
-
![Screenshot1](https://github.com/user-attachments/assets/41a0d22d-4a4c-4d20-954d-70cf83b58e2c)
Canny is not working at all. Please find the workflow attached; am I doing something wrong?
…
LiJT updated 3 months ago
-
**Is your feature request related to a problem? Please describe.**
The `StableDiffusionControlNetPipeline`/`StableDiffusionXLControlNetPipeline` allow you to specify a list of control nets (i.e. dept…
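The core of a multi-ControlNet setup is that each net produces its own residuals, which are merged (typically as a weighted sum, one conditioning scale per net) before being fed to the UNet. A minimal sketch of that merge step, with plain lists of floats standing in for the residual tensors and all names being illustrative rather than the diffusers API:

```python
# Sketch of how a multi-ControlNet wrapper typically merges outputs.
# Names and structure are illustrative, not the actual diffusers API;
# plain floats stand in for the down-block residual tensors.

def combine_controlnet_residuals(per_net_residuals, scales):
    """Weighted element-wise sum of residual lists from several ControlNets.

    per_net_residuals: one list of floats per ControlNet (e.g. depth, canny).
    scales: one conditioning scale per ControlNet.
    """
    if len(per_net_residuals) != len(scales):
        raise ValueError("need exactly one scale per ControlNet")
    combined = [0.0] * len(per_net_residuals[0])
    for residuals, scale in zip(per_net_residuals, scales):
        for i, r in enumerate(residuals):
            combined[i] += scale * r
    return combined


# E.g. a depth net and a canny net, with the canny net weighted 2x:
merged = combine_controlnet_residuals(
    [[1.0, 2.0],    # residuals from the depth ControlNet
     [0.5, 0.25]],  # residuals from the canny ControlNet
    [1.0, 2.0],     # per-net conditioning scales
)
# merged == [2.0, 2.5]
```

This is why the pipelines accept a list of ControlNets together with a matching list of conditioning scales: the residuals are additive, so each net's influence can be tuned independently.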
-
**Describe the bug**
Great work! I get a large speedup (30%~40%) running the standard text2imgPipeline, but when I run the img2imgControlnetPipeline the speedup is small (less than 10%) because control…