comfyanonymous / ComfyUI

The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
GNU General Public License v3.0

I hope comfyui will support sdxs soon #3147

Open wibur0620 opened 3 months ago

wibur0620 commented 3 months ago

https://idkiro.github.io/sdxs/

This model can achieve a speed of 100 FPS on a single GPU.

comfyanonymous commented 3 months ago

workflow_sdxs_0.9.json

The unet file is: https://huggingface.co/IDKiro/sdxs-512-0.9/tree/main/unet
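
If you prefer to script the download, here is a minimal sketch using huggingface_hub. The file name inside the unet folder is an assumption based on the usual diffusers layout, and the target path assumes a default ComfyUI install; check the repo's file listing first.

```python
import shutil
from huggingface_hub import hf_hub_download

# Download the SDXS-512-0.9 unet and place it in ComfyUI's unet folder under
# a recognizable name. "unet/diffusion_pytorch_model.safetensors" is the usual
# diffusers file name; verify it against the repo's file listing before running.
src = hf_hub_download(
    repo_id="IDKiro/sdxs-512-0.9",
    filename="unet/diffusion_pytorch_model.safetensors",
)
shutil.copy(src, "ComfyUI/models/unet/sdxs-512-0.9.safetensors")
```

After that the file should show up in the UNETLoader node's dropdown.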

eliganim commented 3 months ago

@comfyanonymous I get this error. Any idea what I'm doing wrong here?

Screenshot 2024-03-28 at 10 14 32

wibur0620 commented 3 months ago

> workflow_sdxs_0.9.json
>
> The unet file is: https://huggingface.co/IDKiro/sdxs-512-0.9/tree/main/unet

Thank you very much.

fogostudio commented 3 months ago

The workflow requires clip_h.safetensors. Any idea where to find it? Thanks!

comfyanonymous commented 3 months ago

> @comfyanonymous I get this error. Any idea what I'm doing wrong here?

Update ComfyUI.

> The workflow requires clip_h.safetensors. Any idea where to find it? Thanks!

https://huggingface.co/IDKiro/sdxs-512-0.9/blob/main/text_encoder/model.safetensors
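
If you are scripting the setup, a minimal sketch for fetching that text encoder follows. Renaming it to clip_h.safetensors is just a convention so it matches the shared workflow; any name you then pick in the CLIP loader node works.

```python
import shutil
from huggingface_hub import hf_hub_download

# Fetch the SDXS text encoder and copy it into ComfyUI's clip folder
# under the name the shared workflow expects.
src = hf_hub_download(
    repo_id="IDKiro/sdxs-512-0.9",
    filename="text_encoder/model.safetensors",
)
shutil.copy(src, "ComfyUI/models/clip/clip_h.safetensors")
```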

MaraScott commented 3 months ago

I get the following error with either 512 or 1024 width/height/resolution:

Error occurred when executing KSampler Adv. (Efficient):
'Downsample' object has no attribute 'emb_layers'

It happens when connecting this piece of the workflow:

image

with this workflow (the AnyBus node I use does not support the GetSet node yet):

tw-bbq-wf-sdsx

If I deactivate this node, everything is fine:

image


Blender image to load:

tw-bbq-depthmap


Is there a better place to ask for help with this?

unphased commented 3 months ago

With the clues from here I was able to get SDXS running in ComfyUI, but I noticed that good results only come from using SD1.5 VAEs, which is a bit strange to me.

Wraithnaut commented 3 months ago

> With the clues from here I was able to get SDXS running in ComfyUI, but I noticed that good results only come from using SD1.5 VAEs, which is a bit strange to me.

The SDXS model they released was trained at 512 resolution, the same resolution as SD1.5. There is a 1024 resolution version, but it hasn't been released yet.

Source: https://huggingface.co/IDKiro/sdxs-512-0.9#sdxs-512-09

> SDXS-512-0.9 is an old version of SDXS-512. For some reasons, we are only releasing this version for the time being, and will gradually release other versions.

edwardsdigital commented 3 months ago

> With the clues from here I was able to get SDXS running in ComfyUI, but I noticed that good results only come from using SD1.5 VAEs, which is a bit strange to me.

> The SDXS model they released was trained at 512 resolution, the same resolution as SD1.5. There is a 1024 resolution version, but it hasn't been released yet.
>
> Source: https://huggingface.co/IDKiro/sdxs-512-0.9#sdxs-512-09
>
> SDXS-512-0.9 is an old version of SDXS-512. For some reasons, we are only releasing this version for the time being, and will gradually release other versions.

I noticed the fairly poor quality of the images when I was playing with it, and for some reason it never crossed my mind to use a 1.5 VAE because of the resolution, even though I did have it set to 512 in my simple test workflow.

IDKiro commented 2 months ago

I am the author of SDXS. A new version of SDXS-512 has been uploaded. You may want to try it:

https://huggingface.co/IDKiro/sdxs-512-dreamshaper
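
For a quick sanity check outside ComfyUI, a minimal diffusers sketch (assuming the repo follows the standard StableDiffusionPipeline layout; check the model card for the recommended settings):

```python
import torch
from diffusers import StableDiffusionPipeline

# One-step generation with SDXS; the single step and zero guidance scale
# reflect how distilled one-step models are usually sampled.
pipe = StableDiffusionPipeline.from_pretrained(
    "IDKiro/sdxs-512-dreamshaper", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of a cat",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("sdxs_test.png")
```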

KaruroChori commented 2 months ago

I was able to run the UNET of 0.9, but the one from SDXS-512-Dreamshaper does not work.

Error occurred when executing UNETLoader:

ERROR: Could not detect model type of: /archive/shared/comfyui-krita/ComfyUI/models/unet/sdxs-0.9-deamshaper.safetensors

  File "/archive/shared/comfyui-krita/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/archive/shared/comfyui-krita/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/archive/shared/comfyui-krita/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/archive/shared/comfyui-krita/ComfyUI/nodes.py", line 814, in load_unet
    model = comfy.sd.load_unet(unet_path)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/archive/shared/comfyui-krita/ComfyUI/comfy/sd.py", line 600, in load_unet
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(unet_path))

The file downloaded is this one.

comfyanonymous commented 2 months ago

https://github.com/comfyanonymous/ComfyUI/commit/58812ab8ca601cc2dd9dbe64c1f3ffd4929fd0ca

That new model should work now. Just a note that this one needs clip_l instead of clip_h; you can download clip_l from https://huggingface.co/IDKiro/sdxs-512-dreamshaper/blob/main/text_encoder/model.safetensors. Other than that, the above workflow should work.
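
For anyone scripting it, a sketch along the same lines for the dreamshaper release. The unet file name is an assumption based on the usual diffusers layout, and the clip_l.safetensors name is just a convention to match the note above; check the repo listing before running.

```python
import shutil
from huggingface_hub import hf_hub_download

# Unet for SDXS-512-dreamshaper (file name assumed from the standard diffusers layout).
unet = hf_hub_download(
    repo_id="IDKiro/sdxs-512-dreamshaper",
    filename="unet/diffusion_pytorch_model.safetensors",
)
shutil.copy(unet, "ComfyUI/models/unet/sdxs-512-dreamshaper.safetensors")

# Text encoder, exposed as clip_l.safetensors for the CLIP loader node.
clip = hf_hub_download(
    repo_id="IDKiro/sdxs-512-dreamshaper",
    filename="text_encoder/model.safetensors",
)
shutil.copy(clip, "ComfyUI/models/clip/clip_l.safetensors")
```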

KaruroChori commented 2 months ago

Thanks!

KaruroChori commented 2 months ago

Just a quick word of feedback for others interested in this model: image diversity seems quite limited.

[example images]

IDKiro commented 2 months ago

Yes, I adjusted the training hyperparameters to improve the quality of image generation, but this also resulted in lower diversity of the generated images. Later this month we will release a version that allows multi-step sampling, which will effectively improve the diversity of the generated images. If open-sourcing the application is approved, we will release the finetune training code, and we then hope to provide SDXS variants with different tendencies (diversity, quality, style) through the community.

eliganim commented 2 months ago

@IDKiro thanks a lot for your generosity in making this open-source and available for everyone to use ❤️. I use this when I teach SD and ComfyUI, to quickly generate images and explain image generation concepts.

halr9000 commented 2 months ago

Here, I wrapped up all of this thread into something a bit easier to understand, with some other features just for fun: https://openart.ai/workflows/-/-/fUxFDJrPkuSshjFyTl7F

Mego13 commented 2 weeks ago

> @comfyanonymous I get this error. Any idea what I'm doing wrong here?
>
> Screenshot 2024-03-28 at 10 14 32

How did you fix this error?

Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 42, 64] to have 4 channels, but got 8 channels instead
