taabata / LCM_Inpaint_Outpaint_Comfy

ComfyUI custom nodes for inpainting/outpainting using the new latent consistency model (LCM)
244 stars 17 forks

LCM_outpaint_promptless.json issue #9

Open danzelus opened 10 months ago

danzelus commented 10 months ago

[image attached]

My ComfyUI (portable) path: `B:\!Comfyui\ComfyUI`
My controlnet path: `B:\!Comfyui\ComfyUI\models\controlnet`

win 11

taabata commented 10 months ago

The model path in the loader node is for the LCM model, not the controlnet model. The model is loaded automatically once it is placed in the models/diffusers folder (see the readme for more details), or you can paste the model's full path into the model_path field of the loader node.

On another note, this node is only for inpainting with a reference image, without controlnet. To inpaint with controlnet, use the 'LCMGenerate_inpaintv2' node instead. Keep in mind you need the controlnet model together with its config file in a folder inside the models/controlnet folder in your ComfyUI directory (the model should be a folder containing diffusion_pytorch_model.bin and config.json). Also, at the moment you need a T2I adapter connected to the node as well. I suggest using the provided workflows, since the nodes and their inputs are not organized at the moment due to me being lazy. Sorry for any inconvenience.
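To illustrate why a bare .safetensors file in models/diffusers is not picked up: a loader that autodiscovers diffusers-format models typically scans for subfolders containing a model_index.json. This is a hypothetical sketch of that behavior (the function name and exact logic are assumptions for illustration, not the node's actual code):

```python
from pathlib import Path

def find_diffusers_models(comfy_root: str) -> list[str]:
    """List subfolders of models/diffusers that look like diffusers-format models."""
    diffusers_dir = Path(comfy_root) / "models" / "diffusers"
    if not diffusers_dir.is_dir():
        return []
    return sorted(
        p.name for p in diffusers_dir.iterdir()
        if (p / "model_index.json").is_file()  # marker file of a diffusers pipeline folder
    )
```

Under this logic, a lone checkpoint file (or a folder without model_index.json) would never appear in the dropdown, which matches the symptom reported above.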

danzelus commented 10 months ago

[image attached] I tried updating diffusers (`pip install --upgrade diffusers[torch]`, `pip install --upgrade diffusers[flax]`, `pip install diffusers==0.23.0`), but it's still not working.

cardenluo commented 10 months ago

[image attached] Same issue. Can you give me some suggestions? Thanks.

taabata commented 10 months ago

> the same issue, can you give me some suggestions. thanks

I suggest you download all of the files and folders found here https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7/tree/main (everything except the .safetensors file) and place them inside a folder named 'LCM_Dreamshaper_v7' in the same directory where your safetensors file is now. Also download this file https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy/blob/main/preprocessor_config.json and place it in that folder as well.
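After downloading, a small script can sanity-check that the key pieces landed in the right place. The file list below is an assumption based on the typical layout of that Hugging Face repository plus the extra preprocessor_config.json mentioned above; adjust it to what you actually see on the model page:

```python
from pathlib import Path

# Files expected inside the LCM_Dreamshaper_v7 folder (illustrative, not exhaustive)
EXPECTED = [
    "model_index.json",
    "preprocessor_config.json",        # the extra file linked above
    "unet/config.json",
    "vae/config.json",
    "text_encoder/config.json",
    "tokenizer/tokenizer_config.json",
    "scheduler/scheduler_config.json",
]

def missing_files(model_dir: str) -> list[str]:
    """Return the expected files that are absent from model_dir."""
    root = Path(model_dir)
    return [rel for rel in EXPECTED if not (root / rel).exists()]
```

If `missing_files(...)` returns an empty list, the folder at least has the skeleton a diffusers pipeline loader looks for; anything it lists still needs to be downloaded.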

alexanderdutton commented 10 months ago

Same issue.

I think I got everything in the right place. I tried manually filling in the model_path with the diffusers folder, the controlnet folder, and every other possible thing I could think of, but couldn't get the 'controlnet_model' to display anything other than 'cn_canny' when it first loads, and then 'undefined' when I try to refresh/cycle the options.

Fully perplexed.

Quite looking forward to working with this tool -- thanks for any help you might provide. :)

[image attached]

KimMatt commented 10 months ago

After reading the readme, it's unclear to me what you mean by "put the diffusers version in". I see a lot of config.json files in the different folders on Hugging Face (vae, vae_decoder, etc.), but I'm unsure which of those you mean.

alexanderdutton commented 10 months ago

It wants the entire Hugging Face diffusers folder in that subdirectory: clone the full repo folder rather than grabbing a particular set of files.

But even after I did that, I ran into other errors, which as I recall were because the system has hard-coded filename expectations instead of using modular inputs.

taabata commented 10 months ago

You can use the newer inpaint nodes using LCM Lora. Instructions:

1. Clone the github repository into the custom_nodes folder in your ComfyUI directory.

2. Run the setup script for the CanvasTool.

3. Install any SD 1.5 based model in a format that runs with diffusers (like this one: https://huggingface.co/stablediffusionapi/deliberate-v2/tree/main) and place it in the models/diffusers folder in your ComfyUI directory.

4. Install the LCM LoRA from https://huggingface.co/latent-consistency/lcm-lora-sdv1-5/tree/main and place it in the models/loras folder in your ComfyUI directory.

5. Install the controlnet inpaint model in a format that runs with diffusers (from https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/tree/main) and place it in the models/controlnet folder in your ComfyUI directory.

6. Install the IP-Adapter models and image encoder and place them in models/controlnet/IPAdapter in your ComfyUI directory (you have to create the folder). This step is optional; you can use reference-only instead.

7. Open the workflow in ComfyUI (https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy/blob/main/inpaint_LCMLORA_promptless.json).

Also, for the 'Get image size' node, you need to git clone https://github.com/BadCafeCode/masquerade-nodes-comfyui into your custom_nodes folder. The image resize node comes from https://github.com/WASasquatch/was-node-suite-comfyui.

**Meaning of 'in a format that runs with diffusers': download the model's different components in their separate folders (vae, text_encoder, unet, etc.).
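In other words, a single checkpoint file is not enough: "diffusers format" means a folder tree with each pipeline component in its own subfolder. A rough check might look like this (the component names are the standard SD 1.5 pipeline layout, which is an assumption about the model you download, and the function is purely illustrative):

```python
from pathlib import Path

# Standard component subfolders of a diffusers-format SD 1.5 pipeline
COMPONENTS = {"unet", "vae", "text_encoder", "tokenizer", "scheduler"}

def is_diffusers_format(model_path: str) -> bool:
    """True if model_path is a folder with the per-component layout diffusers expects."""
    root = Path(model_path)
    if root.is_file():  # a bare .safetensors/.ckpt file is NOT diffusers format
        return False
    present = {p.name for p in root.iterdir() if p.is_dir()}
    return COMPONENTS <= present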