fofr / cog-comfyui

Run ComfyUI with an API
https://replicate.com/fofr/any-comfyui-workflow
MIT License
475 stars 107 forks

Request for weights and nodes #4

Closed jeanmychildartbook closed 7 months ago

jeanmychildartbook commented 8 months ago

Hi, thanks a lot for putting this up; it's really great work :)

I would like to request the following nodes and weights:

Node

Weights

Here's the workflow I'm trying to run

{
  "3": {
    "inputs": {
      "seed": 54429869184980,
      "steps": 30,
      "cfg": 5.5,
      "sampler_name": "dpmpp_2m",
      "scheduler": "karras",
      "denoise": 1,
      "model": [
        "41",
        0
      ],
      "positive": [
        "10",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "71",
        0
      ]
    },
    "class_type": "KSampler",
    "_meta": {
      "title": "KSampler"
    }
  },
  "4": {
    "inputs": {
      "ckpt_name": "dreamlabsoil_V2_v2.safetensors"
    },
    "class_type": "CheckpointLoaderSimple",
    "_meta": {
      "title": "Load Checkpoint"
    }
  },
  "6": {
    "inputs": {
      "text": "boy",
      "clip": [
        "13",
        1
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "Positive prompt"
    }
  },
  "7": {
    "inputs": {
      "text": "text, watermark, distorted",
      "clip": [
        "13",
        1
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "Negative prompt"
    }
  },
  "8": {
    "inputs": {
      "samples": [
        "3",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "10": {
    "inputs": {
      "strength": 1,
      "conditioning": [
        "6",
        0
      ],
      "control_net": [
        "11",
        0
      ],
      "image": [
        "37",
        0
      ]
    },
    "class_type": "ControlNetApply",
    "_meta": {
      "title": "Apply ControlNet"
    }
  },
  "11": {
    "inputs": {
      "control_net_name": "control_v11p_sd15_inpaint.pth"
    },
    "class_type": "ControlNetLoader",
    "_meta": {
      "title": "Load ControlNet Model"
    }
  },
  "13": {
    "inputs": {
      "lora_name": "ip-adapter-faceid-plus_sd15_lora.safetensors",
      "strength_model": 1,
      "strength_clip": 1,
      "model": [
        "18",
        0
      ],
      "clip": [
        "18",
        1
      ]
    },
    "class_type": "LoraLoader",
    "_meta": {
      "title": "Load LoRA"
    }
  },
  "18": {
    "inputs": {
      "lora_name": "COOLKIDS_MERGE_V2.5.safetensors",
      "strength_model": 1,
      "strength_clip": 1,
      "model": [
        "4",
        0
      ],
      "clip": [
        "4",
        1
      ]
    },
    "class_type": "LoraLoader",
    "_meta": {
      "title": "Load LoRA"
    }
  },
  "37": {
    "inputs": {
      "image": [
        "38",
        0
      ],
      "mask": [
        "111",
        0
      ]
    },
    "class_type": "InpaintPreprocessor",
    "_meta": {
      "title": "Inpaint Preprocessor"
    }
  },
  "38": {
    "inputs": {
      "image": "template.png",
      "upload": "image"
    },
    "class_type": "LoadImage",
    "_meta": {
      "title": "Input 1 (template image)"
    }
  },
  "41": {
    "inputs": {
      "weight": 1,
      "noise": 0,
      "weight_type": "original",
      "start_at": 0,
      "end_at": 1,
      "faceid_v2": false,
      "weight_v2": 1,
      "unfold_batch": false,
      "ipadapter": [
        "51",
        0
      ],
      "clip_vision": [
        "49",
        0
      ],
      "insightface": [
        "44",
        0
      ],
      "image": [
        "47",
        0
      ],
      "model": [
        "13",
        0
      ]
    },
    "class_type": "IPAdapterApplyFaceID",
    "_meta": {
      "title": "Apply IPAdapter FaceID"
    }
  },
  "44": {
    "inputs": {
      "provider": "CPU"
    },
    "class_type": "InsightFaceLoader",
    "_meta": {
      "title": "Load InsightFace"
    }
  },
  "47": {
    "inputs": {
      "image": "face.png",
      "upload": "image"
    },
    "class_type": "LoadImage",
    "_meta": {
      "title": "Input 3 (face image)"
    }
  },
  "49": {
    "inputs": {
      "clip_name": "IPAdapter_image_encoder_sd15.safetensors"
    },
    "class_type": "CLIPVisionLoader",
    "_meta": {
      "title": "Load CLIP Vision"
    }
  },
  "51": {
    "inputs": {
      "ipadapter_file": "ip-adapter-faceid-plus_sd15.bin"
    },
    "class_type": "IPAdapterModelLoader",
    "_meta": {
      "title": "Load IPAdapter Model"
    }
  },
  "68": {
    "inputs": {
      "pixels": [
        "38",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEEncode",
    "_meta": {
      "title": "VAE Encode"
    }
  },
  "71": {
    "inputs": {
      "samples": [
        "68",
        0
      ],
      "mask": [
        "111",
        0
      ]
    },
    "class_type": "SetLatentNoiseMask",
    "_meta": {
      "title": "Set Latent Noise Mask"
    }
  },
  "74": {
    "inputs": {
      "seed": 862101002343922,
      "steps": 30,
      "cfg": 5.5,
      "sampler_name": "dpmpp_2m",
      "scheduler": "karras",
      "denoise": 0.35000000000000003,
      "model": [
        "41",
        0
      ],
      "positive": [
        "93",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "123",
        0
      ]
    },
    "class_type": "KSampler",
    "_meta": {
      "title": "KSampler"
    }
  },
  "75": {
    "inputs": {
      "pixels": [
        "8",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEEncode",
    "_meta": {
      "title": "VAE Encode"
    }
  },
  "76": {
    "inputs": {
      "samples": [
        "74",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "77": {
    "inputs": {
      "images": [
        "76",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview output"
    }
  },
  "93": {
    "inputs": {
      "strength": 1,
      "conditioning": [
        "104",
        0
      ],
      "control_net": [
        "97",
        0
      ],
      "image": [
        "94",
        0
      ]
    },
    "class_type": "ControlNetApply",
    "_meta": {
      "title": "Apply ControlNet"
    }
  },
  "94": {
    "inputs": {
      "detect_hand": "disable",
      "detect_body": "disable",
      "detect_face": "enable",
      "resolution": 512,
      "image": [
        "8",
        0
      ]
    },
    "class_type": "OpenposePreprocessor",
    "_meta": {
      "title": "OpenPose Pose"
    }
  },
  "96": {
    "inputs": {
      "images": [
        "94",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview openpose"
    }
  },
  "97": {
    "inputs": {
      "control_net_name": "control_v11p_sd15_openpose.pth"
    },
    "class_type": "ControlNetLoader",
    "_meta": {
      "title": "Load ControlNet Model"
    }
  },
  "104": {
    "inputs": {
      "text": "face, white shirt",
      "clip": [
        "13",
        1
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "Positive prompt"
    }
  },
  "109": {
    "inputs": {
      "image": "mask.png",
      "channel": "red",
      "upload": "image"
    },
    "class_type": "LoadImageMask",
    "_meta": {
      "title": "Input 2 (template mask)"
    }
  },
  "110": {
    "inputs": {
      "iterations": 16,
      "masks": [
        "109",
        0
      ]
    },
    "class_type": "Mask Dilate Region",
    "_meta": {
      "title": "Mask Dilate Region"
    }
  },
  "111": {
    "inputs": {
      "radius": 8,
      "masks": [
        "110",
        0
      ]
    },
    "class_type": "Mask Gaussian Region",
    "_meta": {
      "title": "Mask Gaussian Region"
    }
  },
  "112": {
    "inputs": {
      "mask": [
        "111",
        0
      ]
    },
    "class_type": "MaskToImage",
    "_meta": {
      "title": "Convert Mask to Image"
    }
  },
  "113": {
    "inputs": {
      "blend_percentage": 0.5,
      "image_a": [
        "38",
        0
      ],
      "image_b": [
        "112",
        0
      ]
    },
    "class_type": "Image Blend",
    "_meta": {
      "title": "Image Blend"
    }
  },
  "120": {
    "inputs": {
      "images": [
        "113",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview image-mask blend"
    }
  },
  "122": {
    "inputs": {
      "images": [
        "8",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview output 1"
    }
  },
  "123": {
    "inputs": {
      "samples": [
        "75",
        0
      ],
      "mask": [
        "111",
        0
      ]
    },
    "class_type": "SetLatentNoiseMask",
    "_meta": {
      "title": "Set Latent Noise Mask"
    }
  },
  "127": {
    "inputs": {
      "blend_percentage": 0.5,
      "image_a": [
        "8",
        0
      ],
      "image_b": [
        "112",
        0
      ]
    },
    "class_type": "Image Blend",
    "_meta": {
      "title": "Image Blend"
    }
  },
  "128": {
    "inputs": {
      "images": [
        "127",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview image-mask blend"
    }
  },
  "132": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": [
        "76",
        0
      ]
    },
    "class_type": "SaveImage",
    "_meta": {
      "title": "Output save image"
    }
  }
}

Thanks a lot :)
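For anyone adapting a workflow like the one above, a quick sanity check before submitting it is to confirm that every `[source_node_id, output_index]` link points at a node that actually exists in the graph — missing nodes are the main failure mode discussed in this thread. A minimal sketch (the helper name and the trimmed sample graph are illustrative, not part of the repo):

```python
def find_broken_links(workflow: dict) -> list:
    """Return (node_id, input_name, missing_target) for links to absent nodes."""
    broken = []
    for node_id, node in workflow.items():
        for input_name, value in node.get("inputs", {}).items():
            # In API-format workflows, links are encoded as [source_id, output_index]
            if isinstance(value, list) and len(value) == 2 and isinstance(value[0], str):
                if value[0] not in workflow:
                    broken.append((node_id, input_name, value[0]))
    return broken

# Trimmed example: node "99" references a node "404" that isn't in the graph
sample = {
    "3": {"inputs": {"model": ["41", 0]}, "class_type": "KSampler"},
    "41": {"inputs": {}, "class_type": "IPAdapterApplyFaceID"},
    "99": {"inputs": {"image": ["404", 0]}, "class_type": "VAEDecode"},
}
print(find_broken_links(sample))  # [('99', 'image', '404')]
```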

fofr commented 8 months ago

Sure, I've started by adding the weights:

I tried adding the masquerade custom nodes, but noticed your flow also uses some other missing nodes.

I think https://github.com/WASasquatch/was-node-suite-comfyui ?

Can you please confirm the full list?

jeanmychildartbook commented 8 months ago

Oh yes, I missed that @fofr. Here's the full list of nodes:

Thanks a bunch :)

WesleyKapow commented 7 months ago

Just chiming in with a bump for https://github.com/Gourieff/comfyui-reactor-node

fofr commented 7 months ago

I've got comfyui-reactor-node working in the prototype, but it's a bit flaky: https://replicate.com/p/j23xab3brhdw7gdlobmk2h2xqu

It seems that the onnxruntime doesn't like being run in a thread.
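When a runtime misbehaves inside worker threads, the usual workaround is to isolate it in its own process and communicate over queues. A generic sketch of that pattern (placeholder arithmetic stands in for `session.run`; this is not the fix used in this repo):

```python
import multiprocessing as mp

def _worker(task_queue, result_queue):
    # In the real case, onnxruntime's InferenceSession would be created here,
    # so it never shares a thread with the server's event loop.
    for item in iter(task_queue.get, None):  # stop on the None sentinel
        result_queue.put(item * 2)  # placeholder for session.run(...)

def run_isolated(items):
    ctx = mp.get_context("fork")  # fork avoids re-importing the calling module
    tasks, results = ctx.Queue(), ctx.Queue()
    proc = ctx.Process(target=_worker, args=(tasks, results))
    proc.start()
    for item in items:
        tasks.put(item)
    out = [results.get() for _ in items]
    tasks.put(None)  # tell the worker to exit
    proc.join()
    return out

print(run_isolated([1, 2, 3]))  # [2, 4, 6]
```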

jeanmychildartbook commented 7 months ago

Thank you for the work, @fofr. Do you think it's working?

fofr commented 7 months ago

@jeanmychildartbook I think some of these custom nodes are too heavy to incorporate into the main model at the moment. However if you think you'll use this I can spin out a custom version for you with the nodes you need. Please let me know.

fofr commented 7 months ago

I've now added support for comfyui-reactor-node: https://replicate.com/fofr/any-comfyui-workflow

fofr commented 7 months ago

@jeanmychildartbook your full workflow should work now – can you try?

You'll need to use model.15.safetensors as the CLIPVision model name, rather than IPAdapter_image_encoder_sd15.safetensors (it's the same model just named differently)

Edit: I've just pushed a change so that IPAdapter_image_encoder_sd15.safetensors works too
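(With the edit above, the rename is no longer required, but for anyone on an older build, the workflow JSON can be patched before submission. The helper below is a hypothetical sketch, not part of the repo; node "49" is the `CLIPVisionLoader` from the workflow in this thread:)

```python
def rename_clip_vision(workflow: dict, new_name: str = "model.15.safetensors") -> dict:
    """Point every CLIPVisionLoader node at the host's name for the same encoder."""
    for node in workflow.values():
        if node.get("class_type") == "CLIPVisionLoader":
            node["inputs"]["clip_name"] = new_name
    return workflow

# Node "49" as it appears in the workflow above
workflow = {
    "49": {
        "inputs": {"clip_name": "IPAdapter_image_encoder_sd15.safetensors"},
        "class_type": "CLIPVisionLoader",
    }
}
rename_clip_vision(workflow)
print(workflow["49"]["inputs"]["clip_name"])  # model.15.safetensors
```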

jeanmychildartbook commented 7 months ago

@fofr Thank you so much for your support! I'll try it out :)

fofr commented 7 months ago

Closing this for now. @jeanmychildartbook feel free to reopen if you have issues.