Chaoses-Ib / ComfyScript

A Python frontend and library for ComfyUI
https://discord.gg/arqJbtEg7w
MIT License

Using CivitAICheckpointLoader #68

Closed by the-dream-machine 2 months ago

the-dream-machine commented 2 months ago

How do I import the CivitAICheckpointLoader? Do I need to install it separately?

Chaoses-Ib commented 2 months ago

It's included in the [default] extra of pip install -e ".[default]", so there's no need to install it separately.

It will be imported by from comfy_script.runtime.nodes import *, like all other nodes.
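
For example, something like this should work (a minimal sketch, assuming load() can connect to a ComfyUI installation; the Civitai URL is just an example):

from comfy_script.runtime import load, Workflow
load()
# the wildcard import exposes CivitAICheckpointLoader together with all other nodes
from comfy_script.runtime.nodes import *

with Workflow(wait=True):
    # downloads the checkpoint from the given Civitai URL and loads it
    model, clip, vae = CivitAICheckpointLoader('https://civitai.com/models/101055?modelVersionId=128078')
    positive = CLIPTextEncode('beautiful scenery nature glass bottle landscape, purple galaxy bottle', clip)
    negative = CLIPTextEncode('text, watermark', clip)
    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', positive, negative, EmptyLatentImage(512, 512, 1), 1)
    SaveImage(VAEDecode(latent, vae), 'ComfyUI')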

the-dream-machine commented 2 months ago

Sorry, I'm new to Python, and I'm trying to deploy ComfyScript inside a container to run as a serverless function. I can't use wildcard imports inside the serverless function, so I was importing all the nodes individually. Here is my code:

import modal
import os
import requests
import io
import subprocess

app = modal.App("pony-diffusion")

image = (
    modal.Image.debian_slim(python_version="3.12.5")
    .apt_install("git", "libglib2.0-0", "libsm6", "libxrender1", "libxext6", "ffmpeg", "libgl1")
    .pip_install("torch==2.4.0+cu121", "torchvision", extra_options="--index-url https://download.pytorch.org/whl/cu121")
    .pip_install("xformers==0.0.27.post2")
    .pip_install("git+https://github.com/hiddenswitch/ComfyUI.git", extra_options="--no-build-isolation")
    .run_commands("comfyui --create-directories")
    .pip_install("comfy-script[default]", extra_options="--upgrade")
)

@app.cls(gpu="T4", container_idle_timeout=120, image=image)
class Model:
    @modal.enter()
    def enter(self):
        print("โœ… Entering container...")
        from comfy_script.runtime import load
        load("comfyui")

    @modal.exit()
    def exit(self):
        print("๐Ÿงจ Exiting container...")

    @modal.method()
    def generate_image(self, prompt: str):
        print("🎨 Generating image...")

        # Cannot import * here
        from comfy_script.runtime import Workflow
        from comfy_script.runtime.nodes import CheckpointLoaderSimple, CLIPTextEncode, EmptyLatentImage, KSampler, VAEDecode, SaveImage, CivitAICheckpointLoader

        with Workflow(wait=True):
            model, clip, vae = CivitAICheckpointLoader('https://civitai.com/models/101055?modelVersionId=128078')            
            conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
            conditioning2 = CLIPTextEncode('text, watermark', clip)
            latent = EmptyLatentImage(512, 512, 1)
            latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
            image = VAEDecode(latent, vae)
            SaveImage(image, 'ComfyUI')

@app.local_entrypoint()
def main(prompt: str):
    Model().generate_image.remote(prompt)

I get the error:

ImportError: cannot import name 'CivitAICheckpointLoader' from 'comfy_script.runtime.nodes' (/usr/local/lib/python3.12/site-packages/comfy_script/runtime/nodes.py)

Chaoses-Ib commented 2 months ago

Fixed in the latest commit. Thanks for your report. Could you use pip_install("comfy-script[default] @ git+https://github.com/Chaoses-Ib/ComfyScript.git") to test if it works?
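
A quick way to confirm, after swapping that pip_install into your image definition, is to run something like this inside the container (a minimal sketch, using the embedded comfyui backend from your code above):

from comfy_script.runtime import load
load('comfyui')
# this import failed before the fix and should now succeed
from comfy_script.runtime.nodes import CivitAICheckpointLoader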

the-dream-machine commented 2 months ago

The CivitAICheckpointLoader node is now being imported, but I'm getting new errors:

Queue remaining: 0
Queue remaining: 0

Chaoses-Ib commented 2 months ago

The first one is because the comfyui package adds a node with an unusual/wrong config. The official ComfyUI doesn't have it, and the error doesn't affect using other nodes, so I'm probably not going to debug it until someone really needs that node.

The second one works on my side (with a different model for convenience):

Maybe it's caused by the same problem as #69? Using the official ComfyUI may give different results.

the-dream-machine commented 2 months ago

I was able to resolve this issue and #69 by using the official comfy-cli to install ComfyUI. Now everything works! 🎉

Here is my code in case anyone runs into the same issue while trying to run this in a container:

import subprocess
import modal

image = (
    modal.Image.debian_slim(python_version="3.12.5")
    .apt_install("git")
    .pip_install("comfy-cli==1.1.6")
    # use comfy-cli to install the ComfyUI repo and its dependencies
    .run_commands("comfy --skip-prompt install --nvidia")
    # download all models and custom nodes required in your workflow
    .run_commands(
        "comfy --skip-prompt model download --url https://civitai.com/api/download/models/290640 --relative-path models/checkpoints"
    )
    .run_commands(
        "cd /root/comfy/ComfyUI/custom_nodes && git clone https://github.com/Chaoses-Ib/ComfyScript.git",
        "cd /root/comfy/ComfyUI/custom_nodes/ComfyScript && python -m pip install -e '.[default]'",
    )
)

app = modal.App("pony_diffusion_2")

# Optional: serve the UI
@app.function(
    allow_concurrent_inputs=10,
    concurrency_limit=1,
    container_idle_timeout=30,
    timeout=1800,
    gpu="T4",
)
@modal.web_server(8000, startup_timeout=60)
def ui():
    _web_server = subprocess.Popen("comfy launch -- --listen 0.0.0.0 --port 8000", shell=True)

@app.cls(gpu="T4", container_idle_timeout=120, image=image)
class Model:
    @modal.build()
    def build(self):
        print("๐Ÿ› ๏ธ Building container...")

    @modal.enter()
    def enter(self):
        print("โœ… Entering container...")
        from comfy_script.runtime import load
        load()

    @modal.exit()
    def exit(self):
        print("๐Ÿงจ Exiting container...")

    @modal.method()
    def generate_image(self, prompt: str):
        print("🎨 Generating image...")
        from comfy_script.runtime import Workflow
        from comfy_script.runtime.nodes import CheckpointLoaderSimple, CLIPTextEncode, EmptyLatentImage, KSampler, VAEDecode, SaveImage, CivitAICheckpointLoader

        with Workflow(wait=True):
            model, clip, vae = CivitAICheckpointLoader('https://civitai.com/models/101055?modelVersionId=128078')
            # model, clip, vae = CheckpointLoaderSimple("ponyDiffusionV6XL_v6StartWithThisOne.safetensors")
            conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
            conditioning2 = CLIPTextEncode('text, watermark', clip)
            latent = EmptyLatentImage(512, 512, 1)
            latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
            image = VAEDecode(latent, vae)
            result = SaveImage(image, 'ComfyUI')
            print("result", result)

@app.local_entrypoint()
def main(prompt: str):
    Model().generate_image.remote(prompt)

Maybe you should consider adding the official comfy-cli as a third installation option?

Feel free to close these issues 👍

Chaoses-Ib commented 2 months ago

v0.5.1 is released. comfy-cli and the Modal code have been added to the README. Thank you!

the-dream-machine commented 2 months ago

Great work @Chaoses-Ib. Thanks for putting together this amazing library!