Closed NicholasKao1029 closed 10 months ago
Wow, I didn't realize there was already that project...
It's possible. But there is one big problem: if we execute the nodes directly, then we can't reuse the previous results from the cache, which may slow down the workflow significantly.
I've been thinking about this idea for a while, and it does have some potential:
Optimizing the cache to run workflows faster.
When executing the nodes directly, we can't utilize ComfyUI's cache system. But if one carefully maintains the lifetime of variables, it's possible to run workflows faster than ComfyUI, since ComfyUI's cache system is a naive single-slot cache.
Doing ML research.
Making custom node development easier.
Reusing custom nodes in other projects.
Besides research projects and commercial products, perhaps we can even integrate ComfyUI into sd-webui. This way, a feature can be implemented as a node once and then be used in both ComfyUI and sd-webui.
I plan to implement this as an optional execution mode. The interface will be kept as compatible as possible, but some differences may have to be introduced.
Excited for this! I think there is a lot of potential here.
It basically works now. The package layout may still change, though; I'm going to publish ComfyScript as a pip package with v0.3. Here are the docs:
In virtual mode, calling a node does not execute it. Instead, the entire workflow only gets executed when it is sent to ComfyUI's server, by generating workflow JSON from the workflow (wf.api_format_json()).
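The deferred-execution idea behind virtual mode can be illustrated with a small, self-contained sketch (plain Python, no ComfyUI required; the `Node` class and `api_format_json` helper here are hypothetical stand-ins, not ComfyScript's actual implementation): calling a "node" only records it in a graph, and the graph is serialized to API-format JSON at the end.

```python
import json

class Node:
    """Records a call in a workflow graph instead of executing it (illustrative only)."""
    _counter = 0

    def __init__(self, class_type, graph, **inputs):
        Node._counter += 1
        self.id = str(Node._counter)
        graph[self.id] = {'class_type': class_type, 'inputs': inputs}

def api_format_json(graph):
    # Serialize the recorded graph, similar in spirit to wf.api_format_json()
    return json.dumps(graph, indent=2)

graph = {}
loader = Node('CheckpointLoaderSimple', graph, ckpt_name='v1-5-pruned-emaonly.ckpt')
latent = Node('EmptyLatentImage', graph, width=512, height=512, batch_size=1)
print(api_format_json(graph))
```

Nothing runs until the serialized graph is sent to the server; that is also why virtual mode can reuse ComfyUI's server-side cache.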
In real mode, calling a node will execute it directly:
print(CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt'))
# (<comfy.model_patcher.ModelPatcher object at 0x000002198721ECB0>, <comfy.sd.CLIP object at 0x000002198721C250>, <comfy.sd.VAE object at 0x000002183B128EB0>)
Real mode is thus more flexible and powerful than virtual mode. It can be used to:
Do ML research.
Reuse custom nodes in other projects.
Integrate ComfyUI into other projects, besides research projects and commercial products, it is even possible to integrate ComfyUI into sd-webui. This way, a feature can be implemented as a node once and then be used in both ComfyUI and sd-webui.
Make custom node development easier.
Optimize caching to run workflows faster.
Because real mode executes the nodes directly, it cannot utilize ComfyUI's cache system. But if the lifetime of variables is maintained carefully enough, it is possible to run workflows faster than ComfyUI, since ComfyUI's cache system uses a naive single-slot cache.
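One way to avoid re-executing nodes with the same inputs in real mode is a simple memoization layer on top of node calls. A minimal sketch (plain Python; `run_node`, `cached_run`, and the cache policy are hypothetical illustrations, not part of ComfyScript):

```python
calls = 0

def run_node(name, **inputs):
    """Stand-in for an expensive node execution (e.g. loading a checkpoint)."""
    global calls
    calls += 1
    return f'{name}-result'

cache = {}

def cached_run(name, **inputs):
    # Key on the node name and its (hashable) inputs. A real cache would also
    # need an eviction policy, since keeping every output alive costs RAM/VRAM.
    key = (name, tuple(sorted(inputs.items())))
    if key not in cache:
        cache[key] = run_node(name, **inputs)
    return cache[key]

a = cached_run('CheckpointLoaderSimple', ckpt_name='v1-5-pruned-emaonly.ckpt')
b = cached_run('CheckpointLoaderSimple', ckpt_name='v1-5-pruned-emaonly.ckpt')
assert a is b and calls == 1  # the second call hits the cache
```

Unlike ComfyUI's single-slot cache, a keyed cache like this can hold results for several distinct inputs at once; the trade-off is deciding what to evict and when.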
Differences from virtual mode:
Scripts cannot be executed through the API of a ComfyUI server.
However, it is still possible to run scripts on a remote machine without the API. For example, you can launch a Jupyter server and connect to it remotely.
As mentioned above, nodes will not cache the output themselves. It is the user's responsibility to avoid re-executing nodes with the same inputs.
The outputs of output nodes (e.g. SaveImage) are not converted to result classes (e.g. ImageBatchResult). This may change in future versions.
A complete example:
from script.runtime.real import *
load()
from script.runtime.real.nodes import *
# Or: with torch.inference_mode()
with Workflow():
    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')
    print(model, clip, vae, sep='\n')
    # <comfy.model_patcher.ModelPatcher object at 0x000002198721ECB0>
    # <comfy.sd.CLIP object at 0x000002198721C250>
    # <comfy.sd.VAE object at 0x000002183B128EB0>

    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)
    conditioning2 = CLIPTextEncode('text, watermark', clip)
    print(conditioning2)
    # [[
    #   tensor([[
    #     [-0.3885, ..., 0.0674],
    #     ...,
    #     [-0.8676, ..., -0.0057]
    #   ]]),
    #   {'pooled_output': tensor([[-1.2670e+00, ..., -1.5058e-01]])}
    # ]]

    latent = EmptyLatentImage(512, 512, 1)
    print(latent)
    # {'samples': tensor([[
    #   [[0., ..., 0.],
    #    ...,
    #    [0., ..., 0.]],
    #   [[0., ..., 0.],
    #    ...,
    #    [0., ..., 0.]],
    #   [[0., ..., 0.],
    #    ...,
    #    [0., ..., 0.]],
    #   [[0., ..., 0.],
    #    ...,
    #    [0., ..., 0.]]
    # ]])}

    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)
    image = VAEDecode(latent, vae)
    print(image)
    # tensor([[
    #   [[0.3389, 0.3652, 0.3428],
    #    ...,
    #    [0.4277, 0.3789, 0.1445]],
    #   ...,
    #   [[0.6348, 0.5898, 0.5270],
    #    ...,
    #    [0.7012, 0.6680, 0.5952]]
    # ]])

    print(SaveImage(image, 'ComfyUI'))
    # {'ui': {'images': [
    #   {'filename': 'ComfyUI_00001_.png',
    #    'subfolder': '',
    #    'type': 'output'}
    # ]}}
If you are familiar with the internals of ComfyUI, you will realize that real mode is not completely real. Some changes were made to nodes to improve the development experience and keep the code compatible with virtual mode. If you want the real real mode, you can enable naked mode with load(naked=True).
In naked mode, ComfyScript will not execute any code after load() (except Workflow(), which can basically be replaced with torch.inference_mode()).
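The `CheckpointLoaderSimple()` / `.load_checkpoint(...)` pattern used in naked mode mirrors how ComfyUI node classes are written: each class names its entry-point method in a `FUNCTION` attribute, which a runner can dispatch through. A self-contained sketch of that convention (the class body here is a dummy, not ComfyUI's real implementation):

```python
class CheckpointLoaderSimple:
    # ComfyUI node classes declare which method implements the node
    # and what the node returns.
    FUNCTION = 'load_checkpoint'
    RETURN_TYPES = ('MODEL', 'CLIP', 'VAE')

    def load_checkpoint(self, ckpt_name):
        # Dummy body; the real node loads the checkpoint from disk.
        return (f'model:{ckpt_name}', f'clip:{ckpt_name}', f'vae:{ckpt_name}')

node = CheckpointLoaderSimple()
# Generated naked-mode code calls the method directly...
out = node.load_checkpoint(ckpt_name='sd_xl_base_1.0.safetensors')
# ...while a generic runner can dispatch through FUNCTION instead:
out2 = getattr(node, node.FUNCTION)(ckpt_name='sd_xl_base_1.0.safetensors')
assert out == out2
```

Because nodes return plain tuples, the generated code below indexes results positionally, e.g. `checkpointloadersimple_4[1]` for the CLIP output.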
An example:
import random
from script.runtime.real import *
load(naked=True)
from script.runtime.real.nodes import *
# Or: with torch.inference_mode()
with Workflow():
    checkpointloadersimple = CheckpointLoaderSimple()
    checkpointloadersimple_4 = checkpointloadersimple.load_checkpoint(
        ckpt_name="sd_xl_base_1.0.safetensors"
    )

    emptylatentimage = EmptyLatentImage()
    emptylatentimage_5 = emptylatentimage.generate(
        width=1024, height=1024, batch_size=1
    )

    cliptextencode = CLIPTextEncode()
    cliptextencode_6 = cliptextencode.encode(
        text="evening sunset scenery blue sky nature, glass bottle with a galaxy in it",
        clip=checkpointloadersimple_4[1],
    )
    cliptextencode_7 = cliptextencode.encode(
        text="text, watermark", clip=checkpointloadersimple_4[1]
    )

    checkpointloadersimple_12 = checkpointloadersimple.load_checkpoint(
        ckpt_name="sd_xl_refiner_1.0.safetensors"
    )
    cliptextencode_15 = cliptextencode.encode(
        text="evening sunset scenery blue sky nature, glass bottle with a galaxy in it",
        clip=checkpointloadersimple_12[1],
    )
    cliptextencode_16 = cliptextencode.encode(
        text="text, watermark", clip=checkpointloadersimple_12[1]
    )

    ksampleradvanced = KSamplerAdvanced()
    vaedecode = VAEDecode()
    saveimage = SaveImage()

    for q in range(10):
        ksampleradvanced_10 = ksampleradvanced.sample(
            add_noise="enable",
            noise_seed=random.randint(1, 2**64),
            steps=25,
            cfg=8,
            sampler_name="euler",
            scheduler="normal",
            start_at_step=0,
            end_at_step=20,
            return_with_leftover_noise="enable",
            model=checkpointloadersimple_4[0],
            positive=cliptextencode_6[0],
            negative=cliptextencode_7[0],
            latent_image=emptylatentimage_5[0],
        )
        ksampleradvanced_11 = ksampleradvanced.sample(
            add_noise="disable",
            noise_seed=random.randint(1, 2**64),
            steps=25,
            cfg=8,
            sampler_name="euler",
            scheduler="normal",
            start_at_step=20,
            end_at_step=10000,
            return_with_leftover_noise="disable",
            model=checkpointloadersimple_12[0],
            positive=cliptextencode_15[0],
            negative=cliptextencode_16[0],
            latent_image=ksampleradvanced_10[0],
        )
        vaedecode_17 = vaedecode.decode(
            samples=ksampleradvanced_11[0], vae=checkpointloadersimple_12[2]
        )
        saveimage_19 = saveimage.save_images(
            filename_prefix="ComfyUI", images=vaedecode_17[0]
        )
As you may have noticed, naked mode is compatible with the code generated by ComfyUI-to-Python-Extension. You can use it to convert ComfyUI's workflows to naked mode scripts.
v0.3.0 is now released.
Would it be possible to run ComfyScript without the API being on? Essentially turning ComfyUI into a library instead of an API server?
Similar thinking to this project