Closed ricardofeynman closed 11 months ago
For "set conditions", you can use whatever control flow you want:
```python
highres_fix = True

async with ComfyScript():
    seed = 0
    model, clip, vae = CheckpointLoaderSimple(CheckpointLoaderSimple.ckpt_name.v1_5_pruned_emaonly)
    pos = 'beautiful scenery nature glass bottle landscape, , purple galaxy bottle,'
    neg = 'text, watermark'
    latent = EmptyLatentImage(512, 512, 1)
    latent = KSampler(model, seed, 20, 8, 'euler', 'normal', CLIPTextEncode(pos, clip), CLIPTextEncode(neg, clip), latent, 1)
    if not highres_fix:
        SaveImage(VAEDecode(latent, vae), f'{seed}')
    else:
        latent = LatentUpscale(latent, None, 1152, 1152)
        latent = KSampler(model, seed, 14, 8, 'dpmpp_2m', 'simple', CLIPTextEncode(pos, clip), CLIPTextEncode(neg, clip), latent, 0.5)
        SaveImage(VAEDecode(latent, vae), f'{seed}-highres')
```
But I don't get what you mean by "when a grouped set of nodes is active". Did you mean to use the ComfyScript translated from the workflow? The transpiler will remove any inactive nodes for now, but it's not hard to add an option to control this behaviour.
I'm hoping to use the initial part of a workflow to batch generate a set of images, before selecting the best images from that batch to be sent on for further processing in the remainder of the workflow.
Currently we can right click on a group and bypass all its nodes in a single click using 'Bypass Group Nodes', but unlike Freeze/Unfreeze and Pin/Unpin, there's no 'Unbypass Group Nodes' option, so we have to manually select all the nodes within the group and press Ctrl + B to unbypass them.
My guess is that in its current state, ComfyUI's manual process of having to select all relevant bypassed nodes in the group to unbypass them again would not lend itself to being automated by ComfyScript.
> "when a grouped set of nodes is active"

A group of nodes being unbypassed and bypassed depending on a user input condition being met.
In my case, I've generated 100 images and select the ones I want to send forward to the next stage. Once selection is complete, the bypassed group is set active/unbypassed, and the remainder of the workflow completes only for the manually selected images rather than all 100 images being scheduled for processing.
I think you misunderstood what this project is. ComfyScript has nothing to do with ComfyUI's web part. When you run a ComfyScript, it generates the prompt and sends it to ComfyUI's server. You can't use ComfyScript to control the web UI.
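To make "generates the prompt and sends it to ComfyUI's server" concrete, here is a minimal sketch of that idea using only the standard library. The two-node prompt, the node ids, and the default port 8188 are assumptions for illustration; ComfyScript's actual implementation is more involved.

```python
import json
import urllib.request

def build_prompt(ckpt_name: str, text: str) -> dict:
    # ComfyUI's API-format prompt: node id -> class_type + inputs.
    # ["1", 1] wires this input to output slot 1 of node "1".
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": ckpt_name}},
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"text": text, "clip": ["1", 1]}},
    }

def queue_prompt(prompt: dict, server: str = "http://127.0.0.1:8188") -> None:
    # POST the prompt to a running ComfyUI server's /prompt endpoint
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(f"{server}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

The web UI never enters the picture: the script builds the same JSON the browser would have produced and talks to the server directly.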
However, for the workflow you mentioned, you can implement it with ComfyScript:
```python
model, clip, vae = CheckpointLoaderSimple(CheckpointLoaderSimple.ckpt_name.v1_5_pruned_emaonly)
pos = 'beautiful scenery nature glass bottle landscape, , purple galaxy bottle,'
neg = 'text, watermark'

# Generate 100 images
for seed in range(100):
    async with ComfyScript():
        latent = EmptyLatentImage(512, 512, 1)
        latent = KSampler(model, seed, 20, 8, 'euler', 'normal', CLIPTextEncode(pos, clip), CLIPTextEncode(neg, clip), latent, 1)
        SaveImage(VAEDecode(latent, vae), f'{seed}')

# Get selection by input(), or show a dialog with Qt or other GUI libraries
selected = input()

async with ComfyScript():
    image = LoadImageFromPath(f'{selected} [output]')
    latent = VAEEncode(image, vae)
    latent = LatentUpscale(latent, None, 1152, 1152)
    latent = KSampler(model, seed, 14, 8, 'dpmpp_2m', 'simple', CLIPTextEncode(pos, clip), CLIPTextEncode(neg, clip), latent, 0.5)
    SaveImage(VAEDecode(latent, vae), f'{selected}-highres')
```
Actually, yes, I'm trying to avoid using the web part on an older machine with only 16 GB of RAM, so that memory-heavy workflows don't spill into the SSD's swap file, slowing things down and wearing out the disk.
I'm hoping to use a ComfyUI server on that machine remotely via a mobile interface. I'm still at the stage of working out the automation workflow, and my thoughts were that there would also need to be an Unbypass Group Nodes function implemented, plus a switch for bypassing/unbypassing in the workflow that ComfyScript could transpile.
I had thought that ComfyScript executes a transpiled workflow in python, as it would execute if queued in the GUI (with the option of extra pythony bells, loops and whistles) but without having to use a GUI/browser, avoiding the memory overhead?
The main limitation you mention in the readme is presently a big drawback to what I have in mind, but if I'm not completely off the mark about its capabilities, I'd still like to explore using ComfyScript as an option, even if for now that would likely mean executing a sequence of smaller transpiled workflows to accomplish similar results.
Well... If I were you, I would just buy some RAM.
> I had thought that ComfyScript executes a transpiled workflow in python, as it would execute if queued in the GUI (with the option of extra pythony bells, loops and whistles) but without having to use a GUI/browser, avoiding the memory overhead?
In ComfyUI, the back end and the web UI use different schemas of workflows. Things like "S&R", "mute", "bypass" and "group" only exist in the web UI's workflows. Before sending the real workflows to the back end, the web UI will perform S&R, remove muted and bypassed nodes, and ignore groups as they are just UI elements that have no effect on the back end.
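To illustrate what that conversion amounts to, here is a rough sketch (not ComfyUI's actual code). The mode values follow litegraph's convention as I understand it (2 = mute, 4 = bypass); treat the field names and values as assumptions.

```python
MUTED, BYPASSED = 2, 4  # assumed litegraph node modes

def to_backend_prompt(ui_workflow: dict) -> dict:
    """Sketch of the web UI's workflow -> back-end prompt conversion."""
    prompt = {}
    for node in ui_workflow["nodes"]:
        if node.get("mode") in (MUTED, BYPASSED):
            continue  # dropped: the back end never sees these nodes
        prompt[str(node["id"])] = {
            "class_type": node["type"],
            "inputs": node.get("inputs", {}),
        }
    # "groups" in the UI workflow are purely visual, so they are
    # simply never translated into the back-end prompt at all
    return prompt
```

This is why there is nothing for the server (and hence ComfyScript) to "unbypass": by the time a prompt reaches the back end, bypassed nodes and groups no longer exist in it.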
I have added some examples of these features to the readme.
> The main limitation you mention in the readme is presently a big drawback to what I have in mind
For the limitation, I'm currently working on adding some interoperability with images. It may finally look like this:
```python
async with ComfyScript():
    model, clip, vae = CheckpointLoaderSimple(CheckpointLoaderSimple.ckpt_name.v1_5_pruned_emaonly)
    pos = 'beautiful scenery nature glass bottle landscape, , purple galaxy bottle,'
    neg = 'text, watermark'

    # Generate 100 images
    images = []
    for seed in range(100):
        latent = EmptyLatentImage(512, 512, 1)
        latent = KSampler(model, seed, 20, 8, 'euler', 'normal', CLIPTextEncode(pos, clip), CLIPTextEncode(neg, clip), latent, 1)
        images.append(await VAEDecode(latent, vae))

    # Get selection by input(), or show a dialog with Qt or other GUI libraries
    selected = my_show_and_select(images)

    latent = VAEEncode(selected, vae)
    latent = LatentUpscale(latent, None, 1152, 1152)
    latent = KSampler(model, seed, 14, 8, 'dpmpp_2m', 'simple', CLIPTextEncode(pos, clip), CLIPTextEncode(neg, clip), latent, 0.5)
    SaveImage(VAEDecode(latent, vae), f'{selected}-highres')
```
This is not perfect, as using VAE to decode and encode will lose some information compared to the original latent, but should work for most cases. Maybe in the future, I'll also add some interoperability with the latent.
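As a toy illustration of why the round trip is lossy, consider a stand-in "decoder" that quantizes values to 8 bits (the way a saved image does), rather than a real VAE; the real loss mechanism is different, but the effect is the same: what comes back is not the latent you started with.

```python
def decode(latent):
    # stand-in for VAEDecode: quantizes each value to 8-bit precision,
    # as happens when a latent is rendered out to ordinary image pixels
    return [round(v * 255) / 255 for v in latent]

def encode(pixels):
    # stand-in for VAEEncode: can only see the quantized pixels,
    # so the original sub-8-bit detail is unrecoverable
    return list(pixels)

latent = [0.1234, 0.5678]
roundtrip = encode(decode(latent))  # close to latent, but not equal
```

Carrying the latent through directly (the interoperability mentioned above) would avoid this small, accumulating error.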
Well, I've chucked a spare 3060 and an SSD into an old machine, which took a couple of minutes to install, but sourcing ancient/slow quad-channel DDR2 RAM for that CPU/mobo... and yeah, then I'd have to pull the system apart to fit the new RAM sticks because of a low-profile mega cooler... so nah, I'll pass on that one!
There are more powerful builds on hand for more demanding tasks; I'm just happy giving an old machine a dedicated, hands-off job and trying to get it working completely remotely.
Thanks for that insight about the schemas. I'll keep this project in mind for other situations, dig more into its workings in the new year, and I'm looking forward to seeing how it develops.
I've made an ipywidgets example that generates 3 images at once, and then lets the user decide which ones they want to hires fix:
```python
import ipywidgets as widgets

queue.watch_display(False, False)

latents = []
image_batches = []
with Workflow():
    seed = 0
    pos = 'sky, 1girl, smile'
    neg = 'embedding:easynegative'
    model, clip, vae = CheckpointLoaderSimple(CheckpointLoaderSimple.ckpt_name.AOM3A1B_orangemixs)
    model2, clip2, vae2 = CheckpointLoaderSimple(CheckpointLoaderSimple.ckpt_name.CounterfeitV25_25)
    for color in 'red', 'green', 'blue':
        latent = EmptyLatentImage(440, 640)
        latent = KSampler(model, seed, steps=15, cfg=6, sampler_name='uni_pc',
                          positive=CLIPTextEncode(f'{color}, {pos}', clip), negative=CLIPTextEncode(neg, clip),
                          latent_image=latent)
        latents.append(latent)
        image_batches.append(SaveImage(VAEDecode(latent, vae), f'{seed} {color}'))

grid = widgets.GridspecLayout(1, len(image_batches))
for i, image_batch in enumerate(image_batches):
    image_batch = image_batch.wait()
    image = widgets.Image(value=image_batch[0]._repr_png_())

    button = widgets.Button(description=f'Hires fix {i}')
    def hiresfix(button, i=i):
        print(f'Image {i} is chosen')
        with Workflow():
            latent = LatentUpscaleBy(latents[i], scale_by=2)
            latent = KSampler(model2, seed, steps=15, cfg=6, sampler_name='uni_pc',
                              positive=CLIPTextEncode(pos, clip2), negative=CLIPTextEncode(neg, clip2),
                              latent_image=latent, denoise=0.6)
            image_batch = SaveImage(VAEDecode(latent, vae2), f'{seed} hires')
        display(image_batch.wait())
    button.on_click(hiresfix)

    grid[0, i] = widgets.VBox(children=(image, button))
display(grid)
```
Sweet, that's very cool.
Will test it out soon.
Thank you.
Thanks for releasing, I'm looking forward to trying this out.
I've made a request on the main ComfyUI repo (https://github.com/comfyanonymous/ComfyUI/issues/2357) hoping for an Unbypass Group Nodes function in the right-click menu. I've been planning out a workflow that would require automating the bypass/unbypass of group nodes; currently they can be bypassed as a group selection from the right-click menu, but Unbypass doesn't exist as an option.
If this functionality were implemented within ComfyUI, would ComfyScript in its current form allow me to set conditions for when a grouped set of nodes is active?