ComfyUI_NetDist Plus

Run ComfyUI workflows on multiple local GPUs or networked machines, with options to edit the JSON values within ComfyUI.

Credits

Comfyanonymous; for obvious reasons
City96; without the base netdist repo, I wouldn't have attempted this.
EventStationAI; for some GPU support.
All node creators whose work I used in some way when building the workflows or code snippets (Easy Use, IPAdapter_Plus, CR).
Claude; what do I do next? Can you debug this error?
Ogkai; thanks for encouraging me to start pushing the stuff I make or modify.
*Reach out on Twitter (X) if you have questions :)

Issues

Remote latents: I didn't get a chance to test them.
Batched base64 images: There are existing nodes that should fix that.
*Batch size > 1 for STYLE TRANSFER: I didn't take note of the errors I got, but it needs some work.

Note: I am a primitive coder and I know very little about GitHub, so bear with me if issues arise. Collaborations are of course welcome. The listed examples were run on a 4090 host and a 3080 Ti remote PC.

Remote conditioning Workflow

The use case for this is running the T5 and CLIP-L text encoders on a different ComfyUI instance so that the primary PC can focus on running the UNet and VAE.

REMOTE_CLIP_OFFSET

Remote Batch Workflow with different checkpoints

This workflow is useful for comparing the Flux Dev and Schnell models. Since the remote PC runs Schnell, the wait is bearable.

REMOTE_BATCH

Style Transfer Example

This uses a remote PC to run an SDXL IPAdapter style transfer pipeline.

REMOTE_STYLETRANSFER

Making remote conds

REMOTE_conds

NetDist_2xspeed.webm

Install instructions:

There is currently a single external requirement, which is the requests library.

pip install requests

To install, simply clone into the custom nodes folder.

git clone https://github.com/city96/ComfyUI_NetDist ComfyUI/custom_nodes/ComfyUI_NetDist

Usage

Local Remote control

You will need at least two different ComfyUI instances. You can use two local GPUs by giving each instance different --port [port] and --cuda-device [number] launch arguments; for the second instance you'll most likely want --port 8288 --cuda-device 1.
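
For example, assuming a standard ComfyUI install started with main.py (ports and device indices here are only an illustration), the two local instances could be launched as:

python main.py --port 8188 --cuda-device 0
python main.py --port 8288 --cuda-device 1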

Simple dual-GPU

This is the simplest setup for people who have 2 GPUs or two separate PCs. It only requires two nodes to work.

You can set the local/remote batch size, as well as when the node should trigger (set it to 'always' if it isn't getting executed, e.g. when you changed a sampler setting but not the seed).

If you're running your second instance on a different PC, add --listen to its launch arguments and set the correct remote IP (open a terminal window and check with ipconfig on Windows or ip a on Linux).
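
On the remote machine, that could look something like the following (again just a sketch with the same assumed main.py launch; use whatever port you then point the queue node at):

python main.py --listen --port 8188 --cuda-device 0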

The FetchRemote ('Fetch from remote') node takes an image input. This should be the final image that you want to get back from your second instance (make sure not to route it back into itself). The node will wait for the second image to be generated (there is currently no preview/progress bar).

Workflow JSON: NetDistSimple.json

NetDistSimple

Simple multi-machine

You can scale the example above, more or less, by chaining more of the simple queue nodes together, but the seed handling is a bit janky and you can get duplicate images if you try to reuse it. The simplest workaround is to set the seed to randomize on both instances.

NetDistMulti

Advanced

This is mostly meant for more "advanced" setups with more than two GPUs. It allows easier per-batch overrides as well as setting a default batch size.

It also allows using a workflow JSON as an input. To allow any workflow to run, the final image can be set to "any" instead of the default "final_image" (which would require the FetchRemote node to be in the workflow).

I have nodes to save/load the workflows, but ideally there would be some nodes to also edit them - search and replace seed, etc. PRs welcome ;P
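
As a rough sketch of what such an edit node could do under the hood (not something this pack ships): load a saved API-format workflow, randomize every input literally named "seed", and queue it on the remote instance through ComfyUI's standard /prompt endpoint. The filename and IP below are placeholders.

import json, random, requests

# Load a workflow saved in ComfyUI's API format (placeholder filename).
with open("saved_workflow.json") as f:
    workflow = json.load(f)

# Naive "search and replace seed": randomize every node input named "seed".
for node in workflow.values():
    if "seed" in node.get("inputs", {}):
        node["inputs"]["seed"] = random.randint(0, 2**32 - 1)

# Queue the edited workflow on the remote instance (placeholder IP/port).
requests.post("http://192.168.1.50:8188/prompt", json={"prompt": workflow})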

Workflow JSON: NetDistAdvancedV2.json

NetDistAdvanced

(This needs a fake image input to trigger; you can just give it a blank image.)

NetDistSaved

Remote images

The LoadImageUrl ('Load Image (URL)') node acts just like the normal 'Load Image' node.

The SaveImageUrl ('Save Image (URL)') node sends a POST request to the target URL with a JSON body containing the images.
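
If you just want to see what the node sends before wiring up a real endpoint, a tiny catch-all HTTP server is enough. This is only a debugging sketch (the port is arbitrary) and it assumes nothing about the payload's field names:

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class CatchAll(BaseHTTPRequestHandler):
    # Point SaveImageUrl at http://<this-host>:8999/ and inspect whatever arrives.
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            print("JSON keys:", list(json.loads(body).keys()))
        except ValueError:
            print("Non-JSON body, %d bytes" % len(body))
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8999), CatchAll).serve_forever()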

Remote latents

This node pack has a set of nodes which should (in theory) allow you to pass latents between instances seamlessly. A node to save the input latent as a .npy file is provided. This node also returns the filename of the saved latent, which can then be loaded by the other instance.

To load a latent from the other instance, you can plug the filename into this URL:

# Change the filename with a string replacement node.
http://127.0.0.1:8188/view?filename=ComfyUI_00001_.latent&type=output
# To load them from the input folder instead, change type to 'input'.
http://127.0.0.1:8188/view?filename=TestLatent.npy&type=input

The LoadLatentNumpy node can also load the default safetensors latents, the .npy ones (a simple NumPy file containing just the latent in the standard torch format), as well as the sd_scripts .npz cache files.
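
As a rough illustration of the .npy variant described above, writing and reading such a file with plain NumPy looks like this; the 1x4x128x128 shape is just an example of an SDXL-style 1024x1024 latent, not something the nodes enforce:

import numpy as np

# An SD/SDXL-style latent tensor: [batch, channels, height/8, width/8].
latent = np.zeros((1, 4, 128, 128), dtype=np.float32)

# The .npy variant is just this array saved on its own.
np.save("TestLatent.npy", latent)

# Loading it back (or a file produced by the save node) is the reverse.
loaded = np.load("TestLatent.npy")
print(loaded.shape, loaded.dtype)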

LatentSave

Things you probably shouldn't do:

Roadmap