city96 / ComfyUI_NetDist

Run ComfyUI workflows on multiple local GPUs/networked machines.
Apache License 2.0
236 stars · 26 forks

Can I use this to load balance runs? #28

Closed ckao10301 closed 2 days ago

ckao10301 commented 3 days ago

Hi, is it possible to use this to load balance? For example, I have 4 GPUs with 4 ComfyUI instances. If I execute 4 runs, each run goes to a separate ComfyUI instance. If the queue grows beyond 4, each run gets allocated depending on GPU availability.

Use case: trying to set up my local server to process comfy api requests.

city96 commented 3 days ago

Not in its current form, I don't think, though I haven't messed with this repo in a long time.

If you're only routing API requests then you could probably have a small python script that checks the availability of each instance, then sends it to the one with the least requests in the queue. Not sure how you'd handle any of the websocket endpoints, though I guess that depends on if you're using them in the first place or not.
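The script described above could look something like the following rough sketch. It polls each instance's `/queue` endpoint (which ComfyUI exposes, returning `queue_running` and `queue_pending` lists) and submits the workflow to the least-busy one via `/prompt`. The instance URLs and the `submit` helper are hypothetical placeholders; error handling and websocket tracking are left out.

```python
import requests

# Hypothetical list of ComfyUI instances (8188 is ComfyUI's default port).
SERVERS = ["http://127.0.0.1:8188", "http://127.0.0.1:8189"]

def queue_length(base_url):
    """Count running + pending jobs via ComfyUI's /queue endpoint."""
    try:
        r = requests.get(f"{base_url}/queue", timeout=2)
        r.raise_for_status()
        data = r.json()
        return len(data.get("queue_running", [])) + len(data.get("queue_pending", []))
    except requests.RequestException:
        # Treat an unreachable instance as infinitely busy so it's never picked.
        return float("inf")

def submit(workflow):
    """Send an API-format workflow dict to the least-busy instance."""
    target = min(SERVERS, key=queue_length)
    r = requests.post(f"{target}/prompt", json={"prompt": workflow}, timeout=5)
    r.raise_for_status()
    return target, r.json()
```

This only balances plain HTTP API submissions; if you need progress updates you would still have to open a websocket to whichever instance a given job landed on.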

ckao10301 commented 2 days ago

I found Cloudflare Load Balancer, which should meet my needs.
