gymnae opened this issue 3 months ago
I'll have a demo for Vast's autoscaler ready soon.
You'll generally create a handler that's invoked by the serverless controller; it should interact with ComfyUI on localhost:18188 (i.e., not external:8188), because the external port requires Caddy to run, and you won't want that since startup should be fast on serverless.
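For illustration, here's a minimal sketch of what such a handler could look like on RunPod, assuming the `runpod` Python SDK and ComfyUI's standard `/prompt` and `/history` endpoints; the `workflow` input key and the polling interval are assumptions for the example, not something this image defines:

```python
import time

import requests  # assumed to be available alongside the handler
import runpod    # RunPod's serverless SDK

COMFY_URL = "http://localhost:18188"  # internal port, so Caddy never needs to start

def handler(job):
    """Queue a ComfyUI workflow locally and poll until its outputs appear."""
    workflow = job["input"]["workflow"]  # workflow JSON in ComfyUI's API format (assumed key)

    # Queue the prompt against the local ComfyUI instance.
    resp = requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow})
    resp.raise_for_status()
    prompt_id = resp.json()["prompt_id"]

    # Poll the history endpoint until the prompt shows up as completed.
    while True:
        history = requests.get(f"{COMFY_URL}/history/{prompt_id}").json()
        if prompt_id in history:
            return history[prompt_id]["outputs"]
        time.sleep(1)

runpod.serverless.start({"handler": handler})
```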
You can achieve the fast start by declaring the environment variable SERVERLESS=true, or SUPERVISOR_NO_AUTOSTART=caddy,jupyter,syncthing, as required.
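As a sketch of wiring those variables in, here's one way to launch the image with the Docker SDK for Python; the image tag and the choice of launching via the SDK are assumptions for illustration, and on Vast or RunPod you'd set the same variables in the template's environment instead:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

container = client.containers.run(
    "ghcr.io/ai-dock/comfyui:latest",  # assumed tag; pick the CUDA build you need
    detach=True,
    environment={
        # Either setting alone enables the fast start; both shown for illustration.
        "SERVERLESS": "true",
        "SUPERVISOR_NO_AUTOSTART": "caddy,jupyter,syncthing",
    },
)
print(container.id)
```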
I'm about 90% done with a universal async API wrapper for processing ComfyUI workflows, which should make serverless integration very easy; I just have to finish adding timings and webhooks: https://github.com/ai-dock/comfyui/tree/main/build/COPY_ROOT_1/opt/ai-dock/api-wrapper
That sounds great :) I'd prefer to run it on vast, so cool :)
> I'm about 90% done with a universal async API wrapper for processing ComfyUI workflows, which should make serverless integration very easy; I just have to finish adding timings and webhooks.
Any updates?
Hi, can you please update us on the status, and how do I make this work?
> You can achieve the fast start by declaring the environment variable SERVERLESS=true, or SUPERVISOR_NO_AUTOSTART=caddy,jupyter,syncthing, as required.
Then what's next? If we deploy the Docker image to Vast.ai and want to use it as serverless, do we have to stop and start the instance every time before we make a request? Do I need to run supervisorctl [start|stop|restart] comfyui?
> You'll generally create a handler that's invoked by the serverless controller; it should interact with ComfyUI on localhost:18188 (i.e., not external:8188), because the external port requires Caddy to run, and you won't want that since startup should be fast on serverless.
Where do we create this handler, and how? Sorry for all the noob questions; it would be appreciated if you could answer them.
Vast has an autoscaler template for using this image (a previous version).
The wrapper API for the current version of this is not production-ready yet. When it is done, I will update the docs.
Hi, I'm a hobbyist and occasionally would like to create images or run queries, so serverless seems like a good fit with cost control. I tried to use this image for a serverless installation on RunPod, but I was unable to call the API as a backend from my local ComfyUI install. Would this work in a worker/serverless setting, or would I need to run an instance on vast.ai or runpod?
Cheers