Closed: ashish-aesthisia closed this issue 7 months ago
The error: that's a CUDA error, i.e. the issue is somewhere in the range of hardware / drivers / PyTorch / possibly Comfy, not Swarm. Sorry, there's not much help I can offer on diagnosing that.
How do I run the Comfy backend on 0.0.0.0? Currently it seems to be listening on 127.0.0.1.
The Comfy instance is intentionally local.
You can switch Swarm itself to 0.0.0.0 via the Server Configuration tab -> Host setting, or launch with --host 0.0.0.0 on the CLI (see https://github.com/Stability-AI/StableSwarmUI/blob/master/docs/Command%20Line%20Arguments.md for docs on the CLI arguments).
You can access the Comfy backend directly (e.g. for API usage) by just opening http://localhost:7801/ComfyBackendDirect/ ; it will auto-redirect to your first valid Comfy backend.
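As a minimal sketch of API usage against that proxy path: the helper below just builds a URL under /ComfyBackendDirect/. The port 7801 is Swarm's default, and the /system_stats path is assumed here as an example target endpoint; adjust both to your setup.

```python
# Minimal sketch: build a URL that routes through Swarm's
# ComfyBackendDirect proxy path. Assumptions: Swarm is on its
# default port 7801; "system_stats" is just an example target path.
from urllib.parse import urljoin


def comfy_direct_url(
    path: str,
    base: str = "http://localhost:7801/ComfyBackendDirect/",
) -> str:
    """Join a backend path onto the direct-backend proxy base URL."""
    return urljoin(base, path.lstrip("/"))


url = comfy_direct_url("system_stats")
print(url)
# With a Swarm instance actually running, you could then fetch it, e.g.:
#   from urllib.request import urlopen
#   print(urlopen(url).read().decode())
```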
Do you have queue or gradio-queue enabled by default?
What? I don't know what this question means. Gradio is a project unrelated to anything here.
If you want an automatic remote connection like Gradio's share pages, you can use cloudflared; see the docs here: https://github.com/Stability-AI/StableSwarmUI/blob/master/docs/Advanced%20Usage.md#accessing-stableswarmui-from-other-devices
If you mean the general concept of request queuing: yes, Swarm does that out of the box.
Do we have an option to run it in TCP/HTTPS mode only?
TCP is the only option, yes. For HTTPS you would need to use a proxy layer (e.g. cloudflared for a generated remote address, or apache2/nginx for an internal reverse proxy).
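As an illustration of the nginx option, a TLS-terminating reverse proxy in front of Swarm might look roughly like this. This is a hedged sketch, not Swarm documentation: the server_name, certificate paths, and the upstream port 7801 (Swarm's default) are placeholders to adapt.

```nginx
# Hypothetical nginx reverse proxy terminating HTTPS in front of Swarm.
# server_name, certificate paths, and the upstream port are placeholders.
server {
    listen 443 ssl;
    server_name swarm.example.com;

    ssl_certificate     /etc/ssl/certs/swarm.example.com.pem;
    ssl_certificate_key /etc/ssl/private/swarm.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:7801;
        # WebSocket upgrade headers, so live status/preview updates still work
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```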
./launch-linux.sh --host 0.0.0.0 --port 7860 --launch_mode none
I have updated a few things:
It was definitely not an error with the Comfy backend.
Seems like it's working now. Thanks!
Cloud Provider: AWS EC2
CUDA Version: 12.0
Graphics Card: Tesla, 16GB VRAM
OS: Ubuntu 22.04
Launch command:
./launch-linux.sh --host 0.0.0.0 --port 7860 --launch_mode none
Proxy: None
Access: Over public IP
Console Logs
What I have tried so far:
nvidia-smi works.
Questions: