toverainc / willow-inference-server

Open source, local, and self-hosted highly optimized language inference server supporting ASR/STT, TTS, and LLM across WebRTC, REST, and WS
Apache License 2.0

Forgo the need for TLS certificates #82

Closed: skorokithakis closed this issue 10 months ago

skorokithakis commented 1 year ago

TLS certificates complicate the setup a bit. It would be good if WIS either exposed both HTTPS and HTTP ports (so I could ignore the TLS port for an internal setup) or didn't use TLS at all, so I could run my own ingress in front.

I think running TLS termination inside the container is too much coupling: most users will either run their own TLS if they want the service reachable from outside, run a VPN (which makes TLS redundant), or simply not expose the server to the internet.

kristiankielhofner commented 1 year ago

The fundamental limitation here is gunicorn, which we use in the main branch. It can't listen on multiple sockets with different options without manually editing the run command for the docker image (and even then it only supports a single socket).

The wisng branch fronts gunicorn with nginx. In addition to enabling all kinds of other things, it includes the ability to listen on HTTP and HTTPS concurrently. There are also several approaches for using nginx with Let's Encrypt (as one example), although none that I've seen so far are anywhere near as clean as something like Traefik. Unfortunately Traefik can't be used either, as it doesn't support things like the TTS response caching we are doing with nginx in wisng today. The nginx approach is also fundamentally better architecturally for a variety of reasons, one of which is the decoupling of TLS, cert, and socket management from the WIS API instance itself, as you mention.
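For illustration only, here is a minimal sketch of the general pattern described above: nginx fronting a single gunicorn socket, listening on HTTP and HTTPS in the same server block, and caching TTS responses. The ports, cert paths, backend address, and the `/api/tts` location are assumptions for the sketch, not the actual wisng configuration.

```nginx
# Illustrative only; not the actual wisng configuration. Assumes gunicorn is
# listening on plain HTTP at 127.0.0.1:19000 and that TTS responses are served
# under /api/tts (both placeholders). These directives go inside the http {} context.
proxy_cache_path /var/cache/nginx/tts keys_zone=tts_cache:10m max_size=1g inactive=24h;

upstream wis_api {
    server 127.0.0.1:19000;             # single gunicorn socket behind nginx
}

server {
    listen 8080;                        # plain HTTP
    listen 8443 ssl;                    # HTTPS in the same server block
    ssl_certificate     /etc/nginx/certs/wis.crt;   # self-signed by default
    ssl_certificate_key /etc/nginx/certs/wis.key;

    location /api/tts {
        proxy_pass http://wis_api;
        proxy_cache tts_cache;          # cache repeated TTS requests
        proxy_cache_valid 200 24h;
    }

    location / {
        proxy_pass http://wis_api;
        proxy_buffering off;            # don't hold back streamed responses
    }
}
```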

I'm learning more and more from the community that the variations in hosting, environment, etc. are endless - I already see a lot of deployments that fall under "I definitely wouldn't do that, but you do you"... User choice and flexibility are bedrock principles of open source, and we intend to do everything we can to support as much user flexibility and choice as we can. We just don't want to provide a default configuration and architecture that fundamentally opposes our other primary goal: providing an excellent user experience for as many people as possible.

skorokithakis commented 1 year ago

That definitely makes sense. Generally, I prefer my images to just listen for HTTP on localhost at a port of their choosing, and then I use Caddy to do TLS termination and reverse proxying.

Just a data point for your consideration, feel free to close this ticket otherwise, thank you!

kristiankielhofner commented 1 year ago

We're going to try to make everything in the Willow ecosystem live on the same docker network. It helps provide isolation for HTTP requests within the network, is more portable, and dramatically improves the situation around exposed/forwarded ports.

I'm a Caddy fan as well, but when it comes to reverse proxying, nginx is the reigning champion, and as I mentioned, it already does many things for us as a reverse proxy that Caddy, Traefik, etc. don't support.

You could certainly proxy from Caddy to nginx, but you will likely see a performance drop from the extra layers and from some of Caddy's protocol support limitations.

lordratner commented 1 year ago

I was wondering about this as well. I have a host that runs NGINX as a reverse proxy, and everything feeds into that using my wildcard certs for my primary domain.

So far every service I use is proxy forwarded via HTTP (not HTTPS), so I'm not actually sure how it would work with HTTPS. I assume NGINX can deal with the self-signed certs generated by WIS?

I think the NGINX reverse proxy is a pretty common setup for Home Assistant users, since that's where I learned it from. I'd see how it works on my setup, but I'm still struggling to get WIS installed (as you know).
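For context on the self-signed question: nginx can proxy to an HTTPS upstream that presents a self-signed certificate by simply not verifying the upstream cert. A minimal sketch, with the hostname and port as placeholders rather than WIS defaults:

```nginx
# Sketch of an external nginx reverse proxy forwarding to a WIS instance over
# HTTPS with a self-signed certificate. Hostname and port are placeholders.
location / {
    proxy_pass https://wis.internal:8443;
    proxy_ssl_verify off;              # don't verify the self-signed upstream cert
    proxy_set_header Host $host;
}
```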

kristiankielhofner commented 1 year ago

wisng (with our own tuned nginx frontend proxy) listens on HTTP and HTTPS, so you can forward to either. For HTTPS it will depend on your additional frontend proxy's ability to accept self-signed certs, or you can use HTTP as you do today.

EDIT: Forgot to elaborate on the nginx configuration. If you're using an additional nginx frontend, it would be wise to use our various proxy and buffering parameters, as otherwise you will experience unnecessary delays when using Willow.
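The project's tuned values aren't reproduced here, but the kind of proxy and buffering parameters being referred to look roughly like this. The upstream address, timeout, and other values are illustrative assumptions, not the project's defaults.

```nginx
# Illustrative proxy settings for a frontend nginx sitting in front of WIS's
# own nginx; values and the upstream address are assumptions, not tuned defaults.
location / {
    proxy_pass http://wis-nginx:8080;          # or https:// with proxy_ssl_verify off
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;    # keep WebSocket signaling working
    proxy_set_header Connection "upgrade";
    proxy_buffering off;                       # avoid added latency on streamed audio
    proxy_request_buffering off;
    proxy_read_timeout 300s;                   # allow long-running inference requests
}
```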

tensiondriven commented 1 year ago

Sorry if this was mentioned in the thread and I missed it: I was under the impression that browsers like Chrome require HTTPS to allow access to devices like the microphone, making HTTPS a requirement? Perhaps there's a way to defeat that... anyhoo, that's what I thought was motivating the requirement for HTTPS.

kristiankielhofner commented 1 year ago

Yes, browsers require HTTPS for hardware access, including WebRTC or other means of microphone capture. The nginx frontend supports HTTP and HTTPS with a self-signed certificate by default. Let's Encrypt, etc. is left up to the user because the myriad of deployment options makes it almost impossible to support.