Open dokterbob opened 1 year ago
- if 1 haproxy for ingress and 3 frontend nodes behind balancer then haproxy is superior here (but availability of whole load balancing set will depend on a single haproxy node)
- better to do that in end point in my opinion
- not sure what's utilization on your frontend and index nodes
- we can try both variants and do some tests
- disk based is just alright since it's nvme. but of course, it depends on your service SLA ;)
- I understand that you want to load balance frontend nodes. But what way should we pick in order to have high availability?
Thanks for the feedback!
I'll ask some further questions below:
- if 1 haproxy for ingress and 3 frontend nodes behind balancer then haproxy is superior here (but availability of whole load balancing set will depend on a single haproxy node)
I would prefer not to have a SPOF on the load balancer. So if haproxy, we should run it in an HA setup. Note that Hetzner's balancer is redundant by default.
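For option c. below, a common pattern for an HA haproxy pair is keepalived with a floating virtual IP. A minimal sketch, where the interface name, router id, and VIP are all placeholder assumptions:

```
# Hypothetical keepalived.conf on the primary haproxy node;
# the second node runs the same config with state BACKUP
# and a lower priority.
vrrp_instance VI_1 {
    state MASTER
    interface eth0            # interface carrying the VIP (assumption)
    virtual_router_id 51
    priority 101              # backup node would use e.g. 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.10             # floating IP clients connect to
    }
}
```

Caveat: on Hetzner Cloud a floating IP is reassigned via their API rather than by VRRP/gratuitous ARP alone, so this would need a notify script calling the API on failover.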
- better to do that in end point in my opinion
Do you mean at the load balancer or at the frontend server? If you meant to terminate SSL on the frontend, why do you think this is better?
- not sure what's utilization on your frontend and index nodes
Our current frontend is a low-performance VM and has very little utilisation. However, within about a week we'll be integrated into the official IPFS GUI, so we should see some actual traffic soon.
The index nodes have a highly variable load, typically between 30-70% (system load divided by the number of CPUs).
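For reference, that normalized-load figure can be computed directly from the OS. A small sketch (standard library only, Unix-like systems):

```python
import os

def normalized_load(period: int = 0) -> float:
    """System load average divided by CPU count.

    period: 0, 1 or 2 for the 1-, 5- or 15-minute load average.
    A result of 0.3-0.7 corresponds to the 30-70% range
    mentioned above for the index nodes.
    """
    averages = os.getloadavg()          # (1 min, 5 min, 15 min)
    cpus = os.cpu_count() or 1          # fall back to 1 if undetectable
    return averages[period] / cpus

print(f"normalized 1-minute load: {normalized_load():.2f}")
```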
- we can try both variants and do some tests
Thinking about it now, perhaps that's premature optimisation. But let's keep this as an option for the future.
- disk based is just alright since it's nvme. but of course, it depends on your service SLA ;)
Reasonable usability is our objective. If our users max out the NVMe, we might want to call CloudFlare. ;)
- I understand that you want to load balance frontend nodes. But what way should we pick in order to have high availability?
This is the main question indeed. If we start out with several (e.g. 3) frontend servers configured exactly as our current frontend, I see several options:
a. Round robin DNS (e.g. poor man's load balancing).
b. Hetzner load balancer.
c. haproxy (or similar) in HA configuration (e.g. 2 or 3 nodes).
Perhaps we could start with a. and move to b. as it becomes necessary?
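For option a., round robin DNS amounts to publishing one A record per frontend node. A hypothetical zone fragment (names, IPs and TTL are placeholders):

```
; Round robin DNS: resolvers rotate among the A records,
; spreading clients across the three frontend nodes.
; Caveat: no health checks -- a dead node keeps receiving its
; share of traffic until its record is removed and the TTL
; (here 300 s) expires.
gateway  300  IN  A  203.0.113.10
gateway  300  IN  A  203.0.113.11
gateway  300  IN  A  203.0.113.12
```

The missing health checks are the main argument for moving to option b. once real traffic arrives.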
Thanks again for your thoughts on this.
There's just one thing I'm not fully decided about yet: the SSL termination.
`ANY` records which are required by our frontend CDN. Having read the above, I do think perhaps it makes sense to simply buy an old-school certificate and continue working with that. I am currently investigating prices. Any further thoughts or feedback are welcome, though.
After some thinking and research, it does seem that ACME allows for multiple certificates for the same domain. We can use certbot-dns-cloudflare for DNS-based challenges and can create wildcard certificates.
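As a sketch, issuing such a wildcard certificate with the certbot-dns-cloudflare plugin might look like the following; the credentials path and domain are placeholders:

```
# Sketch only: DNS-01 challenge via the Cloudflare API.
# cloudflare.ini holds the API token (placeholder path).
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d 'example.org' \
  -d '*.example.org'
```

Since the challenge is DNS-based, this can run independently on each frontend node, which is what allows multiple valid certificates for the same domain.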
We can then configure Hetzner's load balancer and, possibly later, a CDN for TCP-based, least-connections balancing with PROXY protocol.
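With TCP-mode balancing, the frontend nginx has to accept the PROXY protocol itself to recover client IPs. A sketch of the relevant server block, where the server name, certificate paths, balancer subnet and upstream are all assumptions:

```
# Sketch: nginx accepting PROXY protocol from the load balancer
# so client IPs survive TCP-mode balancing.
server {
    listen 443 ssl proxy_protocol;
    server_name example.org;                 # placeholder

    # Trust PROXY protocol headers only from the balancer (assumed subnet).
    set_real_ip_from 10.0.0.0/16;
    real_ip_header  proxy_protocol;

    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;    # upstream placeholder
    }
}
```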
As for LetsEncrypt/ACME with Cloudflare using Ansible, I would suggest we keep using certbot and just set it up with the Cloudflare plugin.
I would suggest we start with a cluster of 3 frontend nodes.
Thus far, we're serving all requests from a single `nginx` frontend node, running on a different provider. As we now have experience with Hetzner Cloud and linking it to the bare metal, we are ready to set up an actual frontend cluster and/or load balancing.
This issue serves as a place to clarify the specifications, after which the frontend can be deployed.