IgaoWolf opened this issue 1 year ago
@IgaoWolf, the NGINX configuration can look like this:
upstream cloudstack_webui_8080 {
    # Balance based on the source IP. You can also use cookies for persistence!
    ip_hash;
    server <MGMT1_IP>:8080;
    server <MGMT2_IP>:8080;
}
server {
    listen 80;
    server_name acs.<domain>.com.br;
    return 308 https://acs.<domain>.com.br$request_uri;
}
server {
    listen 443 ssl;
    server_name acs.<domain>.com.br;
    ssl_certificate /<path_to_certificate>;
    ssl_certificate_key /<path_to_certificate_key>;
    location / {
        proxy_pass http://cloudstack_webui_8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
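If it helps, a quick way to sanity-check the config before reloading (assuming nginx runs under systemd):

    nginx -t && systemctl reload nginx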
For agents, use the native feature described at https://www.shapeblue.com/software-based-agent-lb-for-cloudstack/
And with keepalived, make sure you check the nginx process in a vrrp_script, e.g. script "killall -0 nginx". Please also keep in mind that NGINX Open Source doesn't have an active health-check feature for backend servers.
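A minimal keepalived sketch for that check (not from the thread; the interface name, VIP, and priorities below are placeholder assumptions):

    vrrp_script chk_nginx {
        script "killall -0 nginx"   # exits non-zero when no nginx process exists
        interval 2
        fall 2
        rise 2
    }

    vrrp_instance VI_1 {
        state MASTER                # BACKUP on the second node
        interface eth0              # assumed NIC name
        virtual_router_id 51
        priority 101                # use a lower value, e.g. 100, on the BACKUP node
        virtual_ipaddress {
            <VIP>/24
        }
        track_script {
            chk_nginx
        }
    }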
You may also use HAProxy and follow the below reference configuration.
# WEBUI
frontend webui-http
    mode http
    bind *:80
    http-request redirect scheme https unless { ssl_fc }

frontend webui-https
    mode http
    bind *:443 ssl crt /path/to/cert_key.pem alpn http/1.1
    default_backend webui-8080

backend webui-8080
    mode http
    option forwardfor
    option httpchk HEAD /client/
    balance source
    server mgmt-01 <MGMT1_IP>:8080 check
    server mgmt-02 <MGMT2_IP>:8080 check
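You can validate the file before reloading (standard HAProxy check mode):

    haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy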
You may get quicker answers in the Users ML :)
Thanks, Jayanth Reddy
@IgaoWolf, in addition to https://github.com/apache/cloudstack/issues/8221#issuecomment-1807118065, bear in mind that ACS considers the first entry of the X-Forwarded-For header as the request source IP. Users are able to modify this header; therefore, if the firewall doing the NAT to your floating IP is not handling it properly, users may bypass some configurations, like api.allowed.source.cidr.list.
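One common mitigation, as a sketch rather than anything prescribed here, is to have the proxy overwrite X-Forwarded-For with the real client address instead of appending to it, so a client-supplied value never reaches ACS. In HAProxy that could look like:

    frontend webui-https
        bind *:443 ssl crt /path/to/cert_key.pem
        mode http
        # overwrite rather than append: discard any client-supplied X-Forwarded-For
        http-request set-header X-Forwarded-For %[src]
        default_backend webui-8080

(In nginx, setting proxy_set_header X-Forwarded-For $remote_addr; instead of $proxy_add_x_forwarded_for achieves the same.)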
Hello @zap51, everything alright?
As mentioned earlier, we were weighing nginx against HAProxy and ended up preferring HAProxy. So I'll be setting up this proxy to load-balance between the management servers, as well as for access via URL and SSL certificates so we can reach it by name. I just had one doubt: won't it also be necessary to configure port 8250, in addition to the HTTP/S ports?
Team, I just set up this file for ports 8250, 8080, and 443.
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    log loghost local0 info
    maxconn 8192
    chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    option http-server-close
    log global
    mode http
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5s
    timeout client 120s
    timeout server 120s

listen stats
    bind <IP-MGMT>:9245
    mode http
    stats hide-version
    stats enable
    stats uri /admin?stats
    stats realm Haproxy\ Statistics
    stats auth admin:<password>

frontend vs-acs-gui-http
    bind <IP-MGMT>:80
    bind <IP-MGMT>:443 ssl crt /etc/haproxy/certs/wildcard.crt
    http-request redirect scheme https if !{ ssl_fc }
    mode http
    option httplog
    option forwardfor
    maxconn 2000
    acl path_root path /
    redirect location https://acs.<domain>.com.br/client/ if path_root
    default_backend pool-acs-gui-http

frontend vs-acs-gui-https-int
    bind <IP-MGMT-INT>:443 ssl crt /etc/haproxy/certs/wildcard.crt
    mode http
    option httplog
    option forwardfor
    maxconn 2000
    acl path_root path /
    redirect location https://acs.<domain>.com.br/client/ if path_root
    default_backend pool-acs-gui-http

backend pool-acs-gui-http
    mode http
    option httplog
    option forwardfor
    balance leastconn
    fullconn 2000
    cookie SERVERID insert indirect
    server mgmt-01 <MGMT1_IP>:8080 maxconn 1000 check inter 5s cookie mgmt-01
    server mgmt-02 <MGMT2_IP>:8080 maxconn 1000 check inter 5s cookie mgmt-02

frontend vs-acs-tcp-8250
    bind <IP-MGMT>:8250
    mode tcp
    option tcplog
    maxconn 2000
    default_backend pool-acs-tcp-8250

backend pool-acs-tcp-8250
    mode tcp
    option tcplog
    balance source
    hash-type consistent
    fullconn 2000
    server mgmt-01 <MGMT1_IP>:8250 maxconn 1000 check inter 5s
    server mgmt-02 <MGMT2_IP>:8250 maxconn 1000 check inter 5s
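One thing worth double-checking (my suggestion, borrowed from the reference config earlier in the thread): the 8080 backend above only does TCP checks, so a management server stays in rotation as long as the port answers, even if the UI itself is broken. An HTTP check against the UI path catches that:

    backend pool-acs-gui-http
        # check the UI path instead of just the TCP port
        option httpchk HEAD /client/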
Hi,
Please read https://www.shapeblue.com/software-based-agent-lb-for-cloudstack
Thanks, Jayanth
Is this resolved, can we close the ticket?
Good afternoon, everyone. How are you?
I am implementing nginx for load balancing between the management servers. I currently have two in my environment, and demand is getting close to the point where I will need to add more. Right now one serves as the master and the other as a backup, both with keepalived configured so that a VIP comes up after checking whether CloudStack is running.
I would like to know if anyone has a template I can use as a reference to correct my file. This is my first time implementing nginx to balance the load between the two management servers.
Currently, I am only using keepalived for cases of connection loss and for bringing up the VIP.
I have a file I put together, but I would like to compare it with a setup someone has in production and hear how it is working.
I will also leave the template I am creating for this implementation below:
cloudstack_proxy.conf
upstream cloudstack_backend {
    server <MGMT1_IP>:8250;
    server <MGMT2_IP>:8250;
}

upstream cloudstack_backend_8080 {
    server <MGMT1_IP>:8080;
    server <MGMT2_IP>:8080;
}

server {
    listen 80;
    server_name acs.<domain>.com.br;
}

server {
    listen 443 ssl;
    server_name acs.<domain>.com.br;
}

server {
    listen 8080;
    server_name acs.<domain>.com.br;
}

server {
    listen 8250;
    server_name acs.<domain>.com.br;
}
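One note on the draft above: port 8250 carries the raw TLS connection from the agents, not HTTP, so nginx cannot proxy it from an http server block; it needs the stream module. A minimal sketch, assuming the same two management servers (the IP placeholders are mine):

    stream {
        upstream cloudstack_agent_8250 {
            # keep each agent pinned to the same management server
            hash $remote_addr consistent;
            server <MGMT1_IP>:8250;
            server <MGMT2_IP>:8250;
        }
        server {
            listen 8250;
            proxy_pass cloudstack_agent_8250;
        }
    }

That said, the native agent LB linked earlier in the thread is usually the simpler route for 8250.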