maxee opened this issue 3 years ago
Linking to the bigbluebutton/bigbluebutton issue https://github.com/bigbluebutton/bigbluebutton/issues/12291
We just templated that bug out with Ansible:
```nginx
upstream poolhtml5servers {
  zone poolhtml5servers 32k;
  least_conn;
  server 10.7.7.200:4100 fail_timeout=10s max_fails=4 backup;
{% for n in range((vars.meteor_backend_processes + vars.meteor_frontend_processes) | int) %}
  server {{ '10.7.7.201' | ipmath(n) }}:{{ 4101 + n }} fail_timeout=120s max_fails=1;
{% endfor %}
}
```
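For concreteness, here is a small shell sketch of what that Jinja2 loop expands to. The process counts are example values, not taken from the playbook; `ipmath(n)` simply adds `n` to the base address, which here only bumps the last octet.

```shell
# Expand the upstream pool the way the template's loop does.
# $1 = backend process count, $2 = frontend process count (both assumptions).
expand_pool() {
  total=$(( $1 + $2 ))
  n=0
  while [ "$n" -lt "$total" ]; do
    # ipmath(n) on 10.7.7.201 increments the last octet here
    echo "server 10.7.7.$((201 + n)):$((4101 + n)) fail_timeout=120s max_fails=1;"
    n=$((n + 1))
  done
}

# Example: 2 backend + 2 frontend processes
expand_pool 2 2
```

This prints one `server` line per process, from `10.7.7.201:4101` up to `10.7.7.204:4104`.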
Hi @chfxr, you only need the number of meteor_frontend_processes here, don't you? At least, as I understood it after reading the bigbluebutton issue. I came up with the following template:
```nginx
upstream poolhtml5servers {
  zone poolhtml5servers 32k;
  least_conn;
{% for i in range(bbb_html5_frontend_processes | default(2) | int(2)) %}
  server 127.0.0.1:410{{ i }} fail_timeout=5s max_fails=3;
{% endfor %}
}
```
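For reference, with the fallback value of 2 frontend processes this template renders to:

```nginx
upstream poolhtml5servers {
  zone poolhtml5servers 32k;
  least_conn;
  server 127.0.0.1:4100 fail_timeout=5s max_fails=3;
  server 127.0.0.1:4101 fail_timeout=5s max_fails=3;
}
```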
Problem
The `nginx` service expects 8 instances of `html5-frontend` for load-balancing in `/etc/nginx/conf.d/default.conf`. However, the default env file `.env.sample` enables only one instance of `html5-frontend` (the one declared as `backup` in the nginx config). As the other seven instances are never started and therefore not available, nginx cycles through all of them; only once connecting to every upstream instance has failed does it fall back to the (only working) backup instance.
This behavior adds at least 10 seconds of loading/connection time.
Workaround
Method 1
Change `NUMBER_OF_FRONTEND_NODEJS_PROCESSES=1` in `.env` to `8`, then rebuild and restart the whole stack.
Method 2
Remove the seven frontend instances and keep only the current backup instance:
```shell
docker-compose exec nginx /bin/ash
vi /etc/nginx/conf.d/default.conf
exit
docker-compose restart nginx
```
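Method 2 can also be done non-interactively. The sketch below assumes the `server` lines in `default.conf` look like the upstream blocks quoted earlier; the sed expression drops every upstream `server` line that is not the backup. It is demonstrated on a sample file here — on a real install you would run the same sed command (with `-i`) on `/etc/nginx/conf.d/default.conf` inside the nginx container and then restart nginx.

```shell
# Delete every "server ... fail_timeout ..." line that lacks "backup".
keep_only_backup() {
  sed '/^ *server .*fail_timeout/{/backup/!d;}' "$1"
}

# Sample fragment standing in for /etc/nginx/conf.d/default.conf:
cat > /tmp/upstream-sample.conf <<'EOF'
upstream poolhtml5servers {
  least_conn;
  server 10.7.7.200:4100 fail_timeout=10s max_fails=4 backup;
  server 10.7.7.201:4101 fail_timeout=120s max_fails=1;
  server 10.7.7.202:4102 fail_timeout=120s max_fails=1;
}
EOF

# Only the backup server line survives:
keep_only_backup /tmp/upstream-sample.conf
```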
Solution
The nginx config should only expect as many instances of `html5-frontend` as specified in `.env`.
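One way to implement that is to generate the upstream block from the variable `.env` already defines, before nginx reads its config. This is only a sketch: the host and port layout (`127.0.0.1`, ports `4100 + i`, matching the template earlier) are assumptions about the deployment.

```shell
# Size the upstream pool from NUMBER_OF_FRONTEND_NODEJS_PROCESSES.
generate_upstream() {
  n="${NUMBER_OF_FRONTEND_NODEJS_PROCESSES:-1}"   # .env.sample default is 1
  echo "upstream poolhtml5servers {"
  echo "  zone poolhtml5servers 32k;"
  echo "  least_conn;"
  i=0
  while [ "$i" -lt "$n" ]; do
    echo "  server 127.0.0.1:$((4100 + i)) fail_timeout=5s max_fails=3;"
    i=$((i + 1))
  done
  echo "}"
}

# With the .env.sample default of one frontend, only one server line is
# emitted, so nginx never probes dead upstreams:
NUMBER_OF_FRONTEND_NODEJS_PROCESSES=1
generate_upstream
```

In an entrypoint script, the output would be redirected into the nginx conf directory before nginx starts, keeping the upstream list in lockstep with the number of running frontend instances.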