CorvusGei opened this issue 1 year ago
Without knowing more details about your setup there is not much I can do to help. Are you running behind a reverse proxy? Did you make sure these routes are properly set up for WebSockets? https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-docker#running-behind-a-reverse-proxy
I had this warning before as well. In my case the issue was that the videobridge couldn't be reached, hence video performance was bad.
I needed to set `JVB_WS_SERVER_ID=127.0.0.1`. However, this may not work for you, because you run Docker and the scripts (usually) detect JVB's IP address correctly. I run my setup in podman within a pod with custom networking (flag `--net=none`), so I needed that variable because I do not use podman's internal DNS service.
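For reference, the variable goes into the `.env` file of the Docker setup. A sketch (the value `127.0.0.1` only applies when the web and JVB containers can reach each other over loopback, as in my pod):

```env
# Sketch: tell JVB which server-id to announce for Colibri WebSockets.
# 127.0.0.1 assumes web and JVB share a network namespace; adjust otherwise.
JVB_WS_SERVER_ID=127.0.0.1
```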
I tried to set JVB_WS_SERVER_ID; it didn't work for me.
I opened firewall ports 4443/TCP and 10000/UDP as written in the self-hosting guide.
I added

```nginx
location /xmpp-websocket {
    proxy_pass https://localhost:48443;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location /colibri-ws {
    proxy_pass https://localhost:48443;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

to my nginx config (I changed 8443 to 48443, because 8443 is already used for another service), as described in the self-hosting guide, too.
In my case I got the hint from the logs of the web container. Maybe you see something there which hints that a connection could not be established, e.g. with `docker logs jitsi-web` or similar, depending on what the name of the web container is.
> I tried to set JVB_WS_SERVER_ID; it didn't work for me. I opened firewall ports 4443/TCP and 10000/UDP as written in the self-hosting guide. I added [the /xmpp-websocket and /colibri-ws locations] to my nginx config (I changed 8443 to 48443, because 8443 is already used for another service), as described in the self-hosting guide, too.
I wonder if the problem could be that your proxy contacts a web server with a self-signed certificate.
Can you try to forward the traffic from your reverse proxy to the Jitsi stack over HTTP?
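Something like this, as a sketch (port 8000 assumes `HTTP_PORT=8000` in `.env`; adjust both locations to your deployment):

```nginx
# Sketch: terminate TLS at the reverse proxy and talk plain HTTP to the
# Jitsi web container, so the proxy never sees its self-signed certificate.
location /xmpp-websocket {
    proxy_pass http://localhost:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location /colibri-ws {
    proxy_pass http://localhost:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```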
I have a domain.tld with a proper SSL certificate. I use Let's Encrypt for subdomains like meet.domain.tld via a Plesk extension and mounted the paths to the certificates as volumes in the docker-compose.yml (as described in the self-hosting Docker guide). It's working: the browsers say "Connection secure" and "Verified by Let's Encrypt". But I'll try to disable the HTTP-to-HTTPS redirection in the next days.
I deactivated the HTTP-to-HTTPS redirection in nginx and added ENABLE_HTTP_REDIRECT=1 to the .env file. It didn't help. Meanwhile I upgraded to version 8319, used the manual on https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-docker/ and adjusted my nginx configuration. (I use Plesk, which generates the nginx configs, and I have to correct them afterwards.) Do you have any suggestions on what is needed to make the Jitsi Videobridge work?
You are conflating a number of things. The JVB communication does not go over XMPP WebSockets, those are the Colibri WebSockets. Which one is failing for you?
Sorry for my confusing description. Here is the error (translated from my German deployment): "Bad video quality - the control connection (bridge channel) has been interrupted, therefore the video quality is limited to the worst level." I deployed it exactly following the manual; only my nginx reverse proxy is configured separately through Plesk, but I added the sections for Colibri and XMPP from the manual.
That message means the colibri channel is not working.
Check if the request arrives at the web container from Plesk.
I have the same problem. I solved it by setting JVB_WS_SERVER_ID and customising meet.conf. I think this is a bug.
In `web/rootfs/default/meet.conf` the WebSocket proxying looks like this:
```nginx
{{ if $ENABLE_COLIBRI_WEBSOCKET }}
# colibri (JVB) websockets
location ~ ^/colibri-ws/([a-zA-Z0-9-\._]+)/(.*) {
    tcp_nodelay on;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_pass http://$1:{{ $COLIBRI_WEBSOCKET_PORT }}/colibri-ws/$1/$2$is_args$args;
}
```
So the address of the JVB server and the WS server ID are extracted from the second part of the URL and MUST be the same. But for me, with JVB_WS_SERVER_ID set, the second part of the URL is empty.
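To illustrate how the two capture groups are meant to split the request path (the URL and values here are made up for the example):

```shell
# Mimic nginx's regex ^/colibri-ws/([a-zA-Z0-9-._]+)/(.*) with sed.
# $1 (the server id) is used as the upstream host; if JVB announces an
# empty server-id, this first segment is missing and proxy_pass fails.
url="/colibri-ws/192.0.2.10/abc123/room?pwd=x"
server_id=$(printf '%s' "$url" | sed -E 's|^/colibri-ws/([a-zA-Z0-9._-]+)/(.*)$|\1|')
rest=$(printf '%s' "$url" | sed -E 's|^/colibri-ws/([a-zA-Z0-9._-]+)/(.*)$|\2|')
echo "$server_id"   # first path segment -> upstream host
echo "$rest"        # remainder forwarded to JVB
```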
In the JVB config `jvb/rootfs/default/jvb.conf` we have:
```
{{ $WS_DOMAIN := .Env.JVB_WS_DOMAIN | default $PUBLIC_URL_DOMAIN -}}
{{ $WS_SERVER_ID := .Env.JVB_WS_SERVER_ID | default .Env.JVB_WS_SERVER_ID_FALLBACK -}}
...
websockets {
    enabled = {{ $ENABLE_COLIBRI_WEBSOCKET }}
    domain = "{{ $WS_DOMAIN }}"
    tls = true
    server-id = "{{ $WS_SERVER_ID }}"
}
```
JVB_WS_SERVER_ID_FALLBACK is never set or read from compose, so WS_SERVER_ID was empty, and it has no relation to the server address used in meet.conf.
In my case, setting JVB_WS_SERVER_ID to 127.0.0.1 worked because the web and JVB containers were on the same server. If they are not, you can force the meet.conf with:
```nginx
location ~ ^/colibri-ws/([a-zA-Z0-9-\._]+)/(.*) {
    tcp_nodelay on;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_pass http://myjvbserver:9090/colibri-ws/$1/$2$is_args$args;
}
```
But for me the best solution was to fix the ENV configuration so that the correct WS server ID is set in jvb.conf.
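With the variable set, the rendered jvb.conf should come out roughly like this (a sketch; the domain and address are example values from my setup):

```
websockets {
    enabled = true
    domain = "meet.domain.tld"
    tls = true
    server-id = "127.0.0.1"
}
```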
The fallback is set here: https://github.com/jitsi/docker-jitsi-meet/blob/8555fe1c4a7ea434960ec61e7774f1091400d16a/jvb/rootfs/etc/cont-init.d/10-config#L27
And since it is exported before we build the template, it should be picked up.
> In my case, setting JVB_WS_SERVER_ID to 127.0.0.1 worked because the web and JVB containers were on the same server. If they are not, you can force the meet.conf [...]
That is very weird because the containers do have different IPs.
My Jitsi Meet instance has video performance problems and none of the workarounds I found via Google worked. My server has plenty of resources left, so I don't think the server is the cause.

![image](https://user-images.githubusercontent.com/101200194/216481972-2bdcf559-3ea7-4d6a-865f-cfe649296123.png)
Maybe you could update the self-hosting guide on how to set this up correctly with Docker.

![image](https://user-images.githubusercontent.com/101200194/216481752-d7e9d32a-76f1-48f1-ae1d-42506a79e2b4.png)