jitsi / docker-jitsi-meet

Jitsi Meet on Docker
https://hub.docker.com/u/jitsi/
Apache License 2.0

Clients try to use ports other than 10000/udp #113

Closed · immanuelfodor closed this issue 5 years ago

immanuelfodor commented 5 years ago

Hi guys,

We have a dockerized Jitsi instance running behind NAT. Ports 443/tcp (through an nginx reverse proxy with Let's Encrypt SSL certs) and 10000/udp (forwarded directly to the Jitsi VM in Proxmox) are open on the external firewall. The Ubuntu VM hosting the Jitsi Docker containers has ufw enabled with ports 8443/tcp and 10000/udp open. Port 443 at the reverse proxy is mapped to https://jitsi-vm:8443, and the Docker host env variable is set to the Jitsi VM's LAN IP address.
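
For completeness, the ufw side on the VM is essentially just this (a sketch of the setup described above):

```
# on the Ubuntu VM hosting the Jitsi containers
sudo ufw allow 8443/tcp    # HTTPS port published by the web container
sudo ufw allow 10000/udp   # JVB media port
sudo ufw status verbose    # confirm both rules are active
```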

When both participants are on the same LAN, video conferencing with all the bells and whistles works fine: laptop to laptop, laptop to mobile, and mobile to mobile. When one or both participants move to an external network (e.g., mobile internet for a mobile client, or a mobile hotspot for a laptop), both can join the meeting, but as soon as the two of them are in the same room, video and audio are unavailable for both parties, the connection issues message is displayed, and only the chat works.

When we tried to debug the issue, the firewall live log showed that packets from and to port 10000/udp are accepted at first (an in-out pair), but after those two successful packets many packets are dropped on various other UDP ports in the range of 1400-65000 over repeated connection attempts. The ports seem to be chosen randomly between connection attempts but stay the same for a session. These dropped packets are causing the connection problems, as no video or audio is received on either end. The Jitsi Docker images were built locally from the up-to-date dev branch using make yesterday, then an unmodified docker-compose file was used to bring them online. Everything runs as a non-root user that has been added to the docker group so commands can be run without sudo.

It is not an option to open up other UDP ports with port forwards to the Jitsi VM; clients should only use 10000/udp. How could we debug this further? Do you recognize a typical misconfiguration here that we could tick off the list quickly to narrow the problem space? Or is there a bug somewhere that makes the clients seem to ignore the single port harvester setting?
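
If it helps, we can capture the stray traffic directly on the VM with something like this (a rough sketch; eth0 is an assumption, substitute the VM's actual interface):

```
# show UDP traffic that is NOT using the expected media port
sudo tcpdump -ni eth0 'udp and not port 10000'
```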

saghul commented 5 years ago

> We have a dockerized Jitsi instance running behind NAT. Ports 443/tcp (through an nginx reverse proxy with Let's Encrypt SSL certs) and 10000/udp (forwarded directly to the Jitsi VM in Proxmox) are open on the external firewall. The Ubuntu VM hosting the Jitsi Docker containers has ufw enabled with ports 8443/tcp and 10000/udp open. Port 443 at the reverse proxy is mapped to https://jitsi-vm:8443

Why are you using HTTPS in the container if you are doing the TLS stuff outside of it? You could just proxy to the HTTP endpoint instead.
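
For instance, a rough nginx sketch of that idea (assuming the web container publishes plain HTTP on the default port 8000 from .env; hostnames and cert paths are placeholders):

```
server {
    listen 443 ssl;
    server_name meet.example.com;

    # TLS terminates at the proxy; certificates stay outside the container
    ssl_certificate     /etc/letsencrypt/live/meet.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/meet.example.com/privkey.pem;

    location / {
        proxy_pass http://jitsi-vm:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;

        # BOSH/websocket signalling needs HTTP/1.1 and upgrade headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```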

> Docker host env variable is set to the Jitsi VM's LAN IP address.

This is incorrect, it must map the public IP.

> but many packets are dropped on various other UDP ports in the range of 1400-65000

Who is sending those packets?

> These dropped packets are causing the connection problems, as no video or audio is received on either end.

I think the problem is the incorrect DOCKER_HOST_ADDRESS value. Put the public IP there and give it another try.
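
Something along these lines (the IP is only a placeholder, and I'm assuming the stock docker-compose.yml where the bridge service is named jvb):

```
# .env in the docker-jitsi-meet checkout
DOCKER_HOST_ADDRESS=203.0.113.10    # the public IP, not the VM's LAN address

# recreate the bridge so it picks up the new value
docker-compose up -d --force-recreate jvb
```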

Cheers!

immanuelfodor commented 5 years ago

In #11, we discovered that it only worked through localhost when the connection was HTTPS (to make WebRTC work in Chrome), so it was a historical decision. We can try proxying to the HTTP port, of course, but it should not be the problem, as you mention.

Okay, we'll try it with the public IP. If the IP changes, the setting needs to be kept up to date; is there an automatic way to do that here, e.g., by providing a domain name? We could monitor the external IP and make the change via bash, but if that can be avoided, it'd be great. We set the VM's address here because we thought the Docker environment counts as the local side and the VM hosting Docker counts as the public side.
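
If we end up scripting it, the idea would be roughly this (a naive sketch; the IP lookup service and file paths are just examples):

```
#!/usr/bin/env bash
# update DOCKER_HOST_ADDRESS when the public IP changes, then recreate the bridge
set -euo pipefail

ENV_FILE=/opt/docker-jitsi-meet/.env              # example path
CURRENT=$(curl -fsS https://ifconfig.me)          # any "what is my IP" service works
CONFIGURED=$(grep '^DOCKER_HOST_ADDRESS=' "$ENV_FILE" | cut -d= -f2)

if [ "$CURRENT" != "$CONFIGURED" ]; then
    sed -i "s/^DOCKER_HOST_ADDRESS=.*/DOCKER_HOST_ADDRESS=$CURRENT/" "$ENV_FILE"
    (cd "$(dirname "$ENV_FILE")" && docker-compose up -d --force-recreate jvb)
fi
```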

For example, the Jitsi Meet mobile client when the mobile is connecting over an external network. The first packets go to 10000/udp, then it sends packets to other ports, hence there is no video or audio. I assume chat goes over HTTP, which is why it works.

Okay, we'll definitely try and report it back within a day. Thank you!

saghul commented 5 years ago

> If the IP changes, the setting needs to be kept up to date; is there an automatic way to do that here, e.g., by providing a domain name?

Nope, sorry.

> For example, the Jitsi Meet mobile client when the mobile is connecting over an external network. The first packets go to 10000/udp, then it sends packets to other ports, hence there is no video or audio.

I'm not sure what that traffic is about, but in principle the client will only send traffic to candidates advertised by the bridge.

damencho commented 5 years ago

Well, in the scenario you describe, one participant is on the internal network and one is on the external network, so I suppose these are just p2p connection attempts, which use random ports. If you disable p2p you should not see those.
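
Roughly this in config.js (in the Docker setup the generated file normally lives in the web config volume, e.g. ~/.jitsi-meet-cfg/web/config.js; the exact path is an assumption):

```
// config.js -- note that this only affects web clients
p2p: {
    enabled: false
},
```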

immanuelfodor commented 5 years ago

Well, it could be, nice catch! I'll add it to the list of changes needed; as far as I can see, the following should be tested:

- set DOCKER_HOST_ADDRESS to the public IP instead of the VM's LAN IP
- proxy to the container's plain HTTP port instead of the HTTPS one
- disable p2p in config.js

The config.js change would only affect web clients and not the mobile apps, but if the Docker host address is the main problem, it should not cause trouble even if mobiles keep sending blocked UDP packets on ports other than 10000.

Thanks for the ideas, I'll report back what we see after the test.

immanuelfodor commented 5 years ago

Implemented all the above points, and it didn't help. The symptoms are the same: the second participant joins from an external network, both participants can see/hear nothing, and later the connectivity issues message shows up. A local LAN conference on the same wifi works fine. Tested with two mobile apps, then one switched to 4G, and the problems appeared. Both Jitsi Meet apps were restarted between the network switch. Tomorrow I'll check the same with two web clients and with one web + one mobile client, and also look at the firewall logs. I'll also experiment with disabling ufw on the Docker VM, as I saw somewhere that it can cause problems with iptables/netfilter. STUN and other settings were not modified; I'm curious why the connection doesn't work out of the box over NAT.
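
Before disabling ufw outright, I'll check whether it is actually the one dropping packets while a call is running, roughly like this (assuming ufw logging is available on the VM):

```
# ufw tags denied packets with an [UFW BLOCK] prefix in the kernel log
sudo ufw logging on
sudo journalctl -kf | grep 'UFW BLOCK'
```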

damencho commented 5 years ago

Have you done the port forwarding from the public IP address to the private one? Similar to this: https://github.com/jitsi/jitsi-meet/blob/master/doc/quick-install.md#advanced-configuration
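
On a plain Linux gateway that forwarding would look roughly like this (only the media port is shown, since 443 already goes through the reverse proxy; addresses and the interface name are placeholders):

```
# forward the JVB media port from the WAN interface to the Jitsi VM
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 10000 -j DNAT --to-destination 10.0.0.5:10000
iptables -A FORWARD -p udp -d 10.0.0.5 --dport 10000 -j ACCEPT
```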

immanuelfodor commented 5 years ago

Of course, these are open.

Today we tried with two laptops and with one laptop + one mobile, and we made an interesting find. The laptops connect fine from an external network (on mobile hotspots) with the above 3 changes. However, when a mobile client connected on its own, picture and audio were lost. We also tried with 2 laptops on the LAN and the mobile connecting from 4G as a third participant, and only the mobile was affected. That made it obvious that the mobile client was somehow misbehaving; the issue seemed to be isolated to mobile.

At this point, I realized that the mobile was running Blokada, which is a localhost VPN loop for adblocking (similar to DNS66). When it was turned off, both laptops could see the mobile and vice versa. So the Jitsi setup is fine with the changes you suggested, and everyone is happy once Blokada is turned off on the mobiles. This is also why it didn't work yesterday, when we only had time to test with two mobile phones. I hope this find can help others later on.

Thank you very much for your support and ideas! We'll continue to debug why Blokada is blocking the Jitsi 10000/udp traffic, and I'm closing this issue right now. Have a nice weekend!

immanuelfodor commented 5 years ago

Quick follow-up: it seems adding the Jitsi Meet app to the whitelist in Blokada eliminates the issue, so there is no need to turn off the whole service.

rebelga commented 4 years ago

I just had this EXACT problem and did not suspect Blokada. Thank you for documenting all this.