nextcloud / spreed

🗨️ Nextcloud Talk – chat, video & audio calls for Nextcloud
https://nextcloud.com/talk
GNU Affero General Public License v3.0

Nextcloud Talk can't handle more than 1 guest at a time #746

Closed Svarto closed 6 years ago

Svarto commented 6 years ago

Steps to reproduce

  1. Install Nextcloud Talk
  2. Connect to a COTURN server on an external VPS installed with strukturag/docker-webrtc-turnserver (a sketch of the kind of coturn configuration involved follows after these steps)
  3. Call in from LAN and have 2 or more guests join via Group Call share link
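
For context, Nextcloud Talk expects the TURN server to use shared-secret ("TURN REST API") authentication, with the same secret entered under Talk's STUN/TURN settings. A minimal coturn turnserver.conf along those lines would look roughly like the sketch below; every value is an illustrative placeholder rather than my actual configuration (the port 5733 only mirrors what shows up in the logs further down):

    listening-port=5733
    fingerprint
    use-auth-secret
    static-auth-secret=<same shared secret as entered in Talk's TURN server settings>
    realm=turn.example.com
    stale-nonce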

Expected behaviour

Both guests and the host can hear each other and have a conversation.

Actual behaviour

Either the current call is interrupted and everything breaks down, or only the two guests can hear each other, while the host can only hear and communicate with the first guest (i.e. the host can't hear the newly joined guest).

Browser

Microphone available: yes

Camera available: no

Operating system: Windows

Browser name: Firefox

Browser version: 59.0.1 (64-bit)

Browser log

ICE failed, your TURN server appears to be broken, see about:webrtc for more details
connectivityError event received simplewebrtc.js:18115:4
SimpleWebRTC/< https://nextcloud.svarto-server.com/custom_apps/spreed/js/simplewebrtc.js:18115:4 [100]

Spreed app

Spreed app version: 3.1.0

Custom TURN server configured: yes

Custom STUN server configured: yes

Server configuration

Operating system: Debian

Web server: Apache

Database: MySQL

PHP version: 7.0

Nextcloud version: 13.0.1

List of activated apps:

PDF viewer, Activity, Collaborative tags, Comments, Deleted files, External storage support, Federation, File sharing, First run wizard, Gallery, Log Reader, Monitoring, Nextcloud announcements, Notifications, Password policy, Share by mail, Talk, Tasks, Text editor, Theming, Update notification, Usage survey, Versions, Video player, Auditing / Logging, Default encryption module, External user support, LDAP user and group backend

Nextcloud configuration:

Docker image, no changes to the config from the standard setup

Turnserver log

Mar 24 12:58:48 Debianwebhost turnserver: 1031: session 001000000000000030: new, realm=, username=<1521982722>, lifetime=600
Mar 24 12:58:48 Debianwebhost turnserver: 1031: session 001000000000000030: realm user <1521982722>: incoming packet ALLOCATE processed, success
Mar 24 12:58:49 Debianwebhost turnserver: 1032: IPv4. tcp or tls connected to: 212.51.149.199:12093
Mar 24 12:58:49 Debianwebhost turnserver: 1032: session 000000000000000032: realm user <>: incoming packet message processed, error 401: Unauthorized
Mar 24 12:58:49 Debianwebhost turnserver: 1032: session 000000000000000032: realm user <1521981703>: incoming packet ALLOCATE processed, error 486: Allocation Bandwidth Quota Reached
Mar 24 12:58:49 Debianwebhost turnserver: 1032: session 000000000000000032: realm user <1521981703>: incoming packet message processed, error 486: Allocation Bandwidth Quota Reached
Mar 24 12:58:49 Debianwebhost turnserver: 1032: IPv4. tcp or tls connected to: 213.55.184.204:46257
Mar 24 12:58:49 Debianwebhost turnserver: 1032: IPv4. tcp or tls connected to: 213.55.184.204:40287
Mar 24 12:58:49 Debianwebhost turnserver: 1032: session 000000000000000033: realm user <>: incoming packet message processed, error 401: Unauthorized
Mar 24 12:58:49 Debianwebhost turnserver: 1032: session 000000000000000034: realm user <>: incoming packet BINDING processed, success
Mar 24 12:58:49 Debianwebhost turnserver: 1032: session 000000000000000033: realm user <1521981832>: incoming packet ALLOCATE processed, error 486: Allocation Bandwidth Quota Reached
Mar 24 12:58:49 Debianwebhost turnserver: 1032: session 000000000000000033: realm user <1521981832>: incoming packet message processed, error 486: Allocation Bandwidth Quota Reached
Mar 24 12:58:50 Debianwebhost turnserver: 1033: session 000000000000000029: refreshed, realm=, username=<1521982722>, lifetime=0
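
The repeated "error 486: Allocation Bandwidth Quota Reached" entries look like the TURN server refusing further allocations once some quota is hit. If I understand coturn's options correctly, those limits are controlled by settings along the lines of the sketch below in turnserver.conf (illustrative values, not my actual config; 0 normally means unlimited):

    # allocation quotas (number of simultaneous allocations)
    user-quota=0
    total-quota=0
    # bandwidth limits
    max-bps=0
    bps-capacity=0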
jospoortvliet commented 6 years ago

Hmm, this WorksForMe™, even with half a dozen guests - but considering the very first line in your browser log says that your TURN server appears to be broken and that you should look at about:webrtc for more details, I suggest adding that info here too, or using it to debug the TURN server ;-)
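
If it helps, a quick way to see whether the browser can reach your TURN server at all is to ask for relay-only candidates from the browser console; something roughly like the snippet below (host, port, username and credential are placeholders you would have to fill in, e.g. with the time-limited values Talk generates):

    // Force relay-only gathering: any candidate printed means the TURN allocation worked.
    const pc = new RTCPeerConnection({
      iceServers: [{
        urls: 'turn:turn.example.com:5733?transport=udp',
        username: 'PLACEHOLDER_USERNAME',
        credential: 'PLACEHOLDER_PASSWORD'
      }],
      iceTransportPolicy: 'relay'
    });
    pc.createDataChannel('turn-test'); // need at least one media/data section to gather candidates
    pc.onicecandidate = (e) => {
      if (e.candidate) console.log('relay candidate:', e.candidate.candidate);
    };
    pc.createOffer().then((offer) => pc.setLocalDescription(offer));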

Svarto commented 6 years ago

Apologies, I actually laughed reading your answer - I was a bit stupid to not include that.

No idea what this means; I tried to google "link local mis-match" but didn't find anything that helped me solve this...

Here goes:

(ice/INFO) ICE(PC:1521907487011000 (id=8589934603 url=https://nextcloud.example.com/call/cr4o5j8x)): Skipping STUN server because of link local mis-match

(ice/INFO) ICE(PC:1521907487011000 (id=8589934603 url=https://nextcloud.example.com/call/cr4o5j8x)): Skipping TURN server because of link local mis-match

(ice/INFO) ICE(PC:1521907487011000 (id=8589934603 url=https://nextcloud.example.com/call/cr4o5j8x)): Skipping STUN server because of link local mis-match

(ice/INFO) ICE(PC:1521907487011000 (id=8589934603 url=https://nextcloud.example.com/call/cr4o5j8x)): Skipping TURN server because of link local mis-match

(ice/INFO) z:/build/build/src/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:173 function nr_socket_multi_tcp_create_stun_server_socket skipping UDP STUN server(addr:IP4:external_turnserver_ip:5733/UDP)

(ice/INFO) z:/build/build/src/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:173 function nr_socket_multi_tcp_create_stun_server_socket skipping UDP STUN server(addr:IP4:external_turnserver_ip:5733/UDP)

(ice/INFO) z:/build/build/src/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:179 function nr_socket_multi_tcp_create_stun_server_socket skipping STUN with different IP version (4) than local socket (6),

(ice/WARNING) z:/build/build/src/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:617 function nr_socket_multi_tcp_listen failed with error 3

(ice/WARNING) ICE(PC:1521907487011000 (id=8589934603 url=https://nextcloud.svarto-server.com/call/cr4o5j8x)): failed to create passive TCP host candidate: 3)
nickvergessen commented 6 years ago

Duplicate of #649

But yeah, we can't reproduce this.