Nicba1010 closed this issue 1 year ago.
I am thinking this should be possible already. What matters for RTC connectivity is the client-facing IP address of the instance running the plugin, so it doesn't really require a domain. If needed, this host can be overridden through the ICE Host Override config setting.
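For reference, a minimal sketch of what that override could look like directly in the server's config.json. The exact key name under the plugin's settings is an assumption based on the System Console label, and the IP is a placeholder; normally you would just set this through the System Console UI:

```json
{
  "PluginSettings": {
    "Plugins": {
      "com.mattermost.calls": {
        "icehostoverride": "203.0.113.10"
      }
    }
  }
}
```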
Oh I apologize, I am not really aware of the whole terminology in this yet. I'll try the ICE host override.
@Nicba1010 Let us know if you were able to make this work or if you need support with it. Happy to help.
I was trying to set up a similar Calls plugin instance as @Nicba1010, but for a different reason: our Mattermost server is hosted on an Azure AKS instance, which does not (yet) allow a load balancer with a mixed TCP/UDP setup. So I have set up my usual HTTP(S) ingress on one service, and another LoadBalancer service with a different IP that exposes port 8443 on the Mattermost service.
Next, I have configured the ICE Host Override in the System Console, pointing to the new hostname of the UDP load balancer listening on port 8443.
However, when I try to start a call, I can see in the browser JS console that the client still tries to connect on port 443 of the normal ingress (i.e. the hostname of the HTTPS Mattermost service), which does not have the UDP port enabled, so after 10 seconds I get a connection failure.
I've also tried to change the RTC Server Port to something arbitrary like 10000, but the UDP connection still goes out to port 8443.
So it almost seems as if the System Console values for the ICE host override and the RTC Server Port are not picked up by the client browser at all?
I'm using the Mattermost v7.7.0 container, team edition.
@cedricroijakkers Did you restart the plugin? Changing those settings requires a restart.
Your steps look fine to me. As for the RTC Server Port, it needs to match what is exposed on the load balancer, so make sure it's the same.
However, when I try to start a call, I can see in the browser JS console that the client still tries to connect on port 443 of the normal ingress (i.e. the hostname of the HTTPS Mattermost service)
I am not sure how that's possible if we are talking about the RTC connection. You should be looking at the remote candidates (console logs starting with remote signal).
@streamer45 well, that was indeed stupid. I've saved the configuration in the System Console and restarted the whole Mattermost pod in Kubernetes for good measure, and I now see the connection being established to the correct hostname.
The following works in a network-limited (not allowing TCP and UDP on the same public IP) Kubernetes environment (Azure AKS in our case):

1. Create your usual Ingress for the Mattermost hostname (e.g. mattermost.example.com), pointing to the ClusterIP service mentioned above on port 8000; set up your usual Let's Encrypt SSL certificate and other Ingress settings as required (this creates a LoadBalancer service which sends traffic to port 80/443 of your Ingress).
2. Create a separate LoadBalancer service exposing UDP port 8443 on the Mattermost pods, with its own hostname (e.g. mattermost-rtc.example.com).
3. In the System Console, set mattermost-rtc.example.com as the ICE Host Override and make sure the port is set to 8443; you can leave the ICE Servers Configurations field empty.

I would say this feature request can be closed; it only needs a little documentation to make the process clearer for people configuring it.
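For illustration, the separate UDP LoadBalancer service might look roughly like the following. The names, labels, and selector are assumptions; adjust them to match your own deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mattermost-rtc
spec:
  type: LoadBalancer
  selector:
    app: mattermost   # assumed pod label; match your Mattermost deployment
  ports:
    - name: calls-rtc
      protocol: UDP
      port: 8443
      targetPort: 8443
```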
Thanks @cedricroijakkers, that's such a great contribution. We'll try to port it to our official documentation if you don't mind.
I've just hacked this together while in a meeting :smile: I can create a pull request with some more details if you point me to the documentation.
That would be amazing. I think this kind of information would fit well in https://docs.mattermost.com/configure/calls-deployment.html, which requires changes in https://github.com/mattermost/docs/blob/master/source/configure/calls-deployment.rst.
Let me know if you need any help and feel free to ping me and @justinegeffen, our editor.
Hi @cedricroijakkers! Wow this is awesome! Thank you. :)
FYI: I've tested calls with some people, and it does indeed work. I'm still struggling a little with people behind a firewall that have port 8443 UDP blocked, so I would like to bounce them over our TURN server (we're already running a Jitsi which has a coturn deployed, I think I can probably re-use that one). But I cannot find the documentation on how to configure the TURN server, could you point me to that? Or give me some pointers, and I will put it in the PR as well?
Also, for some weird reason when using Mattermost inside Ferdium (another project I'm involved in) calling doesn't work, while in the browser or the Mattermost desktop application it does work, which is weird since both are based on Electron. I'll have to investigate a little further what is going on here...
But I cannot find the documentation on how to configure the TURN server, could you point me to that? Or give me some pointers, and I will put it in the PR as well?
Yes, you can use coturn for that. The ICE Servers Configurations would be something like:

[{
  "urls": ["turn:turn.example.com:443"]
}]

Then you likely want to set the TURN Static Auth Secret to the value configured as static-auth-secret in coturn.
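For context, coturn's static-auth-secret mode works with ephemeral credentials derived from that shared secret, following the TURN REST API convention that coturn implements. A minimal sketch of the derivation, mainly useful for verifying a coturn setup independently (the user name and TTL here are arbitrary; Calls derives credentials for you once TURN Static Auth Secret is set):

```python
import base64
import hashlib
import hmac
import time


def turn_rest_credentials(secret: str, user: str, expiry: int = None):
    """Derive ephemeral TURN credentials from coturn's static-auth-secret.

    The username is "<unix-expiry>:<user>"; the password is
    base64(HMAC-SHA1(secret, username)).
    """
    if expiry is None:
        expiry = int(time.time()) + 3600  # valid for one hour
    username = f"{expiry}:{user}"
    digest = hmac.new(secret.encode(), username.encode(), hashlib.sha1).digest()
    return username, base64.b64encode(digest).decode()
```

These values can then be fed to any TURN client (e.g. a WebRTC test page) to confirm that coturn accepts them before debugging the Calls side.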
Also, for some weird reason when using Mattermost inside Ferdium (another project I'm involved in) calling doesn't work, while in the browser or the Ferdium desktop application it does work, which is weird since both are based on Electron. I'll have to investigate a little further what is going on here...
Is it failing to connect or some other failure? I am not familiar with Ferdium but happy to help debug if you get stuck.
Yes, you can use coturn for that. The ICE Servers Configurations would be something like: [{ "urls": ["turn:turn.example.com:443"] }] Then you likely want to set the TURN Static Auth Secret to the value configured as static-auth-secret in coturn.
I've tried that, it doesn't seem to work. I know this coturn server is working fine, since we are using it to bounce TCP connections for Jitsi. But when configured like this, I do not see any traffic from my machine being sent to either the RTC endpoint or the TURN server. After a few seconds, it gives up and says it cannot connect. No error in the logging, it just seems as if it doesn't even try to connect. Removing the TURN server works, but that only works if the user can connect via UDP to the RTC endpoint of course.
Update: I've got it all working. My TURN server needed some reconfiguring, but I also needed to configure both a STUN and a TURN server. I've used a public STUN server to avoid any external IP resolution issues, and a private TURN server to bounce the connections of people that cannot reach the UDP port of the RTC server over a TCP port hosted by coturn.
Current working configuration in Mattermost is:
[
{
"urls":[
"stun:meet-jit-si-turnrelay.jitsi.net:443"
]
},
{
"urls":[
"turn:my.turnserver.com:443?transport=tcp"
]
}
]
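Since a malformed ICE Servers Configurations value (for instance, unquoted URLs) may simply result in no connection attempt rather than an explicit error, a quick sanity check is to parse the value yourself before pasting it into the System Console. A small sketch:

```python
import json

# The ICE servers configuration as it would be pasted into the System Console.
ice_servers = """
[
  {"urls": ["stun:meet-jit-si-turnrelay.jitsi.net:443"]},
  {"urls": ["turn:my.turnserver.com:443?transport=tcp"]}
]
"""

config = json.loads(ice_servers)  # raises ValueError if the JSON is malformed
for entry in config:
    for url in entry["urls"]:
        # Every URL must be a quoted string with a stun:/turn:/turns: scheme.
        assert url.split(":", 1)[0] in ("stun", "turn", "turns"), url
```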
And in coturn, make sure that no-udp is configured so that the TURN server will accept TCP connections and bounce them as UDP towards Mattermost RTC.
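Putting the coturn side together, the relevant turnserver.conf directives would be roughly as follows. The hostname and secret are placeholders, and the rest of your existing coturn configuration (certificates, relay IPs, and so on) still applies:

```
# turnserver.conf (sketch)
listening-port=443
# accept client connections over TCP only; relaying towards peers stays UDP
no-udp
use-auth-secret
static-auth-secret=<your-secret>
realm=my.turnserver.com
```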
At the very least you should see some messages on the TURN side when you try to connect to a call as the relay addresses are allocated. Are you able to get to this point or nothing at all? You may have to turn on debug logs.
Example:
204: : session 001000000000000002: realm <localhost> user <1674057359:teamadmin>: incoming packet ALLOCATE processed, success
204: : IPv4. Local relay addr: 192.168.227.146:56138
A couple of things to check from the Calls side would be:
- /plugins/com.mattermost.calls/config should return a config with NeedsTURNCredentials set to true.
- /plugins/com.mattermost.calls/turn-credentials should return valid credentials.

Oh well, I see you just posted :)
Closing this as resolved.
I'd like to preface this by saying that I know that RTCd exists in the enterprise plan but I am using this for other non-commercial purposes too. I'd like to be able to tell the clients to use another domain to access the RTC server so that I can put the main web endpoint under cloudflare protection and still be able to use the integrated RTC.
Example: Web UI on https://mm.domain.tld:443 (which is under a Cloudflare proxy); RTC on rtc.mm.domain.tld:8443 (which is not a proxied record).
This could also be done with a SRV record if it's more practical.