Hi @Javierd,
Solid choice, STUNner works great for that kind of workload.
As for your issue, my guess is that the External-IP column of the stunner-gateway-udp-svc service is <pending>. This can happen if your k8s cluster does not support LoadBalancer services (on UDP ports).
To check if this is the case, query the k8s services in the stunner namespace and observe the External-IP column:
kubectl get svc -n stunner
For minikube, use minikube tunnel. It should fix the pending External-IP and thus the missing Public IP.
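A minimal sketch of the workflow (assuming the default minikube profile; the tunnel must stay running in its own terminal):

minikube tunnel                # terminal 1: keep this running in the foreground
kubectl get svc -n stunner     # terminal 2: EXTERNAL-IP should no longer be <pending>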
Let me know if this fixes your issue or not.
Okay, so I've been trying to get this working using minikube tunnel, as you suggested, but I still haven't been able to get it right. Currently, the problem I'm having is that the WebRTC connection fails to be established most of the time, and when it works, it takes over 30 seconds to connect, which makes it unusable. As far as I know, everything should be configured and working properly:
apiVersion: v1
kind: Service
metadata:
  name: mediaserver-media-plane
  namespace: default
  labels:
    app: mediaserver-media-plane
spec:
  ports:
    # port is ignored in the below
    - port: 9999
      protocol: UDP
      name: mediaserver-media-plane-port
  selector:
    app: mediaserver
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: mediaserver-media-plane
  namespace: stunner
spec:
  parentRefs:
    - name: mediaserver-udp-listener
  rules:
    - backendRefs:
        - name: mediaserver-media-plane
          namespace: default
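For reference, a Gateway matching the names used later in this thread might look like the sketch below (the actual object appears further down in the thread); note that a UDPRoute's parentRefs must name the Gateway object itself, not one of its listeners:

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: mediaserver-udp-gateway   # this is the name a UDPRoute's parentRefs should use
  namespace: stunner
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: mediaserver-udp-listener
      port: 3478
      protocol: UDP
EOF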
Now, when running minikube tunnel, I can get the external IP address and port:
javierd@javierd:~$ kubectl get service -n stunner
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
stunner ClusterIP 10.111.68.47 <none> 3478/UDP 10d
stunner-gateway-operator-controller-manager-metrics-service ClusterIP 10.100.237.22 <none> 8443/TCP 10d
stunner-gateway-udp-gateway-svc LoadBalancer 10.97.29.57 10.97.29.57 3479:31117/UDP 10d
stunner-gateway-mediaserver-udp-gateway-svc LoadBalancer 10.97.194.199 10.97.194.199 3479:30141/UDP 3d1h
which I use to set the ICE servers in my JavaScript client:
[{
  urls: 'turn:10.97.194.199:30141?transport=udp',
  username: 'user-1',
  credential: 'pass-1',
}]
So, as far as I know, this should be working, but when I try to use the client to send voice to my WebRTC server, it takes more than 40 seconds to connect, and sometimes even more than one minute.
For example, this was one of the situations in which it actually connected. The SDP offer, generated by the server, is:
v=0
o=rtc 1992977704 0 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE audio
a=group:LS audio
a=msid-semantic:WMS *
a=setup:actpass
a=ice-ufrag:oo1N
a=ice-pwd:mdqWAmzjaZ1bhSoivlaNQV
a=ice-options:ice2,trickle
a=fingerprint:sha-256 DB:D0:9A:C3:38:08:AE:9E:BB:31:E2:2E:0E:37:8C:84:F9:BD:F2:F2:1B:24:2F:48:AE:8B:2C:C8:C7:88:DA:78
m=audio 51064 UDP/TLS/RTP/SAVPF 111
c=IN IP4 172.17.0.14
a=mid:audio
a=recvonly
a=ssrc:340588426 cname:audio_send
a=ssrc:340588426 msid:stream1 audio_send
a=rtcp-mux
a=rtpmap:111 OPUS/48000/2
a=fmtp:111 minptime=10;maxaveragebitrate=96000;stereo=0;sprop-stereo=0;sprop-stereo=0;useinbandfec=1
a=candidate:1 1 UDP 2122317823 172.17.0.14 51064 typ host
a=end-of-candidates
While the answer, generated by the client is:
v=0
o=- 1768309174358733854 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE audio
a=msid-semantic: WMS ivNJJmlpuq6SI7UheiXSXMYqv2mnsvBoYuLK
m=audio 47862 UDP/TLS/RTP/SAVPF 111
c=IN IP4 192.168.49.1
a=rtcp:9 IN IP4 0.0.0.0
a=candidate:2580271359 1 udp 2122260223 192.168.49.1 47862 typ host generation 0 network-id 1 network-cost 50
a=candidate:4210271736 1 udp 2122194687 172.17.0.1 50059 typ host generation 0 network-id 2 network-cost 50
a=candidate:2778140534 1 udp 2122129151 10.9.8.29 58788 typ host generation 0 network-id 3
a=candidate:2058198308 1 udp 2122063615 192.168.56.1 59723 typ host generation 0 network-id 4
a=candidate:1405438904 1 udp 2121998079 192.168.1.48 55627 typ host generation 0 network-id 5 network-cost 10
a=candidate:1734411371 1 tcp 1518280447 192.168.49.1 9 typ host tcptype active generation 0 network-id 1 network-cost 50
a=candidate:72970604 1 tcp 1518214911 172.17.0.1 9 typ host tcptype active generation 0 network-id 2 network-cost 50
a=candidate:1530780642 1 tcp 1518149375 10.9.8.29 9 typ host tcptype active generation 0 network-id 3
a=candidate:2215070128 1 tcp 1518083839 192.168.56.1 9 typ host tcptype active generation 0 network-id 4
a=candidate:2909773612 1 tcp 1518018303 192.168.1.48 9 typ host tcptype active generation 0 network-id 5 network-cost 10
a=ice-ufrag:/ur3
a=ice-pwd:J02xfNw+GEDJOp9JXjYEGrSC
a=ice-options:trickle
a=fingerprint:sha-256 EC:3C:95:60:C4:23:18:A1:F1:05:06:C5:9F:82:EC:69:50:42:94:90:A0:89:16:8A:D5:0B:F8:AE:08:5E:10:AB
a=setup:active
a=mid:audio
a=sendonly
a=rtcp-mux
a=rtpmap:111 OPUS/48000/2
a=fmtp:111 minptime=10;useinbandfec=1
a=ssrc:3443373755 cname:UBU3PO84nx415qcf
a=ssrc:3443373755 msid:ivNJJmlpuq6SI7UheiXSXMYqv2mnsvBoYuLK 91db16c5-25ee-4ba9-8943-37da1f0c5b37
I'm currently running multiple docker containers, apart from the minikube cluster, so I suppose that's why there are so many ICE candidates. However, I don't get why there are TCP candidates available when I've only set up a UDP route.
When the connection is being established, it always fails initially, but sometimes it ends up connecting, as shown in the screenshot from Chrome's webrtc-internals and its ICE candidate grid (both attached).
Should I need to set up anything on the media server so that it works? What could be the problem? Any help would be appreciated.
Thanks a lot!
Three things:
1. Run helm repo update, then helm uninstall both stunner and stunner-gateway-operator, and finally reinstall both again (Gateway API objects can remain); see the sketch after this list.
2. From your TURN URI (turn:10.97.194.199:30141?transport=udp) it seems to me that you are trying to use the NodePort service (port > 30000) to reach STUNner instead of the LoadBalancer service (please read https://kubernetes.io/docs/concepts/services-networking/service), and this usually does not work in Minikube (nodes do not have an external address). Can you retry again with turn:10.97.194.199:3479?transport=udp (assuming that your Gateway listener is opened at port 3479)? Also, can you please upload the output of cmd/stunnerctl/stunnerctl running-config stunner/stunnerd-config?
3. Check kubectl logs <stunner-pod-name>: stunner maintains an access log, so if anything ever connects to it you can see it in the logs. My guess is that your access log output is essentially empty.
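For point 1, the reinstall might look like the sketch below (chart locations as in the STUNner install docs of this era; release names and namespaces are assumptions, adjust to your setup):

helm repo add stunner https://l7mp.io/stunner   # skip if the repo is already added
helm repo update
helm uninstall stunner -n stunner
helm uninstall stunner-gateway-operator -n stunner-system
helm install stunner-gateway-operator stunner/stunner-gateway-operator --create-namespace --namespace=stunner-system
helm install stunner stunner/stunner --create-namespace --namespace=stunner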
I have just reinstalled both stunner and stunner-gateway-operator, so these are the versions currently used:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
stunner stunner 1 2023-01-24 09:23:52.482646722 +0100 CET deployed stunner-0.14.0 0.14.0
stunner-gateway-operator stunner-system 1 2023-01-24 09:23:42.072111996 +0100 CET deployed stunner-gateway-operator-0.13.0 0.13.0
I've also removed the old Gateway API objects and created new ones, as I had a bit of a mix between the example objects and the ones I created. Once this is done, I only have a UDP listener on port 3478, and if I use that port as the TURN port, it works perfectly, so thanks a lot. However, I don't know exactly why that works, so let me explain myself: if I list the stunner services, the load balancer ports are 3478:31595.
javierd@javierd:~/stunner$ kubectl get svc -n stunner
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
stunner ClusterIP 10.104.179.171 <none> 3478/UDP 46m
vsc-mediaserver-udp-gateway LoadBalancer 10.109.148.228 10.109.148.228 3478:31595/UDP 24m
As far as I know, port 3478 is the internal port of the node, inside the cluster, while the public port is 31595, isn't that right? The output of the command you suggested seems not to be working properly, as it outputs information for a listener that was already deleted:
javierd@javierd:~/stunner$ cmd/stunnerctl/stunnerctl running-config stunner/stunnerd-config
STUN/TURN authentication type: plaintext
STUN/TURN username: user-1
STUN/TURN password: pass-1
Listener 1
Name: stunner/udp-gateway/udp-listener
Listener: stunner/udp-gateway/udp-listener
Protocol: UDP
Public port: 31117
About the stunner logs, now I can see that there is some access, and although on the WebRTC client I can see that the connection works (its status changes to connected), in the logs there are some auth errors:
09:00:06.916953 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:4382
09:00:06.917083 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:61761
09:00:06.917297 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:47037
09:00:06.917338 turn.go:235: turn INFO: permission denied for client 172.17.0.1:47037 to peer 172.17.0.13
09:00:06.917444 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:10370
09:00:06.917472 turn.go:235: turn INFO: permission denied for client 172.17.0.1:10370 to peer 172.17.0.13
09:00:06.917822 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:3994
09:00:06.917866 turn.go:235: turn INFO: permission denied for client 172.17.0.1:3994 to peer 172.17.0.13
09:00:06.918118 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:4382
09:00:06.918176 turn.go:235: turn INFO: permission denied for client 172.17.0.1:4382 to peer 172.17.0.13
09:00:06.918503 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:61761
09:00:06.918538 turn.go:235: turn INFO: permission denied for client 172.17.0.1:61761 to peer 172.17.0.13
09:00:48.026533 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:47037
09:00:48.028467 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:10370
I've also removed the old Gateway API objects and created new ones, as I had a bit of a mix between the example objects and the ones I created. Once this is done, I only have a UDP listener on port 3478, and if I use that port as the TURN port, it works perfectly, so thanks a lot. However, I don't know exactly why that works, so let me explain myself: if I list the stunner services, the load balancer ports are 3478:31595.

javierd@javierd:~/stunner$ kubectl get svc -n stunner
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
stunner ClusterIP 10.104.179.171 <none> 3478/UDP 46m
vsc-mediaserver-udp-gateway LoadBalancer 10.109.148.228 10.109.148.228 3478:31595/UDP 24m

As far as I know, port 3478 is the internal port of the node, inside the cluster, while the public port is 31595, isn't that right?
Not precisely. From the K8s docs:
- port is the "official" port of the service; it is used both for ClusterIP and LoadBalancer service access,
- targetPort is the actual port opened at the pods (if not set then by default targetPort=port),
- nodePort (30000-32767) is a funky port that is opened at the actual Kubernetes node hosting the pods, so you can reach the service using the IP of the node plus the nodePort, if this is enabled and if your nodes have an actual public external address (which is not the case for Minikube).

Long story short: STUNner tries to expose your Gateway listener on an LB service, choosing the port to be equal to the port specified in the Gateway listener (3478), and hence you should use EXTERNAL-IP:service port (10.109.148.228:3478) from your clients to reach it.
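To see all three fields on the LoadBalancer service from this thread, something like the following should work (service name and namespace taken from the outputs above):

kubectl -n stunner get svc vsc-mediaserver-udp-gateway \
  -o jsonpath='{.spec.ports[0].port} {.spec.ports[0].targetPort} {.spec.ports[0].nodePort}{"\n"}'
# expected shape here: 3478 3478 31595 (targetPort may be empty if unset, in which case
# it defaults to port) -- so clients should dial 10.109.148.228:3478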
The output of the command you suggested seems not to be working properly, as it outputs information for a listener that was already deleted.

javierd@javierd:~/stunner$ cmd/stunnerctl/stunnerctl running-config stunner/stunnerd-config
STUN/TURN authentication type: plaintext
STUN/TURN username: user-1
STUN/TURN password: pass-1
Listener 1
Name: stunner/udp-gateway/udp-listener
Listener: stunner/udp-gateway/udp-listener
Protocol: UDP
Public port: 31117
This is weird: your kubectl get svc shows that the LB service was created for a Gateway called vsc-mediaserver-udp-gateway and it has a working external IP, while stunnerctl suggests that the Gateway is called udp-gateway and it has no external IP. Which one is right? Are you sure both outputs were collected from the same setup, with the same Gateway CRDs, and everything is installed into the stunner namespace?
About the stunner logs, now I can see that there is some access, and although on the WebRTC client I can see that the connection works (its status changes to connected), in the logs there are some auth errors:

09:00:06.916953 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:4382
09:00:06.917083 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:61761
09:00:06.917297 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:47037
09:00:06.917338 turn.go:235: turn INFO: permission denied for client 172.17.0.1:47037 to peer 172.17.0.13
09:00:06.917444 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:10370
09:00:06.917472 turn.go:235: turn INFO: permission denied for client 172.17.0.1:10370 to peer 172.17.0.13
09:00:06.917822 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:3994
09:00:06.917866 turn.go:235: turn INFO: permission denied for client 172.17.0.1:3994 to peer 172.17.0.13
09:00:06.918118 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:4382
09:00:06.918176 turn.go:235: turn INFO: permission denied for client 172.17.0.1:4382 to peer 172.17.0.13
09:00:06.918503 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:61761
09:00:06.918538 turn.go:235: turn INFO: permission denied for client 172.17.0.1:61761 to peer 172.17.0.13
09:00:48.026533 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:47037
09:00:48.028467 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:10370
It seems your client is trying to reach a pod at the IP 172.17.0.13, can you please check what's running there (k get pods -o wide) and whether that pod is actually allowed to be reached via STUNner? Note that only permission denied events are reported at log level INFO; try increasing the log level to all:DEBUG (you can set the log level in the GatewayConfig) to see the "permission granted" messages as well (maybe we should change this?).
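One way to raise the log level (a sketch, assuming the GatewayConfig the operator actually uses is the one in the stunner namespace):

kubectl -n stunner patch gatewayconfig stunner-gatewayconfig --type=merge \
  -p '{"spec":{"logLevel":"all:DEBUG"}}'
# the operator should re-render the dataplane config; verify with:
kubectl -n stunner get cm stunnerd-config -o yaml | grep loglevel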
Thanks a lot for the explanation, I think I get it now.
This is weird: your kubectl get svc shows that the LB service was created for a Gateway called vsc-mediaserver-udp-gateway and it has a working external IP, while stunnerctl suggests that the Gateway is called udp-gateway and it has no external IP. Which one is right? Are you sure both outputs were collected from the same setup, with the same Gateway CRDs, and everything is installed into the stunner namespace?
Yes, it also seemed pretty strange to me. I'm using Lens to access the minikube cluster, and all the commands were executed in the same terminal, so yes, it's the same setup, and everything is installed in the stunner namespace.
The right information is the one provided by kubectl get svc (or at least it's the one I'm using to connect).
The Gateway called udp-gateway was the first one I created by following the tutorial, but I deleted it after some testing. In fact, it's not available now:
javierd@javierd:~/stunner$ kubectl get gateway -n stunner
NAME CLASS ADDRESS READY AGE
vsc-mediaserver-udp-gateway stunner-gatewayclass True 160m
javierd@javierd:~/stunner$ kubectl get svc -n stunner
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
stunner ClusterIP 10.104.179.171 <none> 3478/UDP 3h2m
vsc-mediaserver-udp-gateway LoadBalancer 10.109.148.228 10.109.148.228 3478:31595/UDP 160m
javierd@javierd:~/stunner$ cmd/stunnerctl/stunnerctl running-config stunner/stunnerd-config
STUN/TURN authentication type: plaintext
STUN/TURN username: user-1
STUN/TURN password: pass-1
Listener 1
Name: stunner/udp-gateway/udp-listener
Listener: stunner/udp-gateway/udp-listener
Protocol: UDP
Public port: 31117
javierd@javierd:~/stunner$
It looks like stunnerctl has cached the information or something similar.
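One way to tell stunnerctl caching apart from a stale rendered config is to read the ConfigMap the dataplane actually consumes; a sketch (the ConfigMap and key names are the defaults used in this thread):

kubectl -n stunner get cm stunnerd-config -o jsonpath='{.data.stunnerd\.conf}'
# if the listener here is also named stunner/udp-gateway/udp-listener, the rendered
# ConfigMap itself is stale, and stunnerctl is merely reporting it faithfully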
It seems your client is trying to reach a pod at the IP 172.17.0.13, can you please check what's running there (k get pods -o wide) and whether that pod is actually allowed to be reached via STUNner? Note that only permission denied events are reported at log level INFO; try increasing the log level to all:DEBUG (you can set the log level in the GatewayConfig) to see the "permission granted" messages as well (maybe we should change this?).
The pod running at the IP 172.17.0.13 is the media server which should be receiving the data. While testing, I have checked that although the connection is established, the data cannot be sent from the media server to the client; the library we use (libdatachannel) fails to send the data.
Regarding your question about whether that pod is allowed to be reached, I think it should be allowed, as I created the service and the UDP route posted in my second message in this thread. Is there anything else that should be done?
About the logs, I think it's fine that they are set to INFO by default, as it's pretty easy to change, and otherwise the logs may become unusable in their default behaviour.
I've been trying to solve the error, but without success. This is my GatewayConfig:
javierd@javierd:~/$ kubectl describe gatewayconfigs/stunner-gatewayconfig
Name: stunner-gatewayconfig
Namespace: default
Labels: <none>
Annotations: <none>
API Version: stunner.l7mp.io/v1alpha1
Kind: GatewayConfig
Metadata:
Creation Timestamp: 2023-01-13T10:27:56Z
Generation: 2
Managed Fields:
API Version: stunner.l7mp.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:authType:
f:password:
f:realm:
f:stunnerConfig:
f:userName:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2023-01-13T10:27:56Z
API Version: stunner.l7mp.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:spec:
f:logLevel:
Manager: kubectl-edit
Operation: Update
Time: 2023-01-25T11:12:36Z
Resource Version: 283548
UID: e2395590-fc93-4089-bb50-128ac75ad480
Spec:
Auth Type: plaintext
Log Level: all:DEBUG,turn:DEBUG
Password: pass-1
Realm: stunner.l7mp.io
Stunner Config: stunnerd-config
User Name: user-1
Events: <none>
javierd@javierd:~/Vaelsys/k8s/stunner-files$

However, the stunner pod still logs that it is starting at the INFO level:
11:18:46.618645 main.go:75: stunnerd INFO: watching configuration file at "/etc/stunnerd/stunnerd.conf"
11:18:46.620231 reconcile.go:113: stunner INFO: setting loglevel to "all:INFO"
11:18:46.620346 server.go:18: stunner INFO: listener stunner/udp-gateway/udp-listener: [udp://172.17.0.13:3478<32768:65535>] (re)starting
11:18:46.620477 server.go:157: stunner INFO: listener stunner/udp-gateway/udp-listener: TURN server running
11:18:46.620495 reconcile.go:177: stunner INFO: reconciliation ready: new objects: 4, changed objects: 0, deleted objects: 0, started objects: 1, restarted objects: 0
11:18:46.620523 reconcile.go:181: stunner INFO: status: READY, realm: stunner.l7mp.io, authentication: plaintext, listeners: stunner/udp-gateway/udp-listener: [udp://172.17.0.13:3478<32768:65535>]
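A quick way to check which log level the rendered config actually carries (a sketch; the key name is taken from the stunnerd-config ConfigMap shown later in this thread, and jq is assumed to be installed):

kubectl -n stunner get cm stunnerd-config -o jsonpath='{.data.stunnerd\.conf}' | jq -r '.admin.loglevel'
# prints all:INFO here, i.e. the DEBUG setting never reached the dataplane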
As I pointed out above, there's a major misconfiguration in your setup, in that the dataplane you think should be controlled by your Gateway/GatewayConfig/UDPRoute resources fails to pick up the configuration rendered by the operator. 99 percent of the time this is because STUNner was not installed cleanly, e.g., there are two operators running in your cluster repeatedly stepping on each other's toes, or there are stale Gateway resources that interfere with the main config, or similar. Until we fix that, nothing will work. I can help you, but you need to provide me with the info I asked for, otherwise I cannot find out what got misconfigured.
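A quick way to spot duplicate operators or stray rendered configs (a sketch):

kubectl get deploy -A | grep -i stunner      # more than one operator deployment is a red flag
kubectl get cm -A | grep stunnerd-config     # stale configs may linger in unexpected namespaces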
I thought I had given you all the information. In case you need all the pods, here they are:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kms-854689cc8b-zkp57 1/1 Running 8 (14h ago) 12d 172.17.0.2 minikube <none> <none>
media-plane-788fcc869b-2ht55 1/1 Running 8 (14h ago) 12d 172.17.0.11 minikube <none> <none>
nfs-server-59989c7f97-hgjz8 1/1 Running 7 (14h ago) 5d18h 172.17.0.8 minikube <none> <none>
backend-6fd864875b-djgkj 3/3 Running 89 (4h3m ago) 5d18h 172.17.0.14 minikube <none> <none>
db-8588cfd5cb-44x28 1/1 Running 5 (14h ago) 5d3h 172.17.0.4 minikube <none> <none>
device-76df76d979-ddbsd 1/1 Running 0 126m 172.17.0.19 minikube <none> <none>
malamute-6c66455b49-ln8qn 1/1 Running 4 (14h ago) 5d18h 172.17.0.5 minikube <none> <none>
mediaserver-789f59d9db-6x7dd 1/1 Running 0 46m 172.17.0.21 minikube <none> <none>
siteservice-7c9dd6b8cd-ks8w7 1/1 Running 76 (4h3m ago) 5d18h 172.17.0.17 minikube <none> <none>
taskmanager-86df966c4b-k6qm8 1/1 Running 76 (4h3m ago) 5d18h 172.17.0.15 minikube <none> <none>
vsbnservice-6c7f9fff69-xb2sd 3/3 Running 83 (4h9m ago) 5d4h 172.17.0.18 minikube <none> <none>
webbackend-788f69c748-n5v6j 3/3 Running 13 (4h3m ago) 44h 172.17.0.16 minikube <none> <none>
webrtc-server-997db67f5-cmtc6 1/1 Running 8 (14h ago) 12d 172.17.0.6 minikube <none> <none>
As I have restarted the deployments several times, the mediaserver IP has now changed, but it's still the mediaserver IP that stunner is reporting errors about:
12:09:27.268127 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:40072
12:09:27.268221 turn.go:235: turn INFO: permission denied for client 172.17.0.1:40072 to peer 172.17.0.21
12:09:27.268983 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:49528
12:09:27.269019 turn.go:235: turn INFO: permission denied for client 172.17.0.1:49528 to peer 172.17.0.21
12:09:27.269088 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:47485
12:09:27.269119 turn.go:235: turn INFO: permission denied for client 172.17.0.1:47485 to peer 172.17.0.21
12:09:27.269186 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:64169
12:09:27.269206 turn.go:235: turn INFO: permission denied for client 172.17.0.1:64169 to peer 172.17.0.21
12:09:27.269678 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:49556
I don't really know how I can check if the pod is able to be accessed via stunner.
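One way to check is to confirm that the UDPRoute's backend Service actually selects the media-server pod; a sketch using names from this thread (the own=mediaserver selector appears in the service listing below):

kubectl -n default get svc mediaserver-media-plane -o jsonpath='{.spec.selector}{"\n"}'
kubectl -n default get pods -l own=mediaserver -o wide   # should list the pod at 172.17.0.21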
Regarding the other kinds of objects:
javierd@javierd:~$ kubectl get gateways -A
NAMESPACE NAME CLASS ADDRESS READY AGE
stunner mediaserver-udp-gateway stunner-gatewayclass True 28h
javierd@javierd:~$ kubectl get gatewayconfigs -A
NAMESPACE NAME REALM AUTH AGE
default stunner-gatewayconfig stunner.l7mp.io plaintext 12d
stunner stunner-gatewayconfig stunner.l7mp.io plaintext 12d
javierd@javierd:~$ kubectl get udproutes -A
NAMESPACE NAME AGE
stunner media-plane 12d
stunner mediaserver-media-plane 4d22h
javierd@javierd:~$ kubectl get services -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kms-control ClusterIP 10.104.119.49 <none> 8888/TCP 12d
default kms-media-plane ClusterIP 10.97.19.161 <none> 9999/UDP 12d
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 12d
default malamute ClusterIP 10.108.59.53 <none> 9999/TCP 5d18h
default media-plane ClusterIP 10.97.56.239 <none> 9001/UDP 12d
default nfs-server ClusterIP 10.104.45.134 <none> 2049/TCP,2048/TCP,111/UDP 5d18h
default overlay-image ClusterIP 10.109.162.115 <none> 80/TCP 12d
default db NodePort 10.103.244.44 <none> 3306:30662/TCP 5d3h
default mediaserver-media-plane ClusterIP 10.110.29.204 <none> 9999/UDP 4d22h
default vcontrol ClusterIP None <none> 8082/TCP 5d18h
default wamprouter ClusterIP None <none> 55555/TCP 44h
default web NodePort 10.110.231.4 <none> 80:31336/TCP 5d18h
default webrtc-server LoadBalancer 10.107.63.109 10.107.63.109 8443:32045/TCP 12d
ingress-nginx ingress-nginx-controller LoadBalancer 10.101.119.37 10.101.119.37 80:31260/TCP,443:31483/TCP 4h7m
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.101.82.241 <none> 443/TCP 4h7m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 12d
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.98.25.185 <none> 8000/TCP 12d
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.107.213.202 <none> 80/TCP 12d
stunner-system stunner-gateway-operator-controller-manager-metrics-service ClusterIP 10.96.226.241 <none> 8443/TCP 28h
stunner stunner ClusterIP 10.104.179.171 <none> 3478/UDP 28h
stunner mediaserver-udp-gateway LoadBalancer 10.109.148.228 10.109.148.228 3478:31595/UDP 28h
javierd@javierd:~$
Sorry, my fault, I forgot to attach the commands to get the info I need. So please send me the output of these:
kubectl get pods,deploy,svc -A -o wide
kubectl get gatewayclasses -o yaml
kubectl -n stunner get configmaps -o yaml
kubectl get gateways,gatewayconfigs,udproutes -A -o yaml
In addition please send me the logs of the operator and the stunner pods. We need to see what's going on here.
No worries, here are the outputs of each command:
javierd@javierd:~$ kubectl get pods,deploy,svc -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/kms-854689cc8b-zkp57 1/1 Running 9 (3h52m ago) 12d 172.17.0.2 minikube <none> <none>
default pod/media-plane-788fcc869b-2ht55 1/1 Running 9 (3h52m ago) 12d 172.17.0.12 minikube <none> <none>
default pod/nfs-server-59989c7f97-hgjz8 1/1 Running 8 (3h52m ago) 5d23h 172.17.0.7 minikube <none> <none>
default pod/backend-6fd864875b-djgkj 3/3 Running 92 (3h52m ago) 5d23h 172.17.0.17 minikube <none> <none>
default pod/db-8588cfd5cb-44x28 1/1 Running 6 (3h52m ago) 5d7h 172.17.0.10 minikube <none> <none>
default pod/device-76df76d979-ddbsd 1/1 Running 1 (3h52m ago) 6h21m 172.17.0.14 minikube <none> <none>
default pod/malamute-6c66455b49-ln8qn 1/1 Running 5 (3h52m ago) 5d23h 172.17.0.9 minikube <none> <none>
default pod/mediaserver-789f59d9db-6x7dd 1/1 Running 1 (3h52m ago) 5h1m 172.17.0.15 minikube <none> <none>
default pod/siteservice-7c9dd6b8cd-ks8w7 1/1 Running 77 (3h52m ago) 5d23h 172.17.0.16 minikube <none> <none>
default pod/taskmanager-86df966c4b-k6qm8 1/1 Running 77 (3h52m ago) 5d23h 172.17.0.19 minikube <none> <none>
default pod/vsbnservice-6c7f9fff69-xb2sd 3/3 Running 86 (3h52m ago) 5d8h 172.17.0.20 minikube <none> <none>
default pod/webbackend-788f69c748-n5v6j 3/3 Running 16 (3h52m ago) 2d 172.17.0.18 minikube <none> <none>
default pod/webrtc-server-997db67f5-cmtc6 1/1 Running 9 (3h52m ago) 12d 172.17.0.4 minikube <none> <none>
ingress-nginx pod/ingress-nginx-controller-6f7bd4bcfb-spwpm 1/1 Running 1 (3h52m ago) 8h 172.17.0.8 minikube <none> <none>
kube-system pod/coredns-6d4b75cb6d-2rg47 1/1 Running 9 (3h52m ago) 12d 172.17.0.5 minikube <none> <none>
kube-system pod/etcd-minikube 1/1 Running 9 (3h52m ago) 12d 192.168.49.2 minikube <none> <none>
kube-system pod/kube-apiserver-minikube 1/1 Running 9 (3h52m ago) 12d 192.168.49.2 minikube <none> <none>
kube-system pod/kube-controller-manager-minikube 1/1 Running 9 (3h52m ago) 12d 192.168.49.2 minikube <none> <none>
kube-system pod/kube-proxy-vs8pt 1/1 Running 9 (3h52m ago) 12d 192.168.49.2 minikube <none> <none>
kube-system pod/kube-scheduler-minikube 1/1 Running 9 (3h52m ago) 12d 192.168.49.2 minikube <none> <none>
kube-system pod/storage-provisioner 1/1 Running 30 (147m ago) 12d 192.168.49.2 minikube <none> <none>
kubernetes-dashboard pod/dashboard-metrics-scraper-78dbd9dbf5-5grfq 1/1 Running 9 (3h52m ago) 12d 172.17.0.13 minikube <none> <none>
kubernetes-dashboard pod/kubernetes-dashboard-5fd5574d9f-vbz6w 1/1 Running 14 (3h52m ago) 12d 172.17.0.6 minikube <none> <none>
stunner-system pod/stunner-gateway-operator-controller-manager-5df78b4677-5nkmh 2/2 Running 11 (147m ago) 32h 172.17.0.11 minikube <none> <none>
stunner pod/stunner-64c5dd65fb-znlbw 1/1 Running 1 (3h52m ago) 5h50m 172.17.0.3 minikube <none> <none>
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
default deployment.apps/kms 1/1 1 1 12d kms kurento/kurento-media-server:latest app=kms
default deployment.apps/media-plane 1/1 1 1 12d net-debug l7mp/net-debug:0.5.3 app=media-plane
default deployment.apps/nfs-server 1/1 1 1 5d23h nfs-server-container itsthenetwork/nfs-server-alpine:latest name=nfs-server
default deployment.apps/backend 1/1 1 1 5d23h vcontrol,php,web own/backend:3.2.0-1,own/php8.1-fpm:1.1.0-1,own/web:1.1.0-1 own=backend
default deployment.apps/db 1/1 1 1 5d7h mysql own/mysql8.0:1.0.0-1 own=db
default deployment.apps/device 1/1 1 1 5d23h dhrnode own/dhrnode:1.3.0-2 own=device
default deployment.apps/malamute 1/1 1 1 5d23h dhrbroker own/dhrbroker:1.3.0-1 own=malamute
default deployment.apps/mediaserver 1/1 1 1 2d mediaserver own/mediaserver:3.2.0-7 own=mediaserver
default deployment.apps/siteservice 1/1 1 1 5d23h siteservice own/siteservice:3.2.0-1 own=siteservice
default deployment.apps/taskmanager 1/1 1 1 5d23h taskmanager own/taskmanager:3.2.0-1 own=taskmanager
default deployment.apps/vsbnservice 1/1 1 1 5d8h vsbnserver,php,web own/vsbn-server:1.0.0-1,own/php8.1-fpm:1.1.0-1,own/web:1.1.0-1 own=vsbnservice
default deployment.apps/webbackend 1/1 1 1 2d vcontrol,php,web own/webbackend:3.2.0-1,own/php8.1-fpm:1.1.0-1,own/web:1.1.0-1 own=webbackend
default deployment.apps/webrtc-server 1/1 1 1 12d webrtc-server nmate/kurento-magic-mirror-server:latest app=webrtc-server
ingress-nginx deployment.apps/ingress-nginx-controller 1/1 1 1 8h controller registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system deployment.apps/coredns 1/1 1 1 12d coredns k8s.gcr.io/coredns/coredns:v1.8.6 k8s-app=kube-dns
kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 1/1 1 1 12d dashboard-metrics-scraper kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c k8s-app=dashboard-metrics-scraper
kubernetes-dashboard deployment.apps/kubernetes-dashboard 1/1 1 1 12d kubernetes-dashboard kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 k8s-app=kubernetes-dashboard
stunner-system deployment.apps/stunner-gateway-operator-controller-manager 1/1 1 1 32h kube-rbac-proxy,manager gcr.io/kubebuilder/kube-rbac-proxy:v0.11.0,l7mp/stunner-gateway-operator:0.13.0 control-plane=controller-manager
stunner deployment.apps/stunner 1/1 1 1 32h stunnerd l7mp/stunnerd:0.14.0 app=stunner,app.kubernetes.io/instance=stunner,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=stunner
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kms-control ClusterIP 10.104.119.49 <none> 8888/TCP 12d app=kms
default service/kms-media-plane ClusterIP 10.97.19.161 <none> 9999/UDP 12d app=kms
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 12d <none>
default service/malamute ClusterIP 10.108.59.53 <none> 9999/TCP 5d23h own=malamute
default service/media-plane ClusterIP 10.97.56.239 <none> 9001/UDP 12d app=media-plane
default service/nfs-server ClusterIP 10.104.45.134 <none> 2049/TCP,2048/TCP,111/UDP 5d23h name=nfs-server
default service/overlay-image ClusterIP 10.109.162.115 <none> 80/TCP 12d app=webrtc-server
default service/db NodePort 10.103.244.44 <none> 3306:30662/TCP 5d7h own=db
default service/mediaserver-media-plane ClusterIP 10.110.29.204 <none> 9999/UDP 5d2h own=mediaserver
default service/vcontrol ClusterIP None <none> 8082/TCP 5d23h own=backend
default service/wamprouter ClusterIP None <none> 55555/TCP 2d own=webbackend
default service/web NodePort 10.110.231.4 <none> 80:31336/TCP 5d23h own=backend
default service/webrtc-server LoadBalancer 10.107.63.109 10.107.63.109 8443:32045/TCP 12d app=webrtc-server
ingress-nginx service/ingress-nginx-controller LoadBalancer 10.101.119.37 10.101.119.37 80:31260/TCP,443:31483/TCP 8h app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx service/ingress-nginx-controller-admission ClusterIP 10.101.82.241 <none> 443/TCP 8h app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 12d k8s-app=kube-dns
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.98.25.185 <none> 8000/TCP 12d k8s-app=dashboard-metrics-scraper
kubernetes-dashboard service/kubernetes-dashboard ClusterIP 10.107.213.202 <none> 80/TCP 12d k8s-app=kubernetes-dashboard
stunner-system service/stunner-gateway-operator-controller-manager-metrics-service ClusterIP 10.96.226.241 <none> 8443/TCP 32h control-plane=controller-manager
stunner service/stunner ClusterIP 10.104.179.171 <none> 3478/UDP 32h app=stunner
stunner service/mediaserver-udp-gateway LoadBalancer 10.109.148.228 10.109.148.228 3478:31595/UDP 32h app=stunner
javierd@javierd:~$ kubectl get gatewayclasses -o yaml
apiVersion: v1
items:
- apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"gateway.networking.k8s.io/v1alpha2","kind":"GatewayClass","metadata":{"annotations":{},"name":"stunner-gatewayclass"},"spec":{"controllerName":"stunner.l7mp.io/gateway-operator","description":"STUNner is a WebRTC ingress gateway for Kubernetes","parametersRef":{"group":"stunner.l7mp.io","kind":"GatewayConfig","name":"stunner-gatewayconfig","namespace":"default"}}}
creationTimestamp: "2023-01-13T08:55:35Z"
generation: 2
name: stunner-gatewayclass
resourceVersion: "8931"
uid: 91d3d76a-2d15-4869-af07-70e173feae11
spec:
controllerName: stunner.l7mp.io/gateway-operator
description: STUNner is a WebRTC ingress gateway for Kubernetes
parametersRef:
group: stunner.l7mp.io
kind: GatewayConfig
name: stunner-gatewayconfig
namespace: default
status:
conditions:
- lastTransitionTime: "2023-01-13T09:09:31Z"
message: gateway-class is now managed by controller "stunner.l7mp.io/gateway-operator"
observedGeneration: 2
reason: Accepted
status: "True"
type: Accepted
kind: List
metadata:
resourceVersion: ""
javierd@javierd:~$ kubectl -n stunner get configmaps -o yaml
apiVersion: v1
items:
- apiVersion: v1
data:
ca.crt: |
-----BEGIN CERTIFICATE-----
MIIDBjCCAe6gAwIBAgIBATANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwptaW5p
a3ViZUNBMB4XDTIyMDkxMzA2NDkyMloXDTMyMDkxMTA2NDkyMlowFTETMBEGA1UE
AxMKbWluaWt1YmVDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOv9
w47apCNolJwWVLUN3vROst5KPXQ4Kyp+f+r+DF0v/kWo4TW/uNPLsB0nY+BtyeY8
GdgUy9EoewuiLaE4qhY/+E7YcOCSdprzDQ011tpR+TGjwcX58ojiv4nvSn/Rb48g
BNNJHvt+pNlOQLk5NvNRjz3OZ2UxEca3S3o7ru8FCyNBTaod3pPmqd6CXRzQngpW
g472A5pCRoYfAbeSukAT+CULZr2qfqvLtYjtbCsLoI7HdnERhCSQ0IHwuRf5zICk
3IVrE9u+kbYBwJCei63hNhivToDCHWYe5/gfu13Kp4NvBFEJcdeBv7FeKvGKAAVg
64RfhTYVUUMKt9Re3k0CAwEAAaNhMF8wDgYDVR0PAQH/BAQDAgKkMB0GA1UdJQQW
MBQGCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBRwf5W5VV8f3tFjH1vbzDNuwiaw5DANBgkqhkiG9w0BAQsFAAOCAQEAyXQCJlEU
DKgHDWdiJlCpKot7u8WojHO+Zgj6nnTos3KGF68qoRZUowhXBu1UbiYuCZF82qcK
G111STeRh94MwJ5X3M2EuuGbiNjp44r5d3K7G3zCePZ9VsSUy8kbLOiNeOU5ier5
nWM76wGyVfozoBJaleBSOHBlzevMA67+y19VOlFisijgPQAwuHjw7LcC2YJjpMi+
gEphV1TkSc5F444qwiSk0mwbBxY6ObH0PQfH9ru5iA/298i7XyhJ5poSbsEFtBdA
FS+vJLlJcExAw+KnkvrWWoUiDLCYW0f3Hz09Bp3EmhbaSpPcUUPh/OP+gTbaKf8e
Tv4aFWR/7uPhXA==
-----END CERTIFICATE-----
kind: ConfigMap
metadata:
annotations:
kubernetes.io/description: Contains a CA bundle that can be used to verify the
kube-apiserver when using internal endpoints such as the internal service
IP or kubernetes.default.svc. No other usage is guaranteed across distributions
of Kubernetes clusters.
creationTimestamp: "2023-01-13T08:39:03Z"
name: kube-root-ca.crt
namespace: stunner
resourceVersion: "823"
uid: fdbc1fd3-b8a3-419e-95c6-50e06c378c87
- apiVersion: v1
data:
stunnerd.conf: '{"version":"v1alpha1","admin":{"name":"stunner-daemon","loglevel":"all:INFO","healthcheck_endpoint":"http://0.0.0.0:8086"},"auth":{"type":"plaintext","realm":"stunner.l7mp.io","credentials":{"password":"pass-1","username":"user-1"}},"listeners":[{"name":"stunner/udp-gateway/udp-listener","protocol":"UDP","public_port":31117,"address":"$STUNNER_ADDR","port":3478,"min_relay_port":32768,"max_relay_port":65535,"routes":["stunner/media-plane"]}],"clusters":[{"name":"stunner/media-plane","type":"STATIC","protocol":"udp","endpoints":["10.97.56.239","172.17.0.5"]}]}'
kind: ConfigMap
metadata:
creationTimestamp: "2023-01-13T09:09:31Z"
name: stunnerd-config
namespace: stunner
ownerReferences:
- apiVersion: stunner.l7mp.io/v1alpha1
kind: GatewayConfig
name: stunner-gatewayconfig
uid: 131b59b5-ccd5-4c19-8d74-8f4b76df17d0
resourceVersion: "6983"
uid: 8c5cbae9-f2dd-4dbe-bae4-030069e04f8e
kind: List
metadata:
resourceVersion: ""
javierd@javierd:~$ kubectl get gateways,gatewayconfigs,udproutes -A -o yaml
apiVersion: v1
items:
- apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"gateway.networking.k8s.io/v1alpha2","kind":"Gateway","metadata":{"annotations":{},"name":"mediaserver-udp-gateway","namespace":"stunner"},"spec":{"gatewayClassName":"stunner-gatewayclass","listeners":[{"name":"mediaserver-udp-listener","port":3478,"protocol":"UDP"}]}}
creationTimestamp: "2023-01-24T08:45:59Z"
generation: 1
name: mediaserver-udp-gateway
namespace: stunner
resourceVersion: "313189"
uid: 06e98e47-958e-469a-bca0-d39bb6054051
spec:
gatewayClassName: stunner-gatewayclass
listeners:
- allowedRoutes:
namespaces:
from: Same
name: mediaserver-udp-listener
port: 3478
protocol: UDP
status:
conditions:
- lastTransitionTime: "1970-01-01T00:00:00Z"
message: Waiting for controller
reason: NotReconciled
status: Unknown
type: Accepted
- lastTransitionTime: "2023-01-24T08:45:59Z"
message: gateway under processing by controller "stunner.l7mp.io/gateway-operator"
observedGeneration: 1
reason: Scheduled
status: "True"
type: Scheduled
- lastTransitionTime: "2023-01-24T08:45:59Z"
message: gateway successfully processed by controller "stunner.l7mp.io/gateway-operator"
observedGeneration: 1
reason: Ready
status: "True"
type: Ready
listeners:
- attachedRoutes: 0
conditions:
- lastTransitionTime: "2023-01-25T17:23:55Z"
message: listener accepted
observedGeneration: 1
reason: Attached
status: "False"
type: Detached
- lastTransitionTime: "2023-01-25T17:23:55Z"
message: listener object references sucessfully resolved
observedGeneration: 1
reason: ResolvedRefs
status: "True"
type: ResolvedRefs
- lastTransitionTime: "2023-01-25T17:23:55Z"
message: public address found for gateway
observedGeneration: 1
reason: Ready
status: "True"
type: Ready
name: mediaserver-udp-listener
supportedKinds:
- group: gateway.networking.k8s.io
kind: UDPRoute
- apiVersion: stunner.l7mp.io/v1alpha1
kind: GatewayConfig
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"stunner.l7mp.io/v1alpha1","kind":"GatewayConfig","metadata":{"annotations":{},"name":"stunner-gatewayconfig","namespace":"default"},"spec":{"authType":"plaintext","password":"pass-1","realm":"stunner.l7mp.io","userName":"user-1"}}
creationTimestamp: "2023-01-13T10:27:56Z"
generation: 2
name: stunner-gatewayconfig
namespace: default
resourceVersion: "283548"
uid: e2395590-fc93-4089-bb50-128ac75ad480
spec:
authType: plaintext
logLevel: all:DEBUG,turn:DEBUG
password: pass-1
realm: stunner.l7mp.io
stunnerConfig: stunnerd-config
userName: user-1
- apiVersion: stunner.l7mp.io/v1alpha1
kind: GatewayConfig
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"stunner.l7mp.io/v1alpha1","kind":"GatewayConfig","metadata":{"annotations":{},"name":"stunner-gatewayconfig","namespace":"stunner"},"spec":{"authType":"plaintext","logLevel":"all:DEBUG,turn:DEBUG","password":"pass-1","realm":"stunner.l7mp.io","userName":"user-1"}}
creationTimestamp: "2023-01-13T09:09:31Z"
generation: 2
name: stunner-gatewayconfig
namespace: stunner
resourceVersion: "283062"
uid: 131b59b5-ccd5-4c19-8d74-8f4b76df17d0
spec:
authType: plaintext
logLevel: all:DEBUG,turn:DEBUG
password: pass-1
realm: stunner.l7mp.io
stunnerConfig: stunnerd-config
userName: user-1
- apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"gateway.networking.k8s.io/v1alpha2","kind":"UDPRoute","metadata":{"annotations":{},"name":"media-plane","namespace":"stunner"},"spec":{"parentRefs":[{"name":"udp-gateway"}],"rules":[{"backendRefs":[{"name":"mediaserver-media-plane","namespace":"default"}]}]}}
creationTimestamp: "2023-01-13T09:55:32Z"
generation: 2
name: media-plane
namespace: stunner
resourceVersion: "313190"
uid: 3a163c08-5b26-4b89-beaa-ec8de24257f9
spec:
parentRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: udp-gateway
rules:
- backendRefs:
- group: ""
kind: Service
name: mediaserver-media-plane
namespace: default
weight: 1
status:
parents:
- conditions:
- lastTransitionTime: "2023-01-25T17:23:55Z"
message: parent rejects the route
observedGeneration: 2
reason: NotAllowedByListeners
status: "False"
type: Accepted
- lastTransitionTime: "2023-01-25T17:23:55Z"
message: all backend references successfully resolved
observedGeneration: 2
reason: ResolvedRefs
status: "True"
type: ResolvedRefs
controllerName: stunner.l7mp.io/gateway-operator
parentRef:
group: gateway.networking.k8s.io
kind: Gateway
name: udp-gateway
- apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"gateway.networking.k8s.io/v1alpha2","kind":"UDPRoute","metadata":{"annotations":{},"name":"mediaserver-media-plane","namespace":"stunner"},"spec":{"parentRefs":[{"name":"mediaserver-udp-listener"}],"rules":[{"backendRefs":[{"name":"mediaserver-media-plane","namespace":"default"}]}]}}
creationTimestamp: "2023-01-20T14:37:30Z"
generation: 1
name: mediaserver-media-plane
namespace: stunner
resourceVersion: "313191"
uid: ab07c77e-c938-46cd-8a21-f27b394749f3
spec:
parentRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: mediaserver-udp-listener
rules:
- backendRefs:
- group: ""
kind: Service
name: mediaserver-media-plane
namespace: default
weight: 1
status:
parents:
- conditions:
- lastTransitionTime: "2023-01-25T17:23:55Z"
message: parent rejects the route
observedGeneration: 1
reason: NotAllowedByListeners
status: "False"
type: Accepted
- lastTransitionTime: "2023-01-25T17:23:55Z"
message: all backend references successfully resolved
observedGeneration: 1
reason: ResolvedRefs
status: "True"
type: ResolvedRefs
controllerName: stunner.l7mp.io/gateway-operator
parentRef:
group: gateway.networking.k8s.io
kind: Gateway
name: mediaserver-udp-listener
kind: List
metadata:
resourceVersion: ""
About the logs of the operator and stunner pods, here are some logs of the stunner pod:
13:22:49.685116 main.go:75: stunnerd INFO: watching configuration file at "/etc/stunnerd/stunnerd.conf"
13:22:49.686400 reconcile.go:113: stunner INFO: setting loglevel to "all:INFO"
13:22:49.686491 server.go:18: stunner INFO: listener stunner/udp-gateway/udp-listener: [udp://172.17.0.3:3478<32768:65535>] (re)starting
13:22:49.686648 server.go:157: stunner INFO: listener stunner/udp-gateway/udp-listener: TURN server running
13:22:49.686669 reconcile.go:177: stunner INFO: reconciliation ready: new objects: 4, changed objects: 0, deleted objects: 0, started objects: 1, restarted objects: 0
13:22:49.686682 reconcile.go:181: stunner INFO: status: READY, realm: stunner.l7mp.io, authentication: plaintext, listeners: stunner/udp-gateway/udp-listener: [udp://172.17.0.3:3478<32768:65535>]
13:29:13.965481 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:39602
13:29:13.965806 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:25713
13:29:13.965962 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:34671
13:29:13.966129 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:63956
13:29:13.966267 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:20812
13:29:13.966446 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:50279
13:29:13.966639 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:39602
13:29:13.966764 turn.go:235: turn INFO: permission denied for client 172.17.0.1:39602 to peer 172.17.0.15
13:29:13.966904 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:25713
13:29:13.966937 turn.go:235: turn INFO: permission denied for client 172.17.0.1:25713 to peer 172.17.0.15
13:29:13.967214 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:34671
13:29:13.967257 turn.go:235: turn INFO: permission denied for client 172.17.0.1:34671 to peer 172.17.0.15
13:29:13.967695 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:63956
13:29:13.967739 turn.go:235: turn INFO: permission denied for client 172.17.0.1:63956 to peer 172.17.0.15
13:29:13.968019 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:20812
13:29:13.968052 turn.go:235: turn INFO: permission denied for client 172.17.0.1:20812 to peer 172.17.0.15
13:29:13.968414 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:50279
13:29:13.968458 turn.go:235: turn INFO: permission denied for client 172.17.0.1:50279 to peer 172.17.0.15
13:29:33.152413 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:39602
13:29:33.152699 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:25713
13:29:33.152848 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:34671
13:29:33.152957 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:63956
13:29:33.153064 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:20812
13:29:33.153172 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:50279
13:44:22.856817 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:27838
13:44:22.857110 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:35915
16:03:06.061100 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:11988
16:03:06.061611 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:5156
16:03:06.061762 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:54184
16:03:06.061879 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:13992
16:03:06.061986 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:18720
16:03:06.062082 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:41367
16:07:24.569784 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:2826
16:07:24.643197 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:2826
16:07:24.643252 turn.go:235: turn INFO: permission denied for client 172.17.0.1:2826 to peer 172.17.0.15
16:07:24.680869 server.go:198: turn ERROR: error when handling datagram: failed to handle Send-indication from 172.17.0.1:2826: unable to handle send-indication, no permission added: 172.17.0.15:56648
16:07:27.589402 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:56648 on allocation 172.17.0.3:34116
16:07:28.117559 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:56648 on allocation 172.17.0.3:34116
16:07:29.118835 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:56648 on allocation 172.17.0.3:34116
16:07:31.121216 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:56648 on allocation 172.17.0.3:34116
16:07:35.126183 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:56648 on allocation 172.17.0.3:34116
16:07:43.133847 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:56648 on allocation 172.17.0.3:34116
16:16:24.702819 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:51732
16:16:24.702873 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:51732: no allocation found 172.17.0.1:51732:172.17.0.3:3478
16:16:24.917257 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:51732
16:16:24.917324 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:51732: no allocation found 172.17.0.1:51732:172.17.0.3:3478
16:16:25.417472 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:51732
16:16:25.417541 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:51732: no allocation found 172.17.0.1:51732:172.17.0.3:3478
16:16:26.484032 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:51732
16:16:26.484149 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:51732: no allocation found 172.17.0.1:51732:172.17.0.3:3478
16:16:28.433643 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:51732
16:16:28.433700 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:51732: no allocation found 172.17.0.1:51732:172.17.0.3:3478
16:16:32.503738 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:51732
16:16:32.503790 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:51732: no allocation found 172.17.0.1:51732:172.17.0.3:3478
16:16:40.513405 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:51732
16:16:40.513470 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:51732: no allocation found 172.17.0.1:51732:172.17.0.3:3478
16:16:48.553906 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:51732
16:16:48.553964 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:51732: no allocation found 172.17.0.1:51732:172.17.0.3:3478
16:16:56.633810 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:51732
16:16:56.633861 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:51732: no allocation found 172.17.0.1:51732:172.17.0.3:3478
16:29:10.412481 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:65044
16:29:10.418535 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:65044
16:29:10.418586 turn.go:235: turn INFO: permission denied for client 172.17.0.1:65044 to peer 172.17.0.15
16:29:10.421181 server.go:198: turn ERROR: error when handling datagram: failed to handle Send-indication from 172.17.0.1:65044: unable to handle send-indication, no permission added: 172.17.0.15:39337
16:29:12.894704 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:39337 on allocation 172.17.0.3:60245
16:29:13.417470 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:39337 on allocation 172.17.0.3:60245
16:29:14.420830 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:39337 on allocation 172.17.0.3:60245
16:29:16.423019 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:39337 on allocation 172.17.0.3:60245
16:29:20.430409 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:39337 on allocation 172.17.0.3:60245
16:29:28.437326 allocation.go:290: turn INFO: No Permission or Channel exists for 172.17.0.15:39337 on allocation 172.17.0.3:60245
16:38:10.736582 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:18139
16:38:10.736643 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:18139: no allocation found 172.17.0.1:18139:172.17.0.3:3478
16:38:11.251016 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:18139
16:38:11.251072 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:18139: no allocation found 172.17.0.1:18139:172.17.0.3:3478
16:38:12.226685 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:18139
16:38:12.226727 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:18139: no allocation found 172.17.0.1:18139:172.17.0.3:3478
16:38:14.248654 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:18139
16:38:14.249038 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:18139: no allocation found 172.17.0.1:18139:172.17.0.3:3478
16:38:18.311039 handlers.go:37: stunner-auth INFO: plaintext auth request: username="user-1" realm="stunner.l7mp.io" srcAddr=172.17.0.1:18139
16:38:18.311189 server.go:198: turn ERROR: error when handling datagram: failed to handle Refresh-request from 172.17.0.1:18139: no allocation found 172.17.0.1:18139:172.17.0.3:3478
By the way, the log lines with ERROR: error when handling datagram only happen when I try to connect to STUNner from a different machine on the same LAN, and in that case the connection always fails.
And these are the only logs available on the stunner gateway operator:
I0125 13:22:59.036466 1 main.go:180] Valid token audiences:
I0125 13:22:59.036624 1 main.go:284] Generating self signed cert as no cert is provided
I0125 13:23:02.029349 1 main.go:334] Starting TCP socket on 0.0.0.0:8443
I0125 13:23:02.029846 1 main.go:341] Listening securely on 0.0.0.0:8443
Thanks. It seems your setup is a bit messy, in that it contains some spurious references and extra Gateway API resources, and this confuses STUNner. Here is a list of the apparent issues:
- Your GatewayClass refers to a stunner-gatewayconfig that lives in the default namespace (see the parametersRef), which will then trick the gateway operator into believing that you want to run the dataplane in the default namespace, so it renders a dataplane configuration there (I guess kubectl get cms would list a stunnerd-config in the default namespace).
- Since there is no dataplane (no stunner pods) in the default namespace, no one will ever reconcile your dataplane config.
- There is another GatewayConfig in the stunner namespace, but it is an orphan, as the root GatewayClass refers to the GatewayConfig in the default namespace; this GatewayConfig seems to have the DEBUG log-level setting, but it has no effect since the operator never considers this GatewayConfig.
- Your actual dataplane (the stunner pod) does run in the stunner namespace, but I think it runs with a stale config (maybe an older setup left a stunnerd-config there).
- Your UDPRoutes' status shows Accepted=False with the message NotAllowedByListeners, indicating that the UDPRoute was never attached to a Gateway; and indeed, the Gateway referred to (mediaserver-udp-listener) is missing (the correct name is mediaserver-udp-gateway). This then causes all the permission denied errors.

May I recommend starting anew? Clean up all GatewayClass, GatewayConfig, Gateway and UDPRoute resources, and clean up all stale stunnerd-config ConfigMaps from all namespaces that have them; a sketch follows below. Then add the GatewayClass, but make sure it refers to a GatewayConfig in the stunner namespace, and then add the necessary Gateway and UDPRoute resources to the stunner namespace as well. Always check the object status in the Gateway and UDPRoute resources: if something goes wrong you should see a clear sign of it in status conditions set to False (except for Detached, which should normally be False anyway), and the message should indicate the reason.
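The cleanup recommended above might look roughly like this (a sketch; the namespace list is an assumption based on this thread, so double-check before deleting anything cluster-wide):

kubectl delete gatewayclass --all
kubectl delete gatewayconfigs,gateways,udproutes --all --all-namespaces
# remove stale rendered dataplane configs (seen in both namespaces in this thread)
for ns in default stunner; do
  kubectl -n "$ns" delete configmap stunnerd-config --ignore-not-found
done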
After cleaning everything up and creating all the resources from scratch, everything seems to be working perfectly. Thanks a lot for all the help and recommendations!
Thanks, happy to hear that. Closing this issue now, feel free to reopen (or better yet, join our Discord) if something new comes up.
Hi everyone, First of all, I'm just starting to work with Kubernetes and I have a pretty basic understanding, but I'm trying to learn, so excuse me in case my question is too basic. I've been working on a WebRTC project which I need to scale, and after I found STUNner I thought it might be the perfect solution for my needs. I've been playing around with it, but I haven't managed to get it working yet. I've followed the Getting started tutorial on both a minikube cluster and a real k8s cluster from my company to which I have access, but I can't get a public IP address in either environment.
After successfully following every step, when I execute the command stunnerctl running-config stunner/stunnerd-config, I end up with output which doesn't have the Public IP field. If I dump the entire running configuration, it gives me no clue about what could be happening either.
I think it could be related to the Gateway class not being implemented on minikube/my real cluster, but I haven't found any way to check whether this is true or whether it's related to something else.
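One way to check whether any controller has picked up the GatewayClass is to inspect its status conditions; a sketch (the class name is the one from the tutorial):

kubectl get gatewayclass stunner-gatewayclass -o jsonpath='{.status.conditions}{"\n"}'
# look for type=Accepted with status=True; if it is missing, no controller claimed the class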
As I can't get a public IP, I can't test any of the examples, which is a blocker for me.
Could somebody give me any clues about what might be happening?
Thanks a lot.