Closed: PStibbons2K closed this issue 1 year ago.
A bit more log output; it seems the standard logging levels are not sufficient to get more information on the issue:
k3s_daemon.log
Apr 14 08:11:44 truenas k3s[5143]: I0414 08:11:44.623588 5143 event.go:294] "Event occurred" object="ix-syncthing-user2/syncthing-user2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set syncthing-user2-66444c94d9 to 0 from 1"
Apr 14 08:11:44 truenas k3s[5143]: I0414 08:11:44.682862 5143 event.go:294] "Event occurred" object="ix-syncthing-user2/syncthing-user2-66444c94d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: syncthing-user2-66444c94d9-2bh9v"
Apr 14 08:11:48 truenas k3s[5143]: time="2023-04-14T08:11:48+02:00" level=info msg="Using CNI configuration file /etc/cni/net.d/00-multus.conf"
Apr 14 08:11:52 truenas k3s[5143]: I0414 08:11:52.797660 5143 alloc.go:327] "allocated clusterIPs" service="ix-syncthing-user2/syncthing-user2" clusterIPs=map[IPv4:172.17.241.34]
Apr 14 08:11:52 truenas k3s[5143]: I0414 08:11:52.865564 5143 event.go:294] "Event occurred" object="ix-syncthing-user2/syncthing-user2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set syncthing-user2-84d9b9bd76 to 1"
Apr 14 08:11:52 truenas k3s[5143]: I0414 08:11:52.931739 5143 event.go:294] "Event occurred" object="ix-syncthing-user2/syncthing-user2-84d9b9bd76" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: syncthing-user2-84d9b9bd76-562zb"
Apr 14 08:22:11 truenas k3s[5143]: E0414 08:22:11.574550 5143 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"syncthing-user2-84d9b9bd76-562zb.1755b89f9cf0b82a", GenerateName:"", Namespace:"ix-syncthing-user2", SelfLink:"", UID:"65bfdee3-db88-443d-b08f-4bff6fe193f4", ResourceVersion:"2982004", Generation:0, CreationTimestamp:time.Date(2023, time.April, 14, 8, 17, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"k3s", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:time.Date(2023, time.April, 14, 8, 17, 11, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc025158c90), Subresource:""}}}, EventTime:time.Date(2023, time.April, 14, 8, 17, 11, 565076345, time.Local), Series:(*v1.EventSeries)(0xc01adf7e00), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-truenas", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"ix-syncthing-user2", Name:"syncthing-user2-84d9b9bd76-562zb", UID:"9e34c7f4-ec1f-4730-90c1-59bfb2ec8a75", APIVersion:"v1", ResourceVersion:"2981475", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeprecatedLastTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeprecatedCount:0}': 'Event "syncthing-user2-84d9b9bd76-562zb.1755b89f9cf0b82a" is invalid: series.count: Invalid value: "": should be at least 2' (will not retry!)
kube_router.log
Apr 14 08:11:44 truenas kube-router[6027]: W0414 08:11:44.521170 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
Apr 14 08:11:44 truenas kube-router[6027]: W0414 08:11:44.532138 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
Apr 14 08:11:52 truenas kube-router[6027]: W0414 08:11:52.870072 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
Apr 14 08:11:52 truenas kube-router[6027]: W0414 08:11:52.872775 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
Apr 14 08:11:52 truenas kube-router[6027]: W0414 08:11:52.875322 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
Apr 14 08:12:11 truenas kube-router[6027]: W0414 08:12:11.786477 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
Apr 14 08:12:11 truenas kube-router[6027]: W0414 08:12:11.788805 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
Apr 14 08:12:11 truenas kube-router[6027]: W0414 08:12:11.790831 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
Apr 14 08:17:11 truenas kube-router[6027]: W0414 08:17:11.786832 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
Apr 14 08:17:11 truenas kube-router[6027]: W0414 08:17:11.789168 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
Apr 14 08:17:11 truenas kube-router[6027]: W0414 08:17:11.791046 6027 network_services_controller.go:195] Failed to replace route to service VIP 192.168.114.12 configured on kube-dummy-if. Error: exit status 2, Output: RTNETLINK answers: File exists
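Side note: the RTNETLINK "File exists" output means kube-router tried to replace a route that is already present. The service VIPs and routes on the dummy interface can be inspected directly on the TrueNAS host with standard iproute2 commands (a sketch):
$ ip addr show kube-dummy-if
$ ip route show table local dev kube-dummy-if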
k3s kubectl get events --sort-by=.metadata.creationTimestamp --namespace=ix-syncthing-user2
LAST SEEN TYPE REASON OBJECT MESSAGE
6m11s Normal ScalingReplicaSet deployment/syncthing-user2 Scaled up replica set syncthing-user2-84d9b9bd76 to 1
6m10s Warning FailedScheduling pod/syncthing-user2-84d9b9bd76-562zb 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
6m11s Normal SuccessfulCreate replicaset/syncthing-user2-84d9b9bd76 Created pod: syncthing-user2-84d9b9bd76-562zb
52s Warning FailedScheduling pod/syncthing-user2-84d9b9bd76-562zb 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
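To see which host ports the pending pod is actually requesting (and therefore which ports the single node has no free slot for), kubectl's jsonpath output can be used; a sketch with the pod name from the events above:
$ k3s kubectl get pod -n ix-syncthing-user2 syncthing-user2-84d9b9bd76-562zb -o jsonpath='{.spec.containers[*].ports[*].hostPort}'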
k3s kubectl get services -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 172.17.0.1 <none> 443/TCP 23d
kube-system kube-dns ClusterIP 172.17.0.10 <none> 53/UDP,53/TCP,9153/TCP 23d
cert-manager cert-manager ClusterIP 172.17.255.145 <none> 9402/TCP 23d
cert-manager cert-manager-webhook ClusterIP 172.17.192.137 <none> 443/TCP 23d
cnpg-system cnpg-webhook-service ClusterIP 172.17.125.103 <none> 443/TCP 23d
metallb-system webhook-service ClusterIP 172.17.208.38 <none> 443/TCP 23d
prometheus-operator prometheus-operator ClusterIP None <none> 8080/TCP 23d
kube-system kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 23d
ix-pihole pihole-dns LoadBalancer 172.17.146.95 192.168.114.12 53:14440/UDP 20d
ix-pihole pihole-dns-tcp LoadBalancer 172.17.104.13 192.168.114.12 53:46954/TCP 20d
ix-pihole pihole LoadBalancer 172.17.194.26 192.168.114.12 9089:43091/TCP 20d
ix-syncthing-user1 syncthing-user1 NodePort 172.17.130.46 <none> 20910:20910/TCP,20978:20978/TCP,20978:20978/UDP 18h
ix-syncthing-user2 syncthing-user2 NodePort 172.17.241.34 <none> 21910:21910/TCP,21978:21978/TCP,21979:21979/UDP 7m59s
/var/log/containers: No entry for user2
"Use host network" is disabled for both apps.
Kind regards, Martin
Can you please take screenshots of your app settings from both apps? I just tried, and I was able to install both apps (with a different set of ports each).
Thanks
I have the same issue. It seems the pod itself has several host ports set, even though the "Host Network" checkbox is not ticked in the configuration.
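A quick way to confirm this across all pods, without describing each one individually (a sketch using kubectl's jsonpath support; pods without host ports are filtered out by the grep):
$ k3s kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{": "}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | grep -v ': $'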
Here is some extra info:
$ kubectl describe po -n ix-syncthing-lena syncthing-lena-68d54bdd9c-7fw56
Name: syncthing-lena-68d54bdd9c-7fw56
Namespace: ix-syncthing-lena
Priority: 0
Service Account: default
Node: ix-truenas/192.168.2.4
Start Time: Sat, 29 Apr 2023 15:50:52 +0200
Labels: app=syncthing
app.kubernetes.io/instance=syncthing-lena
app.kubernetes.io/name=syncthing
pod-template-hash=68d54bdd9c
release=syncthing-lena
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "ix-net",
"interface": "eth0",
"ips": [
"172.16.0.26"
],
"mac": "c6:d2:36:4a:db:2d",
"default": true,
"dns": {}
}]
rollme: retN9
Status: Running
IP: 172.16.0.26
IPs:
IP: 172.16.0.26
Controlled By: ReplicaSet/syncthing-lena-68d54bdd9c
Containers:
syncthing:
Container ID: docker://5956cb0827185c21feaf40ad32080a19673c71219b84b84565d432b5c93b1074
Image: syncthing/syncthing:1.23.4
Image ID: docker-pullable://syncthing/syncthing@sha256:e573106f5bb194f2e27200d42bce883a18367c8446118c84d18a8d4e10de08de
Ports: 8384/TCP, 22000/TCP, 22000/UDP
Host Ports: 8384/TCP, 22000/TCP, 22000/UDP
State: Running
Started: Sat, 29 Apr 2023 15:50:56 +0200
Ready: True
Restart Count: 0
Liveness: http-get http://:8384/rest/noauth/health delay=10s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8384/rest/noauth/health delay=10s timeout=5s period=10s #success=2 #failure=5
Startup: http-get http://:8384/rest/noauth/health delay=10s timeout=2s period=5s #success=1 #failure=60
Environment:
PUID: 568
PGID: 568
STGUIADDRESS: 0.0.0.0:8384
Mounts:
/data from extrappvolume-0 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4lht8 (ro)
/var/syncthing from config (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: HostPath (bare host directory volume)
Path: /mnt/wd_red_3tb_4drives/apps/syncthing-lena/configuration
HostPathType:
extrappvolume-0:
Type: HostPath (bare host directory volume)
Path: /mnt/wd_red_3tb_4drives/apps/syncthing-lena/data
HostPathType:
kube-api-access-4lht8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 46s default-scheduler Successfully assigned ix-syncthing-lena/syncthing-lena-68d54bdd9c-7fw56 to ix-truenas
Normal AddedInterface 46s multus Add eth0 [172.16.0.26/16] from ix-net
Normal Pulled 46s kubelet Container image "syncthing/syncthing:1.23.4" already present on machine
Normal Created 43s kubelet Created container syncthing
Normal Started 43s kubelet Started container syncthing
The problem is the host ports set on the pod, which conflict when the app is installed multiple times:
Host Ports: 8384/TCP, 22000/TCP, 22000/UDP
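For context: containerPort is pod-scoped (each pod gets its own IP), while hostPort reserves that port on the node itself, so two pods requesting the same hostPort can never be scheduled onto the same node. Roughly, the difference in the pod spec looks like this (illustrative values):
ports:
  - containerPort: 8384   # pod-scoped; no conflict between pods
    hostPort: 8384        # bound on the node; only one pod per node can claim it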
TrueNAS SCALE version: TrueNAS-SCALE-22.12.2
Syncthing installed version: 1.23.4_1.0.23
My current workaround is to install the official app once and use the community TrueCharts version for the second installation. Posting this in case it helps anyone get things working for now :-)
Okay, I might have found something. Can you verify that this specific instance had hostNetwork turned on at some point?
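One way to verify the current state from the CLI (a sketch, using the release name from the describe output above; it prints "true" only when hostNetwork is enabled):
$ k3s kubectl get deploy -n ix-syncthing-lena syncthing-lena -o jsonpath='{.spec.template.spec.hostNetwork}'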
Yes, I had host ports activated for at least one of the installations!
That could indeed be the issue - I'll double-check as soon as I am back home!
Thanks a lot!
Dear all,
There seems to be an issue with the Syncthing chart: the ports configured in the TrueNAS GUI are apparently not applied properly during pod deployment, or something else is wrong. I ran into this while trying to run multiple instances of the Syncthing application, which resulted in startup errors because the ports were already in use.
Steps to reproduce: install the Syncthing app twice from the catalog, with a different set of ports for each instance.
The first app starts fine, the exposed ports are usable, and there is a dedicated pod for the app. The second instance fails with the FailedScheduling error shown in the logs above, even though there is a dedicated pod for the second instance as well.
As far as I understand the Kubernetes concept, each pod should get its own IP address, and ports should only be blocked by containers within the same pod?
The status window of the running app shows additional ports: the Syncthing default ports (8384, 22000).
What am I missing? Is there a way to run multiple instances of the same app?
Thanks and kind regards,
Martin
TrueNAS Version: TrueNAS-SCALE-22.12.2