Open busyboy77 opened 2 years ago
anyone?
I have not tried running on K3S, and lack familiarity with it. However, have you reviewed this video, where I show a step-by-step deploy on K8S on GCP? https://youtu.be/Rim812-p8mo
Here is an example of how I assigned taints and labels on a digital ocean cluster, in case it helps
MacBook-Pro-2:~ dhorton$ doctl kubernetes cluster list
ID Name Region Version Auto Upgrade Status Node Pools
411930a2-122e-467c-817b-e3abdf9ab118 k8s-1-22-7-do-0-nyc3-1648555963360 nyc3 1.22.7-do.0 false running pool-5bz50cx8o pool-sip pool-rtp
MacBook-Pro-2:~ dhorton$ doctl kubernetes cluster node-pool update k8s-1-22-7-do-0-nyc3-1648555963360 pool-sip --label "voip-environment=sip"
ID Name Size Count Tags Labels Taints Nodes
3c5f6acf-3266-4f35-8086-9aa3cea56224 pool-sip s-2vcpu-4gb 1 k8s,k8s:411930a2-122e-467c-817b-e3abdf9ab118,k8s:worker map[voip-environment:sip] [] [pool-sip-cul4g]
MacBook-Pro-2:~ dhorton$ doctl kubernetes cluster node-pool update k8s-1-22-7-do-0-nyc3-1648555963360 pool-rtp --label "voip-environment=rtp"
ID Name Size Count Tags Labels Taints Nodes
6304ea56-7644-42a9-b9d4-182a17ea5db7 pool-rtp s-2vcpu-4gb 1 k8s,k8s:411930a2-122e-467c-817b-e3abdf9ab118,k8s:worker map[voip-environment:rtp] [] [pool-rtp-cul4e]
MacBook-Pro-2:~ dhorton$ doctl kubernetes cluster node-pool update k8s-1-22-7-do-0-nyc3-1648555963360 pool-sip --taint "sip=true:NoSchedule"
ID Name Size Count Tags Labels Taints Nodes
3c5f6acf-3266-4f35-8086-9aa3cea56224 pool-sip s-2vcpu-4gb 1 k8s,k8s:411930a2-122e-467c-817b-e3abdf9ab118,k8s:worker map[voip-environment:sip] [sip=true:NoSchedule] [pool-sip-cul4g]
MacBook-Pro-2:~ dhorton$ doctl kubernetes cluster node-pool update k8s-1-22-7-do-0-nyc3-1648555963360 pool-rtp --taint "rtp=true:NoSchedule"
ID Name Size Count Tags Labels Taints Nodes
6304ea56-7644-42a9-b9d4-182a17ea5db7 pool-rtp s-2vcpu-4gb 1 k8s,k8s:411930a2-122e-467c-817b-e3abdf9ab118,k8s:worker map[voip-environment:rtp] [rtp=true:NoSchedule] [pool-rtp-cul4e]
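After updating the node pools with doctl, it's worth a quick sanity check with kubectl that the labels and taints actually landed on the nodes (a sketch; it assumes your kubeconfig points at the cluster above):

```shell
# List nodes with their voip-environment label as an extra column
# (an empty column means the label is missing)
kubectl get nodes -L voip-environment

# Show the taints per node; the sip/rtp pool nodes should report
# sip=true:NoSchedule and rtp=true:NoSchedule respectively
kubectl describe nodes | grep -E '^(Name|Taints):'
```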
You have this in your values.yaml, right?
sbc:
  sip:
    nodeSelector:
      label: voip-environment
      value: sip
    toleration: sip
  rtp:
    nodeSelector:
      label: voip-environment
      value: rtp
    toleration: rtp
because the error message `1 node(s) had taint {rtp: true}, that the pod didn't tolerate`
would seem to indicate that the sbc-rtp daemonset did not have the toleration set.
That is set here, in sbc-rtp-daemonset.yaml
template:
  metadata:
    labels:
      app: jambonz-sbc-rtp
  spec:
    nodeSelector:
      {{ .Values.sbc.rtp.nodeSelector.label }}: {{ .Values.sbc.rtp.nodeSelector.value | quote }}
    hostNetwork: true
    dnsPolicy: ClusterFirstWithHostNet
    tolerations:
      - key: {{ .Values.sbc.rtp.toleration | quote }}
        operator: "Exists"
        effect: "NoSchedule"
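You can confirm the toleration actually renders into the daemonset before installing by templating the chart locally (a sketch; the release name "jambonz", the chart path ".", and your values file are placeholders — substitute your own):

```shell
# Render only the sbc-rtp daemonset (template name taken from the
# message above) and inspect its tolerations block
helm template jambonz . -f values.yaml \
  --show-only templates/sbc-rtp-daemonset.yaml \
  | grep -A3 'tolerations:'
```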
@busyboy77 did you get a solution to this? I'm seeing similar problems with k3s. I've changed the label and toleration to be the same for sip & rtp, but I'm getting the "no free ports" error for the sbc-sip pod.
I have a cluster set up using K3s and all the nodes are already taking load (at acceptable levels).
As per the deployment guide, I applied the taints and labels from the guide and then did a helm install (given below):
kubectl taint node devops232.com sip=true:NoSchedule
kubectl taint node devops231.com rtp=true:NoSchedule
and
kubectl label node devops231.com voip-environment=rtp
kubectl label node devops232.com voip-environment=sip
Now when I try to install the solution using the helm chart, I'm getting errors.
Questions:
1 -- What is the right procedure for assigning nodes to these pods? Any example will suffice. (I can't even disable the taints/tolerations altogether; I get errors when I try.)
2 -- What does the error
node(s) didn't have free ports for the requested pod ports
mean?
Please note that I'm deploying the cluster on-prem as R&D, with the aim of moving it to production usage. The cluster is based on K3s.
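On the "didn't have free ports" error: the sbc pods run with `hostNetwork: true`, so their container ports bind directly on the node, and the scheduler rejects any node where those ports are already in use (for example by a previous replica that never terminated, or by an unrelated process on the host). A sketch for tracking down the conflict (port 5060 is the conventional SIP signaling port — check your chart's values for the actual ports the pods request, and the node name is taken from the commands above):

```shell
# On the candidate node itself: list processes holding the SIP port
sudo ss -lunp 'sport = :5060'   # UDP listeners
sudo ss -ltnp 'sport = :5060'   # TCP listeners

# From the cluster: list the pods already scheduled on that node,
# to spot a stale replica holding the host ports
kubectl get pods -A -o wide --field-selector spec.nodeName=devops232.com
```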