jiejiecn opened 3 days ago
Update: I added `automountServiceAccountToken: true` so the service account token is mounted into the pod automatically, and the CCM can start without specifying `--kubeconfig`.
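For reference, this is roughly where the field goes in the DaemonSet's pod spec (I'm assuming the template's `ccm-linode` naming here; adjust to your manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ccm-linode
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: ccm-linode
  template:
    metadata:
      labels:
        app: ccm-linode
    spec:
      serviceAccountName: ccm-linode
      # Mounts the service account token so the CCM can use in-cluster config
      automountServiceAccountToken: true
      # ... containers, tolerations, etc. from the template ...
```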
But the log shows:

```
error retrieving resource lock kube-system/cloud-controller-manager: leases.coordination.k8s.io "cloud-controller-manager" is forbidden: User "system:serviceaccount:kube-system:ccm-linode" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
```
Update:
I granted all verbs on `leases` in the `coordination.k8s.io` API group to the ClusterRole `ccm-linode-clusterrole`, and the error in the log stopped.
So the template YAML is probably missing some rights for `leases` under the ClusterRole section.
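Concretely, the rule I added looks something like this (I granted all verbs; `get`, `create`, and `update` should be the minimum that leader election needs):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ccm-linode-clusterrole
rules:
  # ... the template's existing rules ...
  # Allow the CCM's leader election to manage its lease
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "create", "update"]
```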
I tried to set up an unmanaged k8s cluster on Linode through Rancher. After that I tried to install the CCM so I could create a NodeBalancer through a Service of type LoadBalancer. During the process I found some invalid parameters in the template YAML file; maybe it's not up to date.
Under the "DaemonSet" section:
Another issue: the log shows that I need to specify `--master` or `--kubeconfig`, otherwise the CCM can't reach the API server. I mounted a volume with a kubeconfig file and specified `--kubeconfig`, and it works, but I don't think that's a good way to do it. I'm wondering if this is a bug?
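For context, the workaround looked roughly like this (the paths are just examples from my setup; yours will differ):

```yaml
# Workaround in the DaemonSet pod spec (not a recommendation)
containers:
  - name: ccm-linode
    # ...
    args:
      - --kubeconfig=/etc/kubernetes/kubeconfig  # path inside the container
    volumeMounts:
      - name: kubeconfig
        mountPath: /etc/kubernetes/kubeconfig
        readOnly: true
volumes:
  - name: kubeconfig
    hostPath:
      path: /etc/kubernetes/admin.conf  # example host path; adjust to your cluster
      type: File
```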
The final parameters that made it run:
I'm not a k8s specialist, so please correct me if I missed something :-)