Open phuongvtn opened 3 months ago
@phuongvtn: The label(s) area/logging cannot be applied, because the repository doesn't have them.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- lifecycle/stale is applied
- After lifecycle/stale was applied, lifecycle/rotten is applied
- After lifecycle/rotten was applied, the issue is closed

You can:
- /remove-lifecycle stale
- /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
@phuongvtn I believe the kubeconfig logic was changed in #248. Could you try to see if you're still having the same issue?
Hi, I'm a newbie with the Cluster API Operator, and I'm seeing an error log that appears continuously in caaph (addon-helm). I'm testing Cluster API with the OpenStack infrastructure provider, which provisions my workload clusters, and I'm using HelmChartProxy to deploy a CNI (Calico and Cilium) to the target workload cluster. However, the caaph pod continuously reports an error about getting the kubeconfig for the cluster.
Below are the logs I collected:
These error logs started appearing right after the first HelmChartProxy deployment to the target workload cluster succeeded. They seem to come from func KubeconfigGetter.GetClusterKubeconfig, but I'm not sure.
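For context, Cluster API conventionally stores each workload cluster's kubeconfig in a Secret named "<cluster-name>-kubeconfig" in the cluster's namespace, and CAAPH's kubeconfig getter reads that Secret. The sketch below is a hypothetical illustration of that lookup using plain maps in place of real Secret reads; the helper names are my own and are not CAAPH's actual code.

```go
package main

import "fmt"

// kubeconfigSecretName follows the Cluster API naming convention for the
// Secret that holds a workload cluster's kubeconfig.
func kubeconfigSecretName(clusterName string) string {
	return clusterName + "-kubeconfig"
}

// getClusterKubeconfig sketches the lookup: the Secret's "value" key holds
// the kubeconfig bytes. A missing Secret or key yields an error similar in
// spirit to the "get kubeconfig" errors reported in this issue.
// (secrets stands in for a real Secret client; this is illustrative only.)
func getClusterKubeconfig(secrets map[string]map[string][]byte, clusterName string) ([]byte, error) {
	name := kubeconfigSecretName(clusterName)
	data, ok := secrets[name]
	if !ok {
		return nil, fmt.Errorf("secret %q not found", name)
	}
	value, ok := data["value"]
	if !ok {
		return nil, fmt.Errorf("missing \"value\" key in secret %q", name)
	}
	return value, nil
}

func main() {
	secrets := map[string]map[string][]byte{
		"my-workload-kubeconfig": {"value": []byte("apiVersion: v1\nkind: Config\n")},
	}
	kc, err := getClusterKubeconfig(secrets, "my-workload")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(kubeconfigSecretName("my-workload")) // prints "my-workload-kubeconfig"
	fmt.Printf("kubeconfig bytes: %d\n", len(kc))
}
```

If the Secret exists and is readable (e.g. kubectl get secret my-workload-kubeconfig -n my-namespace succeeds), the errors may point at a transient or RBAC issue rather than a missing kubeconfig.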
Although I tested updating Helm values through the HelmChartProxy of the target workload cluster, and also checked the revision of the HelmReleaseProxy (in the corresponding namespace) and the ConfigMap of the target workload cluster, everything still updates successfully. I'm not sure whether the logs above could affect the Helm chart's lifecycle or the CNI values on the target workload cluster in the future.
Am I missing some configuration, or can these logs be ignored for now? Thanks
Environment:
- Kubernetes version (kubectl version): v1.28.4
- OS (/etc/os-release): Ubuntu 20.04.4

/kind bug
/area logging