microsoft / farmvibes-ai

FarmVibes.AI: Multi-Modal GeoSpatial ML Models for Agriculture and Sustainability
https://microsoft.github.io/farmvibes-ai/
MIT License

Error on Installing Farmvibes.ai on Github Codespaces #61

Closed: nitinya9av closed this issue 1 year ago

nitinya9av commented 1 year ago

I am trying to run FarmVibes.AI on Codespaces. The installation completes without errors, but the REST API is running on the wrong IP. [Screenshot (257)] [Screenshot (258)] [Screenshot (259)]

renatolfc commented 1 year ago

Hi @nitinya9av, can you share the spec of the codespace you created?

Also, can you share the output of the following commands?

To get all networking data about your cluster:

`docker network inspect k3d-farmvibes-ai`

To get the IP docker thinks your cluster has:

`docker network inspect k3d-farmvibes-ai -f "{{range \$i, \$value := .Containers}}{{if eq \$value.Name \"k3d-farmvibes-ai-server-0\"}}{{println .IPv4Address}}{{end}}{{end}}"`

To get all logs from the cluster's services:

`docker logs k3d-farmvibes-ai-server-0`

To get the status of the pods:

Thanks!
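As a side note, the container-IP lookup that `docker network inspect` enables can be sketched in Python against the JSON that command emits. The sample data, the `server_ip` helper, and the addresses below are illustrative only, not taken from a real cluster:

```python
import json

# Illustrative sample of `docker network inspect <network>` output
# (structure matches Docker's JSON; addresses are made up).
sample = json.loads("""
[{"Name": "k3d-farmvibes-ai",
  "Containers": {
    "abc123": {"Name": "k3d-farmvibes-ai-server-0", "IPv4Address": "172.19.0.3/16"},
    "def456": {"Name": "k3d-farmvibes-ai-serverlb", "IPv4Address": "172.19.0.4/16"}}}]
""")

def server_ip(inspect_output, container="k3d-farmvibes-ai-server-0"):
    """Return the bare IPv4 address of the named container on the network,
    i.e. what a Go-template filter over .Containers would print."""
    for net in inspect_output:
        for c in net.get("Containers", {}).values():
            if c["Name"] == container:
                return c["IPv4Address"].split("/")[0]  # strip the CIDR suffix
    return None

print(server_ip(sample))  # 172.19.0.3 for the sample above
```

In practice you would feed this the output of `docker network inspect k3d-farmvibes-ai` instead of the hard-coded sample.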

nitinya9av commented 1 year ago

I am using a 4-core CPU, 8 GB RAM, and 32 GB of storage.

The outputs are:

  1. `docker network inspect k3d-farmvibes-ai` [Screenshot (261)]
  2. `docker network inspect k3d-farmvibes-ai -f "{{range \$i, \$value := .Containers}}{{if eq \$value.Name \"k3d-farmvibes-ai-server-0\"}}{{println .IPv4Address}}{{end}}{{end}}"` [Screenshot (264)]
  3. `docker logs k3d-farmvibes-ai-server-0`

netes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=BF5C0DBDABFE908D00816E6BA1904463975B8E8A]" time="2023-03-02T13:54:53Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-farmvibes-ai-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables" I0302 13:54:53.296912 7 server.go:231] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP" E0302 13:54:53.297619 7 proxier.go:657] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.4.0-1103-azure/modules.builtin: no such file or directory" filePath="/lib/modules/5.4.0-1103-azure/modules.builtin" I0302 13:54:53.298275 7 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs" I0302 13:54:53.298786 7 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr" I0302 13:54:53.299297 7 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr" I0302 13:54:53.299793 7 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" 
moduleName="ip_vs_sh" I0302 13:54:53.300340 7 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack" I0302 13:54:53.308750 7 node.go:163] Successfully retrieved node IP: 172.19.0.3 I0302 13:54:53.308772 7 server_others.go:138] "Detected node IP" address="172.19.0.3" I0302 13:54:53.311583 7 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6 I0302 13:54:53.311602 7 server_others.go:206] "Using iptables Proxier" I0302 13:54:53.311619 7 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259" I0302 13:54:53.311882 7 server.go:661] "Version info" version="v1.24.4+k3s1" I0302 13:54:53.311899 7 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0302 13:54:53.312439 7 config.go:317] "Starting service config controller" I0302 13:54:53.312590 7 config.go:226] "Starting endpoint slice config controller" I0302 13:54:53.312602 7 shared_informer.go:255] Waiting for caches to sync for endpoint slice config I0302 13:54:53.312847 7 config.go:444] "Starting node config controller" I0302 13:54:53.312970 7 shared_informer.go:255] Waiting for caches to sync for node config I0302 13:54:53.312565 7 shared_informer.go:255] Waiting for caches to sync for service config I0302 13:54:53.315811 7 controller.go:611] quota admission added evaluator for: events.events.k8s.io I0302 13:54:53.412897 7 shared_informer.go:262] Caches are synced for endpoint slice config I0302 13:54:53.413054 7 shared_informer.go:262] Caches are synced for node config I0302 13:54:53.414041 7 shared_informer.go:262] Caches are synced for service config I0302 13:54:53.570389 7 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io time="2023-03-02T13:54:53Z" level=info 
msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"29efe34a-ce0c-4e02-be90-acb63cc70e3b\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"237\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\"" time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"29efe34a-ce0c-4e02-be90-acb63cc70e3b\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"237\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\"" time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"3edaf1d3-f6c2-4d69-b213-d9b8c3871ffe\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"245\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\"" I0302 13:54:53.687635 7 controller.go:611] quota admission added evaluator for: deployments.apps I0302 13:54:53.697798 7 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.43.0.10] time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"3edaf1d3-f6c2-4d69-b213-d9b8c3871ffe\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"245\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\"" time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"0e50e402-021b-4997-bec3-c3ba27144f31\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"256\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at 
\"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\"" time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"0e50e402-021b-4997-bec3-c3ba27144f31\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"256\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\"" time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"e6d851da-d76c-486c-b4f2-b7451178f117\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"266\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\"" time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"e6d851da-d76c-486c-b4f2-b7451178f117\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"266\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\"" time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"aeaa00a5-275a-49ad-8c91-3af2a9549bf0\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"271\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\"" time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"aeaa00a5-275a-49ad-8c91-3af2a9549bf0\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"271\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied 
manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\"" time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"8df8b5a1-6423-42fe-b2d5-d412d3edf51a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"276\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\"" time="2023-03-02T13:54:53Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"8df8b5a1-6423-42fe-b2d5-d412d3edf51a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"276\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\"" time="2023-03-02T13:54:54Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"5fa6b84e-f081-4e65-9a81-f6809ab4b1fe\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"281\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\"" time="2023-03-02T13:54:54Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"5fa6b84e-f081-4e65-9a81-f6809ab4b1fe\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"281\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\"" W0302 13:54:54.384441 7 reflector.go:324] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: failed to list v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default" E0302 13:54:54.384468 7 
reflector.go:138] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: Failed to watch v1.Endpoints: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default" I0302 13:54:54.426012 7 serving.go:355] Generated self-signed cert in-memory time="2023-03-02T13:54:54Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-deployment\", UID:\"b9ff11be-76dc-4cec-8298-5bad21eeac12\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"287\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\"" time="2023-03-02T13:54:54Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-deployment\", UID:\"b9ff11be-76dc-4cec-8298-5bad21eeac12\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"287\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\"" time="2023-03-02T13:54:54Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-service\", UID:\"3dbc625e-2a84-4a03-9e13-13f1bfb3d926\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"293\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\"" I0302 13:54:54.854917 7 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.43.232.65] time="2023-03-02T13:54:54Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-service\", UID:\"3dbc625e-2a84-4a03-9e13-13f1bfb3d926\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"293\", 
FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\"" I0302 13:54:54.924763 7 controllermanager.go:143] Version: v1.24.4+k3s1 I0302 13:54:54.927201 7 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController I0302 13:54:54.927275 7 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController I0302 13:54:54.927224 7 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0302 13:54:54.927439 7 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0302 13:54:54.927227 7 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" I0302 13:54:54.927599 7 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0302 13:54:54.927361 7 secure_serving.go:210] Serving securely on 127.0.0.1:10258 I0302 13:54:54.927376 7 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0302 13:54:55.027972 7 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController I0302 13:54:55.028001 7 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0302 13:54:55.027970 7 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file W0302 13:54:55.081318 7 handler_proxy.go:102] no RequestInfo found in the context E0302 13:54:55.081382 7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: 
map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]] W0302 13:54:55.081393 7 handler_proxy.go:102] no RequestInfo found in the context I0302 13:54:55.081396 7 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. E0302 13:54:55.081424 7 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService I0302 13:54:55.082516 7 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue. time="2023-03-02T13:54:55Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"resource-reader\", UID:\"fc686ec8-df26-4b71-b556-9e3d999c9788\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"300\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\"" time="2023-03-02T13:54:55Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"resource-reader\", UID:\"fc686ec8-df26-4b71-b556-9e3d999c9788\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"300\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\"" time="2023-03-02T13:54:55Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"c62d50e9-7e35-4748-b305-c48f8ca7922a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"306\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\"" time="2023-03-02T13:54:55Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"c62d50e9-7e35-4748-b305-c48f8ca7922a\", APIVersion:\"k3s.cattle.io/v1\", 
ResourceVersion:\"306\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\"" time="2023-03-02T13:54:56Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"c97604c6-7b06-4a9a-b31a-3c33f7b53f2d\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"313\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/traefik.yaml\"" I0302 13:54:56.051636 7 controller.go:611] quota admission added evaluator for: helmcharts.helm.cattle.io time="2023-03-02T13:54:56Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"7f521ca2-8163-444c-bb2b-dd538515df2c\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"314\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik" time="2023-03-02T13:54:56Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"c97604c6-7b06-4a9a-b31a-3c33f7b53f2d\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"313\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/traefik.yaml\"" time="2023-03-02T13:54:56Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"db6251c3-3860-4253-b0b0-ddf82c0b5969\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"315\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd" I0302 13:54:56.072546 7 controller.go:611] quota admission added evaluator for: jobs.batch time="2023-03-02T13:54:56Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", 
UID:\"db6251c3-3860-4253-b0b0-ddf82c0b5969\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd" time="2023-03-02T13:54:56Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"7f521ca2-8163-444c-bb2b-dd538515df2c\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik" time="2023-03-02T13:54:56Z" level=info msg="Updated coredns node hosts entry [172.19.0.3 k3d-farmvibes-ai-server-0]" I0302 13:54:56.332171 7 request.go:601] Waited for 1.048036424s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6444/apis/policy/v1beta1 E0302 13:54:56.933024 7 controllermanager.go:463] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request I0302 13:54:56.933417 7 node_controller.go:118] Sending events to api server. 
I0302 13:54:56.933514 7 controllermanager.go:291] Started "cloud-node" I0302 13:54:56.933588 7 node_controller.go:157] Waiting for informer caches to sync I0302 13:54:56.934337 7 node_lifecycle_controller.go:77] Sending events to api server I0302 13:54:56.934369 7 controllermanager.go:291] Started "cloud-node-lifecycle" I0302 13:54:57.033767 7 node_controller.go:406] Initializing node k3d-farmvibes-ai-server-0 with cloud provider I0302 13:54:57.040813 7 node_controller.go:470] Successfully initialized node k3d-farmvibes-ai-server-0 with cloud provider I0302 13:54:57.041047 7 event.go:294] "Event occurred" object="k3d-farmvibes-ai-server-0" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully" I0302 13:54:57.347059 7 serving.go:355] Generated self-signed cert in-memory I0302 13:54:57.579404 7 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4+k3s1" I0302 13:54:57.579425 7 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0302 13:54:57.581928 7 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController I0302 13:54:57.581948 7 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController I0302 13:54:57.581952 7 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" I0302 13:54:57.581963 7 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0302 13:54:57.582080 7 secure_serving.go:210] Serving securely on 127.0.0.1:10259 I0302 13:54:57.582141 7 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0302 13:54:57.582174 7 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0302 13:54:57.582363 7 
tlsconfig.go:240] "Starting DynamicServingCertificateController" I0302 13:54:57.682639 7 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController I0302 13:54:57.682671 7 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0302 13:54:57.682671 7 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file time="2023-03-02T13:55:00Z" level=info msg="Stopped tunnel to 127.0.0.1:6443" time="2023-03-02T13:55:00Z" level=info msg="Connecting to proxy" url="wss://172.19.0.3:6443/v1-k3s/connect" time="2023-03-02T13:55:00Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect" time="2023-03-02T13:55:00Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF" time="2023-03-02T13:55:00Z" level=info msg="Handling backend connection request [k3d-farmvibes-ai-server-0]" I0302 13:55:00.756071 7 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0302 13:55:01.735935 7 range_allocator.go:83] Sending events to api server. I0302 13:55:01.736079 7 range_allocator.go:117] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses. 
I0302 13:55:01.736124 7 controllermanager.go:593] Started "nodeipam" I0302 13:55:01.736260 7 node_ipam_controller.go:154] Starting ipam controller I0302 13:55:01.736276 7 shared_informer.go:255] Waiting for caches to sync for node I0302 13:55:01.742203 7 controllermanager.go:593] Started "persistentvolume-expander" I0302 13:55:01.742228 7 expand_controller.go:341] Starting expand controller I0302 13:55:01.742239 7 shared_informer.go:255] Waiting for caches to sync for expand I0302 13:55:01.748007 7 controllermanager.go:593] Started "endpointslicemirroring" I0302 13:55:01.748143 7 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller I0302 13:55:01.748159 7 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring I0302 13:55:01.753986 7 controllermanager.go:593] Started "deployment" I0302 13:55:01.754115 7 deployment_controller.go:153] "Starting controller" controller="deployment" I0302 13:55:01.754133 7 shared_informer.go:255] Waiting for caches to sync for deployment I0302 13:55:01.759592 7 controllermanager.go:593] Started "csrcleaner" I0302 13:55:01.759682 7 cleaner.go:82] Starting CSR cleaner controller I0302 13:55:01.771946 7 controllermanager.go:593] Started "persistentvolume-binder" I0302 13:55:01.772042 7 pv_controller_base.go:311] Starting persistent volume controller I0302 13:55:01.772055 7 shared_informer.go:255] Waiting for caches to sync for persistent volume I0302 13:55:01.778224 7 controllermanager.go:593] Started "pv-protection" I0302 13:55:01.778317 7 pv_protection_controller.go:79] Starting PV protection controller I0302 13:55:01.778340 7 shared_informer.go:255] Waiting for caches to sync for PV protection I0302 13:55:01.791657 7 garbagecollector.go:149] Starting garbage collector controller I0302 13:55:01.791681 7 shared_informer.go:255] Waiting for caches to sync for garbage collector I0302 13:55:01.791703 7 graph_builder.go:289] GraphBuilder running I0302 13:55:01.791831 7 
controllermanager.go:593] Started "garbagecollector" I0302 13:55:01.799754 7 controllermanager.go:593] Started "replicaset" I0302 13:55:01.799884 7 replica_set.go:205] Starting replicaset controller I0302 13:55:01.803027 7 shared_informer.go:255] Waiting for caches to sync for ReplicaSet W0302 13:55:01.811334 7 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] I0302 13:55:01.925219 7 controllermanager.go:593] Started "disruption" I0302 13:55:01.925251 7 disruption.go:363] Starting disruption controller I0302 13:55:01.925267 7 shared_informer.go:255] Waiting for caches to sync for disruption I0302 13:55:01.976458 7 node_lifecycle_controller.go:377] Sending events to api server. I0302 13:55:01.976628 7 taint_manager.go:163] "Sending events to api server" I0302 13:55:01.976693 7 node_lifecycle_controller.go:505] Controller will reconcile labels. I0302 13:55:01.976730 7 controllermanager.go:593] Started "nodelifecycle" I0302 13:55:01.976785 7 node_lifecycle_controller.go:539] Starting node controller I0302 13:55:01.976800 7 shared_informer.go:255] Waiting for caches to sync for taint I0302 13:55:02.126166 7 controllermanager.go:593] Started "attachdetach" I0302 13:55:02.126228 7 attach_detach_controller.go:328] Starting attach detach controller I0302 13:55:02.126344 7 shared_informer.go:255] Waiting for caches to sync for attach detach I0302 13:55:02.308329 7 controllermanager.go:593] Started "clusterrole-aggregation" I0302 13:55:02.308383 7 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator I0302 13:55:02.308393 7 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator I0302 13:55:02.437027 7 controllermanager.go:593] Started "endpoint" I0302 13:55:02.437247 7 endpoints_controller.go:178] Starting endpoint controller I0302 13:55:02.437271 7 shared_informer.go:255] Waiting for caches to sync for endpoint I0302 13:55:02.576547 7 
controllermanager.go:593] Started "job" I0302 13:55:02.576571 7 job_controller.go:184] Starting job controller I0302 13:55:02.576583 7 shared_informer.go:255] Waiting for caches to sync for job I0302 13:55:02.726884 7 controllermanager.go:593] Started "cronjob" I0302 13:55:02.726940 7 cronjob_controllerv2.go:135] "Starting cronjob controller v2" I0302 13:55:02.726951 7 shared_informer.go:255] Waiting for caches to sync for cronjob I0302 13:55:02.876368 7 controllermanager.go:593] Started "ttl" I0302 13:55:02.876426 7 ttl_controller.go:121] Starting TTL controller I0302 13:55:02.876436 7 shared_informer.go:255] Waiting for caches to sync for TTL I0302 13:55:03.026281 7 controllermanager.go:593] Started "endpointslice" I0302 13:55:03.026329 7 endpointslice_controller.go:257] Starting endpoint slice controller I0302 13:55:03.026337 7 shared_informer.go:255] Waiting for caches to sync for endpoint_slice E0302 13:55:03.334939 7 resource_quota_controller.go:162] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request I0302 13:55:03.348526 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy I0302 13:55:03.348563 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling I0302 13:55:03.348575 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for controllerrevisions.apps I0302 13:55:03.348591 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for leases.coordination.k8s.io I0302 13:55:03.348620 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for cronjobs.batch I0302 13:55:03.348646 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io I0302 13:55:03.348672 7 
resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0302 13:55:03.348701 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
I0302 13:55:03.348731 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
I0302 13:55:03.348754 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for podtemplates
I0302 13:55:03.348793 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpoints
I0302 13:55:03.348811 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for statefulsets.apps
I0302 13:55:03.348848 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for events.events.k8s.io
I0302 13:55:03.348869 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0302 13:55:03.348907 7 shared_informer.go:533] resyncPeriod 15h46m5.898587508s is smaller than resyncCheckPeriod 20h12m27.716716586s and the informer has already started. Changing it to 20h12m27.716716586s
I0302 13:55:03.348974 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for serviceaccounts
I0302 13:55:03.348999 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for limitranges
I0302 13:55:03.349022 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for deployments.apps
I0302 13:55:03.349039 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for jobs.batch
I0302 13:55:03.349070 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
W0302 13:55:03.349094 7 shared_informer.go:533] resyncPeriod 17h8m49.185068042s is smaller than resyncCheckPeriod 20h12m27.716716586s and the informer has already started. Changing it to 20h12m27.716716586s
I0302 13:55:03.349161 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for daemonsets.apps
I0302 13:55:03.349184 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for replicasets.apps
I0302 13:55:03.349210 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0302 13:55:03.349232 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0302 13:55:03.349255 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I0302 13:55:03.349280 7 controllermanager.go:593] Started "resourcequota"
I0302 13:55:03.349487 7 resource_quota_controller.go:273] Starting resource quota controller
I0302 13:55:03.349504 7 shared_informer.go:255] Waiting for caches to sync for resource quota
I0302 13:55:03.349523 7 resource_quota_monitor.go:308] QuotaMonitor running
E0302 13:55:03.355869 7 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0302 13:55:03.376875 7 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-serving"
I0302 13:55:03.376982 7 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0302 13:55:03.377020 7 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
I0302 13:55:03.377146 7 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-client"
I0302 13:55:03.377168 7 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0302 13:55:03.377219 7 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
I0302 13:55:03.377545 7 controllermanager.go:593] Started "csrsigning"
I0302 13:55:03.377561 7 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
I0302 13:55:03.377547 7 certificate_controller.go:119] Starting certificate controller "csrsigning-kube-apiserver-client"
I0302 13:55:03.377601 7 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0302 13:55:03.377638 7 certificate_controller.go:119] Starting certificate controller "csrsigning-legacy-unknown"
I0302 13:55:03.377649 7 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0302 13:55:03.377675 7 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
I0302 13:55:03.528081 7 controllermanager.go:593] Started "ephemeral-volume"
I0302 13:55:03.528161 7 controller.go:170] Starting ephemeral volume controller
I0302 13:55:03.528171 7 shared_informer.go:255] Waiting for caches to sync for ephemeral
I0302 13:55:03.677267 7 controllermanager.go:593] Started "replicationcontroller"
I0302 13:55:03.677318 7 replica_set.go:205] Starting replicationcontroller controller
I0302 13:55:03.677329 7 shared_informer.go:255] Waiting for caches to sync for ReplicationController
I0302 13:55:03.826742 7 controllermanager.go:593] Started "podgc"
I0302 13:55:03.826787 7 gc_controller.go:92] Starting GC controller
I0302 13:55:03.826795 7 shared_informer.go:255] Waiting for caches to sync for GC
I0302 13:55:03.979340 7 controllermanager.go:593] Started "serviceaccount"
I0302 13:55:03.979405 7 serviceaccounts_controller.go:117] Starting service account controller
I0302 13:55:03.979414 7 shared_informer.go:255] Waiting for caches to sync for service account
I0302 13:55:04.028738 7 controllermanager.go:593] Started "csrapproving"
W0302 13:55:04.028760 7 controllermanager.go:558] "service" is disabled
I0302 13:55:04.028847 7 certificate_controller.go:119] Starting certificate controller "csrapproving"
I0302 13:55:04.028863 7 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving
I0302 13:55:04.177429 7 controllermanager.go:593] Started "pvc-protection"
W0302 13:55:04.177453 7 controllermanager.go:558] "cloud-node-lifecycle" is disabled
I0302 13:55:04.177496 7 pvc_protection_controller.go:103] "Starting PVC protection controller"
I0302 13:55:04.177505 7 shared_informer.go:255] Waiting for caches to sync for PVC protection
I0302 13:55:04.326756 7 controllermanager.go:593] Started "ttl-after-finished"
I0302 13:55:04.326871 7 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0302 13:55:04.326885 7 shared_informer.go:255] Waiting for caches to sync for TTL after finished
I0302 13:55:04.331652 7 shared_informer.go:255] Waiting for caches to sync for resource quota
W0302 13:55:04.350341 7 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3d-farmvibes-ai-server-0" does not exist
I0302 13:55:04.354804 7 shared_informer.go:262] Caches are synced for deployment
I0302 13:55:04.363926 7 shared_informer.go:262] Caches are synced for crt configmap
I0302 13:55:04.366446 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
I0302 13:55:04.366621 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
I0302 13:55:04.375698 7 shared_informer.go:262] Caches are synced for persistent volume
I0302 13:55:04.377784 7 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0302 13:55:04.378070 7 shared_informer.go:262] Caches are synced for PVC protection
I0302 13:55:04.379583 7 shared_informer.go:262] Caches are synced for service account
E0302 13:55:04.379983 7 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0302 13:55:04.380929 7 shared_informer.go:262] Caches are synced for TTL
I0302 13:55:04.381651 7 shared_informer.go:262] Caches are synced for job
I0302 13:55:04.381692 7 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0302 13:55:04.381718 7 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0302 13:55:04.381772 7 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0302 13:55:04.382932 7 shared_informer.go:262] Caches are synced for PV protection
I0302 13:55:04.382971 7 shared_informer.go:262] Caches are synced for ReplicationController
E0302 13:55:04.385714 7 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0302 13:55:04.387454 7 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0302 13:55:04.389537 7 shared_informer.go:262] Caches are synced for namespace
I0302 13:55:04.398449 7 shared_informer.go:262] Caches are synced for daemon sets
I0302 13:55:04.405180 7 shared_informer.go:262] Caches are synced for ReplicaSet
I0302 13:55:04.410384 7 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0302 13:55:04.410367 7 shared_informer.go:262] Caches are synced for HPA
I0302 13:55:04.415581 7 shared_informer.go:262] Caches are synced for stateful set
I0302 13:55:04.425659 7 shared_informer.go:262] Caches are synced for disruption
I0302 13:55:04.425722 7 disruption.go:371] Sending events to api server.
I0302 13:55:04.426525 7 shared_informer.go:262] Caches are synced for attach detach
I0302 13:55:04.426884 7 shared_informer.go:262] Caches are synced for GC
I0302 13:55:04.427117 7 shared_informer.go:262] Caches are synced for cronjob
I0302 13:55:04.427184 7 shared_informer.go:262] Caches are synced for TTL after finished
I0302 13:55:04.428336 7 shared_informer.go:262] Caches are synced for ephemeral
I0302 13:55:04.429470 7 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0302 13:55:04.436749 7 shared_informer.go:262] Caches are synced for node
I0302 13:55:04.436777 7 range_allocator.go:173] Starting range CIDR allocator
I0302 13:55:04.436783 7 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0302 13:55:04.436797 7 shared_informer.go:262] Caches are synced for cidrallocator
I0302 13:55:04.443673 7 shared_informer.go:262] Caches are synced for expand
time="2023-03-02T13:55:04Z" level=info msg="Flannel found PodCIDR assigned for node k3d-farmvibes-ai-server-0"
I0302 13:55:04.445470 7 range_allocator.go:374] Set node k3d-farmvibes-ai-server-0 PodCIDR to [10.42.0.0/24]
time="2023-03-02T13:55:04Z" level=info msg="The interface eth0 with ipv4 address 172.19.0.3 will be used by flannel"
I0302 13:55:04.452855 7 kube.go:121] Waiting 10m0s for node controller to sync
I0302 13:55:04.452932 7 kube.go:402] Starting kube subnet manager
I0302 13:55:04.529709 7 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
I0302 13:55:04.530641 7 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
I0302 13:55:04.537334 7 shared_informer.go:262] Caches are synced for endpoint
I0302 13:55:04.541555 7 controller.go:611] quota admission added evaluator for: replicasets.apps
I0302 13:55:04.547400 7 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-b96499967 to 1"
I0302 13:55:04.548022 7 event.go:294] "Event occurred" object="kube-system/local-path-provisioner" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-7b7dc8d6f5 to 1"
I0302 13:55:04.548279 7 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0302 13:55:04.549082 7 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-668d979685 to 1"
time="2023-03-02T13:55:04Z" level=info msg="Starting the netpol controller"
I0302 13:55:04.560503 7 network_policy_controller.go:162] Starting network policy controller
I0302 13:55:04.590890 7 shared_informer.go:262] Caches are synced for taint
I0302 13:55:04.590996 7 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
W0302 13:55:04.591062 7 node_lifecycle_controller.go:1014] Missing timestamp for Node k3d-farmvibes-ai-server-0. Assuming now as a timestamp.
I0302 13:55:04.591090 7 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0302 13:55:04.591276 7 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0302 13:55:04.591501 7 event.go:294] "Event occurred" object="k3d-farmvibes-ai-server-0" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node k3d-farmvibes-ai-server-0 event: Registered Node k3d-farmvibes-ai-server-0 in Controller"
I0302 13:55:04.619879 7 network_policy_controller.go:174] Starting network policy controller full sync goroutine
I0302 13:55:04.626378 7 shared_informer.go:262] Caches are synced for endpoint_slice
I0302 13:55:04.632398 7 shared_informer.go:262] Caches are synced for resource quota
I0302 13:55:04.649702 7 shared_informer.go:262] Caches are synced for resource quota
I0302 13:55:04.991215 7 event.go:294] "Event occurred" object="kube-system/helm-install-traefik" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-btc92"
I0302 13:55:04.998371 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
I0302 13:55:04.999143 7 event.go:294] "Event occurred" object="kube-system/helm-install-traefik-crd" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-crd-5t9dc"
I0302 13:55:05.007047 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
I0302 13:55:05.007682 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
time="2023-03-02T13:55:05Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"7f521ca2-8163-444c-bb2b-dd538515df2c\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
I0302 13:55:05.017333 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
I0302 13:55:05.019213 7 topology_manager.go:200] "Topology Admit Handler"
I0302 13:55:05.029396 7 topology_manager.go:200] "Topology Admit Handler"
I0302 13:55:05.022756 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
I0302 13:55:05.030245 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
I0302 13:55:05.037395 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
time="2023-03-02T13:55:05Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"db6251c3-3860-4253-b0b0-ddf82c0b5969\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
time="2023-03-02T13:55:05Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"7f521ca2-8163-444c-bb2b-dd538515df2c\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
I0302 13:55:05.072609 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
I0302 13:55:05.093560 7 shared_informer.go:262] Caches are synced for garbage collector
I0302 13:55:05.093639 7 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0302 13:55:05.094790 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
time="2023-03-02T13:55:05Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"7f521ca2-8163-444c-bb2b-dd538515df2c\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
I0302 13:55:05.100312 7 shared_informer.go:262] Caches are synced for garbage collector
I0302 13:55:05.129732 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-values\") pod \"helm-install-traefik-crd-5t9dc\" (UID: \"2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf\") " pod="kube-system/helm-install-traefik-crd-5t9dc"
I0302 13:55:05.129780 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-content\") pod \"helm-install-traefik-crd-5t9dc\" (UID: \"2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf\") " pod="kube-system/helm-install-traefik-crd-5t9dc"
I0302 13:55:05.129921 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8cd\" (UniqueName: \"kubernetes.io/projected/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-kube-api-access-wz8cd\") pod \"helm-install-traefik-crd-5t9dc\" (UID: \"2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf\") " pod="kube-system/helm-install-traefik-crd-5t9dc"
I0302 13:55:05.137072 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
I0302 13:55:05.202695 7 event.go:294] "Event occurred" object="kube-system/local-path-provisioner-7b7dc8d6f5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-7b7dc8d6f5-7fbxf"
I0302 13:55:05.202723 7 event.go:294] "Event occurred" object="kube-system/metrics-server-668d979685" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-668d979685-nrmls"
I0302 13:55:05.213865 7 event.go:294] "Event occurred" object="kube-system/coredns-b96499967" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-b96499967-wkl49"
I0302 13:55:05.230747 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/4e52dd64-7509-4b5d-8b71-be170f1b7dce-values\") pod \"helm-install-traefik-btc92\" (UID: \"4e52dd64-7509-4b5d-8b71-be170f1b7dce\") " pod="kube-system/helm-install-traefik-btc92"
I0302 13:55:05.231119 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/4e52dd64-7509-4b5d-8b71-be170f1b7dce-content\") pod \"helm-install-traefik-btc92\" (UID: \"4e52dd64-7509-4b5d-8b71-be170f1b7dce\") " pod="kube-system/helm-install-traefik-btc92"
I0302 13:55:05.269904 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqf2b\" (UniqueName: \"kubernetes.io/projected/4e52dd64-7509-4b5d-8b71-be170f1b7dce-kube-api-access-bqf2b\") pod \"helm-install-traefik-btc92\" (UID: \"4e52dd64-7509-4b5d-8b71-be170f1b7dce\") " pod="kube-system/helm-install-traefik-btc92"
W0302 13:55:05.300840 7 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
W0302 13:55:05.301794 7 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/metrics-server", retrying. Error: EndpointSlice informer cache is out of date
I0302 13:55:05.322129 7 topology_manager.go:200] "Topology Admit Handler"
I0302 13:55:05.322309 7 topology_manager.go:200] "Topology Admit Handler"
I0302 13:55:05.322392 7 topology_manager.go:200] "Topology Admit Handler"
I0302 13:55:05.373790 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r96x\" (UniqueName: \"kubernetes.io/projected/2b28dd13-c65c-4ecb-b645-84ac882dd410-kube-api-access-5r96x\") pod \"local-path-provisioner-7b7dc8d6f5-7fbxf\" (UID: \"2b28dd13-c65c-4ecb-b645-84ac882dd410\") " pod="kube-system/local-path-provisioner-7b7dc8d6f5-7fbxf"
I0302 13:55:05.373884 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0874964c-1e1c-4a66-aef4-3cbd0774b389-config-volume\") pod \"coredns-b96499967-wkl49\" (UID: \"0874964c-1e1c-4a66-aef4-3cbd0774b389\") " pod="kube-system/coredns-b96499967-wkl49"
I0302 13:55:05.373916 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-config-volume\" (UniqueName: \"kubernetes.io/configmap/0874964c-1e1c-4a66-aef4-3cbd0774b389-custom-config-volume\") pod \"coredns-b96499967-wkl49\" (UID: \"0874964c-1e1c-4a66-aef4-3cbd0774b389\") " pod="kube-system/coredns-b96499967-wkl49"
I0302 13:55:05.373946 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b28dd13-c65c-4ecb-b645-84ac882dd410-config-volume\") pod \"local-path-provisioner-7b7dc8d6f5-7fbxf\" (UID: \"2b28dd13-c65c-4ecb-b645-84ac882dd410\") " pod="kube-system/local-path-provisioner-7b7dc8d6f5-7fbxf"
I0302 13:55:05.373988 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxmrc\" (UniqueName: \"kubernetes.io/projected/0874964c-1e1c-4a66-aef4-3cbd0774b389-kube-api-access-wxmrc\") pod \"coredns-b96499967-wkl49\" (UID: \"0874964c-1e1c-4a66-aef4-3cbd0774b389\") " pod="kube-system/coredns-b96499967-wkl49"
I0302 13:55:05.374019 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f5f25557-119c-4c09-9646-aa2b5aa6d09e-tmp-dir\") pod \"metrics-server-668d979685-nrmls\" (UID: \"f5f25557-119c-4c09-9646-aa2b5aa6d09e\") " pod="kube-system/metrics-server-668d979685-nrmls"
I0302 13:55:05.374050 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24ddl\" (UniqueName: \"kubernetes.io/projected/f5f25557-119c-4c09-9646-aa2b5aa6d09e-kube-api-access-24ddl\") pod \"metrics-server-668d979685-nrmls\" (UID: \"f5f25557-119c-4c09-9646-aa2b5aa6d09e\") " pod="kube-system/metrics-server-668d979685-nrmls"
I0302 13:55:05.453316 7 kube.go:128] Node controller sync successful
I0302 13:55:05.453480 7 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0302 13:55:05.486302 7 kube.go:357] Skip setting NodeNetworkUnavailable
time="2023-03-02T13:55:05Z" level=info msg="Wrote flannel subnet file to /run/flannel/subnet.env"
time="2023-03-02T13:55:05Z" level=info msg="Running flannel backend."
I0302 13:55:05.490478 7 vxlan_network.go:61] watching for new subnet leases
I0302 13:55:05.550327 7 iptables.go:177] bootstrap done
I0302 13:55:05.563301 7 iptables.go:177] bootstrap done
W0302 13:55:06.152790 7 handler_proxy.go:102] no RequestInfo found in the context
E0302 13:55:06.152879 7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
W0302 13:55:06.153048 7 handler_proxy.go:102] no RequestInfo found in the context
E0302 13:55:06.153080 7 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0302 13:55:06.153146 7 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0302 13:55:06.154582 7 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0302 13:55:06.632663 7 request.go:601] Waited for 1.157905916s due to client-side throttling, not priority and fairness, request: POST:https://127.0.0.1:6443/api/v1/namespaces/kube-system/serviceaccounts/metrics-server/token
time="2023-03-02T13:55:11Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"3edaf1d3-f6c2-4d69-b213-d9b8c3871ffe\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"255\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
time="2023-03-02T13:55:11Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"3edaf1d3-f6c2-4d69-b213-d9b8c3871ffe\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"255\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
I0302 13:55:24.514357 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
I0302 13:55:24.541896 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
I0302 13:55:25.520460 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
time="2023-03-02T13:55:25Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"db6251c3-3860-4253-b0b0-ddf82c0b5969\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
I0302 13:55:25.547046 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
time="2023-03-02T13:55:25Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"7f521ca2-8163-444c-bb2b-dd538515df2c\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
I0302 13:55:28.506761 7 scope.go:110] "RemoveContainer" containerID="1df1c9f20038d2f5a7b9b13365d190dba8826ab669da8e321a2202335b96878c"
I0302 13:55:28.553950 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
I0302 13:55:28.572277 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
I0302 13:55:29.528854 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
I0302 13:55:29.609946 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
time="2023-03-02T13:55:29Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"db6251c3-3860-4253-b0b0-ddf82c0b5969\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
I0302 13:55:29.995995 7 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz8cd\" (UniqueName: \"kubernetes.io/projected/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-kube-api-access-wz8cd\") pod \"2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf\" (UID: \"2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf\") "
I0302 13:55:29.996044 7 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-content\") pod \"2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf\" (UID: \"2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf\") "
I0302 13:55:29.996075 7 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-values\") pod \"2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf\" (UID: \"2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf\") "
W0302 13:55:29.996302 7 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf/volumes/kubernetes.io~configmap/content: clearQuota called, but quotas disabled
W0302 13:55:29.996310 7 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf/volumes/kubernetes.io~configmap/values: clearQuota called, but quotas disabled
I0302 13:55:29.996578 7 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-content" (OuterVolumeSpecName: "content") pod "2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf" (UID: "2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf"). InnerVolumeSpecName "content". PluginName "kubernetes.io/configmap", VolumeGidValue ""
I0302 13:55:29.996604 7 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-values" (OuterVolumeSpecName: "values") pod "2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf" (UID: "2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf"). InnerVolumeSpecName "values". PluginName "kubernetes.io/configmap", VolumeGidValue ""
I0302 13:55:29.998550 7 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-kube-api-access-wz8cd" (OuterVolumeSpecName: "kube-api-access-wz8cd") pod "2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf" (UID: "2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf"). InnerVolumeSpecName "kube-api-access-wz8cd". PluginName "kubernetes.io/projected", VolumeGidValue ""
I0302 13:55:30.096325 7 reconciler.go:384] "Volume detached for volume \"values\" (UniqueName: \"kubernetes.io/configmap/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-values\") on node \"k3d-farmvibes-ai-server-0\" DevicePath \"\""
I0302 13:55:30.096366 7 reconciler.go:384] "Volume detached for volume \"content\" (UniqueName: \"kubernetes.io/configmap/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-content\") on node \"k3d-farmvibes-ai-server-0\" DevicePath \"\""
I0302 13:55:30.096378 7 reconciler.go:384] "Volume detached for volume \"kube-api-access-wz8cd\" (UniqueName: \"kubernetes.io/projected/2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf-kube-api-access-wz8cd\") on node \"k3d-farmvibes-ai-server-0\" DevicePath \"\""
I0302 13:55:30.381058 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
I0302 13:55:30.513035 7 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8b9c0c26114d9f3a73036ef10785936daa43f9f8e6450e64be2b3eeb9cf2b735"
I0302 13:55:31.426356 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
time="2023-03-02T13:55:31Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"db6251c3-3860-4253-b0b0-ddf82c0b5969\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
I0302 13:55:31.444877 7 event.go:294] "Event occurred" object="kube-system/helm-install-traefik-crd" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0302 13:55:31.447505 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
I0302 13:55:31.450237 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik-crd
time="2023-03-02T13:55:31Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"db6251c3-3860-4253-b0b0-ddf82c0b5969\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"328\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
E0302 13:55:31.457023 7 job_controller.go:533] syncing job: tracking status: adding uncounted pods to status: Operation cannot be fulfilled on jobs.batch "helm-install-traefik-crd": the object has been modified; please apply your changes to the latest version and try again
I0302 13:55:31.588397 7 event.go:294] "Event occurred" object="kube-system/traefik" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set traefik-7cd4fcff68 to 1"
I0302 13:55:31.600123 7 event.go:294] "Event occurred" object="kube-system/traefik-7cd4fcff68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: traefik-7cd4fcff68-lqk7j"
I0302 13:55:31.600775 7 alloc.go:327] "allocated clusterIPs" service="kube-system/traefik" clusterIPs=map[IPv4:10.43.70.225]
I0302 13:55:31.619367 7 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0302 13:55:31.620716 7 topology_manager.go:200] "Topology Admit Handler"
E0302 13:55:31.620834 7 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf" containerName="helm"
I0302 13:55:31.620896 7 memory_manager.go:345] "RemoveStaleState removing state" podUID="2c63e8cd-e591-4fc0-ac2e-5e2530bbdfcf" containerName="helm"
time="2023-03-02T13:55:31Z" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"41a37c5e-104f-4a32-8be3-d208156b48ea\", APIVersion:\"v1\", ResourceVersion:\"561\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-41a37c5e"
time="2023-03-02T13:55:31Z" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"41a37c5e-104f-4a32-8be3-d208156b48ea\", APIVersion:\"v1\", ResourceVersion:\"561\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-41a37c5e"
I0302 13:55:31.648539 7 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0302 13:55:31.659909 7 event.go:294] "Event occurred" object="kube-system/svclb-traefik-41a37c5e" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: svclb-traefik-41a37c5e-pkfg4"
I0302 13:55:31.680796 7 topology_manager.go:200] "Topology Admit Handler"
I0302 13:55:31.724683 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nxv4\" (UniqueName: \"kubernetes.io/projected/de81ebbc-d01b-4ffb-bb41-aab178d8f36e-kube-api-access-5nxv4\") pod \"traefik-7cd4fcff68-lqk7j\" (UID: \"de81ebbc-d01b-4ffb-bb41-aab178d8f36e\") " pod="kube-system/traefik-7cd4fcff68-lqk7j"
I0302 13:55:31.724737 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de81ebbc-d01b-4ffb-bb41-aab178d8f36e-tmp\") pod \"traefik-7cd4fcff68-lqk7j\" (UID: \"de81ebbc-d01b-4ffb-bb41-aab178d8f36e\") " pod="kube-system/traefik-7cd4fcff68-lqk7j"
I0302 13:55:31.724764 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/de81ebbc-d01b-4ffb-bb41-aab178d8f36e-data\") pod \"traefik-7cd4fcff68-lqk7j\" (UID: \"de81ebbc-d01b-4ffb-bb41-aab178d8f36e\") " pod="kube-system/traefik-7cd4fcff68-lqk7j"
I0302 13:55:31.726823 7 controller.go:611] quota admission added evaluator for: ingressroutes.traefik.containo.us
I0302 13:55:32.519998 7 scope.go:110] "RemoveContainer" containerID="1df1c9f20038d2f5a7b9b13365d190dba8826ab669da8e321a2202335b96878c"
I0302 13:55:32.535259 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
I0302 13:55:33.537803 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
I0302 13:55:33.545095 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
time="2023-03-02T13:55:33Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"7f521ca2-8163-444c-bb2b-dd538515df2c\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
I0302 13:55:33.837815 7 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqf2b\" (UniqueName: \"kubernetes.io/projected/4e52dd64-7509-4b5d-8b71-be170f1b7dce-kube-api-access-bqf2b\") pod \"4e52dd64-7509-4b5d-8b71-be170f1b7dce\" (UID: \"4e52dd64-7509-4b5d-8b71-be170f1b7dce\") "
I0302 13:55:33.837853 7 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/4e52dd64-7509-4b5d-8b71-be170f1b7dce-content\") pod \"4e52dd64-7509-4b5d-8b71-be170f1b7dce\" (UID: \"4e52dd64-7509-4b5d-8b71-be170f1b7dce\") "
I0302 13:55:33.837885 7 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/4e52dd64-7509-4b5d-8b71-be170f1b7dce-values\") pod \"4e52dd64-7509-4b5d-8b71-be170f1b7dce\" (UID: \"4e52dd64-7509-4b5d-8b71-be170f1b7dce\") "
W0302 13:55:33.838125 7 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/4e52dd64-7509-4b5d-8b71-be170f1b7dce/volumes/kubernetes.io~configmap/content: clearQuota called, but quotas disabled
W0302 13:55:33.838169 7 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/4e52dd64-7509-4b5d-8b71-be170f1b7dce/volumes/kubernetes.io~configmap/values: clearQuota called, but quotas disabled
I0302 13:55:33.838278 7 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e52dd64-7509-4b5d-8b71-be170f1b7dce-content" (OuterVolumeSpecName: "content") pod "4e52dd64-7509-4b5d-8b71-be170f1b7dce" (UID: "4e52dd64-7509-4b5d-8b71-be170f1b7dce"). InnerVolumeSpecName "content". PluginName "kubernetes.io/configmap", VolumeGidValue ""
I0302 13:55:33.838356 7 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e52dd64-7509-4b5d-8b71-be170f1b7dce-values" (OuterVolumeSpecName: "values") pod "4e52dd64-7509-4b5d-8b71-be170f1b7dce" (UID: "4e52dd64-7509-4b5d-8b71-be170f1b7dce"). InnerVolumeSpecName "values". PluginName "kubernetes.io/configmap", VolumeGidValue ""
I0302 13:55:33.839171 7 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e52dd64-7509-4b5d-8b71-be170f1b7dce-kube-api-access-bqf2b" (OuterVolumeSpecName: "kube-api-access-bqf2b") pod "4e52dd64-7509-4b5d-8b71-be170f1b7dce" (UID: "4e52dd64-7509-4b5d-8b71-be170f1b7dce"). InnerVolumeSpecName "kube-api-access-bqf2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
I0302 13:55:33.938715 7 reconciler.go:384] "Volume detached for volume \"content\" (UniqueName: \"kubernetes.io/configmap/4e52dd64-7509-4b5d-8b71-be170f1b7dce-content\") on node \"k3d-farmvibes-ai-server-0\" DevicePath \"\""
I0302 13:55:33.938748 7 reconciler.go:384] "Volume detached for volume \"values\" (UniqueName: \"kubernetes.io/configmap/4e52dd64-7509-4b5d-8b71-be170f1b7dce-values\") on node \"k3d-farmvibes-ai-server-0\" DevicePath \"\""
I0302 13:55:33.938787 7 reconciler.go:384] "Volume detached for volume \"kube-api-access-bqf2b\" (UniqueName: \"kubernetes.io/projected/4e52dd64-7509-4b5d-8b71-be170f1b7dce-kube-api-access-bqf2b\") on node \"k3d-farmvibes-ai-server-0\" DevicePath \"\""
I0302 13:55:34.527286 7 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="be1fa000ae430935c2619fe8a21a5cdd58ad0c71e6247f6cf3a448c9dd1911be"
I0302 13:55:34.539291 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik
E0302 13:55:34.642484 7 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0302 13:55:34.642775 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for traefikservices.traefik.containo.us
I0302 13:55:34.642829 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingressroutes.traefik.containo.us
I0302 13:55:34.642860 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for middlewaretcps.traefik.containo.us
I0302 13:55:34.642886 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingressrouteudps.traefik.containo.us
I0302 13:55:34.642911 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for tlsoptions.traefik.containo.us
I0302 13:55:34.642930 7 resource_quota_monitor.go:233] QuotaMonitor created object count
evaluator for ingressroutetcps.traefik.containo.us I0302 13:55:34.642954 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for tlsstores.traefik.containo.us I0302 13:55:34.642970 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for middlewares.traefik.containo.us I0302 13:55:34.642994 7 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for serverstransports.traefik.containo.us I0302 13:55:34.653151 7 shared_informer.go:255] Waiting for caches to sync for resource quota I0302 13:55:34.853393 7 shared_informer.go:262] Caches are synced for resource quota W0302 13:55:35.116468 7 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] E0302 13:55:35.120335 7 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request E0302 13:55:35.130077 7 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request I0302 13:55:35.130874 7 shared_informer.go:255] Waiting for caches to sync for garbage collector I0302 13:55:35.130935 7 shared_informer.go:262] Caches are synced for garbage collector I0302 13:55:35.547532 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik I0302 13:55:35.552342 7 event.go:294] "Event occurred" object="kube-system/helm-install-traefik" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" I0302 13:55:35.552393 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik time="2023-03-02T13:55:35Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"7f521ca2-8163-444c-bb2b-dd538515df2c\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job 
kube-system/helm-install-traefik" I0302 13:55:35.559168 7 job_controller.go:498] enqueueing job kube-system/helm-install-traefik E0302 13:55:35.562720 7 job_controller.go:533] syncing job: tracking status: adding uncounted pods to status: Operation cannot be fulfilled on jobs.batch "helm-install-traefik": the object has been modified; please apply your changes to the latest version and try again time="2023-03-02T13:55:35Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"7f521ca2-8163-444c-bb2b-dd538515df2c\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"329\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik" time="2023-03-02T13:55:41Z" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"41a37c5e-104f-4a32-8be3-d208156b48ea\", APIVersion:\"v1\", ResourceVersion:\"561\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-41a37c5e" time="2023-03-02T13:55:41Z" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"41a37c5e-104f-4a32-8be3-d208156b48ea\", APIVersion:\"v1\", ResourceVersion:\"618\", FieldPath:\"\"}): type: 'Normal' reason: 'UpdatedIngressIP' LoadBalancer Ingress IP addresses updated: 172.19.0.3" time="2023-03-02T13:55:41Z" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"41a37c5e-104f-4a32-8be3-d208156b48ea\", APIVersion:\"v1\", ResourceVersion:\"620\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-41a37c5e" time="2023-03-02T13:55:44Z" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"41a37c5e-104f-4a32-8be3-d208156b48ea\", APIVersion:\"v1\", 
ResourceVersion:\"620\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-41a37c5e" I0302 13:55:45.917390 7 trace.go:205] Trace[1883213263]: "List" url:/apis/networking.k8s.io/v1/ingresses,user-agent:traefik/2.6.2 (linux/amd64) kubernetes/ingress,audit-id:f701e33e-341b-4613-ac1c-652f53bc7e08,client:10.42.0.8,accept:application/json, /,protocol:HTTP/2.0 (02-Mar-2023 13:55:45.049) (total time: 851ms): Trace[1883213263]: ---"Writing http response done" count:0 851ms (13:55:45.900) Trace[1883213263]: [851.372802ms] [851.372802ms] END I0302 13:55:45.917396 7 trace.go:205] Trace[354391925]: "List" url:/api/v1/services,user-agent:traefik/2.6.2 (linux/amd64) kubernetes/crd,audit-id:01c5bf1f-770a-4de8-b13c-631365fda8a3,client:10.42.0.8,accept:application/json, /,protocol:HTTP/2.0 (02-Mar-2023 13:55:45.060) (total time: 668ms): Trace[354391925]: ---"Writing http response done" count:4 668ms (13:55:45.729) Trace[354391925]: [668.803614ms] [668.803614ms] END I0302 13:55:45.917396 7 trace.go:205] Trace[473980620]: "List" url:/api/v1/secrets,user-agent:traefik/2.6.2 (linux/amd64) kubernetes/crd,audit-id:033685f3-55a9-4974-b90e-7fc5ad84dabe,client:10.42.0.8,accept:application/json, /,protocol:HTTP/2.0 (02-Mar-2023 13:55:45.060) (total time: 669ms): Trace[473980620]: ---"Writing http response done" count:2 669ms (13:55:45.729) Trace[473980620]: [669.231319ms] [669.231319ms] END time="2023-03-02T13:56:02Z" level=info msg="Event(v1.ObjectReference{Kind:\"Service\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"41a37c5e-104f-4a32-8be3-d208156b48ea\", APIVersion:\"v1\", ResourceVersion:\"620\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedDaemonSet' Applied LoadBalancer DaemonSet kube-system/svclb-traefik-41a37c5e" I0302 13:56:13.056028 7 trace.go:205] Trace[2145374602]: "GuaranteedUpdate etcd3" type:*coordination.Lease (02-Mar-2023 13:56:12.212) (total time: 812ms): Trace[2145374602]: ---"Transaction 
committed" 812ms (13:56:13.025) Trace[2145374602]: [812.967646ms] [812.967646ms] END I0302 13:56:13.056285 7 trace.go:205] Trace[1801390223]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k3d-farmvibes-ai-server-0,user-agent:k3s/v1.24.4+k3s1 (linux/amd64) kubernetes/c3f830e,audit-id:bbadf713-b599-416e-b1ad-b9ab4ab96d26,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (02-Mar-2023 13:56:12.212) (total time: 843ms): Trace[1801390223]: ---"Object stored in database" 843ms (13:56:13.056) Trace[1801390223]: [843.484542ms] [843.484542ms] END I0302 13:59:49.324508 7 trace.go:205] Trace[1506066132]: "DeltaFIFO Pop Process" ID:v1.certificates.k8s.io,Depth:31,Reason:slow event handlers blocking the queue (02-Mar-2023 13:59:49.030) (total time: 168ms): Trace[1506066132]: [168.774488ms] [168.774488ms] END W0302 13:59:50.582336 7 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id" I0302 14:00:19.392758 7 alloc.go:327] "allocated clusterIPs" service="dapr-system/dapr-sentry" clusterIPs=map[IPv4:10.43.103.78] I0302 14:00:19.392978 7 trace.go:205] Trace[1308736651]: "Create" url:/api/v1/namespaces/dapr-system/services,user-agent:Go-http-client/2.0,audit-id:1c142d87-19b9-48e7-9604-d502c0056139,client:172.19.0.4,accept:application/json,protocol:HTTP/2.0 (02-Mar-2023 14:00:18.805) (total time: 587ms): Trace[1308736651]: ---"Object stored in database" 586ms (14:00:19.392) Trace[1308736651]: [587.160586ms] [587.160586ms] END I0302 14:00:19.437614 7 alloc.go:327] "allocated clusterIPs" service="dapr-system/dapr-webhook" clusterIPs=map[IPv4:10.43.203.204] I0302 14:00:19.437921 7 trace.go:205] Trace[1976644238]: "Create" url:/api/v1/namespaces/dapr-system/services,user-agent:Go-http-client/2.0,audit-id:77fe5473-d6ed-48c3-b40b-1aacd918691b,client:172.19.0.4,accept:application/json,protocol:HTTP/2.0 (02-Mar-2023 14:00:18.806) (total time: 631ms): Trace[1976644238]: 
---"Object stored in database" 631ms (14:00:19.437) Trace[1976644238]: [631.386258ms] [631.386258ms] END I0302 14:00:19.446899 7 alloc.go:327] "allocated clusterIPs" service="dapr-system/dapr-sidecar-injector" clusterIPs=map[IPv4:10.43.220.18] I0302 14:00:19.447041 7 trace.go:205] Trace[1524510042]: "Create" url:/api/v1/namespaces/dapr-system/services,user-agent:Go-http-client/2.0,audit-id:36106ae7-5f58-44df-aa88-3a1b6935a9d8,client:172.19.0.4,accept:application/json,protocol:HTTP/2.0 (02-Mar-2023 14:00:18.804) (total time: 642ms): Trace[1524510042]: ---"Object stored in database" 642ms (14:00:19.446) Trace[1524510042]: [642.3445ms] [642.3445ms] END I0302 14:00:19.456333 7 alloc.go:327] "allocated clusterIPs" service="dapr-system/dapr-api" clusterIPs=map[IPv4:10.43.144.166] I0302 14:00:19.456459 7 trace.go:205] Trace[123312877]: "Create" url:/api/v1/namespaces/dapr-system/services,user-agent:Go-http-client/2.0,audit-id:3227985e-0a77-419a-9be3-f729796c72f9,client:172.19.0.4,accept:application/json,protocol:HTTP/2.0 (02-Mar-2023 14:00:18.806) (total time: 649ms): Trace[123312877]: ---"Object stored in database" 649ms (14:00:19.456) Trace[123312877]: [649.902998ms] [649.902998ms] END I0302 14:00:19.465366 7 alloc.go:327] "allocated clusterIPs" service="dapr-system/dapr-dashboard" clusterIPs=map[IPv4:10.43.34.20] I0302 14:00:19.465508 7 trace.go:205] Trace[1931475105]: "Create" url:/api/v1/namespaces/dapr-system/services,user-agent:Go-http-client/2.0,audit-id:0010908c-81e4-4c3e-90c9-78b50620329c,client:172.19.0.4,accept:application/json,protocol:HTTP/2.0 (02-Mar-2023 14:00:18.805) (total time: 659ms): Trace[1931475105]: ---"Object stored in database" 646ms (14:00:19.465) Trace[1931475105]: [659.657923ms] [659.657923ms] END I0302 14:00:19.551121 7 controller.go:611] quota admission added evaluator for: statefulsets.apps I0302 14:00:19.570131 7 event.go:294] "Event occurred" object="dapr-system/dapr-sentry" fieldPath="" kind="Deployment" apiVersion="apps/v1" 
type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dapr-sentry-56879d959f to 1" I0302 14:00:19.570175 7 event.go:294] "Event occurred" object="dapr-system/dapr-sidecar-injector" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dapr-sidecar-injector-bcd764594 to 1" I0302 14:00:19.570281 7 event.go:294] "Event occurred" object="dapr-system/dapr-operator" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dapr-operator-59cb45b4d4 to 1" I0302 14:00:19.583194 7 event.go:294] "Event occurred" object="dapr-system/dapr-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dapr-dashboard-695f57dfc7 to 1" I0302 14:00:19.602402 7 event.go:294] "Event occurred" object="dapr-system/dapr-sentry-56879d959f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dapr-sentry-56879d959f-stvf9" I0302 14:00:19.608583 7 event.go:294] "Event occurred" object="dapr-system/dapr-operator-59cb45b4d4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dapr-operator-59cb45b4d4-h92l2" I0302 14:00:19.616247 7 controller.go:611] quota admission added evaluator for: configurations.dapr.io I0302 14:00:19.671189 7 event.go:294] "Event occurred" object="dapr-system/dapr-sidecar-injector-bcd764594" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dapr-sidecar-injector-bcd764594-l6jb7" I0302 14:00:19.709917 7 topology_manager.go:200] "Topology Admit Handler" E0302 14:00:19.716851 7 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4e52dd64-7509-4b5d-8b71-be170f1b7dce" containerName="helm" E0302 14:00:19.716896 7 cpu_manager.go:394] "RemoveStaleState: 
removing container" podUID="4e52dd64-7509-4b5d-8b71-be170f1b7dce" containerName="helm" I0302 14:00:19.726932 7 memory_manager.go:345] "RemoveStaleState removing state" podUID="4e52dd64-7509-4b5d-8b71-be170f1b7dce" containerName="helm" I0302 14:00:19.726961 7 memory_manager.go:345] "RemoveStaleState removing state" podUID="4e52dd64-7509-4b5d-8b71-be170f1b7dce" containerName="helm" I0302 14:00:19.726971 7 event.go:294] "Event occurred" object="dapr-system/dapr-dashboard-695f57dfc7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dapr-dashboard-695f57dfc7-s48pm" I0302 14:00:19.762121 7 topology_manager.go:200] "Topology Admit Handler" I0302 14:00:19.765731 7 topology_manager.go:200] "Topology Admit Handler" I0302 14:00:19.845144 7 event.go:294] "Event occurred" object="dapr-system/dapr-placement-server" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod dapr-placement-server-0 in StatefulSet dapr-placement-server successful" I0302 14:00:19.956865 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bn29\" (UniqueName: \"kubernetes.io/projected/a0b18c82-2f6d-4f1d-8339-c99ab565f38f-kube-api-access-5bn29\") pod \"dapr-sentry-56879d959f-stvf9\" (UID: \"a0b18c82-2f6d-4f1d-8339-c99ab565f38f\") " pod="dapr-system/dapr-sentry-56879d959f-stvf9" I0302 14:00:19.957061 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credentials\" (UniqueName: \"kubernetes.io/secret/a0b18c82-2f6d-4f1d-8339-c99ab565f38f-credentials\") pod \"dapr-sentry-56879d959f-stvf9\" (UID: \"a0b18c82-2f6d-4f1d-8339-c99ab565f38f\") " pod="dapr-system/dapr-sentry-56879d959f-stvf9" I0302 14:00:19.957204 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-creds\" (UniqueName: 
\"kubernetes.io/secret/d39d8b4a-c44f-4d11-9c7d-cfb2296d4fd8-webhook-creds\") pod \"dapr-operator-59cb45b4d4-h92l2\" (UID: \"d39d8b4a-c44f-4d11-9c7d-cfb2296d4fd8\") " pod="dapr-system/dapr-operator-59cb45b4d4-h92l2" I0302 14:00:19.957287 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7kx\" (UniqueName: \"kubernetes.io/projected/d39d8b4a-c44f-4d11-9c7d-cfb2296d4fd8-kube-api-access-bh7kx\") pod \"dapr-operator-59cb45b4d4-h92l2\" (UID: \"d39d8b4a-c44f-4d11-9c7d-cfb2296d4fd8\") " pod="dapr-system/dapr-operator-59cb45b4d4-h92l2" I0302 14:00:19.957367 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credentials\" (UniqueName: \"kubernetes.io/secret/d39d8b4a-c44f-4d11-9c7d-cfb2296d4fd8-credentials\") pod \"dapr-operator-59cb45b4d4-h92l2\" (UID: \"d39d8b4a-c44f-4d11-9c7d-cfb2296d4fd8\") " pod="dapr-system/dapr-operator-59cb45b4d4-h92l2" I0302 14:00:19.957405 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55d0baa7-ca90-4199-b8aa-a16e3931dcc6-cert\") pod \"dapr-sidecar-injector-bcd764594-l6jb7\" (UID: \"55d0baa7-ca90-4199-b8aa-a16e3931dcc6\") " pod="dapr-system/dapr-sidecar-injector-bcd764594-l6jb7" I0302 14:00:19.957436 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhkt2\" (UniqueName: \"kubernetes.io/projected/55d0baa7-ca90-4199-b8aa-a16e3931dcc6-kube-api-access-nhkt2\") pod \"dapr-sidecar-injector-bcd764594-l6jb7\" (UID: \"55d0baa7-ca90-4199-b8aa-a16e3931dcc6\") " pod="dapr-system/dapr-sidecar-injector-bcd764594-l6jb7" I0302 14:00:19.979814 7 topology_manager.go:200] "Topology Admit Handler" I0302 14:00:19.985646 7 trace.go:205] Trace[1479977919]: "Create" url:/apis/discovery.k8s.io/v1/namespaces/dapr-system/endpointslices,user-agent:k3s/v1.24.4+k3s1 (linux/amd64) 
kubernetes/c3f830e/system:serviceaccount:kube-system:endpointslice-controller,audit-id:0e723e8d-8e04-4201-b864-83de4d1f4cd0,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, /,protocol:HTTP/2.0 (02-Mar-2023 14:00:19.333) (total time: 652ms): Trace[1479977919]: ---"Object stored in database" 651ms (14:00:19.985) Trace[1479977919]: [652.068825ms] [652.068825ms] END I0302 14:00:20.013413 7 topology_manager.go:200] "Topology Admit Handler" I0302 14:00:20.041513 7 trace.go:205] Trace[1358548958]: "Create" url:/api/v1/namespaces/dapr-system/endpoints,user-agent:k3s/v1.24.4+k3s1 (linux/amd64) kubernetes/c3f830e/system:serviceaccount:kube-system:endpoint-controller,audit-id:194246d3-fa7d-43af-beef-c14e71491933,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, /,protocol:HTTP/2.0 (02-Mar-2023 14:00:19.105) (total time: 935ms): Trace[1358548958]: ---"Object stored in database" 935ms (14:00:20.041) Trace[1358548958]: [935.76249ms] [935.76249ms] END I0302 14:00:20.057945 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credentials\" (UniqueName: \"kubernetes.io/secret/f6ad1603-80e8-4d45-9cf8-7181fe1060e3-credentials\") pod \"dapr-placement-server-0\" (UID: \"f6ad1603-80e8-4d45-9cf8-7181fe1060e3\") " pod="dapr-system/dapr-placement-server-0" I0302 14:00:20.058068 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb69t\" (UniqueName: \"kubernetes.io/projected/a9311be0-3f25-409b-ab83-31d8d3e8201f-kube-api-access-zb69t\") pod \"dapr-dashboard-695f57dfc7-s48pm\" (UID: \"a9311be0-3f25-409b-ab83-31d8d3e8201f\") " pod="dapr-system/dapr-dashboard-695f57dfc7-s48pm" I0302 14:00:20.058153 7 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqm6\" (UniqueName: \"kubernetes.io/projected/f6ad1603-80e8-4d45-9cf8-7181fe1060e3-kube-api-access-8wqm6\") pod \"dapr-placement-server-0\" (UID: 
\"f6ad1603-80e8-4d45-9cf8-7181fe1060e3\") " pod="dapr-system/dapr-placement-server-0" I0302 14:00:21.352432 7 request.go:601] Waited for 1.192805567s due to client-side throttling, not priority and fairness, request: POST:https://127.0.0.1:6443/api/v1/namespaces/dapr-system/serviceaccounts/dapr-operator/token I0302 14:00:27.767424 7 alloc.go:327] "allocated clusterIPs" service="default/redis-master" clusterIPs=map[IPv4:10.43.178.66] I0302 14:00:27.771527 7 alloc.go:327] "allocated clusterIPs" service="default/redis-replicas" clusterIPs=map[IPv4:10.43.176.212] I0302 14:00:27.814168 7 event.go:294] "Event occurred" object="default/redis-master" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim redis-data-redis-master-0 Pod redis-master-0 in StatefulSet redis-master success" I0302 14:00:27.817042 7 event.go:294] "Event occurred" object="default/redis-replicas" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim redis-data-redis-replicas-0 Pod redis-replicas-0 in StatefulSet redis-replicas success" I0302 14:00:27.839268 7 event.go:294] "Event occurred" object="default/redis-data-redis-master-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding" I0302 14:00:27.839297 7 event.go:294] "Event occurred" object="default/redis-data-redis-replicas-0" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding" W0302 14:00:27.888679 7 dispatcher.go:180] Failed calling webhook, failing open sidecar-injector.dapr.io: failed calling webhook "sidecar-injector.dapr.io": failed to call webhook: Post "https://dapr-sidecar-injector.dapr-system.svc:443/mutate?timeout=10s": no endpoints available for service "dapr-sidecar-injector" error from 
daemon in stream: Error grabbing logs: invalid character 'l' after object key: value pair

4. ~/.config/farmvibes-ai/kubectl get pods

Screenshot (262) Screenshot (263)
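As an aside, the inline Go template used in step 2 is easy to mis-quote in a shell (the screenshot shows a stray space in `\$value. Name`). A sketch of a template-free alternative that pulls the server container's IP out of the `docker network inspect` JSON with plain `grep`/`cut`; the heredoc is a made-up stand-in for real output, and on a live cluster you would instead run `docker network inspect k3d-farmvibes-ai > /tmp/net.json`:

```shell
# Stand-in for real `docker network inspect k3d-farmvibes-ai` output
# (container ID and IP below are invented for illustration).
cat > /tmp/net.json <<'EOF'
[{"Containers":{"abc123":{"Name":"k3d-farmvibes-ai-server-0","IPv4Address":"172.19.0.2/16"}}}]
EOF

# Extract the IPv4Address value without a Go template.
grep -o '"IPv4Address":"[^"]*"' /tmp/net.json | cut -d'"' -f4
```

With the sample data above this prints `172.19.0.2/16`; against live output it prints one address per container on the network.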

lonnes commented 1 year ago

Hi, Nitin. The issue is likely a lack of storage in the Codespaces VM. If you are able to, I suggest increasing the disk space to at least 512 GB.
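Disk pressure can be confirmed from inside the codespace before resizing. A minimal sketch; the prune step is optional and destructive (it removes any image, container, or volume not currently in use), so it is guarded:

```shell
# Check free space where Docker stores images and volumes;
# fall back to the root filesystem if that path is absent.
df -h /var/lib/docker 2>/dev/null || df -h /

# Optionally reclaim space from unused Docker resources.
if command -v docker >/dev/null 2>&1; then
    docker system prune -af --volumes
fi
```

If the relevant filesystem shows high use, pods can fail to schedule or get evicted under node disk pressure, which matches the symptoms above.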

nitinya9av commented 1 year ago

OK, thank you, I will.