Closed: kutsyk closed this issue 2 weeks ago
Karmada API Server's IP is: 192.168.49.2, host cluster type is: remote ... Unable to connect to the server: dial tcp 192.168.49.2:5443: i/o timeout
I guess that is because the script talks to the karmada-apiserver by its cluster IP, which is not accessible from outside the cluster:
https://github.com/karmada-io/karmada/blob/adef1e59748e1e1d31cb75fffe406b5dd69a66d7/hack/remote-up-karmada.sh#L27-L29
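A possible workaround, sketched under the assumption that the service names and namespace match the output later in this thread, is to port-forward the karmada-apiserver Service to the host and point the generated kubeconfig at the forwarded address instead of the cluster IP (the kubeconfig path below is an assumption; adjust it to your setup):

```shell
# Forward the karmada-apiserver Service to the host (runs in the background).
kubectl port-forward -n karmada-system service/karmada-apiserver 5443:5443 &

# Rewrite the server address in the Karmada kubeconfig to use the forward.
# Cluster name and kubeconfig path are assumptions; check them with
# `kubectl config view --kubeconfig=<path>` first.
kubectl config set-cluster karmada-apiserver \
  --server=https://127.0.0.1:5443 \
  --kubeconfig="$HOME/.kube/karmada.config"
```

This only makes the control plane reachable from the host for manual kubectl access; the script itself may still need a routable address.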
But I bet you don't have a load balancer on the Minikube cluster, so this script might not fit your case. Could you try the Helm chart or the operator instead?
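For reference, a Helm-based install might look like the sketch below. It assumes a clone of this repo with the chart under charts/karmada; the release name and values are illustrative, not a verified recipe for minikube:

```shell
# From the root of a karmada repo clone; chart path is an assumption.
helm install karmada ./charts/karmada \
  --namespace karmada-system \
  --create-namespace
```

Check the chart's values.yaml for minikube-specific settings (service types, host addresses) before installing.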
@kutsyk thanks for spotting this. cc @chaosi-zju to take a look (correct me if that's not the case, or if we need to add clearer documentation for the script).
Hi @RainbowMango, thanks for the quick reply.
This seems to help and more things get created, but there are still issues:
(⎈ |minikube:karmada-system) ~/ kga
NAME READY STATUS RESTARTS AGE
pod/etcd-0 1/1 Running 0 2m27s
pod/karmada-aggregated-apiserver-547964996-v4lz2 1/1 Running 3 (108s ago) 2m27s
pod/karmada-apiserver-7cf4d87d9-6649l 1/1 Running 1 (114s ago) 2m27s
pod/karmada-controller-manager-55dc8c65f5-q2f6d 0/1 CrashLoopBackOff 4 (27s ago) 2m27s
pod/karmada-kube-controller-manager-d49fcfdd6-kvm65 1/1 Running 3 (104s ago) 2m27s
pod/karmada-post-install-c5gdr 0/1 Error 0 2m1s
pod/karmada-post-install-dxq2n 0/1 Error 0 2m22s
pod/karmada-post-install-fwf5j 0/1 Error 0 2m6s
pod/karmada-post-install-gkvs6 0/1 Error 0 2m26s
pod/karmada-post-install-h922b 0/1 Error 0 2m18s
pod/karmada-post-install-kcs8f 0/1 Error 0 2m10s
pod/karmada-post-install-pcmz5 0/1 Error 0 2m14s
pod/karmada-scheduler-5495865654-flnbb 1/1 Running 0 2m27s
pod/karmada-webhook-67d5c4db4c-fznrf 1/1 Running 0 2m27s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/etcd ClusterIP None <none> 2379/TCP,2380/TCP 2m27s
service/etcd-client ClusterIP 10.101.168.149 <none> 2379/TCP 2m27s
service/karmada-aggregated-apiserver ClusterIP 10.108.58.85 <none> 443/TCP 2m27s
service/karmada-apiserver ClusterIP 10.111.73.6 <none> 5443/TCP 2m27s
service/karmada-webhook ClusterIP 10.103.186.138 <none> 443/TCP 2m27s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/karmada-aggregated-apiserver 1/1 1 1 2m27s
deployment.apps/karmada-apiserver 1/1 1 1 2m27s
deployment.apps/karmada-controller-manager 0/1 1 0 2m27s
deployment.apps/karmada-kube-controller-manager 1/1 1 1 2m27s
deployment.apps/karmada-scheduler 1/1 1 1 2m27s
deployment.apps/karmada-webhook 1/1 1 1 2m27s
NAME DESIRED CURRENT READY AGE
replicaset.apps/karmada-aggregated-apiserver-547964996 1 1 1 2m27s
replicaset.apps/karmada-apiserver-7cf4d87d9 1 1 1 2m27s
replicaset.apps/karmada-controller-manager-55dc8c65f5 1 1 0 2m27s
replicaset.apps/karmada-kube-controller-manager-d49fcfdd6 1 1 1 2m27s
replicaset.apps/karmada-scheduler-5495865654 1 1 1 2m27s
replicaset.apps/karmada-webhook-67d5c4db4c 1 1 1 2m27s
NAME READY AGE
statefulset.apps/etcd 1/1 2m27s
NAME COMPLETIONS DURATION AGE
job.batch/karmada-post-install 0/1 2m26s 2m26s
events:
4m3s Normal Scheduled pod/karmada-post-install-pcmz5 Successfully assigned karmada-system/karmada-post-install-pcmz5 to minikube
4m2s Warning FailedMount pod/karmada-post-install-pcmz5 MountVolume.SetUp failed for volume "karmada-crds-remedy-bases" : failed to sync configmap cache: timed out waiting for the condition
3m58s Warning FailedMount pod/karmada-post-install-pcmz5 MountVolume.SetUp failed for volume "karmada-static-resources" : failed to sync configmap cache: timed out waiting for the condition
4m2s Warning FailedMount pod/karmada-post-install-pcmz5 MountVolume.SetUp failed for volume "karmada-crds-autoscaling-bases" : failed to sync configmap cache: timed out waiting for the condition
4m2s Warning FailedMount pod/karmada-post-install-pcmz5 MountVolume.SetUp failed for volume "karmada-crds-networking-bases" : failed to sync configmap cache: timed out waiting for the condition
3m58s Warning FailedMount pod/karmada-post-install-pcmz5 MountVolume.SetUp failed for volume "karmada-crds-patches" : failed to sync configmap cache: timed out waiting for the condition
4m Normal Pulled pod/karmada-post-install-pcmz5 Container image "docker.io/bitnami/kubectl:latest" already present on machine
4m Normal Created pod/karmada-post-install-pcmz5 Created container post-install
4m Normal Started pod/karmada-post-install-pcmz5 Started container post-install
3m58s Warning FailedMount pod/karmada-post-install-pcmz5 MountVolume.SetUp failed for volume "kube-api-access-grtfv" : failed to sync configmap cache: timed out waiting for the condition
3m58s Warning FailedMount pod/karmada-post-install-pcmz5 MountVolume.SetUp failed for volume "karmada-crds-policy-bases" : failed to sync configmap cache: timed out waiting for the condition
3m58s Warning FailedMount pod/karmada-post-install-pcmz5 MountVolume.SetUp failed for volume "karmada-crds-work-bases" : failed to sync configmap cache: timed out waiting for the condition
4m15s Normal SuccessfulCreate job/karmada-post-install Created pod: karmada-post-install-gkvs6
karmada-controller-manager:
(⎈ |minikube:karmada-system) ~/ k logs karmada-controller-manager-56f84c7fd9-q8rzs
I0416 15:16:02.762284 1 feature_gate.go:249] feature gates: &{map[PropagateDeps:false]}
I0416 15:16:02.762521 1 controllermanager.go:138] karmada-controller-manager version: version.Info{GitVersion:"v1.10.0-preview3-50-gfdad87efc", GitCommit:"fdad87efce1d088c8002856f5ee7586427f1a989", GitTreeState:"clean", BuildDate:"2024-04-16T03:38:19Z", GoVersion:"go1.21.8", Compiler:"gc", Platform:"linux/arm64"}
I0416 15:16:02.779137 1 reflector.go:289] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:150
I0416 15:16:02.779153 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:150
I0416 15:16:02.880622 1 shared_informer.go:341] caches populated
I0416 15:16:02.880733 1 context.go:160] Starting "federatedHorizontalPodAutoscaler"
I0416 15:16:02.880805 1 context.go:170] Started "federatedHorizontalPodAutoscaler"
W0416 15:16:02.880813 1 context.go:157] "deploymentReplicasSyncer" is disabled
I0416 15:16:02.880817 1 context.go:160] Starting "multiclusterservice"
W0416 15:16:02.880821 1 context.go:167] Skipping "multiclusterservice"
I0416 15:16:02.880824 1 context.go:160] Starting "endpointsliceCollect"
W0416 15:16:02.880827 1 context.go:167] Skipping "endpointsliceCollect"
I0416 15:16:02.880830 1 context.go:160] Starting "remedy"
I0416 15:16:02.880842 1 context.go:170] Started "remedy"
I0416 15:16:02.880845 1 context.go:160] Starting "applicationFailover"
I0416 15:16:02.880866 1 context.go:170] Started "applicationFailover"
I0416 15:16:02.880877 1 context.go:160] Starting "bindingStatus"
I0416 15:16:02.880888 1 context.go:170] Started "bindingStatus"
I0416 15:16:02.880891 1 context.go:160] Starting "unifiedAuth"
I0416 15:16:02.880907 1 context.go:170] Started "unifiedAuth"
I0416 15:16:02.880915 1 context.go:160] Starting "federatedResourceQuotaSync"
I0416 15:16:02.880930 1 context.go:170] Started "federatedResourceQuotaSync"
I0416 15:16:02.880933 1 context.go:160] Starting "clusterStatus"
I0416 15:16:02.881056 1 context.go:170] Started "clusterStatus"
I0416 15:16:02.881066 1 context.go:160] Starting "serviceImport"
I0416 15:16:02.881073 1 context.go:170] Started "serviceImport"
I0416 15:16:02.881076 1 context.go:160] Starting "federatedResourceQuotaStatus"
I0416 15:16:02.881081 1 context.go:170] Started "federatedResourceQuotaStatus"
I0416 15:16:02.881084 1 context.go:160] Starting "gracefulEviction"
I0416 15:16:02.881093 1 context.go:170] Started "gracefulEviction"
I0416 15:16:02.881096 1 context.go:160] Starting "cronFederatedHorizontalPodAutoscaler"
I0416 15:16:02.881103 1 context.go:170] Started "cronFederatedHorizontalPodAutoscaler"
I0416 15:16:02.881110 1 context.go:160] Starting "endpointsliceDispatch"
W0416 15:16:02.881114 1 context.go:167] Skipping "endpointsliceDispatch"
I0416 15:16:02.881116 1 context.go:160] Starting "workStatus"
I0416 15:16:02.881169 1 context.go:170] Started "workStatus"
I0416 15:16:02.881178 1 context.go:160] Starting "binding"
I0416 15:16:02.881188 1 context.go:170] Started "binding"
I0416 15:16:02.881191 1 context.go:160] Starting "execution"
I0416 15:16:02.881198 1 context.go:170] Started "execution"
I0416 15:16:02.881201 1 context.go:160] Starting "namespace"
I0416 15:16:02.881221 1 context.go:170] Started "namespace"
I0416 15:16:02.881232 1 context.go:160] Starting "serviceExport"
I0416 15:16:02.881263 1 context.go:170] Started "serviceExport"
I0416 15:16:02.881272 1 context.go:160] Starting "endpointSlice"
I0416 15:16:02.881277 1 context.go:170] Started "endpointSlice"
W0416 15:16:02.881279 1 context.go:157] "hpaScaleTargetMarker" is disabled
I0416 15:16:02.881282 1 context.go:160] Starting "cluster"
I0416 15:16:02.964797 1 request.go:629] Waited for 65.804792ms due to client-side throttling, not priority and fairness, request: GET:https://karmada-apiserver.karmada-system.svc.cluster.local:5443/apis/certificates.k8s.io/v1?timeout=32s
I0416 15:16:02.990773 1 request.go:629] Waited for 91.761875ms due to client-side throttling, not priority and fairness, request: GET:https://karmada-apiserver.karmada-system.svc.cluster.local:5443/apis/storage.k8s.io/v1?timeout=32s
I0416 15:16:03.014870 1 request.go:629] Waited for 115.853625ms due to client-side throttling, not priority and fairness, request: GET:https://karmada-apiserver.karmada-system.svc.cluster.local:5443/apis/storage.k8s.io/v1beta1?timeout=32s
I0416 15:16:03.039807 1 request.go:629] Waited for 140.792917ms due to client-side throttling, not priority and fairness, request: GET:https://karmada-apiserver.karmada-system.svc.cluster.local:5443/apis/admissionregistration.k8s.io/v1?timeout=32s
I0416 15:16:03.065603 1 request.go:629] Waited for 166.556208ms due to client-side throttling, not priority and fairness, request: GET:https://karmada-apiserver.karmada-system.svc.cluster.local:5443/apis/networking.k8s.io/v1?timeout=32s
I0416 15:16:03.090752 1 request.go:629] Waited for 191.710542ms due to client-side throttling, not priority and fairness, request: GET:https://karmada-apiserver.karmada-system.svc.cluster.local:5443/apis/apiextensions.k8s.io/v1?timeout=32s
I0416 15:16:03.115498 1 request.go:629] Waited for 216.442417ms due to client-side throttling, not priority and fairness, request: GET:https://karmada-apiserver.karmada-system.svc.cluster.local:5443/apis/scheduling.k8s.io/v1?timeout=32s
I0416 15:16:03.140718 1 request.go:629] Waited for 241.561083ms due to client-side throttling, not priority and fairness, request: GET:https://karmada-apiserver.karmada-system.svc.cluster.local:5443/apis/policy/v1?timeout=32s
E0416 15:16:03.143693 1 context.go:163] Error starting "cluster"
F0416 15:16:03.143722 1 controllermanager.go:807] error starting controllers: [no matches for kind "ResourceBinding" in version "work.karmada.io/v1alpha2", no matches for kind "ClusterResourceBinding" in version "work.karmada.io/v1alpha2"]
karmada-post-install:
(⎈ |minikube:karmada-system) ~/ k logs karmada-post-install-kcs8f
+ kubectl apply -k /crds --kubeconfig /etc/kubeconfig
# Warning: 'patchesStrategicMerge' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
error: error validating "/crds": error validating data: failed to download openapi: Get "https://karmada-apiserver.karmada-system.svc.cluster.local:5443/openapi/v2?timeout=32s": dial tcp 10.111.73.6:5443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
Is this something that is already known, or should I investigate restrictions in my local setup?
Thanks
Investigated a bit more; here are logs from karmada-apiserver-d664cfffb-khwqq, filtered with grep "/storage.*k.*controller-manager":
I0416 15:24:52.810020 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1?timeout=32s" latency="474.25µs" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="62d49a5c-0348-4896-a556-ec6c11ae089d" srcIP="172.17.0.1:56508" apf_pl="exempt" apf_fs="exempt" apf_execution_time="71.291µs" resp=200
I0416 15:24:52.810623 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1?timeout=32s" latency="564.583µs" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="72cca01f-87e7-4db6-8452-7bb1883bdf13" srcIP="172.17.0.1:56508" apf_pl="exempt" apf_fs="exempt" apf_execution_time="45.125µs" resp=200
I0416 15:24:52.921772 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1?timeout=32s" latency="784.542µs" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="c0123c23-65ca-4ae6-b41a-eca166c0c8f1" srcIP="172.17.0.1:56508" apf_pl="exempt" apf_fs="exempt" apf_execution_time="164.625µs" resp=200
I0416 15:24:52.922378 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1?timeout=32s" latency="1.159125ms" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="a946f85f-c33f-495a-ba47-3bf91c064c67" srcIP="172.17.0.1:56508" apf_pl="exempt" apf_fs="exempt" apf_execution_time="95.708µs" resp=200
I0416 15:24:52.976976 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1?timeout=32s" latency="1.004542ms" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="963ffa18-0b52-4967-b30a-88b431b4f0b7" srcIP="172.17.0.1:56508" apf_pl="exempt" apf_fs="exempt" apf_execution_time="96.542µs" resp=200
I0416 15:24:53.001653 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1?timeout=32s" latency="1.19625ms" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="ade7c523-b83e-488e-92cd-b29018355954" srcIP="172.17.0.1:56508" apf_pl="exempt" apf_fs="exempt" apf_execution_time="82.583µs" resp=200
I0416 15:25:00.448005 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1" latency="475.291µs" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="f1be7ef7-682b-48f3-9dcc-81da213e729c" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="214.375µs" resp=200
I0416 15:25:00.450170 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1" latency="4.414625ms" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="1d75ce10-13dd-47c6-ac0a-1016713dba1f" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="341.292µs" resp=200
I0416 15:25:30.466453 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1" latency="534.75µs" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="4839b68d-b958-4d59-84a3-fe2722ba0b9b" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="94.75µs" resp=200
I0416 15:25:30.469830 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1" latency="4.404834ms" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="45da3d2c-4283-4b0d-90dd-b1540bc953c9" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="141.041µs" resp=200
I0416 15:25:57.065659 1 httplog.go:132] "HTTP" verb="WATCH" URI="/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=1177&timeout=5m57s&timeoutSeconds=357&watch=true" latency="5m57.002406038s" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/shared-informers" audit-ID="caaa6c38-e711-497c-b436-db46607a9907" srcIP="172.17.0.1:30053" apf_pl="exempt" apf_fs="exempt" apf_init_latency="556.583µs" apf_execution_time="559.667µs" resp=200
I0416 15:25:57.213584 1 httplog.go:132] "HTTP" verb="WATCH" URI="/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=1177&timeout=5m57s&timeoutSeconds=357&watch=true" latency="5m57.002784496s" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/shared-informers" audit-ID="1f7057cb-8298-4575-8aa7-a510d2d5d790" srcIP="172.17.0.1:30053" apf_pl="exempt" apf_fs="exempt" apf_init_latency="154.25µs" apf_execution_time="155.5µs" resp=200
I0416 15:26:00.482656 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1" latency="392.541µs" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="eab4b442-4cd3-401e-b8b6-5edc9eda4307" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="163.5µs" resp=200
I0416 15:26:00.486306 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1" latency="2.197792ms" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="765d53d7-3099-42a0-b784-d45853e297af" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="195.875µs" resp=200
I0416 15:26:19.811297 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1?timeout=32s" latency="460.25µs" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="483c3a7f-b2ff-4b8f-86b1-2355dcf65da0" srcIP="172.17.0.1:43442" apf_pl="exempt" apf_fs="exempt" apf_execution_time="51.958µs" resp=200
I0416 15:26:19.813159 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1?timeout=32s" latency="1.35075ms" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="622994aa-54a7-4ee2-a3dd-458e97ceae6d" srcIP="172.17.0.1:43442" apf_pl="exempt" apf_fs="exempt" apf_execution_time="186.542µs" resp=200
I0416 15:26:19.925196 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1?timeout=32s" latency="1.077042ms" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="16a83553-3a97-4009-9965-0f2d09ab81bb" srcIP="172.17.0.1:43442" apf_pl="exempt" apf_fs="exempt" apf_execution_time="100.334µs" resp=200
I0416 15:26:19.925546 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1?timeout=32s" latency="1.146458ms" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="faeaba58-3788-480d-98ae-f83cb0176489" srcIP="172.17.0.1:43442" apf_pl="exempt" apf_fs="exempt" apf_execution_time="146.208µs" resp=200
I0416 15:26:19.934142 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1?timeout=32s" latency="622.833µs" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="8e6e1146-8657-4ee8-81c2-8dede4687e00" srcIP="172.17.0.1:43442" apf_pl="exempt" apf_fs="exempt" apf_execution_time="53.375µs" resp=200
I0416 15:26:20.178738 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1?timeout=32s" latency="976.666µs" userAgent="karmada-controller-manager/v0.0.0 (linux/arm64) kubernetes/$Format" audit-ID="20cb843d-2488-4cb5-936c-02a59a6cc3be" srcIP="172.17.0.1:43442" apf_pl="exempt" apf_fs="exempt" apf_execution_time="99.334µs" resp=200
I0416 15:26:30.502521 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1" latency="301.167µs" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="52ac821b-729a-4411-bc87-ca4700a82bdc" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="165.584µs" resp=200
I0416 15:26:30.502914 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1" latency="460.125µs" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="cfeb4eee-c9a0-42b1-a9e5-6e8551640b01" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="101.542µs" resp=200
I0416 15:27:00.520722 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1" latency="2.458583ms" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="deaf4b90-0411-4289-93e6-a3d34e9df3f8" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="245µs" resp=200
I0416 15:27:00.524544 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1" latency="5.034625ms" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="0dbb36ff-b336-4a09-8ec3-3df8f9be4a4b" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="233µs" resp=200
I0416 15:27:30.532788 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1beta1" latency="1.568166ms" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="b45a8ee3-835f-4d18-bacc-6bac649f22ff" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="121µs" resp=200
I0416 15:27:30.534577 1 httplog.go:132] "HTTP" verb="GET" URI="/apis/storage.k8s.io/v1" latency="3.827917ms" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/system:serviceaccount:kube-system:generic-garbage-collector" audit-ID="70c3f9b0-79a0-458d-9c9c-e3174fa30a92" srcIP="172.17.0.1:21770" apf_pl="workload-high" apf_fs="kube-system-service-accounts" apf_iseats=1 apf_fseats=0 apf_additionalLatency="0s" apf_execution_time="129.667µs" resp=200
I0416 15:27:36.605597 1 httplog.go:132] "HTTP" verb="WATCH" URI="/apis/storage.k8s.io/v1/csistoragecapacities?allowWatchBookmarks=true&resourceVersion=1177&timeout=7m37s&timeoutSeconds=457&watch=true" latency="7m37.004370166s" userAgent="kube-controller-manager/v1.26.12 (linux/arm64) kubernetes/df63cd7/shared-informers" audit-ID="df197a9e-b297-4527-9528-65bc28bd30be" srcIP="172.17.0.1:30053" apf_pl="exempt" apf_fs="exempt" apf_init_latency="61.791µs" apf_execution_time="62µs" resp=200
Things look to be fine.
Figured out that the issue occurred because the post-install job failed too quickly; I just had to recreate it and things worked fine:
(⎈ |minikube:karmada-system) ~/projects/karmada/ [tags/v1.10.0-alpha.0*] kgp
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 0 51m
karmada-aggregated-apiserver-547964996-v4lz2 1/1 Running 3 (51m ago) 51m
karmada-apiserver-d664cfffb-khwqq 1/1 Running 0 27m
karmada-controller-manager-5f7fc4cdc7-x6tj4 1/1 Running 0 112s
karmada-kube-controller-manager-d49fcfdd6-kvm65 1/1 Running 6 (26m ago) 51m
karmada-post-install-fxrnt 0/1 Completed 0 2m5s
karmada-post-install-h9w85 0/1 Error 0 2m8s
karmada-scheduler-5495865654-flnbb 1/1 Running 0 51m
karmada-webhook-67d5c4db4c-fznrf 1/1 Running 0 51m
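The recreate step described above can be sketched as follows. This is a sketch, not the exact commands used; the manifest path in the last step is an assumption, so substitute whatever manifest originally defined the job in your install method:

```shell
# Wait until the karmada-apiserver deployment is actually available,
# since the post-install job fails if it starts before the API server is up.
kubectl rollout status deployment/karmada-apiserver \
  -n karmada-system --timeout=120s

# Delete the failed job (job specs are largely immutable, so it must be
# removed before being re-created).
kubectl delete job karmada-post-install -n karmada-system

# Re-create it from the manifest that originally defined it.
# The path below is hypothetical; use the one from your install method.
kubectl apply -f artifacts/deploy/karmada-post-install.yaml
```

Alternatively, adding an init container or readiness gate that waits for the API server would avoid the race entirely.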
All issues with installation and configuration on my side have been resolved. This can be closed, thanks.
I have a MacBook Pro M1 with minikube installed and I'm trying to configure Karmada, but it doesn't work.
The command that I use to run minikube:
All things are running fine and minikube works without issues. After that I pulled this repo and executed:
hack/remote-up-karmada.sh ~/.kube/config minikube
Eventually the script creates the following set of resources:
This is basically a failed installation, for different reasons.
etcd has the following logs:
karmada-apiserver seems fine.
karmada-metrics-adapter-7cb5c88c-qw2rn logs:
In my understanding, everything is configured as per the docs and as it should be. Am I missing anything?