kiemtcb opened this issue 6 months ago
My netstat result is the same, but it doesn't matter; I don't think it is the root cause of your controller failures.
Can you try the following commands:
- kubectl --context karmada-host get pods -A
- kubectl --context karmada-host describe po ${podName} -n ${namespace}
- kind export logs --name karmada-host .
please provide the corresponding logs to me, thanks~
Thank you for your reply. When I run hack/local-up-karmada.sh, I get the log below, and then the script exits after retrying the karmada-apiserver readiness check many times:
Apply dynamic rendered apiserver service in /tmp/tmp.XoEqaSKAKi/karmada-apiserver.yaml.
deployment.apps/karmada-apiserver created
service/karmada-apiserver created
wait the karmada-apiserver ready...
error: timed out waiting for the condition on pods/karmada-apiserver-5cb5b97b96-lrq89
kubectl --context=karmada-host wait --for=condition=Ready --timeout=30s pods -l app=karmada-apiserver -n karmada-system failed, retrying(1 times)
error: timed out waiting for the condition on pods/karmada-apiserver-5cb5b97b96-lrq89
kubectl --context=karmada-host wait --for=condition=Ready --timeout=30s pods -l app=karmada-apiserver -n karmada-system failed, retrying(2 times)
error: timed out waiting for the condition on pods/karmada-apiserver-5cb5b97b96-lrq89
kubectl --context=karmada-host wait --for=condition=Ready --timeout=30s pods -l app=karmada-apiserver -n karmada-system failed, retrying(3 times)
...
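The retry loop in the output above can be sketched as a generic shell pattern. This is an illustrative guess at the logic, not the actual local-up-karmada.sh code; the wait_ready helper is hypothetical:

```shell
#!/usr/bin/env bash
# wait_ready: retry a command until it succeeds or we hit max attempts,
# echoing a retry message in the same style as the script output above.
wait_ready() {
  local max_attempts="$1"; shift
  local attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "$* failed, retrying($attempt times)"
  done
}

# Real usage would look like:
# wait_ready 30 kubectl --context=karmada-host wait --for=condition=Ready \
#   --timeout=30s pods -l app=karmada-apiserver -n karmada-system
```

Since each attempt waits up to --timeout=30s, a pod that never reaches Ready keeps producing exactly the repeated "timed out waiting for the condition" lines seen above until the retry budget is exhausted.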
When I check the pods, I get this:
NAMESPACE NAME READY STATUS RESTARTS AGE
karmada-system etcd-0 1/1 Running 0 3m9s
karmada-system karmada-apiserver-5cb5b97b96-lrq89 0/1 Running 1 (33s ago) 2m54s
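From a listing like the one above, the pods that are not fully ready can be picked out mechanically. A small hypothetical helper (just awk over the kubectl columns, not a Karmada tool):

```shell
#!/usr/bin/env bash
# not_ready_pods: read `kubectl get pods -A` output on stdin and print
# "namespace pod" for each pod that is not fully ready or not Running.
not_ready_pods() {
  awk 'NR > 1 {                 # skip the header line
    split($3, r, "/")           # READY column, e.g. "0/1"
    if (r[1] != r[2] || $4 != "Running") print $1, $2
  }'
}

# With the listing shown above:
printf '%s\n' \
  'NAMESPACE       NAME                                READY  STATUS   RESTARTS  AGE' \
  'karmada-system  etcd-0                              1/1    Running  0         3m9s' \
  'karmada-system  karmada-apiserver-5cb5b97b96-lrq89  0/1    Running  1         2m54s' \
  | not_ready_pods
# prints: karmada-system karmada-apiserver-5cb5b97b96-lrq89
```

Piping `kubectl --context karmada-host get pods -A` through this would list exactly the pods worth describing next.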
Describe output for pod karmada-apiserver-5cb5b97b96-lrq89:
Normal Scheduled 77s default-scheduler Successfully assigned karmada-system/karmada-apiserver-5cb5b97b96-lrq89 to karmada-host-control-plane
Normal Pulling 77s kubelet Pulling image "registry.k8s.io/kube-apiserver:v1.25.4"
Normal Pulled 63s kubelet Successfully pulled image "registry.k8s.io/kube-apiserver:v1.25.4" in 13.635667286s (13.635683536s including waiting)
Normal Created 63s kubelet Created container karmada-apiserver
Normal Started 63s kubelet Started container karmada-apiserver
Warning Unhealthy 7s (x4 over 37s) kubelet Liveness probe failed: Get "https://172.18.0.3:5443/livez": net/http: TLS handshake timeout
Warning Unhealthy 2s (x6 over 53s) kubelet Readiness probe failed: Get "https://172.18.0.3:5443/readyz": net/http: TLS handshake timeout
Logs of pod karmada-apiserver-5cb5b97b96-lrq89:
I1225 14:00:53.563195 1 server.go:563] external host was not specified, using 172.18.0.3
I1225 14:00:53.564203 1 server.go:161] Version: v1.25.4
I1225 14:00:53.564353 1 server.go:163] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1225 14:00:53.977988 1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
I1225 14:00:53.979520 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1225 14:00:53.979541 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1225 14:00:53.981042 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1225 14:00:53.981061 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Hope you can help me.
Thank you
I am sending you the logs from running your command
kind export logs --name karmada-host .
Besides, if you are new to Karmada, you may be stuck on this problem; please check if the problem lies here.
I tried your solution but it's still not working.
I tried curl https://172.18.0.3:5443/readyz -v and got these results:
* Trying 172.18.0.3:5443...
* TCP_NODELAY set
* Connected to 172.18.0.3 (172.18.0.3) port 5443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: Connection reset by peer in connection to 172.18.0.3:5443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to 172.18.0.3:5443
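As a side note, curl's exit code already narrows down what went wrong here. A hypothetical triage helper, with exit-code meanings taken from the curl manual (7 = couldn't connect, 28 = timeout, 35 = SSL/TLS handshake error):

```shell
#!/usr/bin/env bash
# diagnose_curl: map a curl exit code to a rough diagnosis.
diagnose_curl() {
  case "$1" in
    0)  echo "endpoint reachable and TLS handshake completed" ;;
    7)  echo "TCP connect failed: nothing listening, or a network issue" ;;
    28) echo "timed out: port filtered or server not responding" ;;
    35) echo "TCP connected but TLS handshake failed (e.g. connection reset)" ;;
    *)  echo "curl failed with exit code $1" ;;
  esac
}

# Typical usage against the apiserver endpoint:
# curl -sk --max-time 5 https://172.18.0.3:5443/readyz -o /dev/null
# diagnose_curl $?
```

The `curl: (35)` above therefore means the TCP connection succeeded but the apiserver reset it during the TLS handshake, which matches the probe failures in the pod events.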
I just noticed that the /etc/karmada/ folder of the kind node karmada-host-control-plane has nothing in it; it doesn't even exist.
Is this normal or abnormal, guys?
I tried curl https://172.18.0.3:5443/readyz -v and got these results:
In my machine, the result is:
➜ ~ curl -ivk https://172.18.0.2:5443/readyz
* Trying 172.18.0.2:5443...
* TCP_NODELAY set
* Connected to 172.18.0.2 (172.18.0.2) port 5443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=karmada-apiserver
* start date: Dec 15 09:40:00 2023 GMT
* expire date: Dec 13 09:40:00 2028 GMT
* issuer: CN=karmada
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x562e2a456300)
> GET /readyz HTTP/2
> Host: 172.18.0.2:5443
> user-agent: curl/7.68.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 200
HTTP/2 200
< audit-id: f1d60144-0f35-4f06-aeaa-f9bb3c03e47b
audit-id: f1d60144-0f35-4f06-aeaa-f9bb3c03e47b
< cache-control: no-cache, private
cache-control: no-cache, private
< content-type: text/plain; charset=utf-8
content-type: text/plain; charset=utf-8
< x-content-type-options: nosniff
x-content-type-options: nosniff
< x-kubernetes-pf-flowschema-uid: 24114cf7-1447-4926-9274-81f8ccea757e
x-kubernetes-pf-flowschema-uid: 24114cf7-1447-4926-9274-81f8ccea757e
< x-kubernetes-pf-prioritylevel-uid: 93a87b30-c277-49ed-ae2a-bec85d216c62
x-kubernetes-pf-prioritylevel-uid: 93a87b30-c277-49ed-ae2a-bec85d216c62
< content-length: 2
content-length: 2
< date: Tue, 26 Dec 2023 02:40:44 GMT
date: Tue, 26 Dec 2023 02:40:44 GMT
<
* Connection #0 to host 172.18.0.2 left intact
ok#
I just noticed that the /etc/karmada/ folder of the kind node karmada-host-control-plane has nothing in it; it doesn't even exist. Is this normal or abnormal, guys?
On my machine, it is normal:
➜ ~ docker exec -it karmada-host-control-plane bash
root@karmada-host-control-plane:/# ls /etc/ | grep karmada
root@karmada-host-control-plane:/#
Can you try executing kind delete clusters --all; rm -rf ~/.karmada/; rm -rf ~/.kube/*.config; rm -rf /etc/karmada and then reinstalling?
You should check each command before executing it, to avoid unnecessary disruption.
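In that spirit, here is a cautious sketch that prints each cleanup command and only executes it after explicit confirmation. The run_checked helper is hypothetical, and DRY_RUN defaults to 1, so running this as-is only prints the commands:

```shell
#!/usr/bin/env bash
# run_checked: print a command first; only execute it after an explicit
# "y". DRY_RUN defaults to 1 here, so by default nothing is executed
# (set DRY_RUN=0 to really run the commands after confirming each one).
run_checked() {
  echo "+ $*"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    return 0
  fi
  read -r -p "run this? [y/N] " ans
  [ "$ans" = "y" ] && "$@"
}

run_checked kind delete clusters --all
run_checked rm -rf ~/.karmada/
run_checked rm -rf ~/.kube/*.config
run_checked rm -rf /etc/karmada
```

Reviewing the printed commands first matters especially for the ~/.kube/*.config glob, which may match kubeconfigs for unrelated clusters.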
I have tried, but it's still not working; the karmada-apiserver pod can't become ready.
I tried to bypass wait_pod_ready, and then all the pods went into CrashLoopBackOff.
I really don't know why, but when I use kubectl karmada init with kind, it works perfectly.
You do have a tricky problem; I haven't encountered such an error.
What's your host machine info? Which Linux release?
Which kind version?
My PC has an 8-core Intel CPU and 16 GB of RAM. I am using Ubuntu 20.04.6 LTS and kind v0.20.0 (go1.20.4 linux/amd64).
Would you like to try replacing kind v0.20.0 with kind v0.19.0? (You can check your kind version with kind version.)
I once handled a problem (https://github.com/karmada-io/karmada/issues/3308#issuecomment-1711304115) that was related to kind v0.20.0 and Ubuntu 20.04. Although your problem's symptoms are different, since there is no better idea of how to solve it, why not give it a try?
Hi @kiemtcb, how did your issue progress later? Was it resolved?
Hi, I have tried, but it's still not working.
The init command works perfectly, so right now I am not using the bash script anymore.
But thank you for supporting me. 🥰🥰🥰
OK, if one day I find the root cause, I'll sync the solution to you ASAP~
Please provide an in-depth description of the question you have:
Hello, I am using your guideline to bootstrap Karmada.
But I ended up in a situation where your controllers can't become ready. I then checked the listening ports in the kind container and saw that your controllers only listen on IPv6, not IPv4.
Could you help me solve this issue and make it run normally on my computer?
Thanks
What do you think about this question?:
Environment: