Tried switching to 1 control plane; still the same issue:
k patch cm -n mink-system config-leader-election --type merge -p '{"data":{"buckets":"1"}}'
k scale statefulset/controlplane -n mink-system --replicas 0
k scale statefulset/controlplane -n mink-system --replicas 1
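To confirm that the single replica comes back after the scale down/up, something like this should work (standard kubectl, added here for completeness):
kubectl rollout status statefulset/controlplane -n mink-system
kubectl get pods -n mink-system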
k describe pod -n mink-system controlplane-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned mink-system/controlplane-0 to mink-control-plane
Normal Pulled 91s kubelet, mink-control-plane Container image "docker.io/mattmoor/webhook:v0.19.0@sha256:55bfa4b48ce646f4494e44f1aeb0592a609a4740b1f7e192ae1a14e8de5c1f2d" already present on machine
Normal Created 91s kubelet, mink-control-plane Created container controller
Normal Started 91s kubelet, mink-control-plane Started container controller
Normal Pulled 91s kubelet, mink-control-plane Container image "docker.io/mattmoor/contour:v0.19.0@sha256:ac55a5ed2778574c8c7a715d7adab68d261c6bac6182c5cbd3913ec3a403f7f2" already present on machine
Normal Created 91s kubelet, mink-control-plane Created container contour-external
Normal Started 91s kubelet, mink-control-plane Started container contour-external
Normal Pulled 91s kubelet, mink-control-plane Container image "docker.io/mattmoor/contour:v0.19.0@sha256:ac55a5ed2778574c8c7a715d7adab68d261c6bac6182c5cbd3913ec3a403f7f2" already present on machine
Normal Created 91s kubelet, mink-control-plane Created container contour-internal
Normal Started 91s kubelet, mink-control-plane Started container contour-internal
Warning Unhealthy 83s (x8 over 90s) kubelet, mink-control-plane Readiness probe failed: Get "https://10.244.0.15:8443/": dial tcp 10.244.0.15:8443: connect: connection refused
Warning Unhealthy 83s (x8 over 90s) kubelet, mink-control-plane Liveness probe failed: Get "https://10.244.0.15:8443/": dial tcp 10.244.0.15:8443: connect: connection refused
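Since both probes target the webhook's HTTPS port, one way to see whether the server inside the pod ever starts listening is to port-forward to it and probe it directly. This is a debugging sketch added here; port 8443 is taken from the probe messages above:
kubectl port-forward -n mink-system controlplane-0 8443:8443
# in a second shell; -k because the webhook serves an internally generated cert
curl -vk https://localhost:8443/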
Logs:
k logs statefulset/controlplane -n mink-system -c controller
2020/12/05 01:29:22 Registering 13 clients
2020/12/05 01:29:22 Registering 10 informer factories
2020/12/05 01:29:22 Registering 44 informers
2020/12/05 01:29:22 Registering 34 controllers
{"level":"info","ts":"2020-12-05T01:29:22.287Z","caller":"logging/config.go:110","msg":"Successfully created the logger."}
{"level":"info","ts":"2020-12-05T01:29:22.287Z","caller":"logging/config.go:111","msg":"Logging level set to: info"}
{"level":"info","ts":"2020-12-05T01:29:22.287Z","logger":"controller","caller":"profiling/server.go:59","msg":"Profiling enabled: false","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.292Z","logger":"controller","caller":"leaderelection/context.go:43","msg":"Running with StatefulSet leader election","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.293Z","logger":"controller.configuration-controller","caller":"configuration/controller.go:57","msg":"Setting up ConfigMap receivers","commit":"ec3ac2b","knative.dev/controller":"configuration-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.293Z","logger":"controller.configuration-controller","caller":"configuration/controller.go:70","msg":"Setting up event handlers","commit":"ec3ac2b","knative.dev/controller":"configuration-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.293Z","logger":"controller.labeler-controller","caller":"labeler/controller.go:64","msg":"Setting up ConfigMap receivers","commit":"ec3ac2b","knative.dev/controller":"labeler-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.294Z","logger":"controller.labeler-controller","caller":"labeler/controller.go:77","msg":"Setting up event handlers","commit":"ec3ac2b","knative.dev/controller":"labeler-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.318Z","logger":"controller.revision-controller","caller":"revision/controller.go:119","msg":"Setting up event handlers","commit":"ec3ac2b","knative.dev/controller":"revision-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.318Z","logger":"controller.route-controller","caller":"route/controller.go:97","msg":"Setting up event handlers","commit":"ec3ac2b","knative.dev/controller":"route-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.318Z","logger":"controller.serverlessservice-controller","caller":"serverlessservice/controller.go:64","msg":"Setting up event handlers","commit":"ec3ac2b","knative.dev/controller":"serverlessservice-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.318Z","logger":"controller.service-controller","caller":"service/controller.go:53","msg":"Setting up ConfigMap receivers","commit":"ec3ac2b","knative.dev/controller":"service-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.319Z","logger":"controller.service-controller","caller":"service/controller.go:68","msg":"Setting up event handlers","commit":"ec3ac2b","knative.dev/controller":"service-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.319Z","logger":"controller.revision-gc-controller","caller":"gc/controller.go:53","msg":"Setting up event handlers","commit":"ec3ac2b","knative.dev/controller":"revision-gc-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.319Z","logger":"controller.revision-gc-controller","caller":"gc/controller.go:66","msg":"Setting up ConfigMap receivers with resync func","commit":"ec3ac2b","knative.dev/controller":"revision-gc-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.319Z","logger":"controller.revision-gc-controller","caller":"gc/controller.go:75","msg":"Setting up ConfigMap receivers","commit":"ec3ac2b","knative.dev/controller":"revision-gc-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.319Z","logger":"controller.hpa-class-podautoscaler-controller","caller":"hpa/controller.go:73","msg":"Setting up ConfigMap receivers","commit":"ec3ac2b","knative.dev/controller":"hpa-class-podautoscaler-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.319Z","logger":"controller.hpa-class-podautoscaler-controller","caller":"hpa/controller.go:86","msg":"Setting up hpa-class event handlers","commit":"ec3ac2b","knative.dev/controller":"hpa-class-podautoscaler-controller"}
{"level":"info","ts":"2020-12-05T01:29:22.319Z","logger":"controller","caller":"domainmapping/controller.go:58","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.319Z","logger":"controller","caller":"contour/controller.go:69","msg":"Setting up ConfigMap receivers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.319Z","logger":"controller","caller":"contour/controller.go:83","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.320Z","logger":"controller","caller":"apiserversource/controller.go:72","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.322Z","logger":"controller","caller":"pingsource/controller.go:80","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.323Z","logger":"controller","caller":"containersource/controller.go:56","msg":"Setting up event handlers.","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.323Z","logger":"controller","caller":"crd/controller.go:58","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.323Z","logger":"controller","caller":"channel/controller.go:49","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.323Z","logger":"controller","caller":"subscription/controller.go:54","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.323Z","logger":"controller","caller":"parallel/controller.go:55","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.323Z","logger":"controller","caller":"sequence/controller.go:56","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.324Z","logger":"controller","caller":"namespace/controller.go:56","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.324Z","logger":"controller","caller":"mtbroker/controller.go:80","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.324Z","logger":"controller","caller":"trigger/controller.go:67","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.324Z","logger":"controller","caller":"eventtype/controller.go:50","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.324Z","logger":"controller","caller":"parallel/controller.go:55","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.325Z","logger":"controller","caller":"sequence/controller.go:56","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.325Z","logger":"controller","caller":"sinkbinding/controller.go:91","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.325Z","logger":"controller","caller":"taskrun/controller.go:97","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.325Z","logger":"controller","caller":"pipelinerun/controller.go:95","msg":"Setting up event handlers","commit":"ec3ac2b"}
{"level":"info","ts":"2020-12-05T01:29:22.325Z","logger":"controller","caller":"certificate/controller.go:63","msg":"Setting up event handlers.","commit":"ec3ac2b"}
I find this puzzling because we run a LOT on kind ~constantly. This is how the actions are setting up kind: https://github.com/mattmoor/mink/blob/master/hack/setup-kind.sh
@mattmoor I also tried the GitHub Action you pointed to, with the exact kind.yaml, and got the same result: it doesn't work on kind running on macOS.
Same setup: 1 control-plane node and 3 workers.
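For reference, a minimal kind config with that shape looks roughly like the sketch below (my example; the repo's actual kind.yaml may set additional options such as a node image or port mappings):
cat > kind.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
kind create cluster --name mink --config kind.yaml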
This is going to invite enormous output, but you could try changing the log level to debug. If that doesn't show anything obvious, then maybe we can find some time Monday or Tuesday to do some live debugging 😅
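Assuming mink follows the usual Knative config-logging convention, bumping the controller's level to debug would look roughly like this (the ConfigMap name, namespace, and key below are assumptions, not confirmed in this thread):
kubectl patch cm config-logging -n mink-system --type merge \
  -p '{"data":{"loglevel.controller":"debug"}}'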
I was thinking of trying a Linux VM with kind, and also not using mink install but installing the latest from master. Do you have a nightly YAML I can use?
I don't have any sort of nightly release set up 😞
As a heads up, I'm changing the default replica count to 1 here, and adding --replicas N, --domain, and --disable-imc to let folks customize stuff a bit more: https://github.com/mattmoor/mink/pull/329
If you are feeling bold, you should be able to run ./hack/build.sh --install (make sure KO_DOCKER_REPO is set!)
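Spelled out, that suggestion looks like this (the registry value is only a placeholder):
# ko needs a registry it can push the freshly built images to
export KO_DOCKER_REPO=docker.io/your-user
./hack/build.sh --install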
Just tried on a Fedora 33 VM. Pods look fine, but mink build doesn't work and there is no log output. Is there a -v flag to get verbose output?
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-f9fd979d6-27bsv 1/1 Running 0 35m
kube-system coredns-f9fd979d6-4mnqr 1/1 Running 0 35m
kube-system etcd-mink-control-plane 1/1 Running 0 35m
kube-system kindnet-cddfx 1/1 Running 0 35m
kube-system kube-apiserver-mink-control-plane 1/1 Running 0 35m
kube-system kube-controller-manager-mink-control-plane 1/1 Running 0 35m
kube-system kube-proxy-jb4wt 1/1 Running 0 35m
kube-system kube-scheduler-mink-control-plane 1/1 Running 0 35m
local-path-storage local-path-provisioner-78776bfc44-l5jfg 1/1 Running 0 35m
mink-system autoscaler-69bdf8bdcc-nnzqv 1/1 Running 0 33m
mink-system contour-certgen-v1.10.0-hsdhz 0/1 Completed 0 33m
mink-system controlplane-0 3/3 Running 0 33m
mink-system controlplane-1 3/3 Running 0 33m
mink-system controlplane-2 3/3 Running 0 33m
mink-system dataplane-ggksm 5/5 Running 0 33m
mink-system default-domain-2px5k 0/1 Completed 0 12m
mink-system default-domain-8fh6z 0/1 Error 0 33m
mink-system default-domain-sbv76 0/1 Error 0 33m
mink-system imc-controller-5b4bb98678-rcrvh 1/1 Running 0 32m
mink-system imc-dispatcher-7c97b64748-8mckz 1/1 Running 0 32m
Running the build:
[vagrant@fedora33 helloworld-go]$ mink build
Error: task Task 0 create has not started yet or pod for task not yet available
Usage:
mink build --image IMAGE [flags]
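Since the controller log above shows the Tekton taskrun/pipelinerun reconcilers being set up, one thing worth checking when the CLI says the task pod never became available is whether a TaskRun object was created at all. This is my suggestion rather than something from the thread:
kubectl get taskruns --all-namespaces
kubectl get pods --all-namespaces --field-selector=status.phase!=Running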
On macOS with debug enabled I only see one fatal log:
{"level":"fatal","ts":"2020-12-09T18:50:44.674Z","logger":"controller","caller":"certificate/controller.go:92","msg":"Error creating OrderManager: 429 urn:ietf:params:acme:error:rateLimited: Error creating new account :: too many registrations for this IP: see https://letsencrypt.org/docs/rate-limits/","commit":"ec3ac2b","stacktrace":"knative.dev/net-http01/pkg/reconciler/certificate.NewController\n\tknative.dev/net-http01@v0.18.1-0.20201106012708-7ee9669a0750/pkg/reconciler/certificate/controller.go:92\nmain.main.func1\n\tgithub.com/mattmoor/mink/cmd/webhook/main.go:178\nknative.dev/pkg/injection/sharedmain.ControllersAndWebhooksFromCtors\n\tknative.dev/pkg@v0.0.0-20201103163404-5514ab0c1fdf/injection/sharedmain/main.go:364\nknative.dev/pkg/injection/sharedmain.MainWithConfig\n\tknative.dev/pkg@v0.0.0-20201103163404-5514ab0c1fdf/injection/sharedmain/main.go:199\nmain.main\n\tgithub.com/mattmoor/mink/cmd/webhook/main.go:123\nruntime.main\n\truntime/proc.go:204"}
I edited config-network with autoTLS: Disabled and I'm still getting this.
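For the record, that edit can be applied the same way as the leader-election patch earlier in the thread; the namespace is my assumption, while the key and value come from the comment above:
kubectl patch cm config-network -n mink-system --type merge -p '{"data":{"autoTLS":"Disabled"}}'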
I'll make this a flag today 😞
@csantanapr do you still have issues with a blank install on KinD if you change the Let's Encrypt endpoint?
Wondering if we can close this 🤔
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
To reproduce:
- Create a kind cluster
- Install mink
Result: it never exits.
Debug output: