Closed by diegonayalazo 1 year ago
Related docs issue: https://github.com/knative/docs/issues/4245
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
/remove-lifecycle stale
Right now, because of the requests configured in the quickstart set of resources plus the tutorial resources, the total CPU request is ~2.3 CPU and the total memory request is ~1.4Gi.
The actual utilization of the cluster is far lower: about ~0.3 CPU and ~1Gi of memory.
Here is the output for a cluster I created using quickstart:
❯ kubectl resource-capacity --sort cpu.request --pods --util
NODE NAMESPACE POD CPU REQUESTS CPU LIMITS CPU UTIL MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL
knative-control-plane * * 2305m (76%) 4600m (153%) 258m (8%) 1440Mi (72%) 4590Mi (230%) 4Mi (0%)
knative-control-plane knative-serving activator-85bd4ddcbb-54kdl 300m (10%) 1000m (33%) 2m (0%) 60Mi (3%) 600Mi (30%) 16Mi (0%)
knative-control-plane kube-system kube-apiserver-knative-control-plane 250m (8%) 0Mi (0%) 77m (2%) 0Mi (0%) 0Mi (0%) 400Mi (20%)
knative-control-plane kube-system kube-controller-manager-knative-control-plane 200m (6%) 0Mi (0%) 23m (0%) 0Mi (0%) 0Mi (0%) 64Mi (3%)
knative-control-plane knative-eventing eventing-webhook-5968f79978-fv66l 100m (3%) 200m (6%) 3m (0%) 50Mi (2%) 200Mi (10%) 23Mi (1%)
knative-control-plane knative-eventing mt-broker-ingress-5ddd6f8b5d-drsb8 100m (3%) 0Mi (0%) 1m (0%) 100Mi (5%) 0Mi (0%) 9Mi (0%)
knative-control-plane knative-serving autoscaler-84fcdc5449-g2j2t 100m (3%) 1000m (33%) 6m (0%) 100Mi (5%) 1000Mi (50%) 24Mi (1%)
knative-control-plane knative-serving controller-6fd5bb86df-7cjl5 100m (3%) 1000m (33%) 4m (0%) 100Mi (5%) 1000Mi (50%) 34Mi (1%)
knative-control-plane kube-system coredns-78fcd69978-f5gdg 100m (3%) 0Mi (0%) 3m (0%) 70Mi (3%) 170Mi (8%) 15Mi (0%)
knative-control-plane kube-system etcd-knative-control-plane 100m (3%) 0Mi (0%) 27m (0%) 100Mi (5%) 0Mi (0%) 62Mi (3%)
knative-control-plane knative-serving domainmapping-webhook-8484d5fd8b-tshr8 100m (3%) 500m (16%) 3m (0%) 100Mi (5%) 500Mi (25%) 16Mi (0%)
knative-control-plane kube-system metrics-server-5794ccf74d-cd5qj 100m (3%) 0Mi (0%) 4m (0%) 200Mi (10%) 0Mi (0%) 20Mi (0%)
knative-control-plane kube-system kube-scheduler-knative-control-plane 100m (3%) 0Mi (0%) 3m (0%) 0Mi (0%) 0Mi (0%) 22Mi (1%)
knative-control-plane knative-serving webhook-97c648588-8hgrt 100m (3%) 500m (16%) 4m (0%) 100Mi (5%) 500Mi (25%) 19Mi (0%)
knative-control-plane knative-eventing mt-broker-filter-574dc4457f-r9t4q 100m (3%) 0Mi (0%) 1m (0%) 100Mi (5%) 0Mi (0%) 11Mi (0%)
knative-control-plane knative-eventing mt-broker-controller-8d979648f-g4fxs 100m (3%) 0Mi (0%) 2m (0%) 100Mi (5%) 0Mi (0%) 17Mi (0%)
knative-control-plane knative-eventing eventing-controller-58875c5478-x9zlm 100m (3%) 0Mi (0%) 6m (0%) 100Mi (5%) 0Mi (0%) 29Mi (1%)
knative-control-plane kube-system kindnet-p55bv 100m (3%) 100m (3%) 1m (0%) 50Mi (2%) 50Mi (2%) 10Mi (0%)
knative-control-plane kube-system coredns-78fcd69978-qk7bg 100m (3%) 0Mi (0%) 2m (0%) 70Mi (3%) 170Mi (8%) 13Mi (0%)
knative-control-plane knative-serving domain-mapping-74d5d688bd-tb475 30m (1%) 300m (10%) 1m (0%) 40Mi (2%) 400Mi (20%) 13Mi (0%)
knative-control-plane default cloudevents-player-00001-deployment-65c477d844-4cbd5 25m (0%) 0Mi (0%) 5m (0%) 0Mi (0%) 0Mi (0%) 21Mi (1%)
knative-control-plane local-path-storage local-path-provisioner-85494db59d-gb44d 0Mi (0%) 0Mi (0%) 2m (0%) 0Mi (0%) 0Mi (0%) 8Mi (0%)
knative-control-plane kube-system kube-proxy-7qfn9 0Mi (0%) 0Mi (0%) 4m (0%) 0Mi (0%) 0Mi (0%) 17Mi (0%)
knative-control-plane knative-serving net-kourier-controller-66bc9d6697-wxwxz 0Mi (0%) 0Mi (0%) 8m (0%) 0Mi (0%) 0Mi (0%) 44Mi (2%)
knative-control-plane knative-eventing imc-controller-86cd7b7857-whg8v 0Mi (0%) 0Mi (0%) 5m (0%) 0Mi (0%) 0Mi (0%) 22Mi (1%)
knative-control-plane knative-eventing imc-dispatcher-7fcb4b5d8c-z7rzl 0Mi (0%) 0Mi (0%) 2m (0%) 0Mi (0%) 0Mi (0%) 15Mi (0%)
knative-control-plane kourier-system 3scale-kourier-gateway-58856c6cc7-spj48 0Mi (0%) 0Mi (0%) 2m (0%) 0Mi (0%) 0Mi (0%) 21Mi (1%)
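As a sanity check, the 2305m total in the first row is just the sum of the per-pod CPU requests. A minimal sketch of how Kubernetes "millicpu" quantities like 300m add up (the parse_cpu helper is hypothetical, not part of kubectl or resource-capacity):

```python
def parse_cpu(quantity: str) -> float:
    """Parse a Kubernetes CPU quantity like '300m' or '2' into cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

# Per-pod CPU requests from the kubectl resource-capacity output above:
# activator 300m, kube-apiserver 250m, kube-controller-manager 200m,
# fifteen pods at 100m each, domain-mapping 30m, cloudevents-player 25m
requests = ["300m", "250m", "200m"] + ["100m"] * 15 + ["30m", "25m"]
total = sum(parse_cpu(q) for q in requests)
print(f"{total:.3f} CPU")  # 2.305 CPU, matching the 2305m total row
```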
I was using colima as a Docker Desktop replacement today. The default VM uses 2 CPUs and 2Gi, and kn quickstart didn't work because the CPU requests are higher (~2.3), so some pods got stuck in Pending.
@psschwei I'm thinking that we should delete the resource requests from all pods. This would remove all CPU and memory requests and allow Knative to run on a cluster as small as 1 CPU and 1Gi of memory.
If you are using colima, you can delete the VM and recreate it with 3 CPUs:
colima delete
colima start --cpu 3
Here is an example, used in this GitHub Action, that deletes all resource requests:
curl -L -s $base/serving-core.yaml | yq 'del(.spec.template.spec.containers[]?.resources)' -y | yq 'del(.metadata.annotations."knative.dev/example-checksum")' -y | kubectl apply -f -
In our case, I think we would need to parse the YAML and manipulate it before applying, to avoid asking the user to install yq on their computer.
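A minimal sketch of doing that deletion without yq. To stay dependency-free it assumes the manifest has first been converted to JSON (e.g. via kubectl with -o json); the strip_resources helper is hypothetical, not existing quickstart code:

```python
import json

def strip_resources(obj):
    """Recursively remove 'resources' from every container spec,
    mirroring what the yq one-liner above does for YAML."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == "containers" and isinstance(value, list):
                for container in value:
                    container.pop("resources", None)
            strip_resources(value)
    elif isinstance(obj, list):
        for item in obj:
            strip_resources(item)
    return obj

# Toy manifest fragment (not the real serving-core.yaml)
manifest = json.loads("""
{"spec": {"template": {"spec": {"containers": [
  {"name": "controller",
   "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}}}
]}}}}
""")
stripped = strip_resources(manifest)
print("resources" in stripped["spec"]["template"]["spec"]["containers"][0])  # False
```

The same walk would work on the parsed YAML documents directly if a YAML library were acceptable; the point is only that the manipulation is simple enough to do in-process before the apply.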
I'd be a little hesitant to go quite that small, as it might cause problems when people are trying other things on quickstart, especially since we're also pitching quickstart as a local development environment.
For CPU, the cluster consumes only ~0.3 CPU at idle after users finish the tutorial; if they keep using it for other things, those workloads will scale to zero. Also, we are not setting a limit: users set the limits they want by adjusting the Docker VM for kind, or via minikube config set.
TL;DR: removing the requests is orthogonal to users' ability to run other things in their local cluster.
Another issue I found today: we create the minikube VM with a hardcoded flag of 3 CPUs, and there is no way to work around it.
Using kn quickstart minikube:
😄 [knative] minikube v1.25.1 on Darwin 12.1
✨ Using the docker driver based on user configuration
- Ensure your docker daemon has access to enough CPU/memory resources.
- Docs https://docs.docker.com/docker-for-mac/#resources
⛔ Exiting due to RSRC_INSUFFICIENT_CORES: Requested cpu count 3 is greater than the available cpus of 2
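One way to fix this would be for quickstart to expose the CPU count as a flag instead of hardcoding 3. A hypothetical sketch of the idea (quickstart is actually a Go CLI; Python's argparse is used here only to illustrate, and the flag name is an assumption):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical --cpus flag; quickstart currently hardcodes 3 CPUs for minikube
    parser = argparse.ArgumentParser(prog="kn-quickstart-minikube-sketch")
    parser.add_argument("--cpus", type=int, default=3,
                        help="CPUs for the minikube VM (default: 3)")
    return parser

args = build_parser().parse_args(["--cpus", "2"])
print(args.cpus)  # 2
```

With something like this, users on 2-CPU machines could at least attempt a smaller VM, while the default behavior stays unchanged.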
/lifecycle frozen
Hi Knative community, thanks for the quickstart plugin. It saves a lot of time setting up the cluster. It would be great to know beforehand how much memory it will consume. My laptop is resource-limited (8GB, not expandable); running an IDE plus the cluster has crashed my machine on Linux, and sometimes the plugin (kn quickstart kind) takes up to 20 minutes to start up.
Thank you for all your support!