kubernetes-sigs / kwok

Kubernetes WithOut Kubelet - Simulates thousands of Nodes and Clusters.
https://kwok.sigs.k8s.io
Apache License 2.0

Vertical Pod Autoscaler in KWOK Cluster Fails to Adjust CPU Usage #1181

Closed · network-charles closed this issue 3 months ago

network-charles commented 3 months ago


What happened?

When the Vertical Pod Autoscaler (VPA) is installed in a KWOK cluster, it doesn't react when a pod is deployed.

What did you expect to happen?

After setting the Auto mode in the VPA definition, I expected it to adjust the pod CPU request to the most suitable value; however, it did nothing. Other VPA modes don't work either.

How can we reproduce it (as minimally and precisely as possible)?

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80

vpa.yaml

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-nginx
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind:       Deployment
    name:       nginx
  updatePolicy:
    updateMode: "Auto"

scale-using-vpa.demo

# Scale pod up & down using VPA

# Set up Cluster
kwokctl create cluster --enable-metrics-server --enable-crds=Metric --enable-crds=ClusterResourceUsage --enable-crds=ResourceUsage

# Apply metrics usage
kubectl apply -f https://github.com/kubernetes-sigs/kwok/releases/download/v0.6.0/metrics-usage.yaml

# Create a node
kwokctl scale node --replicas 1 --param '.allocatable.cpu="4000m"'

# Create deployment
kubectl apply -f deployment.yaml

# Install VPA
git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/vertical-pod-autoscaler/
./hack/vpa-up.sh
cd ..
cd ..

# Validating Successful VPA Deployment
kubectl get pods -n kube-system | grep vpa

# Deploy VPA
kubectl apply -f vpa.yaml

# Wait 30s
sleep 30

# Check CPU usage
kubectl top pod
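
To inspect what the VPA actually computed, one could additionally run the following (a sketch; `vpa-nginx` is the object defined in vpa.yaml above):

# Check the VPA recommendation
kubectl get vpa vpa-nginx
kubectl describe vpa vpa-nginx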

Anything else we need to know?

No response

Kwok version

```console
$ kwok --version
kwok version v0.6.0 go1.22.3 (linux/amd64)
$ kwokctl --version
kwokctl version v0.6.0 go1.22.3 (linux/amd64)
```

OS version

```console
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

$ uname -a
Linux ip-172-31-45-84 6.5.0-1022-aws #22~22.04.1-Ubuntu SMP Fri Jun 14 16:31:00 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
wzshiming commented 3 months ago

Ah, this is a simulation cluster, so it can't actually run containers. You may need the following steps to set up VPA:

# Get metrics usage
wget https://github.com/kubernetes-sigs/kwok/releases/download/v0.6.0/metrics-usage.yaml

# Set up Cluster
kwokctl create cluster --enable-metrics-server --config ./kwokctl.yaml --config ./metrics-usage.yaml --runtime binary

# Create a node
kwokctl scale node --replicas 1 --param '.allocatable.cpu="4000m"'

# Set up VPA
git clone https://github.com/kubernetes/autoscaler.git

kubectl apply -f ./autoscaler/vertical-pod-autoscaler/deploy/vpa-v1-crd-gen.yaml
kubectl apply -f ./autoscaler/vertical-pod-autoscaler/deploy/vpa-rbac.yaml

# Execute each of the following in its own new terminal
cd autoscaler/vertical-pod-autoscaler/ && NAMESPACE=kube-system go run ./pkg/admission-controller --kubeconfig ~/.kube/config --client-ca-file ~/.kwok/clusters/kwok/pki/ca.crt --tls-cert-file  ~/.kwok/clusters/kwok/pki/admin.crt --tls-private-key  ~/.kwok/clusters/kwok/pki/admin.key --webhook-address https://127.0.0.1 --webhook-port 8080 --register-by-url --port 8080
cd autoscaler/vertical-pod-autoscaler/ && NAMESPACE=kube-system go run ./pkg/recommender --kubeconfig ~/.kube/config
cd autoscaler/vertical-pod-autoscaler/ && NAMESPACE=kube-system go run ./pkg/updater --kubeconfig ~/.kube/config
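
Note that each `go run` above blocks its shell, so the three components need to run concurrently. As a sketch (not part of the original steps), the recommender and updater could also be backgrounded from a single shell:

# Run the recommender and updater in the background of one shell;
# keep the admission-controller in its own terminal to watch its output
cd autoscaler/vertical-pod-autoscaler/
NAMESPACE=kube-system go run ./pkg/recommender --kubeconfig ~/.kube/config &
NAMESPACE=kube-system go run ./pkg/updater --kubeconfig ~/.kube/config &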
network-charles commented 3 months ago

Thanks, I'll try it.

network-charles commented 3 months ago

The VPA still didn't react. Did you also execute the commands below? I did.

# Deploy VPA
kubectl apply -f vpa.yaml

# Create deployment
kubectl apply -f deployment.yaml
wzshiming commented 3 months ago

It seems to have reacted on my side. Did you run all 3 processes?

$ kubectl get pod -A 
NAMESPACE   NAME                     READY   STATUS    RESTARTS   AGE
default     nginx-6f999cfffb-dcvjq   1/1     Running   0          2m55s

$ kubectl get vpa 
NAME        MODE   CPU   MEM       PROVIDED   AGE
vpa-nginx   Auto   25m   262144k   True       3m24s

Or you can try the examples that ship with VPA:

autoscaler/vertical-pod-autoscaler/examples/hamster.yaml
autoscaler/vertical-pod-autoscaler/examples/redis.yaml
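
For example:

kubectl apply -f autoscaler/vertical-pod-autoscaler/examples/hamster.yaml
kubectl get vpa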
network-charles commented 3 months ago

I see where the issue came from: I had run all the process commands in one new terminal. I opened three new terminals instead, one for each process, and it worked.

network-charles commented 3 months ago

So, I just noticed that the VPA didn't update any pod requests in the deployment. I added a request value to the deployment:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m

I waited for 17 minutes.

kubectl get pod 

NAME                    READY   STATUS    RESTARTS   AGE
nginx-77c678789-m8sp4   1/1     Running   0          17m
nginx-77c678789-nwn9r   1/1     Running   0          17m
network-charles commented 3 months ago

OK, so I noticed that the issue was the resource policy below, which I had added to the VPA manifest.

  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        controlledResources: ["CPU"]

After removing it, the pods were recreated. However, the requested CPU still wasn't adjusted: the VPA recommended 25m, but the pods' CPU request stayed at 100m.

kubectl get vpa
NAME        MODE   CPU   MEM       PROVIDED   AGE
vpa-nginx   Auto   25m   262144k   True       5m

kubectl get pod -o yaml | grep -i cpu
cpu: 100m
cpu: 100m
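
Presumably the resource name needs to be lowercase for it to be recognized, since Kubernetes resource names are `cpu` and `memory`, not `CPU`. Something like this (my guess, not a verified fix):

  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        controlledResources: ["cpu"]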
wzshiming commented 3 months ago

Ah, I changed the installation method; a lot of the steps have changed. Try it again, and don't skip any steps.

https://github.com/kubernetes-sigs/kwok/issues/1181#issuecomment-2242271894

FYI: you should be able to quickly change the pod annotations using this command:

kubectl patch pod vpa-nginx- --type=json -p='[{"op":"add","path":"/metadata/annotations","value":{"kwok.x-k8s.io/usage-cpu":"800m","kwok.x-k8s.io/usage-memory":"380Mi"}}]'
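
After patching, the simulated usage should be reflected by the metrics pipeline, e.g. (assuming metrics are enabled as above):

# Confirm the patched usage shows up
kubectl top pod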
network-charles commented 3 months ago

Ok

network-charles commented 3 months ago

The command below requires root privileges to listen on port 443, so I added sudo:

cd autoscaler/vertical-pod-autoscaler/ && sudo NAMESPACE=kube-system go run ./pkg/admission-controller --kubeconfig ~/.kube/config --client-ca-file ~/.kwok/clusters/kwok/pki/ca.crt --tls-cert-file  ~/.kwok/clusters/kwok/pki/admin.crt --tls-private-key  ~/.kwok/clusters/kwok/pki/admin.key --port 443

The VPA still did not automatically adjust the CPU request to its recommended value even after recreating the pods.

wzshiming commented 3 months ago
kubectl get vpa

Do you see the values recommended by VPA?

network-charles commented 3 months ago

Yes

wzshiming commented 3 months ago

I guess it could be a security issue with port 443. I changed the port to 8080 and made some changes to the installation.

https://github.com/kubernetes-sigs/kwok/issues/1181#issuecomment-2242271894

wzshiming commented 3 months ago

It all works on my Mac; maybe I've loosened some security restrictions here, but I forget.

network-charles commented 3 months ago

Interesting, ok.

I decided to run all the commands as root, but it still didn't work.

wzshiming commented 3 months ago

Can you export the kube-apiserver log so I can have a look?

network-charles commented 3 months ago

Yes. kube-apiserver-logs.txt

W0723 13:14:32.749436       1 options.go:297] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0723 13:14:32.759932       1 options.go:221] external host was not specified, using 172.18.0.3
I0723 13:14:32.768909       1 server.go:148] Version: v1.30.2
I0723 13:14:32.768954       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0723 13:14:34.220599       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
I0723 13:14:34.253799       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0723 13:14:34.312475       1 plugins.go:157] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0723 13:14:34.312511       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
I0723 13:14:34.315658       1 instance.go:299] Using reconciler: lease
I0723 13:14:34.379849       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
W0723 13:14:34.380075       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0723 13:14:34.790513       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
I0723 13:14:34.790898       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
I0723 13:14:34.915903       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
I0723 13:14:35.024651       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
I0723 13:14:35.130029       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
W0723 13:14:35.130367       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0723 13:14:35.130490       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
I0723 13:14:35.131789       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
W0723 13:14:35.133475       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
I0723 13:14:35.136318       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
I0723 13:14:35.174249       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
W0723 13:14:35.174285       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
W0723 13:14:35.174294       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
I0723 13:14:35.193587       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
W0723 13:14:35.193629       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
I0723 13:14:35.195631       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
W0723 13:14:35.195661       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0723 13:14:35.195668       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
I0723 13:14:35.201732       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
W0723 13:14:35.201806       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0723 13:14:35.201887       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
I0723 13:14:35.202613       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
I0723 13:14:35.205152       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
W0723 13:14:35.209872       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0723 13:14:35.210081       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
I0723 13:14:35.215413       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
W0723 13:14:35.215680       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0723 13:14:35.215762       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0723 13:14:35.224252       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
W0723 13:14:35.224503       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
I0723 13:14:35.227138       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
W0723 13:14:35.233749       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0723 13:14:35.233962       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0723 13:14:35.235060       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
W0723 13:14:35.235227       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0723 13:14:35.235348       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0723 13:14:35.239024       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
W0723 13:14:35.244592       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
W0723 13:14:35.244807       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0723 13:14:35.247392       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
I0723 13:14:35.256872       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
W0723 13:14:35.257233       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
W0723 13:14:35.257318       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
I0723 13:14:35.276422       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
W0723 13:14:35.276669       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
W0723 13:14:35.276773       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
I0723 13:14:35.279519       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
W0723 13:14:35.282957       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0723 13:14:35.283277       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0723 13:14:35.284639       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
W0723 13:14:35.284777       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
I0723 13:14:35.305194       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
W0723 13:14:35.305436       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0723 13:14:37.245596       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0723 13:14:37.246069       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/pki/admin.crt::/etc/kubernetes/pki/admin.key"
I0723 13:14:37.246194       1 secure_serving.go:213] Serving securely on [::]:6443
I0723 13:14:37.246542       1 available_controller.go:423] Starting AvailableConditionController
I0723 13:14:37.246558       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0723 13:14:37.246656       1 controller.go:78] Starting OpenAPI AggregationController
I0723 13:14:37.246212       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0723 13:14:37.250201       1 customresource_discovery_controller.go:289] Starting DiscoveryController
I0723 13:14:37.251336       1 gc_controller.go:78] Starting apiserver lease garbage collector
I0723 13:14:37.251534       1 controller.go:80] Starting OpenAPI V3 AggregationController
I0723 13:14:37.251684       1 apf_controller.go:374] Starting API Priority and Fairness config controller
I0723 13:14:37.252847       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0723 13:14:37.252862       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
I0723 13:14:37.253505       1 aggregator.go:163] waiting for initial CRD sync...
I0723 13:14:37.253854       1 controller.go:116] Starting legacy_token_tracking_controller
I0723 13:14:37.253939       1 shared_informer.go:313] Waiting for caches to sync for configmaps
I0723 13:14:37.257429       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/etc/kubernetes/pki/admin.crt::/etc/kubernetes/pki/admin.key"
I0723 13:14:37.263770       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0723 13:14:37.263976       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0723 13:14:37.264213       1 system_namespaces_controller.go:67] Starting system namespaces controller
I0723 13:14:37.274750       1 controller.go:139] Starting OpenAPI controller
I0723 13:14:37.274980       1 controller.go:87] Starting OpenAPI V3 controller
I0723 13:14:37.276042       1 naming_controller.go:291] Starting NamingConditionController
I0723 13:14:37.276301       1 establishing_controller.go:76] Starting EstablishingController
I0723 13:14:37.276510       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0723 13:14:37.276701       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0723 13:14:37.276828       1 crd_finalizer.go:266] Starting CRDFinalizer
I0723 13:14:37.277175       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0723 13:14:37.277548       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
I0723 13:14:37.281550       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0723 13:14:37.452913       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0723 13:14:37.454727       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0723 13:14:37.456900       1 policy_source.go:224] refreshing policies
I0723 13:14:37.464258       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0723 13:14:37.465566       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
I0723 13:14:37.469735       1 controller.go:615] quota admission added evaluator for: namespaces
I0723 13:14:37.478019       1 shared_informer.go:320] Caches are synced for crd-autoregister
I0723 13:14:37.478251       1 aggregator.go:165] initial CRD sync complete...
I0723 13:14:37.478290       1 autoregister_controller.go:141] Starting autoregister controller
I0723 13:14:37.478308       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0723 13:14:37.478316       1 cache.go:39] Caches are synced for autoregister controller
I0723 13:14:37.485129       1 shared_informer.go:320] Caches are synced for configmaps
I0723 13:14:37.496777       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0723 13:14:37.528705       1 shared_informer.go:320] Caches are synced for node_authorizer
I0723 13:14:37.546603       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0723 13:14:37.552214       1 apf_controller.go:379] Running API Priority and Fairness config worker
I0723 13:14:37.552237       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0723 13:14:37.614908       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
I0723 13:14:37.630923       1 handler.go:286] Adding GroupVersion kwok.x-k8s.io v1alpha1 to ResourceManager
W0723 13:14:37.646497       1 handler_proxy.go:93] no RequestInfo found in the context
E0723 13:14:37.646719       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0723 13:14:37.694865       1 handler.go:286] Adding GroupVersion kwok.x-k8s.io v1alpha1 to ResourceManager
I0723 13:14:37.727004       1 handler.go:286] Adding GroupVersion kwok.x-k8s.io v1alpha1 to ResourceManager
I0723 13:14:37.742236       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
I0723 13:14:38.265155       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0723 13:14:38.288403       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0723 13:14:38.288427       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0723 13:14:39.188101       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0723 13:14:39.237844       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0723 13:14:39.379528       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.0.0.1"}
W0723 13:14:39.387945       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.0.3]
I0723 13:14:39.389143       1 controller.go:615] quota admission added evaluator for: endpoints
I0723 13:14:39.394037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0723 13:14:39.667739       1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0723 13:14:41.119782       1 handler.go:286] Adding GroupVersion autoscaling.k8s.io v1 to ResourceManager
I0723 13:14:41.120105       1 handler.go:286] Adding GroupVersion autoscaling.k8s.io v1beta2 to ResourceManager
I0723 13:14:41.385651       1 handler.go:286] Adding GroupVersion autoscaling.k8s.io v1 to ResourceManager
I0723 13:14:41.385894       1 handler.go:286] Adding GroupVersion autoscaling.k8s.io v1beta2 to ResourceManager
W0723 13:16:14.567401       1 dispatcher.go:210] Failed calling webhook, failing open vpa.k8s.io: failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host
E0723 13:16:14.567438       1 dispatcher.go:214] failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host
I0723 13:16:14.567729       1 controller.go:615] quota admission added evaluator for: verticalpodautoscalers.autoscaling.k8s.io
I0723 13:16:14.711442       1 controller.go:615] quota admission added evaluator for: deployments.apps
I0723 13:16:14.724717       1 controller.go:615] quota admission added evaluator for: replicasets.apps
W0723 13:16:14.762747       1 dispatcher.go:210] Failed calling webhook, failing open vpa.k8s.io: failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host
E0723 13:16:14.762867       1 dispatcher.go:214] failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host
W0723 13:16:14.778434       1 dispatcher.go:210] Failed calling webhook, failing open vpa.k8s.io: failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host
E0723 13:16:14.778465       1 dispatcher.go:214] failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host
I0723 13:16:22.665446       1 controller.go:615] quota admission added evaluator for: verticalpodautoscalercheckpoints.autoscaling.k8s.io
W0723 13:16:24.831541       1 dispatcher.go:210] Failed calling webhook, failing open vpa.k8s.io: failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host
E0723 13:16:24.831579       1 dispatcher.go:214] failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host
W0723 13:17:24.883712       1 dispatcher.go:210] Failed calling webhook, failing open vpa.k8s.io: failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host
E0723 13:17:24.884067       1 dispatcher.go:214] failed calling webhook "vpa.k8s.io": failed to call webhook: Post "https://vpa-webhook.kube-system.svc:443/?timeout=30s": dial tcp: lookup host.docker.internal on 127.0.0.11:53: no such host
wzshiming commented 3 months ago

Ah, you are using the binary runtime; I changed the installation steps for that.

https://github.com/kubernetes-sigs/kwok/issues/1181#issuecomment-2242271894

network-charles commented 3 months ago

Ok

network-charles commented 3 months ago

It worked!

kubectl get pod -o yaml | grep -i cpu
cpu: 25m
cpu: 25m