kubeflow / manifests

A repository for Kustomize manifests
Apache License 2.0

Error from server (InternalError): error when creating "STDIN" #2086

Closed MISSEY closed 2 years ago

MISSEY commented 2 years ago

While installing Kubeflow using the following command:

while ! kustomize build example | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done

I am getting the following errors in a loop:

Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
validatingwebhookconfiguration.admissionregistration.k8s.io/inferenceservice.serving.kubeflow.org configured
validatingwebhookconfiguration.admissionregistration.k8s.io/istiod-istio-system configured
validatingwebhookconfiguration.admissionregistration.k8s.io/trainedmodel.serving.kubeflow.org configured
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": dial tcp 10.96.162.128:443: connect: connection refused
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": dial tcp 10.96.162.128:443: connect: connection refused
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": dial tcp 10.96.162.128:443: connect: connection refused

kubectl version:

kubectl version --v=10
I1210 10:30:01.517955  151960 loader.go:375] Config loaded from file:  /home/sami02/.kube/config
I1210 10:30:01.519059  151960 round_trippers.go:424] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.19.16 (linux/amd64) kubernetes/e37e4ab" 'https://10.249.6.9:6443/version?timeout=32s'
I1210 10:30:01.529846  151960 round_trippers.go:444] GET https://10.249.6.9:6443/version?timeout=32s 200 OK in 10 milliseconds
I1210 10:30:01.529867  151960 round_trippers.go:450] Response Headers:
I1210 10:30:01.529875  151960 round_trippers.go:453]     Cache-Control: no-cache, private
I1210 10:30:01.529885  151960 round_trippers.go:453]     Content-Type: application/json
I1210 10:30:01.529894  151960 round_trippers.go:453]     Content-Length: 265
I1210 10:30:01.529900  151960 round_trippers.go:453]     Date: Fri, 10 Dec 2021 09:30:01 GMT
I1210 10:30:01.533506  151960 request.go:1097] Response Body: {
  "major": "1",
  "minor": "19",
  "gitVersion": "v1.19.16",
  "gitCommit": "e37e4ab4cc8dcda84f1344dda47a97bb1927d074",
  "gitTreeState": "clean",
  "buildDate": "2021-10-27T16:20:18Z",
  "goVersion": "go1.15.15",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.16", GitCommit:"e37e4ab4cc8dcda84f1344dda47a97bb1927d074", GitTreeState:"clean", BuildDate:"2021-10-27T16:25:59Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.16", GitCommit:"e37e4ab4cc8dcda84f1344dda47a97bb1927d074", GitTreeState:"clean", BuildDate:"2021-10-27T16:20:18Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

Any help would be great!

Thank you, best,

Saurabh

jimthompson5802 commented 2 years ago

@MISSEY I'm new to Kubeflow as well, so factor that in when reading the following. 😄

I basically do the same thing and for the most part it works. I think the messages you are seeing indicate that the custom resources are still being constructed/initialized. If you let it run long enough (a couple of minutes) it usually completes. At least, that has been my experience to date.

A couple of things to look for or try:

In a separate terminal window, have you tried running kubectl get pod -A to see which pods are still initializing?

For the pods that are initializing, try kubectl describe pod <pod_name> -n <namespace>. You may see a message about pulling images; the delay may simply be due to pulling the required images.
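
A rough sketch of both checks (the pod name and namespace here are placeholders, not taken from your cluster):

# list only the pods that are still Pending
kubectl get pods -A --field-selector=status.phase=Pending

# then look at the Events section of one of them for image pulls or scheduling problems
kubectl describe pod <pod_name> -n <namespace>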

MISSEY commented 2 years ago

@jimthompson5802

This is what I am getting:

      (base) sami02@sami02:/media/sami02/1E4B5BB258FA2EEC/cluster/k8/manifests-1.4.0$ kubectl get pod -A
      NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
      auth               dex-5ddf47d88d-66d8v                       0/1     Pending   0          15m
      cert-manager       cert-manager-7dd5854bb4-r5c2g              0/1     Pending   0          15m
      cert-manager       cert-manager-cainjector-64c949654c-v45bf   0/1     Pending   0          15m
      cert-manager       cert-manager-webhook-6bdffc7c9d-ssc8v      0/1     Pending   0          15m
      istio-system       authservice-0                              0/1     Pending   0          15m
      istio-system       cluster-local-gateway-7bf6b98855-f9mnb     0/1     Pending   0          15m
      istio-system       istio-ingressgateway-78bc678876-cn48w      0/1     Pending   0          15m
      istio-system       istiod-755f4cc457-hrb4z                    0/1     Pending   0          15m
      knative-eventing   eventing-controller-64d97555b-nh62d        0/1     Pending   0          15m
      knative-eventing   eventing-webhook-5c5b8d5c6d-cxzx6          0/1     Pending   0          15m
      knative-eventing   imc-controller-688df5bdb4-45z7c            0/1     Pending   0          15m
      knative-eventing   imc-dispatcher-5dbb47f555-lwqp7            0/1     Pending   0          15m
      knative-eventing   mt-broker-controller-856784c8ff-jj9hj      0/1     Pending   0          15m
      knative-eventing   mt-broker-filter-68fcfcc6c8-rjgpn          0/1     Pending   0          15m
      knative-eventing   mt-broker-ingress-bd54bc995-cdvls          0/1     Pending   0          15m
      kube-system        calico-kube-controllers-558995777d-k7kvj   1/1     Running   0          6h18m
      kube-system        calico-node-fdwc4                          1/1     Running   0          6h18m
      kube-system        coredns-f9fd979d6-tw6qv                    1/1     Running   0          4d20h
      kube-system        coredns-f9fd979d6-x25hr                    1/1     Running   0          4d20h
      kube-system        etcd-sami02                                1/1     Running   0          4d20h
      kube-system        kube-apiserver-sami02                      1/1     Running   0          4d20h
      kube-system        kube-controller-manager-sami02             1/1     Running   0          4d20h
      kube-system        kube-proxy-k69vq                           1/1     Running   0          4d20h
      kube-system        kube-scheduler-sami02                      1/1     Running   0          4d20h
      (base) sami02@sami02:/media/sami02/1E4B5BB258FA2EEC/cluster/k8/manifests-1.4.0$ kubectl get namespace
      NAME               STATUS   AGE
      auth               Active   15m
      cert-manager       Active   15m
      default            Active   4d20h
      istio-system       Active   15m
      knative-eventing   Active   15m
      knative-serving    Active   15m
      kube-node-lease    Active   4d20h
      kube-public        Active   4d20h
      kube-system        Active   4d20h
      kubeflow           Active   15m
      (base) sami02@sami02:/media/sami02/1E4B5BB258FA2EEC/cluster/k8/manifests-1.4.0$ kubectl get pods -n kubeflow
      No resources found in kubeflow namespace.

jimthompson5802 commented 2 years ago

@MISSEY Have you tried a kubectl describe on the pending pods to see what they are waiting on? For example:

kubectl describe -n cert-manager pod cert-manager-7dd5854bb4-r5c2g

I believe you are using this to install Kubeflow:

while ! kustomize build example | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done

This is the same command I use. I've not experienced any issues with deploying pods in the cert-manager, istio-system, or knative-eventing namespaces. I've had issues with the kubeflow namespace, but not with these three.

What is your k8s distribution? Do you know the version?

MISSEY commented 2 years ago

@jimthompson5802

The K8s version is:

       kubeadm version
      kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.16", GitCommit:"e37e4ab4cc8dcda84f1344dda47a97bb1927d074", GitTreeState:"clean", BuildDate:"2021-10-27T16:24:44Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

Yes, I used the command below to install Kubeflow 1.4:

while ! kustomize build example | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done

The describe shows that 0 nodes are available. How can I solve this?


      sami02@sami02:/media/sami02/1E4B5BB258FA2EEC/cluster/k8/manifests-1.4.0$ kubectl describe -n cert-manager pod cert-manager-7dd5854bb4-r5c2g
      Name:           cert-manager-7dd5854bb4-r5c2g
      Namespace:      cert-manager
      Priority:       0
      Node:           <none>
      Labels:         app=cert-manager
                      app.kubernetes.io/component=controller
                      app.kubernetes.io/instance=cert-manager
                      app.kubernetes.io/name=cert-manager
                      pod-template-hash=7dd5854bb4
      Annotations:    prometheus.io/path: /metrics
                      prometheus.io/port: 9402
                      prometheus.io/scrape: true
      Status:         Pending
      IP:             
      IPs:            <none>
      Controlled By:  ReplicaSet/cert-manager-7dd5854bb4
      Containers:
        cert-manager:
          Image:      quay.io/jetstack/cert-manager-controller:v1.3.1
          Port:       9402/TCP
          Host Port:  0/TCP
          Args:
            --v=2
            --cluster-resource-namespace=$(POD_NAMESPACE)
            --leader-election-namespace=kube-system
          Environment:
            POD_NAMESPACE:  cert-manager (v1:metadata.namespace)
          Mounts:
            /var/run/secrets/kubernetes.io/serviceaccount from cert-manager-token-pjh6n (ro)
      Conditions:
        Type           Status
        PodScheduled   False 
      Volumes:
        cert-manager-token-pjh6n:
          Type:        Secret (a volume populated by a Secret)
          SecretName:  cert-manager-token-pjh6n
          Optional:    false
      QoS Class:       BestEffort
      Node-Selectors:  <none>
      Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                       node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
      Events:
        Type     Reason            Age                   From               Message
        ----     ------            ----                  ----               -------
        Warning  FailedScheduling  108s (x814 over 20h)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
jimthompson5802 commented 2 years ago

@MISSEY We're reaching the limit of what I know about k8s administration. 🤷 I've not used kubeadm, so I'm not familiar with how it sets up a k8s cluster. So if you don't mind some "shots in the dark", I can suggest what I would do and we can see what happens.

Let's start with this and see what happens.
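
Based on the follow-up comments, the check being suggested here is most likely to describe the node itself and look at its taints, along these lines (the node name is a placeholder):

# list the nodes, then check the Taints line on the (single) node
kubectl get nodes
kubectl describe node <node_name> | grep -i taints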

MISSEY commented 2 years ago

@jimthompson5802 Thank you so much for your efforts.

Yes, you are right, I only have one master node. I tried to bring up a virtual worker node but couldn't.

Thank you again ;)

jimthompson5802 commented 2 years ago

@MISSEY Looks like the kubectl describe node shows the issue. This is the relevant output:

Taints:             node-role.kubernetes.io/master:NoSchedule

I understand this tells k8s not to schedule any pods on this node. Your cluster has only one node, so nothing will start. This is the same issue as in the Stack Overflow post I pointed out in my previous comment.

The corrective action is found in this comment. Give this a try and let me know if it resolves your issue.
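
For reference, the usual corrective action for this particular taint is to remove it so that pods may schedule on the single (master) node; a sketch, assuming a kubeadm-style cluster:

# allow workloads to schedule on the master node of a single-node cluster
kubectl taint nodes --all node-role.kubernetes.io/master-

# on newer Kubernetes releases the equivalent taint key is control-plane
kubectl taint nodes --all node-role.kubernetes.io/control-plane-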

MISSEY commented 2 years ago

Hi @jimthompson5802, it worked, thank you so much. Sorry for the late reply. Was off for a few days.

jimthompson5802 commented 2 years ago

@MISSEY No problem. Hope you had an enjoyable time off. I'm glad the suggestion worked. I appreciate the opportunity to learn something new.

yangy996 commented 2 years ago

Hi @jimthompson5802, it worked, thank you so much. Sorry for the late reply. Was off for a few days.

Hello, it did not work for me. How can I solve it?

ziyou987 commented 2 years ago

Hi @MISSEY, I also encountered the same problem. Please tell me how you got it working. Thank you very much!

r-matsuzaka commented 2 years ago

@jimthompson5802 @MISSEY Hello. I had the same issue and checked Taints, but the value is <none>.
I could not figure out the cause of this issue or its solution.
Can you explain?

$ kubectl describe node minikube
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=62e108c3dfdec8029a890ad6d8ef96b6461426dc
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_08_14T13_40_38_0700
                    minikube.k8s.io/version=v1.26.1
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 14 Aug 2022 13:40:34 +0900
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Sun, 14 Aug 2022 15:26:32 +0900
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 14 Aug 2022 15:21:59 +0900   Sun, 14 Aug 2022 13:40:32 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 14 Aug 2022 15:21:59 +0900   Sun, 14 Aug 2022 13:40:32 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 14 Aug 2022 15:21:59 +0900   Sun, 14 Aug 2022 13:40:32 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 14 Aug 2022 15:21:59 +0900   Sun, 14 Aug 2022 13:40:35 +0900   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.39.71
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  45604772Ki
  hugepages-2Mi:      0
  memory:             20484460Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  45604772Ki
  hugepages-2Mi:      0
  memory:             20484460Ki
  pods:               110
System Info:
  Machine ID:                 5ce75aba991c4cfdb5dc4ffc8b4a5a2c
  System UUID:                5ce75aba-991c-4cfd-b5dc-4ffc8b4a5a2c
  Boot ID:                    af27df4b-b3c8-4373-8a68-07e806ecbbcd
  Kernel Version:             5.10.57
  OS Image:                   Buildroot 2021.02.12
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.17
  Kubelet Version:            v1.21.14
  Kube-Proxy Version:         v1.21.14
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (58 in total)
  Namespace                   Name                                                      CPU Requests  CPU Limits   Memory Requests  Memory Limits         Age
  ---------                   ----                                                      ------------  ----------   ---------------  -------------         ---
  auth                        dex-559dbcd758-cxcmk                                      0 (0%)        0 (0%)       0 (0%)           0 (0%)                105m
  cert-manager                cert-manager-7b8c77d4bd-pr7nq                             0 (0%)        0 (0%)       0 (0%)           0 (0%)                105m
  cert-manager                cert-manager-cainjector-7c744f57b5-4gz7n                  0 (0%)        0 (0%)       0 (0%)           0 (0%)                105m
  cert-manager                cert-manager-webhook-fcd445bc4-slmls                      0 (0%)        0 (0%)       0 (0%)           0 (0%)                105m
  istio-system                authservice-0                                             0 (0%)        0 (0%)       0 (0%)           0 (0%)                105m
  istio-system                cluster-local-gateway-55ff4696f4-x6flv                    100m (2%)     2 (50%)      128Mi (0%)       1Gi (5%)              105m
  istio-system                istio-ingressgateway-6668f9548d-lczhl                     10m (0%)      2 (50%)      40Mi (0%)        1Gi (5%)              105m
  istio-system                istiod-6678f9548b-6ss5r                                   10m (0%)      0 (0%)       100Mi (0%)       0 (0%)                105m
  knative-eventing            eventing-controller-8457bd9747-s6bsf                      100m (2%)     0 (0%)       100Mi (0%)       0 (0%)                105m
  knative-eventing            eventing-webhook-69986cfb5d-7wtrh                         100m (2%)     200m (5%)    50Mi (0%)        200Mi (0%)            105m
  knative-serving             activator-7c5cd78566-kjdhv                                310m (7%)     3 (75%)      100Mi (0%)       1624Mi (8%)           104m
  knative-serving             autoscaler-98487645d-67dtx                                110m (2%)     3 (75%)      140Mi (0%)       2024Mi (10%)          104m
  knative-serving             controller-7546f544b7-hzmxd                               110m (2%)     3 (75%)      140Mi (0%)       2024Mi (10%)          104m
  knative-serving             domain-mapping-5d56bfc7d-kh9h9                            40m (1%)      2300m (57%)  80Mi (0%)        1424Mi (7%)           104m
  knative-serving             domainmapping-webhook-696559d49c-ql886                    110m (2%)     2500m (62%)  140Mi (0%)       1524Mi (7%)           104m
  knative-serving             net-istio-controller-c4d469c-w8dgq                        40m (1%)      2300m (57%)  80Mi (0%)        1424Mi (7%)           104m
  knative-serving             net-istio-webhook-855bcb6747-hcv45                        30m (0%)      2200m (55%)  60Mi (0%)        1224Mi (6%)           104m
  knative-serving             webhook-59f9fdd446-zc7st                                  110m (2%)     2500m (62%)  140Mi (0%)       1524Mi (7%)           104m
  kube-system                 coredns-558bd4d5db-8qfql                                  100m (2%)     0 (0%)       70Mi (0%)        170Mi (0%)            105m
  kube-system                 etcd-minikube                                             100m (2%)     0 (0%)       100Mi (0%)       0 (0%)                105m
  kube-system                 kube-apiserver-minikube                                   250m (6%)     0 (0%)       0 (0%)           0 (0%)                105m
  kube-system                 kube-controller-manager-minikube                          200m (5%)     0 (0%)       0 (0%)           0 (0%)                105m
  kube-system                 kube-proxy-6p5bl                                          0 (0%)        0 (0%)       0 (0%)           0 (0%)                105m
  kube-system                 kube-scheduler-minikube                                   100m (2%)     0 (0%)       0 (0%)           0 (0%)                105m
  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)       0 (0%)           0 (0%)                105m
  kubeflow-user-example-com   hogehogehoge-0                                            510m (12%)    2600m (65%)  1064Mi (5%)      2362232012800m (11%)  86m
  kubeflow-user-example-com   ml-pipeline-ui-artifact-7cd897c59f-t5p7b                  20m (0%)      2100m (52%)  110Mi (0%)       1524Mi (7%)           93m
  kubeflow-user-example-com   ml-pipeline-visualizationserver-795f7db965-rcjfx          60m (1%)      2500m (62%)  240Mi (1%)       2Gi (10%)             93m
  kubeflow                    admission-webhook-deployment-c77d48bbb-4qm24              0 (0%)        0 (0%)       0 (0%)           0 (0%)                104m
  kubeflow                    cache-server-56d94f5d78-pxtxm                             10m (0%)      2 (50%)      40Mi (0%)        1Gi (5%)              104m
  kubeflow                    centraldashboard-5864f74d99-dsfxz                         10m (0%)      2 (50%)      40Mi (0%)        1Gi (5%)              104m
  kubeflow                    jupyter-web-app-deployment-5bc998bcb5-tr4rc               0 (0%)        0 (0%)       0 (0%)           0 (0%)                104m
  kubeflow                    katib-controller-6848d4dd9f-nbjmt                         0 (0%)        0 (0%)       0 (0%)           0 (0%)                104m
  kubeflow                    katib-db-manager-665954948-g6zxd                          0 (0%)        0 (0%)       0 (0%)           0 (0%)                104m
  kubeflow                    katib-mysql-5bf95ddfcc-bq8ft                              0 (0%)        0 (0%)       0 (0%)           0 (0%)                104m
  kubeflow                    katib-ui-56ccff658f-tq5tl                                 0 (0%)        0 (0%)       0 (0%)           0 (0%)                104m
  kubeflow                    kserve-controller-manager-0                               100m (2%)     100m (2%)    200Mi (0%)       300Mi (1%)            104m
  kubeflow                    kserve-models-web-app-5878544ffd-bblpt                    10m (0%)      2 (50%)      40Mi (0%)        1Gi (5%)              104m
  kubeflow                    kubeflow-pipelines-profile-controller-5d98fd7b4f-ldhjh    0 (0%)        0 (0%)       0 (0%)           0 (0%)                104m
  kubeflow                    metacontroller-0                                          100m (2%)     1 (25%)      256Mi (1%)       1Gi (5%)              104m
  kubeflow                    metadata-envoy-deployment-5b685dfb7f-fchc7                0 (0%)        0 (0%)       0 (0%)           0 (0%)                104m
  kubeflow                    metadata-grpc-deployment-f8d68f687-shgsb                  10m (0%)      2 (50%)      40Mi (0%)        1Gi (5%)              104m
  kubeflow                    metadata-writer-d6498d6b4-chg98                           10m (0%)      2 (50%)      40Mi (0%)        1Gi (5%)              104m
  kubeflow                    minio-5b65df66c9-842s9                                    30m (0%)      2 (50%)      140Mi (0%)       1Gi (5%)              104m
  kubeflow                    ml-pipeline-844c786c48-lbvqq                              260m (6%)     2 (50%)      540Mi (2%)       1Gi (5%)              104m
  kubeflow                    ml-pipeline-persistenceagent-5854f86f8b-h8tq9             130m (3%)     2 (50%)      540Mi (2%)       1Gi (5%)              104m
  kubeflow                    ml-pipeline-scheduledworkflow-5dddbf664f-rpccp            10m (0%)      2 (50%)      40Mi (0%)        1Gi (5%)              104m
  kubeflow                    ml-pipeline-ui-6bdfc6dbcd-btd75                           20m (0%)      2 (50%)      110Mi (0%)       1Gi (5%)              104m
  kubeflow                    ml-pipeline-viewer-crd-85f6fd557b-ddddp                   10m (0%)      2 (50%)      40Mi (0%)        1Gi (5%)              104m
  kubeflow                    ml-pipeline-visualizationserver-7c4885999-srld9           40m (1%)      2 (50%)      540Mi (2%)       1Gi (5%)              104m
  kubeflow                    mysql-5c7f79f986-mq9r7                                    110m (2%)     2 (50%)      840Mi (4%)       1Gi (5%)              104m
  kubeflow                    notebook-controller-deployment-6478d4858c-qd9c9           10m (0%)      2 (50%)      40Mi (0%)        1Gi (5%)              104m
  kubeflow                    profiles-deployment-7bc47446fb-mhrxj                      10m (0%)      2 (50%)      40Mi (0%)        1Gi (5%)              104m
  kubeflow                    tensorboard-controller-deployment-f4f555b95-7xdj8         25m (0%)      3 (75%)      168Mi (0%)       1280Mi (6%)           104m
  kubeflow                    tensorboards-web-app-deployment-7578c885f7-f76r6          0 (0%)        0 (0%)       0 (0%)           0 (0%)                104m
  kubeflow                    training-operator-6c9f6fd894-cz2g6                        100m (2%)     100m (2%)    20Mi (0%)        30Mi (0%)             104m
  kubeflow                    volumes-web-app-deployment-7bc5754bd4-mtjv4               0 (0%)        0 (0%)       0 (0%)           0 (0%)                104m
  kubeflow                    workflow-controller-6b9b6c5b46-jzh92                      110m (2%)     2 (50%)      540Mi (2%)       1Gi (5%)              104m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                3635m (90%)   68400m (1710%)
  memory             7136Mi (35%)  41998404812800m (200%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)
Events:              <none>
petr-larin commented 3 months ago

I ran into the same issue on minikube, and the solution was, per the Kubeflow documentation, to increase the memory:

$ minikube start --cpus 4 --memory 8096 --disk-size=40g

Notes:

These are the minimum recommended settings for the VM created by minikube for a Kubeflow deployment.

https://v0-2.kubeflow.org/docs/started/getting-started-minikube/
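
As a quick sanity check before recreating the VM (a suggestion, not from the original thread): when Taints is <none> but pods stay Pending, the scheduler events usually show whether the node has simply run out of CPU or memory. In the node output a couple of comments above, CPU requests were already at 90% of allocatable.

# FailedScheduling events mention "Insufficient cpu" / "Insufficient memory" when the node is full
kubectl get events -A --field-selector reason=FailedScheduling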

juliusvonkohout commented 3 months ago

Please check https://github.com/kubeflow/manifests/tree/master?tab=readme-ov-file#prerequisites-1