vhive-serverless / vHive

vHive: Open-source framework for serverless experimentation
MIT License

create_one_node_cluster.sh fails in setting up net-istio after cleaning up #218

Closed sosson97 closed 3 years ago

sosson97 commented 3 years ago

Describe the bug
Hi. When I tried to re-run create_one_node_cluster.sh after cleaning up vHive, one of the k8s object creations failed with the following error.

Error from server (InternalError): error when creating "https://github.com/knative/net-istio/releases/download/v0.19.0/release.yaml": Internal error occurred: failed calling webhook "config.webhook.serving.knative.dev": Post "https://webhook.knative-serving.svc:443/config-validation?timeout=10s": dial tcp 10.100.178.253:443: connect: connection refused

I found that the script fails at this line, https://github.com/ease-lab/vhive/blob/4ec90bd065eb2460f69be88c2208b761dd8a7eb2/scripts/cluster/setup_master_node.sh#L70

and it seems to fail while creating configmap/config-istio.

kubectl get service -A showed me that the webhook service is alive.

NAMESPACE          NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                      AGE
default            kubernetes                   ClusterIP      10.96.0.1        <none>          443/TCP                                                      31m
istio-system       cluster-local-gateway        ClusterIP      10.100.116.106   <none>          15020/TCP,80/TCP,443/TCP                                     30m
istio-system       istio-ingressgateway         LoadBalancer   10.97.187.106    192.168.1.240   15021:32565/TCP,80:30438/TCP,443:31548/TCP,15443:32153/TCP   30m
istio-system       istiod                       ClusterIP      10.97.86.218     <none>          15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP                31m
istio-system       knative-local-gateway        ClusterIP      10.100.190.140   <none>          80/TCP                                                       30m
knative-eventing   broker-filter                ClusterIP      10.108.170.188   <none>          80/TCP,9092/TCP                                              30m
knative-eventing   broker-ingress               ClusterIP      10.108.138.189   <none>          80/TCP,9092/TCP                                              30m
knative-eventing   eventing-webhook             ClusterIP      10.105.90.196    <none>          443/TCP                                                      30m
knative-eventing   imc-dispatcher               ClusterIP      10.108.207.56    <none>          80/TCP                                                       30m
knative-serving    activator-service            ClusterIP      10.101.225.160   <none>          9090/TCP,8008/TCP,80/TCP,81/TCP                              30m
knative-serving    autoscaler                   ClusterIP      10.108.95.18     <none>          9090/TCP,8008/TCP,8080/TCP                                   30m
knative-serving    autoscaler-bucket-00-of-01   ClusterIP      10.104.1.101     <none>          8080/TCP                                                     30m
knative-serving    controller                   ClusterIP      10.97.24.216     <none>          9090/TCP,8008/TCP                                            30m
knative-serving    default-domain-service       ClusterIP      10.96.226.79     <none>          80/TCP                                                       30m
knative-serving    istio-webhook                ClusterIP      10.96.6.185      <none>          9090/TCP,8008/TCP,443/TCP                                    30m
knative-serving    webhook                      ClusterIP      10.100.178.253   <none>          9090/TCP,8008/TCP,443/TCP                                    30m
kube-system        kube-dns                     ClusterIP      10.96.0.10       <none>          53/UDP,53/TCP,9153/TCP                                       31m
registry           docker-registry              ClusterIP      10.103.65.175    <none>          5000/TCP                                                     30m
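For what it's worth, a Service showing a ClusterIP does not guarantee that a ready pod is behind it; the webhook's Endpoints can be empty while the Service object still exists. A minimal diagnostic sketch (check_webhook is a hypothetical helper name of mine; the namespace and resource names are taken from the table above):

```shell
# Hedged diagnostic sketch: verify whether the Knative webhook Service
# actually has ready backends, rather than just a ClusterIP.
check_webhook() {
  # Empty ENDPOINTS output means no ready pod is backing the Service.
  kubectl -n knative-serving get endpoints webhook
  # Confirms whether the webhook Deployment ever became available.
  kubectl -n knative-serving rollout status deployment/webhook --timeout=120s
}
# Run on the cluster node:
# check_webhook
```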

I tried the cluster setup both on a kind cluster and on a bare-metal server; both failed with this error. Rebooting the server didn't fix the issue.

To Reproduce

  1. Create a vHive one-node cluster by running ./scripts/cluster/create_one_node_cluster.sh
  2. Clean it up by running ./scripts/github_runner/clean_cri_server.sh
  3. Repeat step 1; create_one_node_cluster.sh will fail.
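The steps above can be sketched as a script (a minimal sketch, assuming it is run from the vHive repository root; the step helper and DRY_RUN variable are hypothetical names of mine, not part of vHive):

```shell
# Hedged repro sketch for the steps above. DRY_RUN=1 (the default here)
# only prints each step; set DRY_RUN=0 on the actual server to run them.
DRY_RUN=${DRY_RUN:-1}
step() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

step ./scripts/cluster/create_one_node_cluster.sh   # 1. first setup succeeds
step ./scripts/github_runner/clean_cri_server.sh    # 2. clean up vHive
step ./scripts/cluster/create_one_node_cluster.sh   # 3. re-run fails at net-istio
```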

Expected behavior
The script should set up a one-node cluster.

Log

FULL LOG

```
root@kind-control-plane:/vhive# ./scripts/cluster/create_one_node_cluster.sh +++ dirname ./scripts/cluster/create_one_node_cluster.sh ++ cd ./scripts/cluster ++ pwd + DIR=/vhive/scripts/cluster ++ cd /vhive/scripts/cluster ++ cd .. ++ cd .. ++ pwd + ROOT=/vhive + STOCK_CONTAINERD= + /vhive/scripts/cluster/setup_worker_kubelet.sh + '[' '' == stock-only ']' + CRI_SOCK=/etc/firecracker-containerd/fccd-cri.sock + sudo kubeadm init --ignore-preflight-errors=all --cri-socket /etc/firecracker-containerd/fccd-cri.sock --pod-network-cidr=192.168.0.0/16 [init] Using Kubernetes version: v1.21.0 [preflight] Running pre-flight checks [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist [preflight] The system verification failed. Printing the output from the verification: KERNEL_VERSION: 4.4.0-62-generic OS: Linux CGROUPS_CPU: enabled CGROUPS_CPUACCT: enabled CGROUPS_CPUSET: enabled CGROUPS_DEVICES: enabled CGROUPS_FREEZER: enabled CGROUPS_MEMORY: enabled CGROUPS_PIDS: enabled CGROUPS_HUGETLB: enabled [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-62-generic\n", err: exit status 1 [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.0.2] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating 
"front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. 
[apiclient] All control plane components are healthy after 111.002946 seconds [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster [upload-certs] Skipping phase. Please see --upload-certs [mark-control-plane] Marking the node kind-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] [mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] [bootstrap-token] Using token: hsrn5d.kw3tjxalatat6wsh [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! 
To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Alternatively, if you are the root user, you can run: export KUBECONFIG=/etc/kubernetes/admin.conf You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ Then you can join any number of worker nodes by running the following on each as root: kubeadm join 172.18.0.2:6443 --token hsrn5d.kw3tjxalatat6wsh \ --discovery-token-ca-cert-hash sha256:a1458e9f9a0a76dd0e1dbf3aa299f4e62f6ee12df05cb35d96e33c5854a005e4 + mkdir -p /root/.kube + sudo cp -i /etc/kubernetes/admin.conf /root/.kube/config ++ id -u ++ id -g + sudo chown 0:0 /root/.kube/config + '[' 0 -eq 0 ']' + export KUBECONFIG=/etc/kubernetes/admin.conf + KUBECONFIG=/etc/kubernetes/admin.conf + kubectl taint nodes --all node-role.kubernetes.io/master- node/kind-control-plane untainted + /vhive/scripts/cluster/setup_master_node.sh configmap/canal-config created customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created 
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created clusterrole.rbac.authorization.k8s.io/calico-node created clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created clusterrolebinding.rbac.authorization.k8s.io/canal-calico created daemonset.apps/canal created serviceaccount/canal created deployment.apps/calico-kube-controllers created serviceaccount/calico-kube-controllers created Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget poddisruptionbudget.policy/calico-kube-controllers created Warning: resource configmaps/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. 
configmap/kube-proxy configured namespace/metallb-system created Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ podsecuritypolicy.policy/controller created podsecuritypolicy.policy/speaker created serviceaccount/controller created serviceaccount/speaker created clusterrole.rbac.authorization.k8s.io/metallb-system:controller created clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created role.rbac.authorization.k8s.io/config-watcher created role.rbac.authorization.k8s.io/pod-lister created clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created rolebinding.rbac.authorization.k8s.io/config-watcher created rolebinding.rbac.authorization.k8s.io/pod-lister created daemonset.apps/speaker created deployment.apps/controller created secret/memberlist created configmap/config created % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 102 100 102 0 0 185 0 --:--:-- --:--:-- --:--:-- 184 100 4541 100 4541 0 0 6404 0 --:--:-- --:--:-- --:--:-- 6404 Downloading istio-1.7.1 from https://github.com/istio/istio/releases/download/1.7.1/istio-1.7.1-linux-amd64.tar.gz ... Istio 1.7.1 Download Complete! Istio has been successfully downloaded into the istio-1.7.1 folder on your system. Next Steps: See https://istio.io/latest/docs/setup/install/ to add Istio to your Kubernetes cluster. To configure the istioctl client tool for your workstation, add the /vhive/istio-1.7.1/bin directory to your environment path variable with: export PATH="$PATH:/vhive/istio-1.7.1/bin" Begin the Istio pre-installation check by running: istioctl x precheck Need more information? 
Visit https://istio.io/latest/docs/setup/install/ ✔ Istio core installed ✔ Istiod installed ✔ Addons installed ✔ Ingress gateways installed ✔ Installation complete customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created namespace/knative-serving created clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created clusterrole.rbac.authorization.k8s.io/knative-serving-core created clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created serviceaccount/controller created clusterrole.rbac.authorization.k8s.io/knative-serving-admin created clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged 
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged image.caching.internal.knative.dev/queue-proxy created configmap/config-autoscaler created configmap/config-defaults created configmap/config-deployment created configmap/config-domain created configmap/config-features created configmap/config-gc created configmap/config-leader-election created configmap/config-logging created configmap/config-network created configmap/config-observability created configmap/config-tracing created horizontalpodautoscaler.autoscaling/activator created Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget poddisruptionbudget.policy/activator-pdb created deployment.apps/activator created service/activator-service created deployment.apps/autoscaler created service/autoscaler created deployment.apps/controller created service/controller created horizontalpodautoscaler.autoscaling/webhook created poddisruptionbudget.policy/webhook-pdb created deployment.apps/webhook created service/webhook created validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created 
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created secret/webhook-certs created namespace/registry created persistentvolume/docker-repo-pv created persistentvolumeclaim/docker-repo-pvc created replicaset.apps/docker-registry-pod created service/docker-registry created daemonset.apps/registry-etc-hosts-update created job.batch/default-domain created service/default-domain-service created clusterrole.rbac.authorization.k8s.io/knative-serving-istio created gateway.networking.istio.io/knative-ingress-gateway created gateway.networking.istio.io/cluster-local-gateway created gateway.networking.istio.io/knative-local-gateway created service/knative-local-gateway created peerauthentication.security.istio.io/webhook created peerauthentication.security.istio.io/istio-webhook created mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.istio.networking.internal.knative.dev created validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.istio.networking.internal.knative.dev created secret/istio-webhook-certs created deployment.apps/networking-istio created deployment.apps/istio-webhook created service/istio-webhook created Error from server (InternalError): error when creating "https://github.com/knative/net-istio/releases/download/v0.19.0/release.yaml": Internal error occurred: failed calling webhook "config.webhook.serving.knative.dev": Post "https://webhook.knative-serving.svc:443/config-validation?timeout=10s": dial tcp 10.100.178.253:443: connect: connection refused customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev created customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev created customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev created customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev created customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev 
created customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev created customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev created customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev created customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev created customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev created customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev created namespace/knative-eventing created serviceaccount/eventing-controller created clusterrolebinding.rbac.authorization.k8s.io/eventing-controller created clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-resolver created clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-source-observer created clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-sources-controller created clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-manipulator created serviceaccount/pingsource-mt-adapter created clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-pingsource-mt-adapter created serviceaccount/eventing-webhook created clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook created rolebinding.rbac.authorization.k8s.io/eventing-webhook created clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-resolver created clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-podspecable-binding created configmap/config-br-default-channel created configmap/config-br-defaults created configmap/default-ch-webhook created configmap/config-ping-defaults created configmap/config-leader-election created configmap/config-logging created configmap/config-observability created configmap/config-tracing created deployment.apps/eventing-controller created deployment.apps/pingsource-mt-adapter created horizontalpodautoscaler.autoscaling/eventing-webhook created Warning: policy/v1beta1 PodDisruptionBudget is 
deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget poddisruptionbudget.policy/eventing-webhook created deployment.apps/eventing-webhook created service/eventing-webhook created customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged clusterrole.rbac.authorization.k8s.io/addressable-resolver created clusterrole.rbac.authorization.k8s.io/service-addressable-resolver created clusterrole.rbac.authorization.k8s.io/serving-addressable-resolver created clusterrole.rbac.authorization.k8s.io/channel-addressable-resolver created clusterrole.rbac.authorization.k8s.io/broker-addressable-resolver created clusterrole.rbac.authorization.k8s.io/messaging-addressable-resolver created clusterrole.rbac.authorization.k8s.io/flows-addressable-resolver created clusterrole.rbac.authorization.k8s.io/eventing-broker-filter created clusterrole.rbac.authorization.k8s.io/eventing-broker-ingress created clusterrole.rbac.authorization.k8s.io/eventing-config-reader created clusterrole.rbac.authorization.k8s.io/channelable-manipulator created 
clusterrole.rbac.authorization.k8s.io/meta-channelable-manipulator created clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-admin created clusterrole.rbac.authorization.k8s.io/knative-messaging-namespaced-admin created clusterrole.rbac.authorization.k8s.io/knative-flows-namespaced-admin created clusterrole.rbac.authorization.k8s.io/knative-sources-namespaced-admin created clusterrole.rbac.authorization.k8s.io/knative-bindings-namespaced-admin created clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-edit created clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-view created clusterrole.rbac.authorization.k8s.io/knative-eventing-controller created clusterrole.rbac.authorization.k8s.io/knative-eventing-pingsource-mt-adapter created clusterrole.rbac.authorization.k8s.io/podspecable-binding created clusterrole.rbac.authorization.k8s.io/builtin-podspecable-binding created clusterrole.rbac.authorization.k8s.io/source-observer created clusterrole.rbac.authorization.k8s.io/eventing-sources-source-observer created clusterrole.rbac.authorization.k8s.io/knative-eventing-sources-controller created clusterrole.rbac.authorization.k8s.io/knative-eventing-webhook created role.rbac.authorization.k8s.io/knative-eventing-webhook created validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.eventing.knative.dev created mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.eventing.knative.dev created validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.eventing.knative.dev created secret/eventing-webhook-certs created mutatingwebhookconfiguration.admissionregistration.k8s.io/sinkbindings.webhook.sources.knative.dev created configmap/config-imc-event-dispatcher created clusterrole.rbac.authorization.k8s.io/imc-addressable-resolver created clusterrole.rbac.authorization.k8s.io/imc-channelable-manipulator created clusterrole.rbac.authorization.k8s.io/imc-controller created 
serviceaccount/imc-controller created clusterrole.rbac.authorization.k8s.io/imc-dispatcher created service/imc-dispatcher created serviceaccount/imc-dispatcher created clusterrolebinding.rbac.authorization.k8s.io/imc-controller created clusterrolebinding.rbac.authorization.k8s.io/imc-dispatcher created customresourcedefinition.apiextensions.k8s.io/inmemorychannels.messaging.knative.dev created deployment.apps/imc-controller created deployment.apps/imc-dispatcher created clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-channel-broker-controller created clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created serviceaccount/mt-broker-filter created clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created serviceaccount/mt-broker-ingress created clusterrolebinding.rbac.authorization.k8s.io/eventing-mt-channel-broker-controller created clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created deployment.apps/mt-broker-filter created service/broker-filter created deployment.apps/mt-broker-ingress created service/broker-ingress created deployment.apps/mt-broker-controller created horizontalpodautoscaler.autoscaling/broker-ingress-hpa created horizontalpodautoscaler.autoscaling/broker-filter-hpa created NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-ingressgateway LoadBalancer 10.97.187.106 192.168.1.240 15021:32565/TCP,80:30438/TCP,443:31548/TCP,15443:32153/TCP 19s
```

sosson97 commented 3 years ago

Oh, the issue is fixed by creating config-istio manually. I just ran this command again after create_one_node_cluster.sh completed:

```
kubectl apply --filename https://github.com/knative/net-istio/releases/download/v0.19.0/release.yaml
```
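If the underlying cause is just the webhook needing time to become ready, one way to script this workaround is to retry the same apply until it succeeds (a hedged sketch; apply_with_retry and RETRY_DELAY are hypothetical names of mine, not part of vHive):

```shell
# Hedged workaround sketch: retry kubectl apply until the Knative webhook
# is reachable, on the assumption that the failure is transient.
apply_with_retry() {
  url=$1
  tries=${2:-10}
  for i in $(seq 1 "$tries"); do
    kubectl apply --filename "$url" && return 0
    echo "apply failed (attempt $i/$tries); retrying..." >&2
    sleep "${RETRY_DELAY:-10}"
  done
  return 1
}
# On the cluster node:
# apply_with_retry https://github.com/knative/net-istio/releases/download/v0.19.0/release.yaml
```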