GoogleCloudPlatform / kubeflow-distribution

Blueprints for Deploying Kubeflow on Google Cloud Platform and Anthos
Apache License 2.0

Endpoint not accessible after deployment 1.6.1 #396

Closed JPBedran closed 1 year ago

JPBedran commented 1 year ago

/kind bug

Hi all. I have been trying to deploy Kubeflow, but unfortunately after every deployment the endpoint to the UI remains unreachable.

I have made several attempts at fixing this, but without any success.

Depl. Description:

Problem Description:

Attempts:

Future Attempts:

If you can help out or just point me in the right direction, that would be much appreciated. Thanks!

gkcalat commented 1 year ago

Hi @JPBedran!

Could you link your deployment logs (the printouts in the terminal)?

  1. What do you get after running:

    nslookup ${KF_NAME}.endpoints.${PROJECT}.cloud.goog

    If you don't have nslookup installed, you can install it with:

    sudo apt-get install dnsutils -y

    or

    sudo yum install bind-utils
  2. What do you have listed in endpoints?

    gcloud endpoints services list
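If the lookup fails, a quick way to triage is to check whether you got an NXDOMAIN back, which would mean the Cloud Endpoints DNS record was never created. A minimal shell sketch of the two checks above; `KF_NAME` and `PROJECT` are placeholders for your deployment name and GCP project ID:

```shell
#!/bin/sh
# Placeholders: substitute your deployment name and GCP project ID.
KF_NAME="${KF_NAME:-kubeflow}"
PROJECT="${PROJECT:-my-project}"

# Classify nslookup output: "missing" means the Cloud Endpoints DNS
# record was never created (NXDOMAIN); "ok" means the name resolved.
classify_lookup() {
  if printf '%s\n' "$1" | grep -q 'NXDOMAIN'; then
    echo "missing"
  else
    echo "ok"
  fi
}

# 1. Resolve the endpoint hostname (needs dnsutils / bind-utils):
#    nslookup "${KF_NAME}.endpoints.${PROJECT}.cloud.goog"
# 2. List the Cloud Endpoints services registered in the project:
#    gcloud endpoints services list --project="${PROJECT}"
```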
JPBedran commented 1 year ago

Hey @gkcalat, Thanks for the reply!

Sure, the nslookup returns an NXDOMAIN error. I have a couple of other attempts and metrics in the descriptions above as well.
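For reference, this is how the hostname I am resolving is built; a minimal sketch (the values here are placeholders, not my real project):

```shell
#!/bin/sh
# Build the Cloud Endpoints hostname that the deployment should register.
# Arguments: deployment name, GCP project ID (both placeholders here).
endpoint_host() {
  printf '%s.endpoints.%s.cloud.goog' "$1" "$2"
}

# Example: endpoint_host kubeflow my-project
# -> kubeflow.endpoints.my-project.cloud.goog
#
# Other things I can check on my side while the record is missing:
#   gcloud compute addresses list --project=my-project
#   kubectl -n istio-system get svc istio-ingressgateway
```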

JPBedran commented 1 year ago

Hey @gkcalat, the full deployment output:

iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-ml unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-monitoringviewer unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-source unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-storage unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-viewer unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-workload-identity-user-ml-pipeline-ui unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-workload-identity-user-ml-pipeline-visualizationserver unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-workload-identity-user-pipeline-runner unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-logging unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-policy-cloudtrace unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-policy-meshtelemetry unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-policy-monitoring-viewer unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-policy-monitoring unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-policy-storage unchanged
iamserviceaccount.iam.cnrm.cloud.google.com/kubeflow-admin unchanged
iamserviceaccount.iam.cnrm.cloud.google.com/kubeflow-user unchanged
iamserviceaccount.iam.cnrm.cloud.google.com/kubeflow-vm unchanged
service.serviceusage.cnrm.cloud.google.com/anthos.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/cloudbuild.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/cloudresourcemanager.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/compute.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/container.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/gkeconnect.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/gkehub.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/iamcredentials.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/iap.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/logging.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/meshca.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/meshconfig.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/meshtelemetry.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/monitoring.googleapis.com unchanged
service.serviceusage.cnrm.cloud.google.com/servicemanagement.googleapis.com unchanged
make[2]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/cnrm'
make wait-gcp
make[2]: Entering directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/cnrm'
# Wait for all Google Cloud resources to get created and become ready.
Waiting for iamserviceaccount resources...
iamserviceaccount.iam.cnrm.cloud.google.com/kubeflow-admin condition met
iamserviceaccount.iam.cnrm.cloud.google.com/kubeflow-sql condition met
iamserviceaccount.iam.cnrm.cloud.google.com/kubeflow-user condition met
iamserviceaccount.iam.cnrm.cloud.google.com/kubeflow-vm condition met
Waiting for iampolicymember resources...
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-bigquery condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-cloudbuild condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-cloudsql condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-dataflow condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-dataproc condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-istio-wi condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-kubeflow-wi condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-logging condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-manages-user condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-metricwriter condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-ml condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-monitoringviewer condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-network condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-servicemanagement condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-source condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-storage condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-viewer condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-admin-workload-identity-user condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-cloudsql-client condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-cloudsql-proxy-wi-user condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-gcs-wi-user condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-bigquery condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-cloudbuild condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-cloudsql condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-dataflow condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-dataproc condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-logging condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-metricwriter condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-ml condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-monitoringviewer condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-source condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-storage condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-viewer condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-workload-identity-user-ml-pipeline-ui condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-workload-identity-user-ml-pipeline-visualizationserver condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-user-workload-identity-user-pipeline-runner condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-logging condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-policy-cloudtrace condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-policy-meshtelemetry condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-policy-monitoring condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-policy-monitoring-viewer condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-vm-policy-storage condition met
Waiting for computeaddress resources...
computeaddress.compute.cnrm.cloud.google.com/kubeflow-ip condition met
Waiting for containercluster resources...
containercluster.container.cnrm.cloud.google.com/kubeflow condition met
make[2]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/cnrm'
make create-ctxt
make[2]: Entering directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/cnrm'
PROJECT=<NAME>-kf \
   REGION=us-east1 \
   NAME=kubeflow ../../hack/create_context.sh
+ kubectl config delete-context kubeflow
warning: this removed your active context, use "kubectl config use-context" to select a different one
deleted context kubeflow from /home/jorge_bedran/.kube/config
+ set -ex
+ NAMESPACE=kubeflow
+ gcloud --project=<NAME>-kf container clusters get-credentials --region=us-east1 kubeflow
Fetching cluster endpoint and auth data.
kubeconfig entry generated for kubeflow.
++ kubectl config current-context
+ kubectl config rename-context gke_<NAME>-kf_us-east1_kubeflow kubeflow
Context "gke_<NAME>-kf_us-east1_kubeflow" renamed to "kubeflow".
+ kubectl config set-context --current --namespace=kubeflow
Context "kubeflow" modified.
make[2]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/cnrm'
make[1]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/cnrm'
Build directory: ./build
Component path: asm
Apply component resources: asm
Found Makefile, call 'make apply' of this component Makefile.
make[1]: Entering directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm'
curl https://storage.googleapis.com/csm-artifacts/asm/asmcli_1.14.1-asm.3-config6 > asmcli;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  195k  100  195k    0     0   777k      0 --:--:-- --:--:-- --:--:--  777k
chmod +x asmcli
rm -rf asm.tar.gz
curl -LJ https://github.com/GoogleCloudPlatform/anthos-service-mesh-packages/archive/refs/tags/1.14.1-asm.3+config6.tar.gz -o asm.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  242k    0  242k    0     0   527k      0 --:--:-- --:--:-- --:--:--  527k
rm -rf ./package
mkdir ./package
tar -xf asm.tar.gz --strip-components=1 -C ./package
./asmcli install \
--project_id <NAME>-kf \
--cluster_name kubeflow \
--cluster_location us-east1 \
--output_dir ./package \
--enable_all \
--ca mesh_ca \
--custom_overlay ./package/asm/istio/options/iap-operator.yaml \
--custom_overlay ./options/ingressgateway-iap.yaml \
--option legacy-default-ingressgateway \
--verbose
2022-11-15T00:19:03.766278 asmcli: Setting up necessary files...
2022-11-15T00:19:03.929295 asmcli: Using /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig as the kubeconfig...
2022-11-15T00:19:04.038139 asmcli: Checking installation tool dependencies...
2022-11-15T00:19:04.400676 asmcli: Fetching/writing GCP credentials to kubeconfig file...
2022-11-15T00:19:04.554055 asmcli: Running: '/usr/bin/gcloud container clusters get-credentials kubeflow --project=<NAME>-kf --zone=us-east1'
2022-11-15T00:19:04.625567 asmcli: -------------
Fetching cluster endpoint and auth data.
kubeconfig entry generated for kubeflow.
2022-11-15T00:19:06.131026 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig config current-context'
2022-11-15T00:19:06.224569 asmcli: -------------
2022-11-15T00:19:06.408514 asmcli: [WARNING]: nc not found, skipping k8s connection verification
2022-11-15T00:19:06.480167 asmcli: [WARNING]: (Installation will continue normally.)
2022-11-15T00:19:06.631314 asmcli: Getting account information...
2022-11-15T00:19:06.820794 asmcli: Running: '/usr/bin/gcloud auth list --project=<NAME>-kf --filter=status:ACTIVE --format=value(account)'
2022-11-15T00:19:06.889596 asmcli: -------------
2022-11-15T00:19:07.955898 asmcli: Running: '/usr/bin/gcloud config get-value auth/impersonate_service_account'
2022-11-15T00:19:08.026585 asmcli: -------------
Your active configuration is: [cloudshell-17455]
(unset)
2022-11-15T00:19:09.262422 asmcli: Running: '/usr/bin/gcloud container clusters list --project=<NAME>-kf --filter=name = kubeflow AND location = us-east1 --format=value(name)'
2022-11-15T00:19:09.333510 asmcli: -------------
WARNING: --filter : operator evaluation is changing for consistency across Google APIs.  name=kubeflow currently does not match but will match in the near future.  Run `gcloud topic filters` for details.
2022-11-15T00:19:11.064904 asmcli: Running: '/usr/bin/kpt version'
2022-11-15T00:19:11.133128 asmcli: -------------
2022-11-15T00:19:11.431591 asmcli: Downloading kpt..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 11.8M  100 11.8M    0     0  13.0M      0 --:--:-- --:--:-- --:--:-- 27.1M
2022-11-15T00:19:12.548938 asmcli: Downloading ASM..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 39.7M  100 39.7M    0     0  20.4M      0  0:00:01  0:00:01 --:--:-- 20.4M
2022-11-15T00:19:14.635528 asmcli: Downloading ASM kpt package...
2022-11-15T00:19:14.786450 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt pkg get --auto-set=false https://github.com/GoogleCloudPlatform/anthos-service-mesh-packages.git/asm@1.14.1-asm.3+config6 asm'
2022-11-15T00:19:14.856107 asmcli: -------------
fetching package "/asm" from "https://github.com/GoogleCloudPlatform/anthos-service-mesh-packages" to "asm/asm"
2022-11-15T00:19:18.452696 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt pkg get --auto-set=false https://github.com/GoogleCloudPlatform/anthos-service-mesh-packages.git/samples@1.14.1-asm.3+config6 samples'
2022-11-15T00:19:18.521656 asmcli: -------------
fetching package "/samples" from "https://github.com/GoogleCloudPlatform/anthos-service-mesh-packages" to "samples/samples"
2022-11-15T00:19:22.019217 asmcli: Verifying cluster registration.
2022-11-15T00:19:29.339001 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow get memberships.hub.gke.io membership -o=json'
2022-11-15T00:19:29.410314 asmcli: -------------
2022-11-15T00:19:30.617270 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow get memberships.hub.gke.io membership -o=jsonpath={.spec.identity_provider}'
2022-11-15T00:19:30.689491 asmcli: -------------
2022-11-15T00:19:31.830199 asmcli: Running: '/usr/bin/gcloud container hub memberships list --project <NAME>-kf --format=json'
2022-11-15T00:19:31.902932 asmcli: -------------
2022-11-15T00:19:33.899378 asmcli: Running: '/usr/bin/gcloud projects describe <NAME>-kf --format value(projectNumber)'
2022-11-15T00:19:33.967915 asmcli: -------------
2022-11-15T00:19:35.175473 asmcli: Verified cluster is registered to <NAME>-kf
2022-11-15T00:19:35.657102 asmcli: Enabling required APIs...
2022-11-15T00:19:35.850365 asmcli: Running: '/usr/bin/gcloud services enable --project=<NAME>-kf mesh.googleapis.com'
2022-11-15T00:19:35.919299 asmcli: -------------
2022-11-15T00:19:38.496912 asmcli: Running: '/usr/bin/gcloud container clusters describe --project=<NAME>-kf --region us-east1 kubeflow --format=json'
2022-11-15T00:19:38.567546 asmcli: -------------
2022-11-15T00:19:40.012426 asmcli: Running: '/usr/bin/gcloud container clusters describe --project=<NAME>-kf --region us-east1 kubeflow --format=json'
2022-11-15T00:19:40.084911 asmcli: -------------
2022-11-15T00:19:41.621435 asmcli: Verifying cluster registration.
2022-11-15T00:19:45.225602 asmcli: Running: '/usr/bin/gcloud container hub memberships list --project <NAME>-kf --format=json'
2022-11-15T00:19:45.295936 asmcli: -------------
2022-11-15T00:19:46.820319 asmcli: Running: '/usr/bin/gcloud projects describe <NAME>-kf --format value(projectNumber)'
2022-11-15T00:19:46.891166 asmcli: -------------
2022-11-15T00:19:48.569736 asmcli: Verified cluster is registered to <NAME>-kf
2022-11-15T00:19:48.640074 asmcli: Verifying cluster registration.
2022-11-15T00:19:52.258251 asmcli: Running: '/usr/bin/gcloud container hub memberships list --project <NAME>-kf --format=json'
2022-11-15T00:19:52.328059 asmcli: -------------
2022-11-15T00:19:53.859028 asmcli: Running: '/usr/bin/gcloud projects describe <NAME>-kf --format value(projectNumber)'
2022-11-15T00:19:53.927744 asmcli: -------------
2022-11-15T00:19:55.586348 asmcli: Verified cluster is registered to <NAME>-kf
2022-11-15T00:19:55.736564 asmcli: Checking for project <NAME>-kf...
2022-11-15T00:19:55.894010 asmcli: Running: '/usr/bin/gcloud projects describe <NAME>-kf --format=value(projectNumber)'
2022-11-15T00:19:55.965983 asmcli: -------------
2022-11-15T00:19:57.566653 asmcli: Reading labels for us-east1/kubeflow...
2022-11-15T00:19:57.721207 asmcli: Running: '/usr/bin/gcloud container clusters describe kubeflow --zone=us-east1 --project=<NAME>-kf --format=value(resourceLabels)[delimiter=","]'
2022-11-15T00:19:57.794728 asmcli: -------------
2022-11-15T00:19:59.232107 asmcli: Querying for core/account...
2022-11-15T00:19:59.384703 asmcli: Running: '/usr/bin/gcloud config get-value core/account'
2022-11-15T00:19:59.462322 asmcli: -------------
Your active configuration is: [cloudshell-17455]
2022-11-15T00:20:00.437797 asmcli: Binding jorge.bedran@<NAME>.io to cluster admin role...
2022-11-15T00:20:00.730254 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow create clusterrolebinding jorge.bedran-cluster-admin-binding --clusterrole=cluster-admin --user=jorge.bedran@<NAME>.io --dry-run=client -o yaml'
2022-11-15T00:20:00.804540 asmcli: -------------
2022-11-15T00:20:01.163245 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow apply -f -'
2022-11-15T00:20:01.235652 asmcli: -------------
clusterrolebinding.rbac.authorization.k8s.io/jorge.bedran-cluster-admin-binding configured
2022-11-15T00:20:03.242724 asmcli: Creating istio-system namespace...
2022-11-15T00:20:03.538412 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow get ns'
2022-11-15T00:20:03.610465 asmcli: -------------
2022-11-15T00:20:10.070051 asmcli: Confirming node pool requirements for <NAME>-kf/us-east1/kubeflow...
2022-11-15T00:20:10.343285 asmcli: Running: '/usr/bin/gcloud container node-pools list --project=<NAME>-kf --region us-east1 --cluster kubeflow --filter     config.machineType.split(sep="-").slice(-1:) >= 4  --format=json'
2022-11-15T00:20:10.413360 asmcli: -------------
2022-11-15T00:20:11.826590 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow version -o json'
2022-11-15T00:20:11.900308 asmcli: -------------
WARNING: version difference between client (1.25) and server (1.22) exceeds the supported minor version skew of +/-1
2022-11-15T00:20:12.409960 asmcli: Checking Istio installations...
2022-11-15T00:20:12.692833 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow get deployment -A --ignore-not-found=true'
2022-11-15T00:20:12.762877 asmcli: -------------
2022-11-15T00:20:13.569360 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow get deployment -n istio-system --ignore-not-found=true'
2022-11-15T00:20:13.642099 asmcli: -------------
2022-11-15T00:20:14.709941 asmcli: Initializing meshconfig API...
2022-11-15T00:20:14.936809 asmcli: Cluster has Membership ID kubeflow-cj432x9u in the Hub of project <NAME>-kf
2022-11-15T00:20:15.085289 asmcli: Running: 'curl --request POST --fail --data {"workloadIdentityPools":["<NAME>-kf.hub.id.goog","<NAME>-kf.svc.id.goog"]} -o /dev/null https://meshconfig.googleapis.com/v1alpha1/projects/<NAME>-kf:initialize --header X-Server-Timeout: 600 --header Content-Type: application/json -K /dev/fd/63'
2022-11-15T00:20:15.127332 asmcli: Running: '/usr/bin/gcloud --project=<NAME>-kf auth print-access-token'
2022-11-15T00:20:15.156378 asmcli: -------------
2022-11-15T00:20:15.202360 asmcli: -------------
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    74    0     3  100    71     12    306 --:--:-- --:--:-- --:--:--   320
2022-11-15T00:20:17.836163 asmcli: Binding user:jorge.bedran@<NAME>.io to required IAM roles...
2022-11-15T00:20:17.990747 asmcli: Running: '/usr/bin/gcloud projects add-iam-policy-binding <NAME>-kf --member user:jorge.bedran@<NAME>.io --role=roles/container.admin --condition=None'
2022-11-15T00:20:18.059953 asmcli: -------------
Updated IAM policy for project [<NAME>-kf].
2022-11-15T00:20:20.722208 asmcli: Running: '/usr/bin/gcloud projects add-iam-policy-binding <NAME>-kf --member user:jorge.bedran@<NAME>.io --role=roles/meshconfig.admin --condition=None'
2022-11-15T00:20:20.792717 asmcli: -------------
Updated IAM policy for project [<NAME>-kf].
2022-11-15T00:20:23.283725 asmcli: Running: '/usr/bin/gcloud projects add-iam-policy-binding <NAME>-kf --member user:jorge.bedran@<NAME>.io --role=roles/servicemanagement.admin --condition=None'
2022-11-15T00:20:23.354604 asmcli: -------------
Updated IAM policy for project [<NAME>-kf].
2022-11-15T00:20:25.940576 asmcli: Running: '/usr/bin/gcloud projects add-iam-policy-binding <NAME>-kf --member user:jorge.bedran@<NAME>.io --role=roles/serviceusage.serviceUsageAdmin --condition=None'
2022-11-15T00:20:26.011588 asmcli: -------------
Updated IAM policy for project [<NAME>-kf].
2022-11-15T00:20:28.660890 asmcli: Running: '/usr/bin/gcloud projects add-iam-policy-binding <NAME>-kf --member user:jorge.bedran@<NAME>.io --role=roles/resourcemanager.projectIamAdmin --condition=None'
2022-11-15T00:20:28.729900 asmcli: -------------
Updated IAM policy for project [<NAME>-kf].
2022-11-15T00:20:31.249203 asmcli: Running: '/usr/bin/gcloud projects add-iam-policy-binding <NAME>-kf --member user:jorge.bedran@<NAME>.io --role=roles/gkehub.admin --condition=None'
2022-11-15T00:20:31.320484 asmcli: -------------
Updated IAM policy for project [<NAME>-kf].
2022-11-15T00:20:34.461276 asmcli: Configuring kpt package...
2022-11-15T00:20:34.907142 asmcli: Running: '/usr/bin/gcloud container clusters describe kubeflow --zone=us-east1 --project=<NAME>-kf --format=value(selfLink, network)'
2022-11-15T00:20:34.978996 asmcli: -------------
2022-11-15T00:20:36.553319 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm gcloud.container.cluster kubeflow'
2022-11-15T00:20:36.626088 asmcli: -------------
asm/
set 16 field(s) of setter "gcloud.container.cluster" to value "kubeflow"
2022-11-15T00:20:38.019275 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm gcloud.core.project <NAME>-kf'
2022-11-15T00:20:38.088250 asmcli: -------------
asm/
set 20 field(s) of setter "gcloud.core.project" to value "<NAME>-kf"
asm/
set 2 field(s) of setter "gcloud.project.projectNumber" to value "<PROJECT_NUMBER>"
2022-11-15T00:20:41.075949 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm gcloud.compute.location us-east1'
2022-11-15T00:20:41.148043 asmcli: -------------
asm/
set 16 field(s) of setter "gcloud.compute.location" to value "us-east1"
2022-11-15T00:20:42.526183 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm gcloud.compute.network <NAME>-kf-default'
2022-11-15T00:20:42.596144 asmcli: -------------
asm/
set 1 field(s) of setter "gcloud.compute.network" to value "<NAME>-kf-default"
2022-11-15T00:20:43.959311 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm gcloud.project.environProjectNumber <PROJECT_NUMBER>'
2022-11-15T00:20:44.031215 asmcli: -------------
asm/
set 3 field(s) of setter "gcloud.project.environProjectNumber" to value "<PROJECT_NUMBER>"
2022-11-15T00:20:45.384477 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm anthos.servicemesh.rev asm-1141-3'
2022-11-15T00:20:45.454676 asmcli: -------------
asm/
set 2 field(s) of setter "anthos.servicemesh.rev" to value "asm-1141-3"
2022-11-15T00:20:46.814072 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm anthos.servicemesh.tag 1.14.1-asm.3'
2022-11-15T00:20:46.910035 asmcli: -------------
asm/
set 5 field(s) of setter "anthos.servicemesh.tag" to value "1.14.1-asm.3"
2022-11-15T00:20:48.282020 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm anthos.servicemesh.trustDomain <NAME>-kf.svc.id.goog'
2022-11-15T00:20:48.352541 asmcli: -------------
asm/
set 3 field(s) of setter "anthos.servicemesh.trustDomain" to value "<NAME>-kf.svc.id.goog"
2022-11-15T00:20:49.707435 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm anthos.servicemesh.tokenAudiences istio-ca,<NAME>-kf.svc.id.goog'
2022-11-15T00:20:49.778732 asmcli: -------------
asm/
set 1 field(s) of setter "anthos.servicemesh.tokenAudiences" to value "istio-ca,<NAME>-kf.svc.id.goog"
2022-11-15T00:20:51.160903 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm anthos.servicemesh.spiffeBundleEndpoints <NAME>-kf.svc.id.goog|https://storage.googleapis.com/mesh-ca-resources/spiffe_bundle.json'
2022-11-15T00:20:51.230428 asmcli: -------------
asm/
set 1 field(s) of setter "anthos.servicemesh.spiffeBundleEndpoints" to value "<NAME>-kf.svc.id.goog|https://storage.googleapis.com/mesh-ca-resources/spiffe_bundle.json"
2022-11-15T00:20:58.394745 asmcli: Running: '/usr/bin/gcloud container node-pools list --project=<NAME>-kf --region us-east1 --cluster kubeflow --filter     config.machineType.split(sep="-").slice(-1:) >= 0  --format=json'
2022-11-15T00:20:58.464038 asmcli: -------------
2022-11-15T00:20:59.718815 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm anthos.servicemesh.created-by asmcli-1.14.1-asm.3.config6'
2022-11-15T00:20:59.787916 asmcli: -------------
asm/
set 3 field(s) of setter "anthos.servicemesh.created-by" to value "asmcli-1.14.1-asm.3.config6"
2022-11-15T00:21:01.415663 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm anthos.servicemesh.idp-url https://container.googleapis.com/v1/projects/<NAME>-kf/locations/us-east1/clusters/kubeflow'
2022-11-15T00:21:01.486431 asmcli: -------------
asm/
set 2 field(s) of setter "anthos.servicemesh.idp-url" to value "https://container.googleapis.com/v1/projects/<NAME>-kf/locations/us-east1/clusters/kubeflow"
2022-11-15T00:21:03.287903 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow get deployment -n istio-system --ignore-not-found=true'
2022-11-15T00:21:03.356713 asmcli: -------------
2022-11-15T00:21:04.118556 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow -n istio-system get pod -l app=istiod -o jsonpath={.items[].spec.containers[].env[?(@.name=="REVISION")].value}'
2022-11-15T00:21:04.189777 asmcli: -------------
2022-11-15T00:21:04.953951 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow -n istio-system get configmap istio-asm-1141-3 -o jsonpath={.data.mesh}'
2022-11-15T00:21:05.025998 asmcli: -------------
2022-11-15T00:21:05.663974 asmcli: Running: '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/kpt cfg set asm anthos.servicemesh.trustDomainAliases <NAME>-kf.svc.id.goog <NAME>-kf.hub.id.goog'
2022-11-15T00:21:05.736475 asmcli: -------------
asm/
set 2 field(s) of setter "anthos.servicemesh.trustDomainAliases" to value "<NAME>-kf.svc.id.goog"
2022-11-15T00:21:07.835690 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow get ns istio-system -o json'
2022-11-15T00:21:07.921626 asmcli: -------------
2022-11-15T00:21:08.368780 asmcli: topology.istio.io/network is already set to <NAME>-kf-default and will NOT be overridden.
2022-11-15T00:21:09.342880 asmcli: Installing ASM control plane...
2022-11-15T00:21:09.617921 asmcli: Running: './istio-1.14.1-asm.3/bin/istioctl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow install -f asm/istio/istio-operator.yaml -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-legacy-default-ingressgateway.yaml00 -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-legacy-default-ingressgateway.yaml01 -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-iap-operator.yaml00 -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-iap-operator.yaml01 -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-ingressgateway-iap.yaml00 -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-ingressgateway-iap.yaml01 --set revision=asm-1141-3 --skip-confirmation'
2022-11-15T00:21:09.687437 asmcli: -------------
components.pilot.k8s.replicaCount should not be set when values.pilot.autoscaleEnabled is true
✔ Istio core installed                
✔ Istiod installed                
✔ Ingress gateways installed                
✔ Installation complete    
Thank you for installing Istio 1.14.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/yEtCbt45FZ3VoDT5A
2022-11-15T00:21:35.740842 asmcli: ...done!
2022-11-15T00:21:36.155300 asmcli: Running: './istio-1.14.1-asm.3/bin/istioctl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow profile dump -f asm/istio/istio-operator.yaml -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-legacy-default-ingressgateway.yaml00 -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-legacy-default-ingressgateway.yaml01 -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-iap-operator.yaml00 -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-iap-operator.yaml01 -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-ingressgateway-iap.yaml00 -f /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/overlay-ingressgateway-iap.yaml01'
2022-11-15T00:21:36.229662 asmcli: -------------
components.pilot.k8s.replicaCount should not be set when values.pilot.autoscaleEnabled is true
2022-11-15T00:21:36.699193 asmcli: Running: './istio-1.14.1-asm.3/bin/istioctl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow manifest generate'
2022-11-15T00:21:36.770487 asmcli: -------------
2022-11-15T00:21:37.250452 asmcli: Installing ASM CanonicalService controller in asm-system namespace...
2022-11-15T00:21:37.529794 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow apply -f asm/canonical-service/controller.yaml'
2022-11-15T00:21:37.604517 asmcli: -------------
namespace/asm-system unchanged
customresourcedefinition.apiextensions.k8s.io/canonicalservices.anthos.cloud.google.com configured
role.rbac.authorization.k8s.io/canonical-service-leader-election-role unchanged
clusterrole.rbac.authorization.k8s.io/canonical-service-manager-role configured
clusterrole.rbac.authorization.k8s.io/canonical-service-metrics-reader unchanged
serviceaccount/canonical-service-account unchanged
rolebinding.rbac.authorization.k8s.io/canonical-service-leader-election-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/canonical-service-manager-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/canonical-service-proxy-rolebinding unchanged
service/canonical-service-controller-manager-metrics-service unchanged
deployment.apps/canonical-service-controller-manager unchanged
2022-11-15T00:21:39.813544 asmcli: Waiting for deployment...
2022-11-15T00:21:40.105794 asmcli: Running: '/usr/bin/kubectl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow wait --for=condition=available --timeout=600s deployment/canonical-service-controller-manager -n asm-system'
2022-11-15T00:21:40.174922 asmcli: -------------
deployment.apps/canonical-service-controller-manager condition met
2022-11-15T00:21:40.821094 asmcli: ...done!
2022-11-15T00:21:41.025843 asmcli:
2022-11-15T00:21:41.096084 asmcli: *****************************
2022-11-15T00:21:41.367618 asmcli: Running: './istio-1.14.1-asm.3/bin/istioctl --kubeconfig /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm_kubeconfig --context gke_<NAME>-kf_us-east1_kubeflow version'
2022-11-15T00:21:41.440888 asmcli: -------------
client version: 1.14.1-asm.3
control plane version: 1.14.1
data plane version: 1.14.1-asm.3 (27 proxies)
2022-11-15T00:21:44.922162 asmcli: *****************************
2022-11-15T00:21:45.002321 asmcli: The ASM control plane installation is now complete.
2022-11-15T00:21:45.074846 asmcli: To enable automatic sidecar injection on a namespace, you can use the following command:
2022-11-15T00:21:45.146773 asmcli: kubectl label namespace <NAMESPACE> istio-injection- istio.io/rev=asm-1141-3 --overwrite
2022-11-15T00:21:45.219348 asmcli: If you use 'istioctl install' afterwards to modify this installation, you will need
2022-11-15T00:21:45.290924 asmcli: to specify the option '--set revision=asm-1141-3' to target this control plane
2022-11-15T00:21:45.360768 asmcli: instead of installing a new one.
2022-11-15T00:21:45.429798 asmcli: To finish the installation, enable Istio sidecar injection and restart your workloads.
2022-11-15T00:21:45.502208 asmcli: For more information, see:
2022-11-15T00:21:45.574347 asmcli: https://cloud.google.com/service-mesh/docs/proxy-injection
2022-11-15T00:21:45.647179 asmcli: The ASM package used for installation can be found at:
2022-11-15T00:21:45.720049 asmcli: /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm
2022-11-15T00:21:45.792919 asmcli: The version of istioctl that matches the installation can be found at:
2022-11-15T00:21:45.866385 asmcli: /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/istio-1.14.1-asm.3/bin/istioctl
2022-11-15T00:21:45.945933 asmcli: A symlink to the istioctl binary can be found at:
2022-11-15T00:21:46.021910 asmcli: /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/istioctl
2022-11-15T00:21:46.131692 asmcli: The combined configuration generated for installation can be found at:
2022-11-15T00:21:46.204735 asmcli: /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm-1141-3-manifest-raw.yaml
2022-11-15T00:21:46.282535 asmcli: The full, expanded set of kubernetes resources can be found at:
2022-11-15T00:21:46.355730 asmcli: /home/jorge_bedran/kubeflow-distribution/kubeflow/asm/package/asm-1141-3-manifest-expanded.yaml
2022-11-15T00:21:46.429694 asmcli: *****************************
2022-11-15T00:21:46.500173 asmcli: Successfully installed ASM.
make[1]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/asm'
Build directory: ./build
Component path: common/kubeflow-namespace
Apply component resources: common/kubeflow-namespace
Makefile not found, use kustomize and kubectl to apply resources.
namespace/kubeflow unchanged
Build directory: ./build
Component path: common/istio
Apply component resources: common/istio
Makefile not found, use kustomize and kubectl to apply resources.
gateway.networking.istio.io/kubeflow-gateway unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-istio-admin configured
clusterrole.rbac.authorization.k8s.io/kubeflow-istio-edit unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-istio-view unchanged
Build directory: ./build
Component path: common/config-kubeflow
Apply component resources: common/config-kubeflow
Makefile not found, use kustomize and kubectl to apply resources.
configmap/kubeflow-config unchanged
Build directory: ./build
Component path: common/kubeflow-roles
Apply component resources: common/kubeflow-roles
Makefile not found, use kustomize and kubectl to apply resources.
clusterrole.rbac.authorization.k8s.io/kubeflow-admin configured
clusterrole.rbac.authorization.k8s.io/kubeflow-edit configured
clusterrole.rbac.authorization.k8s.io/kubeflow-kubernetes-admin unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-kubernetes-edit unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-kubernetes-view unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-view configured
Build directory: ./build
Component path: common/cert-manager
Apply component resources: common/cert-manager
Found Makefile, call 'make apply' of this component Makefile.
make[1]: Entering directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/cert-manager'
# Hydrate Common cert-manager
rm -rf ./build && mkdir -p ./build
mkdir -p ./build/cert-manager
mkdir -p ./build/kubeflow-issuer
kustomize build -o ./build/cert-manager ./
kustomize build -o ./build/kubeflow-issuer ./cert-manager-1-5/cert-manager/kubeflow-issuer
# Apply Common cert-manager
kubectl --context=kubeflow apply -f ./build/cert-manager/*v1_namespace_cert-manager.yaml
namespace/cert-manager unchanged
kubectl --context=kubeflow apply -f ./build/cert-manager
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook configured
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook configured
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io configured
deployment.apps/cert-manager-cainjector unchanged
deployment.apps/cert-manager-webhook unchanged
deployment.apps/cert-manager unchanged
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving unchanged
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving configured
service/cert-manager-webhook unchanged
service/cert-manager unchanged
serviceaccount/cert-manager-cainjector unchanged
serviceaccount/cert-manager-webhook unchanged
serviceaccount/cert-manager unchanged
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection unchanged
role.rbac.authorization.k8s.io/cert-manager:leaderelection unchanged
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection unchanged
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection configured
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-edit unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-view unchanged
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews unchanged
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector unchanged
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io unchanged
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates unchanged
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests unchanged
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges unchanged
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers unchanged
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim unchanged
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers unchanged
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders unchanged
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews configured
namespace/cert-manager unchanged
kubectl --context=kubeflow -n cert-manager wait --for=condition=Available --timeout=600s deploy cert-manager-webhook
deployment.apps/cert-manager-webhook condition met
kubectl --context=kubeflow -n cert-manager wait --for=condition=Available --timeout=600s deploy cert-manager
deployment.apps/cert-manager condition met
kubectl --context=kubeflow -n cert-manager wait --for=condition=Available --timeout=600s deploy cert-manager-cainjector
deployment.apps/cert-manager-cainjector condition met
# Common kubeflow-issuer
kubectl --context=kubeflow apply -f ./build/kubeflow-issuer
clusterissuer.cert-manager.io/kubeflow-self-signing-issuer unchanged
make[1]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/cert-manager'
Build directory: ./build
Component path: contrib/metacontroller
Apply component resources: contrib/metacontroller
Makefile not found, use kustomize and kubectl to apply resources.
customresourcedefinition.apiextensions.k8s.io/compositecontrollers.metacontroller.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/controllerrevisions.metacontroller.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/decoratorcontrollers.metacontroller.k8s.io configured
statefulset.apps/metacontroller configured
clusterrolebinding.rbac.authorization.k8s.io/meta-controller-cluster-role-binding unchanged
serviceaccount/meta-controller-service unchanged
Build directory: ./build
Component path: common/iap-ingress
Apply component resources: common/iap-ingress
Found Makefile, call 'make apply' of this component Makefile.
make[1]: Entering directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/iap-ingress'
rm -rf ./build && mkdir -p ./build
kustomize build --load-restrictor LoadRestrictionsNone -o ./build ./
./check_oauth_secret.sh
kubectl --context=kubeflow -n istio-system create secret generic kubeflow-oauth --from-literal=client_id=<CLIENT_ID>.apps.googleusercontent.com --from-literal=client_secret=<SECRET> --dry-run -o yaml | kubectl apply -f -
W1115 00:22:17.675138   10695 helpers.go:663] --dry-run is deprecated and can be replaced with --dry-run=client.
secret/kubeflow-oauth configured
kubectl --context=kubeflow apply -f ./build
deployment.apps/cloud-endpoints-enabler created
deployment.apps/iap-enabler created
deployment.apps/whoami-app unchanged
statefulset.apps/backend-updater created
backendconfig.cloud.google.com/iap-backendconfig unchanged
managedcertificate.networking.gke.io/gke-certificate unchanged
ingress.networking.k8s.io/envoy-ingress configured
clusterrole.rbac.authorization.k8s.io/kf-admin-iap unchanged
clusterrolebinding.rbac.authorization.k8s.io/kf-admin-iap unchanged
configmap/envoy-config unchanged
configmap/iap-ingress-config configured
configmap/ingress-bootstrap-config unchanged
service/whoami-app unchanged
serviceaccount/kf-admin unchanged
make[1]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/iap-ingress'
Build directory: ./build
Component path: apps/admission-webhook
Apply component resources: apps/admission-webhook
Makefile not found, use kustomize and kubectl to apply resources.
mutatingwebhookconfiguration.admissionregistration.k8s.io/admission-webhook-mutating-webhook-configuration configured
customresourcedefinition.apiextensions.k8s.io/poddefaults.kubeflow.org configured
deployment.apps/admission-webhook-deployment unchanged
certificate.cert-manager.io/admission-webhook-cert unchanged
issuer.cert-manager.io/admission-webhook-selfsigned-issuer unchanged
clusterrole.rbac.authorization.k8s.io/admission-webhook-cluster-role unchanged
clusterrole.rbac.authorization.k8s.io/admission-webhook-kubeflow-poddefaults-admin configured
clusterrole.rbac.authorization.k8s.io/admission-webhook-kubeflow-poddefaults-edit configured
clusterrole.rbac.authorization.k8s.io/admission-webhook-kubeflow-poddefaults-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/admission-webhook-cluster-role-binding unchanged
service/admission-webhook-service unchanged
serviceaccount/admission-webhook-service-account unchanged
Build directory: ./build
Component path: apps/profiles
Apply component resources: apps/profiles
Makefile not found, use kustomize and kubectl to apply resources.
customresourcedefinition.apiextensions.k8s.io/profiles.kubeflow.org configured
deployment.apps/profiles-deployment unchanged
virtualservice.networking.istio.io/profiles-kfam unchanged
clusterrolebinding.rbac.authorization.k8s.io/profiles-cluster-rolebinding unchanged
role.rbac.authorization.k8s.io/profiles-leader-election-role unchanged
rolebinding.rbac.authorization.k8s.io/profiles-leader-election-rolebinding unchanged
authorizationpolicy.security.istio.io/profiles-kfam unchanged
configmap/namespace-labels-data-d9t4922m4c unchanged
configmap/profiles-config-4cgmc4t944 unchanged
service/profiles-kfam unchanged
serviceaccount/profiles-controller-service-account unchanged
Build directory: ./build
Component path: apps/centraldashboard
Apply component resources: apps/centraldashboard
Makefile not found, use kustomize and kubectl to apply resources.
2022/11/15 00:22:28 well-defined vars that were never replaced: CD_REGISTRATION_FLOW,CD_USERID_HEADER,CD_USERID_PREFIX
deployment.apps/centraldashboard unchanged
virtualservice.networking.istio.io/centraldashboard unchanged
clusterrole.rbac.authorization.k8s.io/centraldashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/centraldashboard unchanged
role.rbac.authorization.k8s.io/centraldashboard unchanged
rolebinding.rbac.authorization.k8s.io/centraldashboard unchanged
authorizationpolicy.security.istio.io/central-dashboard unchanged
configmap/centraldashboard-config unchanged
configmap/centraldashboard-parameters unchanged
service/centraldashboard unchanged
serviceaccount/centraldashboard unchanged
Build directory: ./build
Component path: apps/jupyter
Apply component resources: apps/jupyter
Makefile not found, use kustomize and kubectl to apply resources.
2022/11/15 00:22:30 well-defined vars that were never replaced: JWA_USERID_HEADER,JWA_USERID_PREFIX
customresourcedefinition.apiextensions.k8s.io/notebooks.kubeflow.org configured
deployment.apps/jupyter-web-app-deployment unchanged
deployment.apps/notebook-controller-deployment unchanged
virtualservice.networking.istio.io/jupyter-web-app-jupyter-web-app unchanged
clusterrole.rbac.authorization.k8s.io/jupyter-web-app-cluster-role unchanged
clusterrole.rbac.authorization.k8s.io/jupyter-web-app-kubeflow-notebook-ui-admin configured
clusterrole.rbac.authorization.k8s.io/jupyter-web-app-kubeflow-notebook-ui-edit unchanged
clusterrole.rbac.authorization.k8s.io/jupyter-web-app-kubeflow-notebook-ui-view unchanged
clusterrole.rbac.authorization.k8s.io/notebook-controller-kubeflow-notebooks-admin configured
clusterrole.rbac.authorization.k8s.io/notebook-controller-kubeflow-notebooks-edit unchanged
clusterrole.rbac.authorization.k8s.io/notebook-controller-kubeflow-notebooks-view unchanged
clusterrole.rbac.authorization.k8s.io/notebook-controller-role configured
clusterrolebinding.rbac.authorization.k8s.io/jupyter-web-app-cluster-role-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/notebook-controller-role-binding unchanged
role.rbac.authorization.k8s.io/jupyter-web-app-jupyter-notebook-role unchanged
role.rbac.authorization.k8s.io/notebook-controller-leader-election-role unchanged
rolebinding.rbac.authorization.k8s.io/jupyter-web-app-jupyter-notebook-role-binding unchanged
rolebinding.rbac.authorization.k8s.io/notebook-controller-leader-election-rolebinding unchanged
configmap/jupyter-web-app-config-c765bftc87 unchanged
configmap/jupyter-web-app-logos unchanged
configmap/jupyter-web-app-parameters-42k97gcbmb unchanged
configmap/notebook-controller-config-m44cmb547t unchanged
service/jupyter-web-app-service unchanged
service/notebook-controller-service unchanged
serviceaccount/jupyter-web-app-service-account unchanged
serviceaccount/notebook-controller-service-account unchanged
Build directory: ./build
Component path: apps/volumes-web-app
Apply component resources: apps/volumes-web-app
Makefile not found, use kustomize and kubectl to apply resources.
2022/11/15 00:22:35 well-defined vars that were never replaced: VWA_USERID_HEADER,VWA_USERID_PREFIX
deployment.apps/volumes-web-app-deployment unchanged
virtualservice.networking.istio.io/volumes-web-app-volumes-web-app unchanged
clusterrole.rbac.authorization.k8s.io/volumes-web-app-cluster-role unchanged
clusterrole.rbac.authorization.k8s.io/volumes-web-app-kubeflow-volume-ui-admin configured
clusterrole.rbac.authorization.k8s.io/volumes-web-app-kubeflow-volume-ui-edit unchanged
clusterrole.rbac.authorization.k8s.io/volumes-web-app-kubeflow-volume-ui-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/volumes-web-app-cluster-role-binding unchanged
configmap/volumes-web-app-parameters-57h65c44mg unchanged
service/volumes-web-app-service unchanged
serviceaccount/volumes-web-app-service-account unchanged
Build directory: ./build
Component path: apps/tensorboard
Apply component resources: apps/tensorboard
Makefile not found, use kustomize and kubectl to apply resources.
2022/11/15 00:22:37 well-defined vars that were never replaced: TWA_USERID_HEADER,TWA_USERID_PREFIX
customresourcedefinition.apiextensions.k8s.io/tensorboards.tensorboard.kubeflow.org configured
deployment.apps/tensorboard-controller-deployment unchanged
deployment.apps/tensorboards-web-app-deployment unchanged
virtualservice.networking.istio.io/tensorboards-web-app-tensorboards-web-app unchanged
clusterrole.rbac.authorization.k8s.io/tensorboard-controller-manager-role configured
clusterrole.rbac.authorization.k8s.io/tensorboard-controller-metrics-reader unchanged
clusterrole.rbac.authorization.k8s.io/tensorboard-controller-proxy-role unchanged
clusterrole.rbac.authorization.k8s.io/tensorboards-web-app-cluster-role unchanged
clusterrole.rbac.authorization.k8s.io/tensorboards-web-app-kubeflow-tensorboard-ui-admin configured
clusterrole.rbac.authorization.k8s.io/tensorboards-web-app-kubeflow-tensorboard-ui-edit unchanged
clusterrole.rbac.authorization.k8s.io/tensorboards-web-app-kubeflow-tensorboard-ui-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/tensorboard-controller-manager-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/tensorboard-controller-proxy-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/tensorboards-web-app-cluster-role-binding unchanged
role.rbac.authorization.k8s.io/tensorboard-controller-leader-election-role unchanged
rolebinding.rbac.authorization.k8s.io/tensorboard-controller-leader-election-rolebinding unchanged
configmap/tensorboard-controller-config-dg89gdkk47 unchanged
configmap/tensorboards-web-app-parameters-642bbg7t66 unchanged
service/tensorboard-controller-controller-manager-metrics-service unchanged
service/tensorboards-web-app-service unchanged
serviceaccount/tensorboard-controller-controller-manager unchanged
serviceaccount/tensorboards-web-app-service-account unchanged
Build directory: ./build
Component path: apps/training-operator
Apply component resources: apps/training-operator
Makefile not found, use kustomize and kubectl to apply resources.
customresourcedefinition.apiextensions.k8s.io/mpijobs.kubeflow.org configured
customresourcedefinition.apiextensions.k8s.io/mxjobs.kubeflow.org configured
customresourcedefinition.apiextensions.k8s.io/pytorchjobs.kubeflow.org configured
customresourcedefinition.apiextensions.k8s.io/tfjobs.kubeflow.org configured
customresourcedefinition.apiextensions.k8s.io/xgboostjobs.kubeflow.org configured
deployment.apps/training-operator unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-training-admin configured
clusterrole.rbac.authorization.k8s.io/kubeflow-training-edit unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-training-view unchanged
clusterrole.rbac.authorization.k8s.io/training-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/training-operator unchanged
service/training-operator unchanged
serviceaccount/training-operator unchanged
Build directory: ./build
Component path: apps/pipelines
Apply component resources: apps/pipelines
Found Makefile, call 'make apply' of this component Makefile.
make[1]: Entering directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/apps/pipelines'
rm -rf ./build
mkdir -p ./build/k8s
mkdir -p ./build/cnrm
# Hydrate GCP config connector resources
kustomize build -o ./build/cnrm ./cnrm
# Hydrate Kubernetes resources
kustomize build -o ./build/k8s .
2022/11/15 00:22:50 well-defined vars that were never replaced: kfp-app-name,kfp-app-version
kubectl --context=kubeflow-mc apply -f ./build/cnrm
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-cloudsql-client unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-cloudsql-proxy-wi-user unchanged
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-gcs-wi-user unchanged
iamserviceaccount.iam.cnrm.cloud.google.com/kubeflow-sql unchanged
sqluser.sql.cnrm.cloud.google.com/kubeflow-kfp-root unchanged
storagebucketaccesscontrol.storage.cnrm.cloud.google.com/kubeflow-kfp-gcs-acl unchanged
# Wait for all Google Cloud resources to get created and become ready.
# If this takes long, you can view status by:
kubectl --context=kubeflow-mc get -f ./build/cnrm
# or:
cd kubeflow/apps/pipelines && make status-cnrm
# For resources with READY=False, debug by:
kubectl --context=kubeflow-mc -n <NAME>-kf describe <KIND>/<NAME>

kubectl --context=kubeflow-mc wait --for=condition=Ready --timeout=100s -f ./build/cnrm \
        || kubectl --context=kubeflow-mc get -f ./build/cnrm
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-cloudsql-client condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-cloudsql-proxy-wi-user condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-gcs-wi-user condition met
iamserviceaccount.iam.cnrm.cloud.google.com/kubeflow-sql condition met
sqluser.sql.cnrm.cloud.google.com/kubeflow-kfp-root condition met
storagebucketaccesscontrol.storage.cnrm.cloud.google.com/kubeflow-kfp-gcs-acl condition met
kubectl --context=kubeflow-mc wait --for=condition=Ready --timeout=500s -f ./build/cnrm
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-cloudsql-client condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-cloudsql-proxy-wi-user condition met
iampolicymember.iam.cnrm.cloud.google.com/kubeflow-kfp-gcs-wi-user condition met
iamserviceaccount.iam.cnrm.cloud.google.com/kubeflow-sql condition met
sqluser.sql.cnrm.cloud.google.com/kubeflow-kfp-root condition met
storagebucketaccesscontrol.storage.cnrm.cloud.google.com/kubeflow-kfp-gcs-acl condition met
kubectl --context=kubeflow apply -f ./build/k8s
customresourcedefinition.apiextensions.k8s.io/clusterworkflowtemplates.argoproj.io unchanged
customresourcedefinition.apiextensions.k8s.io/cronworkflows.argoproj.io unchanged
customresourcedefinition.apiextensions.k8s.io/scheduledworkflows.kubeflow.org unchanged
customresourcedefinition.apiextensions.k8s.io/viewers.kubeflow.org unchanged
customresourcedefinition.apiextensions.k8s.io/workfloweventbindings.argoproj.io unchanged
customresourcedefinition.apiextensions.k8s.io/workflows.argoproj.io unchanged
customresourcedefinition.apiextensions.k8s.io/workflowtaskresults.argoproj.io unchanged
customresourcedefinition.apiextensions.k8s.io/workflowtasksets.argoproj.io unchanged
customresourcedefinition.apiextensions.k8s.io/workflowtemplates.argoproj.io unchanged
deployment.apps/cache-deployer-deployment unchanged
deployment.apps/cache-server configured
deployment.apps/cloudsqlproxy configured
deployment.apps/kubeflow-pipelines-profile-controller unchanged
deployment.apps/metadata-envoy-deployment unchanged
deployment.apps/metadata-grpc-deployment unchanged
deployment.apps/metadata-writer configured
deployment.apps/minio unchanged
deployment.apps/ml-pipeline-persistenceagent configured
deployment.apps/ml-pipeline-scheduledworkflow configured
deployment.apps/ml-pipeline-ui unchanged
deployment.apps/ml-pipeline-viewer-crd configured
deployment.apps/ml-pipeline-visualizationserver unchanged
deployment.apps/ml-pipeline unchanged
deployment.apps/workflow-controller unchanged
compositecontroller.metacontroller.k8s.io/kubeflow-pipelines-profile-controller unchanged
destinationrule.networking.istio.io/metadata-grpc-service unchanged
destinationrule.networking.istio.io/ml-pipeline-minio unchanged
destinationrule.networking.istio.io/ml-pipeline-mysql unchanged
destinationrule.networking.istio.io/ml-pipeline-ui unchanged
destinationrule.networking.istio.io/ml-pipeline-visualizationserver unchanged
destinationrule.networking.istio.io/ml-pipeline unchanged
virtualservice.networking.istio.io/metadata-grpc unchanged
virtualservice.networking.istio.io/ml-pipeline-ui unchanged
clusterrole.rbac.authorization.k8s.io/aggregate-to-kubeflow-pipelines-edit unchanged
clusterrole.rbac.authorization.k8s.io/aggregate-to-kubeflow-pipelines-view unchanged
clusterrole.rbac.authorization.k8s.io/argo-aggregate-to-admin unchanged
clusterrole.rbac.authorization.k8s.io/argo-aggregate-to-edit unchanged
clusterrole.rbac.authorization.k8s.io/argo-aggregate-to-view unchanged
clusterrole.rbac.authorization.k8s.io/argo-cluster-role unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-pipelines-cache-deployer-clusterrole unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-pipelines-cache-role unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-pipelines-edit configured
clusterrole.rbac.authorization.k8s.io/kubeflow-pipelines-metadata-writer-role unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-pipelines-view configured
clusterrole.rbac.authorization.k8s.io/ml-pipeline-persistenceagent-role unchanged
clusterrole.rbac.authorization.k8s.io/ml-pipeline-scheduledworkflow-role unchanged
clusterrole.rbac.authorization.k8s.io/ml-pipeline-ui unchanged
clusterrole.rbac.authorization.k8s.io/ml-pipeline-viewer-controller-role unchanged
clusterrole.rbac.authorization.k8s.io/ml-pipeline unchanged
clusterrolebinding.rbac.authorization.k8s.io/argo-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubeflow-pipelines-cache-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubeflow-pipelines-cache-deployer-clusterrolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubeflow-pipelines-metadata-writer-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/ml-pipeline-persistenceagent-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/ml-pipeline-scheduledworkflow-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/ml-pipeline-ui unchanged
clusterrolebinding.rbac.authorization.k8s.io/ml-pipeline-viewer-crd-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/ml-pipeline unchanged
role.rbac.authorization.k8s.io/argo-role unchanged
role.rbac.authorization.k8s.io/kubeflow-pipelines-cache-deployer-role unchanged
role.rbac.authorization.k8s.io/kubeflow-pipelines-cache-role unchanged
role.rbac.authorization.k8s.io/kubeflow-pipelines-metadata-writer-role unchanged
role.rbac.authorization.k8s.io/ml-pipeline-persistenceagent-role unchanged
role.rbac.authorization.k8s.io/ml-pipeline-scheduledworkflow-role unchanged
role.rbac.authorization.k8s.io/ml-pipeline-ui unchanged
role.rbac.authorization.k8s.io/ml-pipeline-viewer-controller-role unchanged
role.rbac.authorization.k8s.io/ml-pipeline unchanged
role.rbac.authorization.k8s.io/pipeline-runner unchanged
rolebinding.rbac.authorization.k8s.io/argo-binding unchanged
rolebinding.rbac.authorization.k8s.io/kubeflow-pipelines-cache-binding unchanged
rolebinding.rbac.authorization.k8s.io/kubeflow-pipelines-cache-deployer-rolebinding unchanged
rolebinding.rbac.authorization.k8s.io/kubeflow-pipelines-metadata-writer-binding unchanged
rolebinding.rbac.authorization.k8s.io/ml-pipeline-persistenceagent-binding unchanged
rolebinding.rbac.authorization.k8s.io/ml-pipeline-scheduledworkflow-binding unchanged
rolebinding.rbac.authorization.k8s.io/ml-pipeline-ui unchanged
rolebinding.rbac.authorization.k8s.io/ml-pipeline-viewer-crd-binding unchanged
rolebinding.rbac.authorization.k8s.io/ml-pipeline unchanged
rolebinding.rbac.authorization.k8s.io/pipeline-runner-binding unchanged
priorityclass.scheduling.k8s.io/workflow-controller unchanged
authorizationpolicy.security.istio.io/metadata-grpc-service unchanged
authorizationpolicy.security.istio.io/minio-service unchanged
authorizationpolicy.security.istio.io/ml-pipeline-ui unchanged
authorizationpolicy.security.istio.io/ml-pipeline-visualizationserver unchanged
authorizationpolicy.security.istio.io/ml-pipeline unchanged
authorizationpolicy.security.istio.io/mysql unchanged
authorizationpolicy.security.istio.io/service-cache-server unchanged
configmap/kfp-launcher unchanged
configmap/kubeflow-pipelines-profile-controller-code-hdk828hd6c unchanged
configmap/kubeflow-pipelines-profile-controller-env-5252m69c4c unchanged
configmap/metadata-grpc-configmap unchanged
configmap/ml-pipeline-ui-configmap unchanged
configmap/persistenceagent-config-hkgkmd64bh unchanged
configmap/pipeline-api-server-config-dc9hkg52h6 unchanged
configmap/pipeline-install-config unchanged
configmap/workflow-controller-configmap unchanged
secret/mlpipeline-minio-artifact unchanged
secret/mysql-secret configured
service/cache-server unchanged
service/kubeflow-pipelines-profile-controller unchanged
service/metadata-envoy-service unchanged
service/metadata-grpc-service unchanged
service/minio-service unchanged
service/ml-pipeline-ui unchanged
service/ml-pipeline-visualizationserver unchanged
service/ml-pipeline unchanged
service/mysql unchanged
service/workflow-controller-metrics unchanged
serviceaccount/argo unchanged
serviceaccount/kubeflow-pipelines-cache-deployer-sa unchanged
serviceaccount/kubeflow-pipelines-cache unchanged
serviceaccount/kubeflow-pipelines-cloudsql-proxy unchanged
serviceaccount/kubeflow-pipelines-container-builder unchanged
serviceaccount/kubeflow-pipelines-metadata-writer unchanged
serviceaccount/kubeflow-pipelines-minio-gcs-gateway unchanged
serviceaccount/kubeflow-pipelines-viewer unchanged
serviceaccount/metadata-grpc-server unchanged
serviceaccount/ml-pipeline-persistenceagent unchanged
serviceaccount/ml-pipeline-scheduledworkflow unchanged
serviceaccount/ml-pipeline-ui unchanged
serviceaccount/ml-pipeline-viewer-crd-service-account unchanged
serviceaccount/ml-pipeline-visualizationserver unchanged
serviceaccount/ml-pipeline unchanged
serviceaccount/pipeline-runner unchanged
make[1]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/apps/pipelines'
Build directory: ./build
Component path: common/knative
Apply component resources: common/knative
Found Makefile, call 'make apply' of this component Makefile.
make[1]: Entering directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/knative'
rm -rf ./build && mkdir -p ./build
kustomize build -o ./build ./
kubectl --context=kubeflow apply -f ././build/*v1_namespace_knative-serving.yaml
namespace/knative-serving unchanged
kubectl --context=kubeflow apply -f ././build/*v1_customresourcedefinition_certificates.networking.internal.knative.dev.yaml
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
kubectl --context=kubeflow apply -f ././build/*v1_customresourcedefinition_configurations.serving.knative.dev.yaml
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
kubectl --context=kubeflow apply -f ././build/*v1_customresourcedefinition_images.caching.internal.knative.dev.yaml
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
kubectl --context=kubeflow apply -f ././build/*v1_customresourcedefinition_ingresses.networking.internal.knative.dev.yaml
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
kubectl --context=kubeflow apply -f ././build/*v1_customresourcedefinition_metrics.autoscaling.internal.knative.dev.yaml
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
kubectl --context=kubeflow apply -f ././build/*v1_customresourcedefinition_podautoscalers.autoscaling.internal.knative.dev.yaml
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
kubectl --context=kubeflow apply -f ././build/*v1_customresourcedefinition_revisions.serving.knative.dev.yaml
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
kubectl --context=kubeflow apply -f ././build/*v1_customresourcedefinition_routes.serving.knative.dev.yaml
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
kubectl --context=kubeflow apply -f ././build/*v1_customresourcedefinition_serverlessservices.networking.internal.knative.dev.yaml
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
kubectl --context=kubeflow apply -f ././build/*v1_customresourcedefinition_services.serving.knative.dev.yaml
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
kubectl --context=kubeflow apply --recursive=true -f ./build
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.domainmapping.serving.knative.dev configured
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.istio.networking.internal.knative.dev unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev configured
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.istio.networking.internal.knative.dev unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.domainmapping.serving.knative.dev configured
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev configured
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
service/knative-local-gateway unchanged
deployment.apps/activator configured
deployment.apps/autoscaler configured
deployment.apps/controller configured
deployment.apps/domain-mapping unchanged
deployment.apps/domainmapping-webhook unchanged
deployment.apps/net-istio-controller unchanged
deployment.apps/net-istio-webhook unchanged
deployment.apps/webhook unchanged
horizontalpodautoscaler.autoscaling/activator unchanged
horizontalpodautoscaler.autoscaling/webhook unchanged
image.caching.internal.knative.dev/queue-proxy unchanged
gateway.networking.istio.io/knative-ingress-gateway unchanged
gateway.networking.istio.io/knative-local-gateway unchanged
poddisruptionbudget.policy/activator-pdb configured
poddisruptionbudget.policy/webhook-pdb configured
peerauthentication.security.istio.io/domainmapping-webhook unchanged
peerauthentication.security.istio.io/net-istio-webhook unchanged
peerauthentication.security.istio.io/webhook unchanged
configmap/config-autoscaler unchanged
configmap/config-defaults unchanged
configmap/config-deployment unchanged
configmap/config-domain unchanged
configmap/config-features unchanged
configmap/config-gc unchanged
configmap/config-istio unchanged
configmap/config-leader-election unchanged
configmap/config-logging unchanged
configmap/config-network unchanged
configmap/config-observability unchanged
configmap/config-tracing unchanged
secret/domainmapping-webhook-certs unchanged
secret/net-istio-webhook-certs unchanged
secret/webhook-certs unchanged
service/activator-service unchanged
service/autoscaler unchanged
service/controller unchanged
service/domainmapping-webhook unchanged
service/net-istio-webhook unchanged
service/webhook unchanged
serviceaccount/controller unchanged
clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/knative-serving-admin unchanged
clusterrole.rbac.authorization.k8s.io/knative-serving-aggregated-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/knative-serving-core unchanged
clusterrole.rbac.authorization.k8s.io/knative-serving-istio unchanged
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin unchanged
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit unchanged
clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view unchanged
clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-addressable-resolver unchanged
clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin unchanged
namespace/knative-serving unchanged
make[1]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/knative'
Build directory: ./build
Component path: contrib/kserve
Apply component resources: contrib/kserve
Found Makefile, call 'make apply' of this component Makefile.
make[1]: Entering directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/contrib/kserve'
rm -rf ./build && mkdir -p ./build
kustomize build -o ./build ./
2022/11/15 00:23:49 well-defined vars that were never replaced: ingressGateway
# Apply App kserve
# To resolve https://github.com/kubeflow/gcp-blueprints/issues/384,
# we apply runtime manifests after the corresponding CRDs become available
# 1. Move runtime manifests into a separate subdirectory
mkdir ./build/runtimes
mv ./build/serving*clusterservingruntime* ./build/runtimes/
# 2. Apply the remaining manifests
kubectl --context=kubeflow apply -f ./build
mutatingwebhookconfiguration.admissionregistration.k8s.io/inferenceservice.serving.kserve.io configured
validatingwebhookconfiguration.admissionregistration.k8s.io/inferenceservice.serving.kserve.io configured
validatingwebhookconfiguration.admissionregistration.k8s.io/trainedmodel.serving.kserve.io configured
customresourcedefinition.apiextensions.k8s.io/clusterservingruntimes.serving.kserve.io configured
customresourcedefinition.apiextensions.k8s.io/inferenceservices.serving.kserve.io configured
customresourcedefinition.apiextensions.k8s.io/servingruntimes.serving.kserve.io configured
customresourcedefinition.apiextensions.k8s.io/trainedmodels.serving.kserve.io configured
deployment.apps/kserve-models-web-app configured
statefulset.apps/kserve-controller-manager unchanged
certificate.cert-manager.io/serving-cert unchanged
issuer.cert-manager.io/selfsigned-issuer unchanged
virtualservice.networking.istio.io/kserve-models-web-app unchanged
clusterrole.rbac.authorization.k8s.io/kserve-manager-role configured
clusterrole.rbac.authorization.k8s.io/kserve-models-web-app-cluster-role unchanged
clusterrole.rbac.authorization.k8s.io/kserve-proxy-role unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-kserve-admin configured
clusterrole.rbac.authorization.k8s.io/kubeflow-kserve-edit unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-kserve-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/kserve-manager-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/kserve-models-web-app-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/kserve-proxy-rolebinding unchanged
role.rbac.authorization.k8s.io/kserve-leader-election-role unchanged
rolebinding.rbac.authorization.k8s.io/kserve-leader-election-rolebinding unchanged
authorizationpolicy.security.istio.io/kserve-models-web-app unchanged
configmap/inferenceservice-config unchanged
configmap/kserve-config unchanged
configmap/kserve-models-web-app-config-87f7mg8b2f unchanged
secret/kserve-webhook-server-secret unchanged
service/kserve-controller-manager-metrics-service unchanged
service/kserve-controller-manager-service unchanged
service/kserve-models-web-app unchanged
service/kserve-webhook-server-service unchanged
serviceaccount/kserve-controller-manager unchanged
serviceaccount/kserve-models-web-app unchanged
# 3. Wait until CRDs become available or exit in 30s
kubectl wait --for condition=established --timeout=30s crd/clusterservingruntimes.serving.kserve.io
customresourcedefinition.apiextensions.k8s.io/clusterservingruntimes.serving.kserve.io condition met
# 4. Apply runtime manifests
kubectl --context=kubeflow apply -f ./build/runtimes
clusterservingruntime.serving.kserve.io/kserve-lgbserver unchanged
clusterservingruntime.serving.kserve.io/kserve-mlserver unchanged
clusterservingruntime.serving.kserve.io/kserve-paddleserver unchanged
clusterservingruntime.serving.kserve.io/kserve-pmmlserver unchanged
clusterservingruntime.serving.kserve.io/kserve-sklearnserver unchanged
clusterservingruntime.serving.kserve.io/kserve-tensorflow-serving unchanged
clusterservingruntime.serving.kserve.io/kserve-torchserve unchanged
clusterservingruntime.serving.kserve.io/kserve-tritonserver unchanged
clusterservingruntime.serving.kserve.io/kserve-xgbserver unchanged
# Patch knative configmap
kubectl --context=kubeflow patch cm config-domain --namespace knative-serving --type merge -p '{"data":{"kubeflow.endpoints.<NAME>-kf.cloud.goog": ""}}'
configmap/config-domain patched (no change)
make[1]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/contrib/kserve'
Build directory: ./build
Component path: apps/katib
Apply component resources: apps/katib
Makefile not found, use kustomize and kubectl to apply resources.
mutatingwebhookconfiguration.admissionregistration.k8s.io/katib.kubeflow.org configured
validatingwebhookconfiguration.admissionregistration.k8s.io/katib.kubeflow.org configured
customresourcedefinition.apiextensions.k8s.io/experiments.kubeflow.org unchanged
customresourcedefinition.apiextensions.k8s.io/suggestions.kubeflow.org unchanged
customresourcedefinition.apiextensions.k8s.io/trials.kubeflow.org unchanged
deployment.apps/katib-controller unchanged
deployment.apps/katib-db-manager unchanged
deployment.apps/katib-mysql unchanged
deployment.apps/katib-ui unchanged
certificate.cert-manager.io/katib-webhook-cert unchanged
issuer.cert-manager.io/katib-selfsigned-issuer unchanged
virtualservice.networking.istio.io/katib-ui unchanged
clusterrole.rbac.authorization.k8s.io/katib-controller unchanged
clusterrole.rbac.authorization.k8s.io/katib-ui unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-katib-admin configured
clusterrole.rbac.authorization.k8s.io/kubeflow-katib-edit unchanged
clusterrole.rbac.authorization.k8s.io/kubeflow-katib-view unchanged
clusterrolebinding.rbac.authorization.k8s.io/katib-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/katib-ui unchanged
configmap/katib-config unchanged
configmap/trial-templates unchanged
persistentvolumeclaim/katib-mysql unchanged
secret/katib-mysql-secrets unchanged
service/katib-controller unchanged
service/katib-db-manager unchanged
service/katib-mysql unchanged
service/katib-ui unchanged
serviceaccount/katib-controller unchanged
serviceaccount/katib-ui unchanged
make -C common/iap-ingress pod-reset
make[1]: Entering directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/iap-ingress'
kubectl wait deployments/iap-enabler -n istio-system --for=condition=available --timeout=30s
deployment.apps/iap-enabler condition met
kubectl wait deployments/cloud-endpoints-enabler -n istio-system --for=condition=available --timeout=30s
deployment.apps/cloud-endpoints-enabler condition met
kubectl rollout status --watch --timeout=30s -n istio-system statefulset/backend-updater
partitioned roll out complete: 1 new pods have been updated...
sleep 90
# Kick the IAP pod because we will reset the policy and need to patch it.
# TODO(https://github.com/kubeflow/gcp-blueprints/issues/14)
kubectl --context=kubeflow -n istio-system delete deployment iap-enabler
deployment.apps "iap-enabler" deleted
# Kick the backend updater pod, because information might be outdated after the apply.
# https://github.com/kubeflow/gcp-blueprints/issues/160
kubectl --context=kubeflow -n istio-system delete statefulset backend-updater
statefulset.apps "backend-updater" deleted
# Kick the cloud-endpoints-enabler deployment
kubectl --context=kubeflow -n istio-system delete deployment cloud-endpoints-enabler
deployment.apps "cloud-endpoints-enabler" deleted
make[1]: Leaving directory '/home/jorge_bedran/kubeflow-distribution/kubeflow/common/iap-ingress'

The values are correct; I've just substituted them out for public use.

There were a few more lines, but they all said `unchanged`.

gkcalat commented 1 year ago

Thanks! Can we try checking the log of the cloud-endpoints-enabler pod?

It might be better to change the variable, remove this line, and run the following (this will create a new Kubeflow cluster):

mkdir temp
cd temp
git clone https://github.com/GoogleCloudPlatform/kubeflow-distribution.git 
cd kubeflow-distribution
git checkout tags/v1.6.1 -b v1.6.1
cd kubeflow

gcloud services enable \
  serviceusage.googleapis.com \
  compute.googleapis.com \
  container.googleapis.com \
  iam.googleapis.com \
  servicemanagement.googleapis.com \
  cloudresourcemanager.googleapis.com \
  ml.googleapis.com \
  iap.googleapis.com \
  sqladmin.googleapis.com \
  meshconfig.googleapis.com \
  krmapihosting.googleapis.com \
  servicecontrol.googleapis.com \
  endpoints.googleapis.com

bash ./pull-upstream.sh
bash ./kpt-set.sh
make apply-kcc
make apply

Once deployed you can get the logs:

kubectl get pods --namespace istio-system    # find the pod name
kubectl logs <POD_NAME> --namespace istio-system
JPBedran commented 1 year ago

Hey @gkcalat, no problem, I'm doing all that now. Just as a note: in my earlier attempts I commented out the delete of cloud-endpoints-enabler, and running `kubectl -n istio-system logs deployment/cloud-endpoints-enabler` I get `"GET /healthz HTTP/1.1" 200 -`.

I'll redo it now and let you know what I get from the new deployment.

JPBedran commented 1 year ago

Hey @gkcalat, done, and now I see the difference hahaha.

Running `kubectl logs cloud-endpoints-enabler-575954dd89-58x98 -n istio-system` gives:

+ '[' -z istio-system ']'
+ '[' -z istio-ingressgateway ']'
+ '[' -z envoy-ingress ']'
+ '[' -z kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog ']'
+++ dirname /var/envoy-config/setup_cloudendpoints.sh
++ cd /var/envoy-config
++ pwd
+ __dir=/var/envoy-config
++ curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/project/project-id
+ PROJECT=<PROJECT>-kf
+ '[' -z <PROJECT>-kf ']'
++ curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/project/numeric-project-id
+ PROJECT_NUM=<PROJECT_ID>
+ '[' -z <PROJECT_ID> ']'
+ '[' '!' -z '' ']'
+ gcloud config list
[component_manager]
disable_update_check = true
[core]
account = kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com
disable_usage_reporting = true
project = <PROJECT>-kf
[metrics]
environment = github_docker_image

Your active configuration is: [default]
+ gcloud auth list
                   Credentialed Accounts
ACTIVE  ACCOUNT
*       kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com

To set the active account, run:
    $ gcloud config set account `ACCOUNT`

+ true
+ set_endpoint
++ kubectl --namespace=istio-system get svc istio-ingressgateway -o 'jsonpath={.spec.ports[?(@.name=="http2")].nodePort}'
+ NODE_PORT=31224
+ echo '[DEBUG] node port is 31224'
+ BACKEND_NAME=
+ [[ -z '' ]]
[DEBUG] node port is 31224
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
[DEBUG] backend name is 
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
[DEBUG] fetching backends info with envoy-ingress: 
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
[DEBUG] backend name is 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
[DEBUG] backend name is 
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is 
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
[DEBUG] fetching backends info with envoy-ingress: 
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
[DEBUG] backend name is 
+ sleep 2
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: 
+ BACKENDS=
+ echo '[DEBUG] fetching backends info with envoy-ingress: '
++ echo
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=
+ echo '[DEBUG] backend name is '
+ sleep 2
[DEBUG] backend name is 
[... the poll loop above repeats every 2 seconds with an empty backends annotation until it is populated ...]
[DEBUG] fetching backends info with envoy-ingress: {"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}
+ BACKENDS='{"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}'
+ echo '[DEBUG] fetching backends info with envoy-ingress: {"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}'
++ echo '{"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}'
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=k8s-be-31224--cf59b45c6c98a43f
+ echo '[DEBUG] backend name is k8s-be-31224--cf59b45c6c98a43f'
+ sleep 2
[DEBUG] backend name is k8s-be-31224--cf59b45c6c98a43f
+ [[ -z k8s-be-31224--cf59b45c6c98a43f ]]
+ BACKEND_ID=
+ [[ -z '' ]]
++ gcloud compute --project=<PROJECT>-kf backend-services list --filter=name~k8s-be-31224--cf59b45c6c98a43f '--format=value(id)'
[DEBUG] Waiting for backend id PROJECT=<PROJECT>-kf NAMESPACE=istio-system SERVICE=istio-ingressgateway filter=name~k8s-be-31224--cf59b45c6c98a43f
+ BACKEND_ID=1823365163963571188
+ echo '[DEBUG] Waiting for backend id PROJECT=<PROJECT>-kf NAMESPACE=istio-system SERVICE=istio-ingressgateway filter=name~k8s-be-31224--cf59b45c6c98a43f'
+ sleep 2
+ [[ -z 1823365163963571188 ]]
+ echo BACKEND_ID=1823365163963571188
+ JWT_AUDIENCE=/projects/<PROJECT_ID>/global/backendServices/1823365163963571188
BACKEND_ID=1823365163963571188
++ kubectl get ingress --all-namespaces
++ grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)'
+ INGRESS_TARGET_IP=34.111.217.6
+ echo '[DEBUG] ENDPOINT_NAME = kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog'
+ echo '[DEBUG] INGRESS_TARGET_IP = 34.111.217.6'
+ echo '[DEBUG] JWT_AUDIENCE = /projects/<PROJECT_ID>/global/backendServices/1823365163963571188'
[DEBUG] ENDPOINT_NAME = kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
[DEBUG] INGRESS_TARGET_IP = 34.111.217.6
[DEBUG] JWT_AUDIENCE = /projects/<PROJECT_ID>/global/backendServices/1823365163963571188
+ sed 's|JWT_AUDIENCE|/projects/<PROJECT_ID>/global/backendServices/1823365163963571188|;s|ENDPOINT_NAME|kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog|;s|INGRESS_TARGET_IP|34.111.217.6|' /var/envoy-config/swagger_template.yaml
+ gcloud endpoints services deploy openapi.yaml
ERROR: (gcloud.endpoints.services.deploy) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access service [kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog] (or it may not exist): Service 'kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog' not found or permission denied.
+ gcloud services enable kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
ERROR: (gcloud.services.enable) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access service [kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:enable] (or it may not exist): Not found or permission denied for service(s): kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog.
Help Token: AZWD64os9zpij-Q3qjzeNHW8WB5YlLpx7zZWQXlJvtbpdua03aFYU9mO_BamEzguofetl67FP0tt5Jz5DtE7xVEKcAka_j8vx4CXPLvMWc2ACLYr
- '@type': type.googleapis.com/google.rpc.PreconditionFailure
  violations:
  - subject: ?error_code=220002&services=kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
    type: googleapis.com
- '@type': type.googleapis.com/google.rpc.ErrorInfo
  domain: servicemanagement.googleapis.com
  metadata:
    services: kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
  reason: SERVICE_CONFIG_NOT_FOUND_OR_PERMISSION_DENIED
+ gcloud endpoints services add-iam-policy-binding kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog --member serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com --role roles/servicemanagement.serviceController
bindings:
- members:
  - serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com
  role: roles/servicemanagement.serviceController
etag: BwXtf48Erzg=
version: 1
+ gcloud projects add-iam-policy-binding <PROJECT>-kf --member serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com --role roles/cloudtrace.agent
ERROR: (gcloud.projects.add-iam-policy-binding) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access project [<PROJECT>-kf:setIamPolicy] (or it may not exist): Policy update access denied.
+ echo 'Sleeping 30 seconds...'
+ sleep 30
Sleeping 30 seconds...
[... set_endpoint retries with identical output: deploy and enable fail again with SERVICE_CONFIG_NOT_FOUND_OR_PERMISSION_DENIED, the add-iam-policy-binding and cloudtrace.agent steps repeat, then sleep 30 seconds ...]
+ true
+ set_endpoint
++ kubectl --namespace=istio-system get svc istio-ingressgateway -o 'jsonpath={.spec.ports[?(@.name=="http2")].nodePort}'
[DEBUG] node port is 31224
+ NODE_PORT=31224
+ echo '[DEBUG] node port is 31224'
+ BACKEND_NAME=
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: {"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}
+ BACKENDS='{"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}'
+ echo '[DEBUG] fetching backends info with envoy-ingress: {"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}'
++ echo '{"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}'
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=k8s-be-31224--cf59b45c6c98a43f
+ echo '[DEBUG] backend name is k8s-be-31224--cf59b45c6c98a43f'
+ sleep 2
[DEBUG] backend name is k8s-be-31224--cf59b45c6c98a43f
+ [[ -z k8s-be-31224--cf59b45c6c98a43f ]]
+ BACKEND_ID=
+ [[ -z '' ]]
++ gcloud compute --project=<PROJECT>-kf backend-services list --filter=name~k8s-be-31224--cf59b45c6c98a43f '--format=value(id)'
+ BACKEND_ID=1823365163963571188
+ echo '[DEBUG] Waiting for backend id PROJECT=<PROJECT>-kf NAMESPACE=istio-system SERVICE=istio-ingressgateway filter=name~k8s-be-31224--cf59b45c6c98a43f'
+ sleep 2
[DEBUG] Waiting for backend id PROJECT=<PROJECT>-kf NAMESPACE=istio-system SERVICE=istio-ingressgateway filter=name~k8s-be-31224--cf59b45c6c98a43f
+ [[ -z 1823365163963571188 ]]
BACKEND_ID=1823365163963571188
+ echo BACKEND_ID=1823365163963571188
+ JWT_AUDIENCE=/projects/<PROJECT_ID>/global/backendServices/1823365163963571188
++ kubectl get ingress --all-namespaces
++ grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)'
+ INGRESS_TARGET_IP=34.111.217.6
+ echo '[DEBUG] ENDPOINT_NAME = kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog'
+ echo '[DEBUG] INGRESS_TARGET_IP = 34.111.217.6'
+ echo '[DEBUG] JWT_AUDIENCE = /projects/<PROJECT_ID>/global/backendServices/1823365163963571188'
+ sed 's|JWT_AUDIENCE|/projects/<PROJECT_ID>/global/backendServices/1823365163963571188|;s|ENDPOINT_NAME|kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog|;s|INGRESS_TARGET_IP|34.111.217.6|' /var/envoy-config/swagger_template.yaml
[DEBUG] ENDPOINT_NAME = kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
[DEBUG] INGRESS_TARGET_IP = 34.111.217.6
[DEBUG] JWT_AUDIENCE = /projects/<PROJECT_ID>/global/backendServices/1823365163963571188
+ gcloud endpoints services deploy openapi.yaml
ERROR: (gcloud.endpoints.services.deploy) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access service [kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog] (or it may not exist): Service 'kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog' not found or permission denied.
+ gcloud services enable kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
ERROR: (gcloud.services.enable) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access service [kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:enable] (or it may not exist): Not found or permission denied for service(s): kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog.
Help Token: AZWD64pNFdr9LW0p4IU4_8r-VBDqvrM8gYilhlsDx7Didvk2o_kpDEQu8tu3nQvxq1MqAzn_8GAh9XbZ7ccEvK1PRCOEjDDkBMtat7wkiZVaOopQ
- '@type': type.googleapis.com/google.rpc.PreconditionFailure
  violations:
  - subject: ?error_code=220002&services=kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
    type: googleapis.com
- '@type': type.googleapis.com/google.rpc.ErrorInfo
  domain: servicemanagement.googleapis.com
  metadata:
    services: kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
  reason: SERVICE_CONFIG_NOT_FOUND_OR_PERMISSION_DENIED
+ gcloud endpoints services add-iam-policy-binding kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog --member serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com --role roles/servicemanagement.serviceController
bindings:
- members:
  - serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com
  role: roles/servicemanagement.serviceController
etag: BwXtf5O-1yA=
version: 1
+ gcloud projects add-iam-policy-binding <PROJECT>-kf --member serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com --role roles/cloudtrace.agent
ERROR: (gcloud.projects.add-iam-policy-binding) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access project [<PROJECT>-kf:setIamPolicy] (or it may not exist): Policy update access denied.
+ echo 'Sleeping 30 seconds...'
+ sleep 30
Sleeping 30 seconds...
+ true
+ set_endpoint
++ kubectl --namespace=istio-system get svc istio-ingressgateway -o 'jsonpath={.spec.ports[?(@.name=="http2")].nodePort}'
+ NODE_PORT=31224
+ echo '[DEBUG] node port is 31224'
+ BACKEND_NAME=
[DEBUG] node port is 31224
+ [[ -z '' ]]
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
+ BACKENDS='{"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}'
+ echo '[DEBUG] fetching backends info with envoy-ingress: {"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}'
[DEBUG] fetching backends info with envoy-ingress: {"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}
++ echo '{"k8s-be-31224--cf59b45c6c98a43f":"Unknown"}'
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is k8s-be-31224--cf59b45c6c98a43f
+ BACKEND_NAME=k8s-be-31224--cf59b45c6c98a43f
+ echo '[DEBUG] backend name is k8s-be-31224--cf59b45c6c98a43f'
+ sleep 2
+ [[ -z k8s-be-31224--cf59b45c6c98a43f ]]
+ BACKEND_ID=
+ [[ -z '' ]]
++ gcloud compute --project=<PROJECT>-kf backend-services list --filter=name~k8s-be-31224--cf59b45c6c98a43f '--format=value(id)'
+ BACKEND_ID=1823365163963571188
+ echo '[DEBUG] Waiting for backend id PROJECT=<PROJECT>-kf NAMESPACE=istio-system SERVICE=istio-ingressgateway filter=name~k8s-be-31224--cf59b45c6c98a43f'
+ sleep 2
[DEBUG] Waiting for backend id PROJECT=<PROJECT>-kf NAMESPACE=istio-system SERVICE=istio-ingressgateway filter=name~k8s-be-31224--cf59b45c6c98a43f
+ [[ -z 1823365163963571188 ]]
+ echo BACKEND_ID=1823365163963571188
+ JWT_AUDIENCE=/projects/<PROJECT_ID>/global/backendServices/1823365163963571188
BACKEND_ID=1823365163963571188
++ kubectl get ingress --all-namespaces
++ grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)'
+ INGRESS_TARGET_IP=34.111.217.6
+ echo '[DEBUG] ENDPOINT_NAME = kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog'
+ echo '[DEBUG] INGRESS_TARGET_IP = 34.111.217.6'
+ echo '[DEBUG] JWT_AUDIENCE = /projects/<PROJECT_ID>/global/backendServices/1823365163963571188'
+ sed 's|JWT_AUDIENCE|/projects/<PROJECT_ID>/global/backendServices/1823365163963571188|;s|ENDPOINT_NAME|kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog|;s|INGRESS_TARGET_IP|34.111.217.6|' /var/envoy-config/swagger_template.yaml
[DEBUG] ENDPOINT_NAME = kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
[DEBUG] INGRESS_TARGET_IP = 34.111.217.6
[DEBUG] JWT_AUDIENCE = /projects/<PROJECT_ID>/global/backendServices/1823365163963571188
+ gcloud endpoints services deploy openapi.yaml
Waiting for async operation operations/serviceConfigs.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:6d99168f-2d49-4790-86bb-3ab1d92d148a to complete...
Operation finished successfully. The following command can describe the Operation details:
 gcloud endpoints operations describe operations/serviceConfigs.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:6d99168f-2d49-4790-86bb-3ab1d92d148a
Waiting for async operation operations/rollouts.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:a05e8dbe-8be8-4370-9466-c937f4c5cc32 to complete...
Operation finished successfully. The following command can describe the Operation details:
 gcloud endpoints operations describe operations/rollouts.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:a05e8dbe-8be8-4370-9466-c937f4c5cc32
Enabling service kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog on project <PROJECT>-kf...
ERROR: (gcloud.endpoints.services.deploy) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access service [kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:enable] (or it may not exist): Service 'kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog' not found or permission denied.
Help Token: AZWD64pJVwbPhERWrQ03jqIBSRNe98hz2MO8eZCp6mEThkiLpp7LYP2UTNGoxtxTSo63PMZwkHSy-CIGB65gS1vprjsREOL7QLOn4MhtMA0HFh67
- '@type': type.googleapis.com/google.rpc.PreconditionFailure
  violations:
  - subject: ?error_code=110002&service=serviceusage.googleapis.com&service=serviceusage.googleapis.com&permission=serviceusage.services.enable&permission=serviceusage.services.enable&resource=<PROJECT>-kf&resource=<PROJECT>-kf
    type: googleapis.com
- '@type': type.googleapis.com/google.rpc.ErrorInfo
  domain: servicemanagement.googleapis.com
  metadata:
    permission: serviceusage.services.enable,serviceusage.services.enable
    resource: <PROJECT>-kf,<PROJECT>-kf
    service: serviceusage.googleapis.com,serviceusage.googleapis.com
  reason: AUTH_PERMISSION_DENIED
+ gcloud services enable kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
ERROR: (gcloud.services.enable) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access service [kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:enable] (or it may not exist): Service 'kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog' not found or permission denied.
Help Token: AZWD64q1QjAniceFJV5oXOy_elfHZ3V1gJnZfng6ek3-bfhPyAzwEigwQi_2xPdFLqzoewtH-IoDdekavmLMwhksQxLXwRJIqgVrRRfEDbIVPXCA
- '@type': type.googleapis.com/google.rpc.PreconditionFailure
  violations:
  - subject: ?error_code=110002&service=serviceusage.googleapis.com&service=serviceusage.googleapis.com&permission=serviceusage.services.enable&permission=serviceusage.services.enable&resource=<PROJECT>-kf&resource=<PROJECT>-kf
    type: googleapis.com
- '@type': type.googleapis.com/google.rpc.ErrorInfo
  domain: servicemanagement.googleapis.com
  metadata:
    permission: serviceusage.services.enable,serviceusage.services.enable
    resource: <PROJECT>-kf,<PROJECT>-kf
    service: serviceusage.googleapis.com,serviceusage.googleapis.com
  reason: AUTH_PERMISSION_DENIED
+ gcloud endpoints services add-iam-policy-binding kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog --member serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com --role roles/servicemanagement.serviceController
bindings:
- members:
  - serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com
  role: roles/servicemanagement.serviceController
etag: BwXtf5uDgQY=
version: 1
+ gcloud projects add-iam-policy-binding <PROJECT>-kf --member serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com --role roles/cloudtrace.agent
ERROR: (gcloud.projects.add-iam-policy-binding) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access project [<PROJECT>-kf:setIamPolicy] (or it may not exist): Policy update access denied.
+ echo 'Sleeping 30 seconds...'
Sleeping 30 seconds...
+ sleep 30
+ true
+ set_endpoint
++ kubectl --namespace=istio-system get svc istio-ingressgateway -o 'jsonpath={.spec.ports[?(@.name=="http2")].nodePort}'
+ NODE_PORT=31224
+ echo '[DEBUG] node port is 31224'
+ BACKEND_NAME=
+ [[ -z '' ]]
[DEBUG] node port is 31224
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: {"k8s-be-31224--cf59b45c6c98a43f":"HEALTHY"}
+ BACKENDS='{"k8s-be-31224--cf59b45c6c98a43f":"HEALTHY"}'
+ echo '[DEBUG] fetching backends info with envoy-ingress: {"k8s-be-31224--cf59b45c6c98a43f":"HEALTHY"}'
++ echo '{"k8s-be-31224--cf59b45c6c98a43f":"HEALTHY"}'
++ grep -o 'k8s-be-31224--[0-9a-z]\+'
+ BACKEND_NAME=k8s-be-31224--cf59b45c6c98a43f
[DEBUG] backend name is k8s-be-31224--cf59b45c6c98a43f
+ echo '[DEBUG] backend name is k8s-be-31224--cf59b45c6c98a43f'
+ sleep 2
+ [[ -z k8s-be-31224--cf59b45c6c98a43f ]]
+ BACKEND_ID=
+ [[ -z '' ]]
++ gcloud compute --project=<PROJECT>-kf backend-services list --filter=name~k8s-be-31224--cf59b45c6c98a43f '--format=value(id)'
[DEBUG] Waiting for backend id PROJECT=<PROJECT>-kf NAMESPACE=istio-system SERVICE=istio-ingressgateway filter=name~k8s-be-31224--cf59b45c6c98a43f
+ BACKEND_ID=1823365163963571188
+ echo '[DEBUG] Waiting for backend id PROJECT=<PROJECT>-kf NAMESPACE=istio-system SERVICE=istio-ingressgateway filter=name~k8s-be-31224--cf59b45c6c98a43f'
+ sleep 2
+ [[ -z 1823365163963571188 ]]
+ echo BACKEND_ID=1823365163963571188
+ JWT_AUDIENCE=/projects/<PROJECT_ID>/global/backendServices/1823365163963571188
BACKEND_ID=1823365163963571188
++ kubectl get ingress --all-namespaces
++ grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)'
[DEBUG] ENDPOINT_NAME = kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
[DEBUG] INGRESS_TARGET_IP = 34.111.217.6
[DEBUG] JWT_AUDIENCE = /projects/<PROJECT_ID>/global/backendServices/1823365163963571188
+ INGRESS_TARGET_IP=34.111.217.6
+ echo '[DEBUG] ENDPOINT_NAME = kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog'
+ echo '[DEBUG] INGRESS_TARGET_IP = 34.111.217.6'
+ echo '[DEBUG] JWT_AUDIENCE = /projects/<PROJECT_ID>/global/backendServices/1823365163963571188'
+ sed 's|JWT_AUDIENCE|/projects/<PROJECT_ID>/global/backendServices/1823365163963571188|;s|ENDPOINT_NAME|kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog|;s|INGRESS_TARGET_IP|34.111.217.6|' /var/envoy-config/swagger_template.yaml
+ gcloud endpoints services deploy openapi.yaml
Waiting for async operation operations/serviceConfigs.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:0faa4b57-28a1-481d-a807-271628860859 to complete...
Operation finished successfully. The following command can describe the Operation details:
 gcloud endpoints operations describe operations/serviceConfigs.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:0faa4b57-28a1-481d-a807-271628860859
Waiting for async operation operations/rollouts.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:ca599737-7a01-48ce-a524-859529b71acc to complete...
Operation finished successfully. The following command can describe the Operation details:
 gcloud endpoints operations describe operations/rollouts.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:ca599737-7a01-48ce-a524-859529b71acc
Enabling service kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog on project <PROJECT>-kf...
ERROR: (gcloud.endpoints.services.deploy) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access service [kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:enable] (or it may not exist): Service 'kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog' not found or permission denied.
Help Token: AZWD64ri8trq-0t3gsMLfA7Ik_kp0JS9dKwhrvGTMsceMRmX_Fo6ndEfbyshqrA6sRMxUNbU8CbYbd_nFFSADLCplueDfUrvG3X-Tx9BdPO5vwGL
- '@type': type.googleapis.com/google.rpc.PreconditionFailure
  violations:
  - subject: ?error_code=110002&service=serviceusage.googleapis.com&service=serviceusage.googleapis.com&permission=serviceusage.services.enable&permission=serviceusage.services.enable&resource=<PROJECT>-kf&resource=<PROJECT>-kf
    type: googleapis.com
- '@type': type.googleapis.com/google.rpc.ErrorInfo
  domain: servicemanagement.googleapis.com
  metadata:
    permission: serviceusage.services.enable,serviceusage.services.enable
    resource: <PROJECT>-kf,<PROJECT>-kf
    service: serviceusage.googleapis.com,serviceusage.googleapis.com
  reason: AUTH_PERMISSION_DENIED
+ gcloud services enable kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
ERROR: (gcloud.services.enable) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access service [kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:enable] (or it may not exist): Service 'kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog' not found or permission denied.
Help Token: AZWD64oAj-cC8y2S75k1swMQNkvHPwqWcdzlFgZiF6hOtLYz2yCrJa128E_-MpXqPBXEZy9N1HGAeCRlNGuoxq4qPYo0VZnZSaPvMgbG9YPRBkyW
- '@type': type.googleapis.com/google.rpc.PreconditionFailure
  violations:
  - subject: ?error_code=110002&service=serviceusage.googleapis.com&service=serviceusage.googleapis.com&permission=serviceusage.services.enable&permission=serviceusage.services.enable&resource=<PROJECT>-kf&resource=<PROJECT>-kf
    type: googleapis.com
- '@type': type.googleapis.com/google.rpc.ErrorInfo
  domain: servicemanagement.googleapis.com
  metadata:
    permission: serviceusage.services.enable,serviceusage.services.enable
    resource: <PROJECT>-kf,<PROJECT>-kf
    service: serviceusage.googleapis.com,serviceusage.googleapis.com
  reason: AUTH_PERMISSION_DENIED
+ gcloud endpoints services add-iam-policy-binding kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog --member serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com --role roles/servicemanagement.serviceController
bindings:
- members:
  - serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com
  role: roles/servicemanagement.serviceController
etag: BwXtf6I-Y-Q=
version: 1
+ gcloud projects add-iam-policy-binding <PROJECT>-kf --member serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com --role roles/cloudtrace.agent
ERROR: (gcloud.projects.add-iam-policy-binding) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access project [<PROJECT>-kf:setIamPolicy] (or it may not exist): Policy update access denied.
Sleeping 30 seconds...
+ echo 'Sleeping 30 seconds...'
+ sleep 30
+ true
+ set_endpoint
++ kubectl --namespace=istio-system get svc istio-ingressgateway -o 'jsonpath={.spec.ports[?(@.name=="http2")].nodePort}'
+ NODE_PORT=31224
+ echo '[DEBUG] node port is 31224'
+ BACKEND_NAME=
+ [[ -z '' ]]
[DEBUG] node port is 31224
++ kubectl --namespace=istio-system get ingress envoy-ingress -o 'jsonpath={.metadata.annotations.ingress\.kubernetes\.io/backends}'
[DEBUG] fetching backends info with envoy-ingress: {"k8s-be-31224--cf59b45c6c98a43f":"HEALTHY"}
+ BACKENDS='{"k8s-be-31224--cf59b45c6c98a43f":"HEALTHY"}'
+ echo '[DEBUG] fetching backends info with envoy-ingress: {"k8s-be-31224--cf59b45c6c98a43f":"HEALTHY"}'
++ echo ++ '{"k8s-be-31224--cf59b45c6c98a43f":"HEALTHY"}'
grep -o 'k8s-be-31224--[0-9a-z]\+'
[DEBUG] backend name is k8s-be-31224--cf59b45c6c98a43f
+ BACKEND_NAME=k8s-be-31224--cf59b45c6c98a43f
+ echo '[DEBUG] backend name is k8s-be-31224--cf59b45c6c98a43f'
+ sleep 2
+ [[ -z k8s-be-31224--cf59b45c6c98a43f ]]
+ BACKEND_ID=
+ [[ -z '' ]]
++ gcloud compute --project=<PROJECT>-kf backend-services list --filter=name~k8s-be-31224--cf59b45c6c98a43f '--format=value(id)'
+ BACKEND_ID=1823365163963571188
+ echo '[DEBUG] Waiting for backend id PROJECT=<PROJECT>-kf NAMESPACE=istio-system SERVICE=istio-ingressgateway filter=name~k8s-be-31224--cf59b45c6c98a43f'
+ sleep 2
[DEBUG] Waiting for backend id PROJECT=<PROJECT>-kf NAMESPACE=istio-system SERVICE=istio-ingressgateway filter=name~k8s-be-31224--cf59b45c6c98a43f
BACKEND_ID=1823365163963571188
+ [[ -z 1823365163963571188 ]]
+ echo BACKEND_ID=1823365163963571188
+ JWT_AUDIENCE=/projects/<PROJECT_ID>/global/backendServices/1823365163963571188
++ kubectl get ingress --all-namespaces
++ grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)'
+ INGRESS_TARGET_IP=34.111.217.6
+ echo '[DEBUG] ENDPOINT_NAME = kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog'
+ echo '[DEBUG] INGRESS_TARGET_IP = 34.111.217.6'
+ echo '[DEBUG] JWT_AUDIENCE = /projects/<PROJECT_ID>/global/backendServices/1823365163963571188'
+ sed 's|JWT_AUDIENCE|/projects/<PROJECT_ID>/global/backendServices/1823365163963571188|;s|ENDPOINT_NAME|kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog|;s|INGRESS_TARGET_IP|34.111.217.6|' /var/envoy-config/swagger_template.yaml
[DEBUG] ENDPOINT_NAME = kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog
[DEBUG] INGRESS_TARGET_IP = 34.111.217.6
[DEBUG] JWT_AUDIENCE = /projects/<PROJECT_ID>/global/backendServices/1823365163963571188
+ gcloud endpoints services deploy openapi.yaml
Waiting for async operation operations/serviceConfigs.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:720af659-1449-41b4-ae48-295241e20f43 to complete...
Operation finished successfully. The following command can describe the Operation details:
 gcloud endpoints operations describe operations/serviceConfigs.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:720af659-1449-41b4-ae48-295241e20f43
Waiting for async operation operations/rollouts.kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog:8b9bcf74-b891-45ca-bcb1-118991bdd2da to complete...

Running `gcloud endpoints services list` I get:

NAME                                        TITLE
kubeflow-test.endpoints.arex-kf.cloud.goog

Of all the errors I checked above, the only one that persisted post-install was the Cloud Trace permission: the `roles/cloudtrace.agent` binding wasn't on the IAM list.
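Since every failure in the log is a permission-denied error on the same service account, one way to narrow it down is to list the roles actually bound to that account. This is a sketch using the names from the log above; substitute your own project and service-account names:

```shell
# List every role granted to the deployer service account in the project.
# <PROJECT>-kf and kubeflow-test-admin are placeholders taken from the log.
gcloud projects get-iam-policy <PROJECT>-kf \
  --flatten="bindings[].members" \
  --filter="bindings.members:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com" \
  --format="table(bindings.role)"
```

If `roles/cloudtrace.agent` (or the Service Usage roles implied by the `serviceusage.services.enable` errors) is missing from this table, the deploy script's own `add-iam-policy-binding` step likely failed, as it did in the log.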

gkcalat commented 1 year ago

It seems that kubeflow-test-admin service account is missing in IAM. Do you have your deployment logs?

ERROR: (gcloud.projects.add-iam-policy-binding) User [kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com] does not have permission to access project [<PROJECT>-kf:setIamPolicy] (or it may not exist): Policy update access denied.

I was not able to reproduce your error. Did you run all of these steps?

You can check out the troubleshooting guide for IAM here.
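The `AUTH_PERMISSION_DENIED` errors in the log all point at the `serviceusage.services.enable` permission. A minimal sketch of re-granting that capability to the deployer service account, assuming you run it as a user who can set IAM policy on the project (names are placeholders from the log):

```shell
# Grant the deployer SA permission to enable services on the project.
# roles/serviceusage.serviceUsageAdmin includes serviceusage.services.enable,
# the exact permission named in the PreconditionFailure above.
gcloud projects add-iam-policy-binding <PROJECT>-kf \
  --member="serviceAccount:kubeflow-test-admin@<PROJECT>-kf.iam.gserviceaccount.com" \
  --role="roles/serviceusage.serviceUsageAdmin"
```

Note the log also shows the SA failing `setIamPolicy` on itself, so this binding must be added by an account with project-level IAM admin rights (e.g. an Owner).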

JPBedran commented 1 year ago

Hey @gkcalat, yeah, step by step. Does your deployment get an endpoint without any problem?

gkcalat commented 1 year ago

Synced with @JPBedran in Slack. It turned out to be the local DNS cache. Other users should not be affected. Closing this issue.
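For anyone hitting the same `NXDOMAIN` from `nslookup` after a successful deploy, a quick way to rule out a stale local DNS cache is to query a public resolver directly and, if the record resolves there, flush the local cache. A sketch, assuming a Linux host with systemd-resolved (the flush command differs on other systems):

```shell
# Bypass the local cache by asking Google's public resolver directly.
# kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog is the endpoint name from the log.
nslookup kubeflow-test.endpoints.<PROJECT>-kf.cloud.goog 8.8.8.8

# If the name resolves above but not locally, flush the local DNS cache
# (systemd-resolved; on macOS use: sudo dscacheutil -flushcache).
sudo resolvectl flush-caches
```

If the public resolver also returns `NXDOMAIN`, the problem is on the Cloud Endpoints side rather than local caching.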