Can we factor in the improvements in security and privacy mentioned on the ODA CA call? Specifically, we identified Kong and HashiCorp as potential contributors, amongst others.
Can k3s be added to the test list?
AFAIK, apart from the Ingress, which we don't use anymore, we don't use any of the deprecated APIs (https://kubernetes.io/docs/reference/using-api/deprecation-guide/). So on that point, I guess we should have little to no problem. From my own tests, the biggest issues tend to be more on the cluster-provisioning side.
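If anyone wants to run the same check against their own charts or cluster, one option is the open-source Pluto CLI from FairwindsOps (its use here is my suggestion, not part of the canvas tooling):

# Scan deployed Helm releases for resources built on deprecated/removed APIs
pluto detect-helm -o wide

# Scan a local chart checkout the same way
pluto detect-files -d oda-canvas-charts/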
I can think of at least four different ways to create Kubernetes clusters on AWS:
The landscape will probably be similar on any other major cloud provider.
Test environment: Windows 11 Pro 21H2 with WSL2 (Ubuntu 22.04.2 LTS)
Rancher Desktop version: 1.7.0
Kubernetes versions: 1.22.17 | 1.23.17 | 1.24.11 | 1.25.7
Container engine: dockerd (moby)
Traefik: disabled
It works on all of the k8s versions listed above.
TL;DR: a local k3s distribution, Rancher Desktop, or similar.
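For anyone taking that route, a minimal sketch of a single-node k3s install (the pinned version is only an example, and --disable traefik avoids the ingress clash noted below):

# Install k3s pinned to a specific Kubernetes version, without the bundled Traefik ingress
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.25.7+k3s1" sh -s - --disable traefik

# Point kubectl at the new cluster (reading the file may require sudo)
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes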
helm plugin install --version "main" https://github.com/Noksa/helm-resolve-deps.git
helm repo add jetstack https://charts.jetstack.io
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
kubectl create namespace istio-system
helm install istio-base istio/base -n istio-system
helm install istiod istio/istiod -n istio-system --wait
kubectl create namespace istio-ingress
kubectl label namespace istio-ingress istio-injection=enabled
helm install istio-ingress istio/gateway -n istio-ingress --wait
kubectl get svc istio-ingress -n istio-ingress --show-labels
After this step the EXTERNAL-IP should not be in the "pending" state. If it is, disabling any pre-existing ingress such as Traefik should resolve the issue.
kubectl label svc istio-ingress -n istio-ingress istio=ingressgateway --overwrite
git clone https://github.com/tmforum-oda/oda-canvas-charts.git
cd oda-canvas-charts/installation/canvas-oda
helm resolve-deps
helm install canvas -n canvas --create-namespace .
User ID: admin
Password: adpass
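As a quick way to verify the install went through (a sketch; the CRD group matches what the canvas CTK checks later in this thread):

# Wait until all canvas pods are Running/Completed
kubectl get pods -n canvas -w

# Confirm the ODA CRDs were registered
kubectl get crd | grep oda.tmforum.org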
Test Environments:
Steps followed:
Start Minikube:
Open CMD in admin mode.
minikube start --no-vtx-check --vm-driver=virtualbox --memory=20gb --cpus=4 --kubernetes-version=v1.25.0
Note: if you already have an existing cluster and are now creating a new one, add the -p <profile-name> flag so the new cluster gets its own profile.
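For example (the profile name oda-canvas is just a placeholder):

minikube start -p oda-canvas --no-vtx-check --vm-driver=virtualbox --memory=20gb --cpus=4 --kubernetes-version=v1.25.0

# Later minikube commands must name the profile too, e.g.:
minikube -p oda-canvas addons enable ingress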
Enable required addons:
minikube addons list
minikube addons enable ingress
minikube addons enable ingress-dns
minikube addons enable dashboard
Install Istio: follow the instructions described in part 3 (Istio) of the README.md at https://github.com/tmforum-oda/oda-canvas-charts/tree/master/installation
kubectl get svc istio-ingress -n istio-ingress --show-labels
The EXTERNAL-IP can be in pending status when you list the istio-ingress service.
To assign the external IP, you should use minikube tunnel.
If you are using a specific cluster name, you should add the -p <profile-name> flag.
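For example (keep the tunnel running in a separate terminal opened in admin mode; the profile name is a placeholder):

minikube tunnel -p oda-canvas

# In another terminal, the EXTERNAL-IP should now be populated
kubectl get svc istio-ingress -n istio-ingress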
Useful links:
Docker Desktop: https://www.docker.com/products/docker-desktop/
Install Helm 3: https://helm.sh/
Install Kubernetes CLI: https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/
Download Lens to easily observe Kubernetes configurations: https://k8slens.dev/
Download and install Oracle VirtualBox: http://download.virtualbox.org/virtualbox/7.0.6 (use the installer ending in -Win.exe)
Troubleshooting:
1. Error: INSTALLATION FAILED: failed post-install: timed out waiting for the condition
Check whether images are pulled successfully in the canvas and cert-manager namespaces (you can use Lens to observe the relevant pod events).
If an image cannot be pulled during the canvas installation, pull it manually via minikube ssh.
Open CMD in admin mode.
• minikube ssh
• docker pull <image>
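For example, for the cainjector image that appears later in this thread (substitute whatever image the pod events report as failing):

minikube ssh
docker pull quay.io/jetstack/cert-manager-cainjector:v1.11.0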
2. Error: INSTALLATION FAILED: failed post-install: warning: Hook post-install canvas-oda/charts/cert-manager-init/templates/issuer.yaml failed: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://canvas-cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority
Try first to uninstall the chart:
• helm uninstall -n canvas canvas
Delete the persistent volume claim:
• kubectl delete pvc -n canvas data-canvas-postgresql-0
Check whether the components and cert-manager namespaces are deleted:
• kubectl get ns
Then find the lease object that causes the problem:
• kubectl get lease -n kube-system
Force the release of the lease without waiting for a timeout:
• kubectl delete lease cert-manager-cainjector-leader-election -n kube-system
Increase the leaseWaitTimeonStartup value to between 80 and 100 in oda-canvas-charts\installation\cert-manager-init\values.yaml
Go to the canvas-oda directory and re-install the canvas:
• helm install canvas -n canvas --create-namespace .
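For convenience, the same recovery sequence as a single script. This is a sketch only: it assumes you have already increased leaseWaitTimeonStartup in cert-manager-init/values.yaml and are running from the oda-canvas-charts checkout:

#!/usr/bin/env bash
set -euo pipefail
helm uninstall -n canvas canvas || true
kubectl delete pvc -n canvas data-canvas-postgresql-0 --ignore-not-found
kubectl get ns    # confirm the components and cert-manager namespaces are gone before continuing
kubectl delete lease cert-manager-cainjector-leader-election -n kube-system --ignore-not-found
cd installation/canvas-oda
helm install canvas -n canvas --create-namespace .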
helm install canvas -n canvas --create-namespace
Hi @IremGA - When you did the final helm install, did it work?
Hi @LesterThomas, yes, it worked after applying the above steps in order.
Somehow the lease cert-manager-cainjector-leader-election is causing the problem. After forcing its release, the canvas installation worked. Increasing the leaseWaitTimeonStartup value alone is not enough to solve the problem.
Here is the guidance from GPT-4! The cert-manager-cainjector-leader-election component is responsible for electing a leader for the cert-manager controller. This is done using a lease mechanism where a leader is elected for a certain amount of time and then a new leader is elected.
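For anyone debugging this, the current holder and renewal time can be read straight off the Lease object (standard coordination.k8s.io/v1 fields):

kubectl get lease cert-manager-cainjector-leader-election -n kube-system \
  -o jsonpath='{.spec.holderIdentity}{"\n"}{.spec.renewTime}{"\n"}'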
If increasing the leaseWaitTimeOnStartup parameter is not resolving your issue, it's possible that there is another underlying issue. Here are a few suggestions on how to troubleshoot and resolve the issue:
Check if there are any other pods in your cluster that are holding the leader lease. You can use the following command to check if there is an existing leader:
kubectl get endpoints --namespace=<cert-manager-namespace> cert-manager-cainjector -o jsonpath='{.subsets[0].addresses[0].targetRef.name}'
If there is an existing leader, it's possible that it is not releasing the lease properly. You may need to investigate that pod to determine why it is not releasing the lease.
Check if the cert-manager-cainjector pod is able to connect to the Kubernetes API server. You can use the following command to check the logs of the cert-manager-cainjector pod:
kubectl logs cert-manager-cainjector-<pod-id> -n <cert-manager-namespace>
Look for any error messages related to connecting to the API server. If there are errors, you may need to investigate further to determine the root cause.
Check if the Kubernetes API server is healthy. You can use the following command to check the health of the Kubernetes API server:
kubectl get componentstatuses
Look for any components that are in an unhealthy state. If there are any issues with the API server, you may need to resolve those issues before attempting to deploy your Helm chart.
Check if there are any resource constraints that are causing the cert-manager-cainjector pod to be evicted. You can use the following command to check the status of the cert-manager-cainjector pod:
kubectl describe pod cert-manager-cainjector-<pod-id> -n <cert-manager-namespace>
Look for any messages related to the pod being evicted due to resource constraints. If there are issues with resource constraints, you may need to adjust the resource requests and limits for the pod.
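If evictions turn out to be the cause, something like the following would pin requests for the cainjector. This is a sketch: it assumes the canvas umbrella chart passes values through to the upstream cert-manager subchart under a cert-manager key, which I have not verified:

helm upgrade canvas . -n canvas \
  --set cert-manager.cainjector.resources.requests.cpu=100m \
  --set cert-manager.cainjector.resources.requests.memory=128Mi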
Thanks Lester. After a successful canvas installation in minikube, here are the outputs of the above commands. There is no endpoint created for cert-manager-cainjector:
$ kubectl get endpoints --namespace=cert-manager cert-manager-cainjector -o jsonpath='{.subsets[0].addresses[0].targetRef.name}'
Error from server (NotFound): endpoints "cert-manager-cainjector" not found
Endpoints created under the cert-manager namespace are:
$ kubectl get endpoints --namespace=cert-manager
NAME ENDPOINTS AGE
canvas-cert-manager 10.244.1.89:9402 18h
canvas-cert-manager-webhook 10.244.1.85:10250 18h
Component statuses:
$ kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
No errors were observed when I checked the pod logs of canvas-cert-manager-cainjector. Here are the pod details:
$ kubectl describe pod canvas-cert-manager-cainjector-57496cfdf9-nxhm4 -n cert-manager
Name: canvas-cert-manager-cainjector-57496cfdf9-nxhm4
Namespace: cert-manager
Priority: 0
Service Account: canvas-cert-manager-cainjector
Node: odaca/192.168.59.106
Start Time: Thu, 16 Mar 2023 15:04:14 +0300
Labels: app=cainjector
app.kubernetes.io/component=cainjector
app.kubernetes.io/instance=canvas
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cainjector
app.kubernetes.io/version=v1.11.0
helm.sh/chart=cert-manager-v1.11.0
pod-template-hash=57496cfdf9
Annotations: <none>
Status: Running
IP: 10.244.1.94
IPs:
IP: 10.244.1.94
Controlled By: ReplicaSet/canvas-cert-manager-cainjector-57496cfdf9
Containers:
cert-manager-cainjector:
Container ID: docker://2cac9528430621b4a3ff0b1ecb998e396ce5707dead7b613f46cb620d7d9f3db
Image: quay.io/jetstack/cert-manager-cainjector:v1.11.0
Image ID: docker-pullable://quay.io/jetstack/cert-manager-cainjector@sha256:5c3eb25b085443b83586a98a1ae07f8364461dfca700e950c30f585efb7474ba
Port: <none>
Host Port: <none>
Args:
--v=2
--leader-election-namespace=kube-system
State: Running
Started: Fri, 17 Mar 2023 08:47:09 +0300
Ready: True
Restart Count: 1
Environment:
POD_NAMESPACE: cert-manager (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t4vbx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-t4vbx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 29m kubelet Pod sandbox changed, it will be killed and re-created.
Warning FailedMount 29m (x3 over 29m) kubelet MountVolume.SetUp failed for volume "kube-api-access-t4vbx" : failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/cert-manager/serviceaccounts/canvas-cert-manager-cainjector/token": dial tcp 192.168.59.106:8443: connect: connection refused
Warning FailedMount 28m kubelet MountVolume.SetUp failed for volume "kube-api-access-t4vbx" : failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/cert-manager/serviceaccounts/canvas-cert-manager-cainjector/token": net/http: TLS handshake timeout
Normal Pulled 28m (x2 over 18h) kubelet Container image "quay.io/jetstack/cert-manager-cainjector:v1.11.0" already present on machine
Normal Created 28m (x2 over 18h) kubelet Created container cert-manager-cainjector
Warning FailedMount 28m kubelet MountVolume.SetUp failed for volume "kube-api-access-t4vbx" : failed to fetch token: serviceaccounts "canvas-cert-manager-cainjector" is forbidden: User "system:node:odaca" cannot create resource "serviceaccounts/token" in API group "" in the namespace "cert-manager": no relationship found between node 'odaca' and this object
Normal Started 28m (x2 over 18h) kubelet Started container cert-manager-cainjector
I successfully installed the canvas on microk8s on Ubuntu as follows:
$ sudo snap install microk8s --channel 1.24/stable --classic
microk8s (1.24/stable) v1.24.12 from Canonical✓ installed
$ microk8s enable dns helm3 hostpath-storage metallb
Infer repository core for addon dns
Infer repository core for addon helm3
Infer repository core for addon hostpath-storage
Infer repository core for addon metallb
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
DNS is enabled
Enabling Helm 3
Fetching helm version v3.8.0.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 12.9M 100 12.9M 0 0 15.3M 0 --:--:-- --:--:-- --:--:-- 15.2M
Helm 3 is enabled
Enabling default storage class.
WARNING: Hostpath storage is not suitable for production environments.
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon.
Enabling MetalLB
Enter each IP address range delimited by comma (e.g. '10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111'): 192.168.178.10-192.168.178.19
Applying Metallb manifest
namespace/metallb-system created
secret/memberlist created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
daemonset.apps/speaker created
deployment.apps/controller created
configmap/config created
MetalLB is enabled
The IP addresses for MetalLB are obviously specific to my local network.
After this preparation I installed Istio as per the canvas guidelines. "microk8s enable istio" should not be used, as it installs an older, incompatible version of Istio.
$ kubectl create namespace istio-system
namespace/istio-system created
$ helm install istio-base istio/base -n istio-system
NAME: istio-base
LAST DEPLOYED: Sun Mar 26 10:36:12 2023
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Istio base successfully installed!
To learn more about the release, try:
$ helm status istio-base
$ helm get all istio-base
$ helm install istiod istio/istiod -n istio-system --wait
NAME: istiod
LAST DEPLOYED: Sun Mar 26 10:37:47 2023
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
"istiod" successfully installed!
To learn more about the release, try:
$ helm status istiod
$ helm get all istiod
Next steps:
* Deploy a Gateway: https://istio.io/latest/docs/setup/additional-setup/gateway/
* Try out our tasks to get started on common configurations:
* https://istio.io/latest/docs/tasks/traffic-management
* https://istio.io/latest/docs/tasks/security/
* https://istio.io/latest/docs/tasks/policy-enforcement/
* Review the list of actively supported releases, CVE publications and our hardening guide:
* https://istio.io/latest/docs/releases/supported-releases/
* https://istio.io/latest/news/security/
* https://istio.io/latest/docs/ops/best-practices/security/
For further documentation see https://istio.io website
Tell us how your install/upgrade experience went at https://forms.gle/hMHGiwZHPU7UQRWe9
$ kubectl create namespace istio-ingress
namespace/istio-ingress created
$ kubectl label namespace istio-ingress istio-injection=enabled
namespace/istio-ingress labeled
$ helm install istio-ingress istio/gateway -n istio-ingress --wait
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /var/snap/microk8s/4950/credentials/client.config
NAME: istio-ingress
LAST DEPLOYED: Sun Mar 26 10:39:17 2023
NAMESPACE: istio-ingress
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
"istio-ingress" successfully installed!
To learn more about the release, try:
$ helm status istio-ingress
$ helm get all istio-ingress
Next steps:
* Deploy an HTTP Gateway: https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/
* Deploy an HTTPS Gateway: https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/
With this preparation the installation of the canvas is straightforward:
$ helm install canvas -n canvas --create-namespace .
NAME: canvas
LAST DEPLOYED: Sun Mar 26 10:46:53 2023
NAMESPACE: canvas
STATUS: deployed
REVISION: 1
TEST SUITE: None
A quick verification shows the running services:
$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 22m
kube-system kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 20m
istio-system istiod ClusterIP 10.152.183.9 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 11m
istio-ingress istio-ingress LoadBalancer 10.152.183.123 192.168.178.10 15021:31943/TCP,80:31860/TCP,443:31527/TCP 10m
canvas canvas-postgresql-hl ClusterIP None <none> 5432/TCP 2m42s
canvas canvas-keycloak-headless ClusterIP None <none> 8083/TCP 2m42s
cert-manager canvas-cert-manager-webhook ClusterIP 10.152.183.108 <none> 443/TCP 2m42s
canvas compcrdwebhook NodePort 10.152.183.226 <none> 443:30458/TCP 2m42s
canvas canvas-postgresql ClusterIP 10.152.183.57 <none> 5432/TCP 2m42s
canvas canvas-keycloak LoadBalancer 10.152.183.131 192.168.178.11 8083:31423/TCP 2m42s
canvas seccon NodePort 10.152.183.217 <none> 5000:31613/TCP 2m42s
cert-manager canvas-cert-manager ClusterIP 10.152.183.125 <none> 9402/TCP 2m42s
Running the canvas CTK showed 4 issues after my install:
$ npm test
$
> oda-canvas-ctk@0.0.1 test
> mocha tests.js
********************************************************
Open Digital Architecture - Canvas Test Kit CTK v1alpha1
********************************************************
Basic Kubernetes checks
✔ Can connect to the cluster
1) Cluster is running a supported version: v1.18,v1.19,v1.20,v1.21,v1.22,v1.22+
Mandatory non-functional capabilities
✔ Canvas namespace exists
✔ Components namespace exists
✔ oda.tmforum.org/v1alpha3 APIs CRD exists
✔ oda.tmforum.org/v1alpha3 Components CRD exists
✔ zalando.org/v1 Kopfpeerings CRD exists
✔ zalando.org/v1 Clusterkopfpeerings CRD exists
✔ oda-controller-ingress deployment is running
✔ compcrdwebhook deployment is running
Optional non-functional capabilities
2) canvas-keycloak deployment is running
✔ istio-system namespace exists
✔ istiod deployment is running
3) istio-ingressgateway deployment is running
4) istio-egressgateway deployment is running
11 passing (106ms)
4 failing
1) Basic Kubernetes checks
Cluster is running a supported version: v1.18,v1.19,v1.20,v1.21,v1.22,v1.22+:
AssertionError: v1.24+ must be within supported versions v1.18,v1.19,v1.20,v1.21,v1.22,v1.22+: expected 'v1.24+' to be one of [ 'v1.18', 'v1.19', 'v1.20', …(3) ]
at /Users/koen/kubernetes/tmforum-oda/oda-canvas-ctk/tests.js:36:42
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
2) Optional non-functional capabilities
canvas-keycloak deployment is running:
HttpError: HTTP request failed
at Request._callback (node_modules/@kubernetes/client-node/dist/gen/api/appsV1Api.js:4297:36)
at self.callback (node_modules/request/request.js:185:22)
at Request.emit (node:events:513:28)
at Request.<anonymous> (node_modules/request/request.js:1154:10)
at Request.emit (node:events:513:28)
at IncomingMessage.<anonymous> (node_modules/request/request.js:1076:12)
at Object.onceWrapper (node:events:627:28)
at IncomingMessage.emit (node:events:525:35)
at endReadableNT (node:internal/streams/readable:1359:12)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
3) Optional non-functional capabilities
istio-ingressgateway deployment is running:
HttpError: HTTP request failed
at Request._callback (node_modules/@kubernetes/client-node/dist/gen/api/appsV1Api.js:4297:36)
at self.callback (node_modules/request/request.js:185:22)
at Request.emit (node:events:513:28)
at Request.<anonymous> (node_modules/request/request.js:1154:10)
at Request.emit (node:events:513:28)
at IncomingMessage.<anonymous> (node_modules/request/request.js:1076:12)
at Object.onceWrapper (node:events:627:28)
at IncomingMessage.emit (node:events:525:35)
at endReadableNT (node:internal/streams/readable:1359:12)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
4) Optional non-functional capabilities
istio-egressgateway deployment is running:
HttpError: HTTP request failed
at Request._callback (node_modules/@kubernetes/client-node/dist/gen/api/appsV1Api.js:4297:36)
at self.callback (node_modules/request/request.js:185:22)
at Request.emit (node:events:513:28)
at Request.<anonymous> (node_modules/request/request.js:1154:10)
at Request.emit (node:events:513:28)
at IncomingMessage.<anonymous> (node_modules/request/request.js:1076:12)
at Object.onceWrapper (node:events:627:28)
at IncomingMessage.emit (node:events:525:35)
at endReadableNT (node:internal/streams/readable:1359:12)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
The first one is an easy fix: update the list of supported versions.
The others need further investigation. Something in the installation process that the tests depend on must have changed. Keycloak seems to be running correctly (I can access the UI) and I haven't tested the ingress yet.
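A trivial smoke test for the ingress would be to curl the EXTERNAL-IP that MetalLB assigned above (a 404 from Envoy is expected until a Gateway/VirtualService routes something):

curl -s -o /dev/null -w '%{http_code}\n' http://192.168.178.10/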
China Mobile Cloud test succeeded.
Test environment information:
Hardware configuration: 8 cores / 64G RAM / 300G hard disk
Operating system: CentOS 7.6
K8s version: 1.24.12
Components version:
@peeterko2 wrote:
Running the canvas CTK showed 4 issues after my install:
$ npm test
1) Cluster is running a supported version: v1.18,v1.19,v1.20,v1.21,v1.22,v1.22+
2) canvas-keycloak deployment is running
3) istio-ingressgateway deployment is running
4) istio-egressgateway deployment is running
I can confirm those test results from my own tests with the following setup: Ubuntu 22.04 with kubeadm v1.25, MetalLB & Prometheus.
I followed the instructions for installing the canvas exactly and got the same errors.
As already said above, the version check can easily be fixed by adding 'v1.23', 'v1.24', 'v1.25'.
Using the oda-canvas Helm charts, Keycloak is installed as a StatefulSet and not as a Deployment, so the check should test for a StatefulSet instead:
it("canvas-keycloak deployment is running", function (done) {
[*] k8sAppsAPI.readNamespacedStatefulSetStatus('canvas-keycloak', ReleaseNamespace).then((res) => {
let unavailableReplicas = res.body.status.unavailableReplicas
let readyReplicas = res.body.status.readyReplicas
let replicas = res.body.status.replicas
[-] // let availableReplicas = res.body.status.availableReplicas
let updatedReplicas = res.body.status.updatedReplicas
expect(unavailableReplicas, "Number of unavailable replicas").to.be.undefined &&
expect(readyReplicas, "Number of ready replicas").to.deep.equal(replicas) &&
[-] // expect(availableReplicas, "Number of available replicas").to.deep.equal(replicas) &&
expect(updatedReplicas, "Number of up-to-date replicas").to.deep.equal(replicas)
done()
}).catch(done)
})
Following the installation instructions here:
helm install istio-ingress istio/gateway -n istio-ingress --set labels.app=istio-ingress --set labels.istio=ingressgateway --wait
The Istio ingress gateway is deployed into namespace "istio-ingress" and is named "istio-ingress", not "istio-ingressgateway", but it has the label "istio=ingressgateway".
So changing the namespace and deployment name fixes this test:
it("istio-ingressgateway deployment is running", function (done) {
[*] k8sAppsAPI.readNamespacedDeploymentStatus('istio-ingress', 'istio-ingress').then((res) => {
...
The installation instructions here do not deploy the egress gateway, so it is not present; a sketch for deploying one follows.
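If we want that test to pass, the same istio/gateway chart can in principle deploy an egress gateway as well; a sketch (names and labels chosen to match what the CTK looks for, untested):

kubectl create namespace istio-egress
kubectl label namespace istio-egress istio-injection=enabled
helm install istio-egressgateway istio/gateway -n istio-egress \
  --set service.type=ClusterIP \
  --set labels.istio=egressgateway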
Description
For the end of sprint 2, we will document supported versions of K8s that the canvas has been tested with. We need to test the install of the canvas on the full range of Kubernetes versions, and on as many different environments as possible.
Ideally, we would include in the documentation a walkthrough of installing on different environments with screenshots.
The Kubernetes versions are v1.22 through v1.24. Possible test environments are:
Feel free to add any other k8s cluster implementations you would like us to test against in the comments below. Please comment on this issue if you have tested against a particular deployment and I'll update the table above.