Describe the bug

Unless you have the latest KubeFlex version, the getting-started script will not work and you will get an error like this:

"Error: unknown flag: --set-current-for-hosting
Usage:
kflex ctx [flags]"
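The root cause is the installed kflex CLI: the --set-current-for-hosting flag is only understood by newer KubeFlex releases, so the v0.6.3 binary shown in the pre-req check below rejects it. A quick way to confirm whether your CLI is affected (a minimal sketch; kflex version appears to be the same command the script's pre-req check runs):

# Show which kflex binary is on the PATH and its version; v0.6.3,
# as in the log below, predates the --set-current-for-hosting flag.
which kflex
kflex version

# List the flags this kflex's ctx subcommand actually accepts.
kflex ctx --help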
Steps To Reproduce
bryan@Bryans-MacBook-Pro ~ % bash <(curl -s https://raw.githubusercontent.com/clubanderson/kubestellar/refs/heads/main/scripts/create-kubestellar-demo-env.sh)
KubeStellar Version: 0.25.0-rc.1
Checking that pre-req softwares are installed...
Checking pre-requisites for KubeStellar:
✔ KubeFlex
version: Kubeflex version: v0.6.3.672cc8a 2024-09-23T16:15:47Z
path: /opt/homebrew/bin/kflex
✔ OCM CLI
version: client version :v0.9.0-0-g56e1fc8
server release version :v1.31.0
default bundle version :0.14.0
path: /usr/local/bin/clusteradm
✔ Helm
version: version.BuildInfo{Version:"v3.16.1", GitCommit:"5a5449dc42be07001fd5771d56429132984ab3ab", GitTreeState:"dirty", GoVersion:"go1.23.1"}
path: /opt/homebrew/bin/helm
✔ kubectl
version:
path: /opt/homebrew/bin/kubectl
✔ Docker
version: Docker version 27.2.0, build 3ab4256958
path: /opt/homebrew/bin/docker
✔ Kind
version: kind v0.24.0 go1.22.6 darwin/arm64
path: /opt/homebrew/bin/kind
Starting environment clean up...
Starting cluster clean up...
Cluster space clean up has been completed
Starting context clean up...
Deleting cluster1 context...
deleted context cluster1 from /Users/bryan/.kube/config
Deleting cluster2 context...
deleted context cluster2 from /Users/bryan/.kube/config
Deleting kind-kubeflex context...
warning: this removed your active context, use "kubectl config use-context" to select a different one
deleted context kind-kubeflex from /Users/bryan/.kube/config
Context space clean up completed
Starting the process to install KubeStellar core: kind-kubeflex...
Creating cluster cluster1...
Creating cluster cluster2...
cluster1 creation and context setup complete
Creating KubeFlex cluster with SSL Passthrough
Creating "kubeflex" kind cluster with SSL passthrougn and 9443 port mapping...
Creating cluster "kubeflex" ...
✓ Ensuring node image (kindest/node:v1.31.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kubeflex"
You can now use your cluster with:
kubectl cluster-info --context kind-kubeflex
Thanks for using kind! 😊
Installing an nginx ingress...
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
Pathcing nginx ingress to enable SSL passthrough...
deployment.apps/ingress-nginx-controller patched
Waiting for nginx ingress with SSL passthrough to be ready...
pod/ingress-nginx-controller-5c97fbc9bc-v58vq condition met
Setting context to "kind-kubeflex"...
Switched to context "kind-kubeflex".
Completed KubeFlex cluster with SSL Passthrough
Pulling container images local...
Release "ks-core" does not exist. Installing it now.
0.16.4: Pulling from loft-sh/vcluster
Digest: sha256:84f70425f4dd64a85d4d904c5bdf9e71da79a91068136e4053fb0b54eb068ebb
Status: Image is up to date for ghcr.io/loft-sh/vcluster:0.16.4
ghcr.io/loft-sh/vcluster:0.16.4
Pulled: ghcr.io/kubestellar/kubestellar/core-chart:0.25.0-rc.1
Digest: sha256:7b50ef982b212a94a77019fc6e72ee56bd55c960490b27a813ab00c60dcb1f67
What's next:
View a summary of image vulnerabilities and recommendations → docker scout quickview ghcr.io/loft-sh/vcluster:0.16.4
v0.13.2: Pulling from open-cluster-management/registration-operator
Digest: sha256:f40dd5941772a2602ed008c7cc221db56eef0a1ea0461f1f4945ec57ad8b68ea
Status: Image is up to date for quay.io/open-cluster-management/registration-operator:v0.13.2
quay.io/open-cluster-management/registration-operator:v0.13.2
v1.27.2-k3s1: Pulling from rancher/k3s
Digest: sha256:66d13a1d6f92c7aa41f7734d5e97526a868484071d7467feb69dd868ad653254
Status: Image is up to date for rancher/k3s:v1.27.2-k3s1
docker.io/rancher/k3s:v1.27.2-k3s1
16.0.0-debian-11-r13: Pulling from bitnami/postgresql
Digest: sha256:3331ad89ba2d1af68e36521724440638be3834978ac8288c49e54929357143e6
Status: Image is up to date for bitnami/postgresql:16.0.0-debian-11-r13
docker.io/bitnami/postgresql:16.0.0-debian-11-r13
Image: "ghcr.io/loft-sh/vcluster:0.16.4" with ID "sha256:00428133d55e8b3f1b699f390bc4f0dd79e3a2635aad1d0604f6c8df09803166" not yet present on node "kubeflex-control-plane", loading...
What's next:
View a summary of image vulnerabilities and recommendations → docker scout quickview quay.io/open-cluster-management/registration-operator:v0.13.2
What's next:
View a summary of image vulnerabilities and recommendations → docker scout quickview rancher/k3s:v1.27.2-k3s1
What's next:
View a summary of image vulnerabilities and recommendations → docker scout quickview docker.io/bitnami/postgresql:16.0.0-debian-11-r13
Image: "quay.io/open-cluster-management/registration-operator:v0.13.2" with ID "sha256:26dca6bf6f10501533801d18f6fda2c06c596e39d0567bf67eed78cd4ee396d4" not yet present on node "kubeflex-control-plane", loading...
Image: "docker.io/bitnami/postgresql:16.0.0-debian-11-r13" with ID "sha256:bdc29c2220aa7d3d9ced3674fb26e23c03d5db2f73916efbec9d0be83b905c6d" not yet present on node "kubeflex-control-plane", loading...
Image: "rancher/k3s:v1.27.2-k3s1" with ID "sha256:bcff597c12474a57fdf706694224ddb5eb5b3941163bd24c24a3681962aa5dd6" not yet present on node "kubeflex-control-plane", loading...
NAME: ks-core
LAST DEPLOYED: Mon Oct 28 16:50:16 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
For your convenience you will probably want to add contexts to your
kubeconfig named after the non-host-type control planes (WDSes and
ITSes) that you just created (a host-type control plane is just an
alias for the KubeFlex hosting cluster). You can do that with the
following kflex commands; each creates a context and makes it the
current one. See
https://github.com/kubestellar/kubestellar/blob/main/docs/content/direct/core-chart.md#kubeconfig-files-and-contexts-for-control-planes
(replace "main" with "{{ .Values.KUBESTELLAR_VERSION }}" when
making the next release) for a way to do this without using kflex.

kubectl config delete-context its1 || true
kflex ctx its1
kubectl config delete-context wds1 || true
kflex ctx wds1
Finally you can use kflex ctx to switch back to the kubeconfig
context for your KubeFlex hosting cluster.
Error: unknown flag: --set-current-for-hosting
Usage:
kflex ctx [flags]
Flags:
-s, --chatty-status chatty status indicator (default true)
-h, --help help for ctx
-k, --kubeconfig string path to kubeconfig file
-v, --verbosity int log level
unknown flag: --set-current-for-hosting
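Until the script checks for a compatible KubeFlex version, a manual workaround is to finish the context setup yourself with the plain ctx subcommand, which the older kflex does support (a sketch based on the commands in the chart NOTES above; alternatively, upgrade kflex to the latest release and re-run the script):

# Create the WDS/ITS contexts without the unsupported flag; each
# "kflex ctx <name>" creates the context and makes it current.
kubectl config delete-context wds1 || true
kflex ctx wds1
kubectl config delete-context its1 || true
kflex ctx its1

# Switch back to the KubeFlex hosting cluster context when done.
kubectl config use-context kind-kubeflex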
Expected Behavior
With the latest KubeFlex version, you should instead get this:
"Finally you can use kflex ctx to switch back to the kubeconfig
context for your KubeFlex hosting cluster.
✔ Checking for saved hosting cluster context...
✔ Switching to hosting cluster context...
trying to load new context wds1 from server...
✔ Overwriting existing context for control plane
✔ Switching to context wds1...
✔ Overwriting existing context for control plane
trying to load new context its1 from server...
✔ Switching to context its1..."
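For reference, the call the old kflex rejects is presumably of the form below (a hypothetical reconstruction from the error message and the expected output above; the exact invocation lives in create-kubestellar-demo-env.sh):

# Newer kflex releases accept this flag to save/restore the hosting
# cluster context; v0.6.3 does not, producing the error above.
kflex ctx --set-current-for-hosting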
Additional Context

No response