tylerfanelli opened 10 months ago
Hmm, I have only used run-local.sh for tests. However, looking at the code, I suspect an extra config step is needed if a cluster deployed with run-local.sh is to be used afterwards. See the following line - https://github.com/confidential-containers/operator/blob/main/tests/e2e/run-local.sh#L95
export KUBECONFIG=/etc/kubernetes/admin.conf
You can try setting this explicitly and running a basic kubectl command to verify:
kubectl get nodes
If you hit any permission issues, you can copy the kubeconfig to $HOME/.kube/config and change its owner before running kubectl commands.
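The check-then-fallback flow described above can be sketched as a small shell helper. This is hypothetical, not part of the repo, and assumes the kubeadm-style layout that run-local.sh produces:

```shell
# kubeconfig_usable: succeeds when the given kubeconfig file exists and is
# readable by the current user (hypothetical helper, for illustration only).
kubeconfig_usable() {
    [ -r "$1" ]
}

if kubeconfig_usable /etc/kubernetes/admin.conf; then
    # Readable directly: point kubectl at it.
    export KUBECONFIG=/etc/kubernetes/admin.conf
else
    # Otherwise copy it under $HOME and change the owner, as suggested above.
    echo "admin.conf not readable; copy it to \$HOME/.kube/config and chown it"
fi
```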
@wainersm I think you are active user of run-local.sh :-). Any insights?
I spun up an env with run-local.sh, and you can use either of the following approaches to work with the cluster. Note that when using run-local.sh, you don't need to install the operator again; run-local.sh already sets up everything based on the latest code. I'll create a PR to make this explicit in the readme.
sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf <cmds>
or
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then you can run kubectl as a regular user:
kubectl <cmds>
Complete examples:
$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
NAME STATUS ROLES AGE VERSION
fedora39 Ready control-plane 19m v1.24.0
$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
confidential-containers-system cc-operator-controller-manager-ccbbcfdf7-h9j4n 2/2 Running 0 8m17s
confidential-containers-system cc-operator-daemon-install-psqpd 1/1 Running 2 (7m54s ago) 7m59s
confidential-containers-system cc-operator-pre-install-daemon-c6rvl 1/1 Running 0 8m4s
kube-flannel kube-flannel-ds-fz495 1/1 Running 0 18m
kube-system coredns-6d4b75cb6d-chsqz 1/1 Running 0 18m
kube-system coredns-6d4b75cb6d-hnzqb 1/1 Running 0 18m
kube-system etcd-fedora39 1/1 Running 0 19m
kube-system kube-apiserver-fedora39 1/1 Running 0 19m
kube-system kube-controller-manager-fedora39 1/1 Running 0 19m
kube-system kube-proxy-l627b 1/1 Running 0 18m
kube-system kube-scheduler-fedora39 1/1 Running 0 19m
or after copying the kubeconfig file to $HOME/.kube/config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
fedora39 Ready control-plane 19m v1.24.0
@tylerfanelli for now you can use the following to just deploy the cluster using the helper scripts in the operator repo.
Assuming you are in "$HOME/operator/tests/e2e", running the following will set up the cluster:
ansible-playbook -i localhost, -c local --tags untagged ansible/main.yml
export "PATH=$PATH:/usr/local/bin"
sudo -E PATH="$PATH" bash -c './cluster/up.sh'
On successful cluster setup, you'll see instructions for setting up kubeconfig, i.e.
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
There is a good explanation of run-local.sh and the kubeconfig setup in the operator development guide.
@fitzthum @wainersm, for subsequent releases, I think it would be good to clarify the usage of run-local.sh or remove it from the quickstart altogether to avoid confusion.
Yeah, maybe we should add a note about
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
to the quickstart guide. I think we mention it somewhere else.
@fitzthum Patches are welcome; I have kcli-related recommendations pending as well.
Describe the bug
Once tests/e2e/run-local.sh is run, the next step in the quickstart guide is to deploy the operator. This fails immediately with:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
To Reproduce
Is there extra kubectl configuration that needs to be done? @bpradipt @wainersm
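For context on the error itself: kubectl falls back to its historical insecure default, localhost:8080, when it finds no kubeconfig at all, which is why the quickstart fails before admin.conf is wired up. A sketch of the lookup order (the helper name is made up for illustration):

```shell
# explain_kubectl_default: report which kubeconfig kubectl would load,
# mirroring its lookup order ($KUBECONFIG, then $HOME/.kube/config, then
# the insecure localhost:8080 fallback). Hypothetical helper.
explain_kubectl_default() {
    if [ -n "${KUBECONFIG:-}" ]; then
        echo "KUBECONFIG is set to: $KUBECONFIG"
    elif [ -r "$HOME/.kube/config" ]; then
        echo "using default $HOME/.kube/config"
    else
        echo "no kubeconfig found: kubectl falls back to localhost:8080"
    fi
}

explain_kubectl_default
```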