confidential-containers / operator

Operator to deploy confidential containers runtime
Apache License 2.0

tests: kubectl misconfiguration after `run-local.sh` #328

Open tylerfanelli opened 8 months ago

tylerfanelli commented 8 months ago

Describe the bug

Once tests/e2e/run-local.sh is run, the next step in the quickstart guide is to deploy the operator. This fails immediately with The connection to the server localhost:8080 was refused - did you specify the right host or port?

To Reproduce

$ git clone https://github.com/confidential-containers/operator.git
$ cd operator/tests/e2e
$ ./run-local.sh -r kata-qemu-snp

$ docker images

REPOSITORY                   TAG                    IMAGE ID       CREATED         SIZE
localhost:5000/cc-operator   latest                 48f322b96469   39 hours ago    54.2MB

$ kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.8.0
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Is there extra kubectl configuration that needs to be done?
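For context, a hedged diagnostic sketch (not from the operator repo): kubectl reads the file named by $KUBECONFIG if set, otherwise $HOME/.kube/config, and with neither present it falls back to localhost:8080, which produces exactly the "connection refused" error above. A quick way to see which case applies:

```shell
# Hedged sketch: report which kubeconfig (if any) kubectl will pick up.
# With no kubeconfig at all, kubectl tries localhost:8080 and fails with
# the "connection refused" error shown above.
kubeconfig_status() {
    if [ -n "${KUBECONFIG:-}" ]; then
        echo "KUBECONFIG is set: $KUBECONFIG"
    elif [ -r "$HOME/.kube/config" ]; then
        echo "using default kubeconfig: $HOME/.kube/config"
    else
        echo "no kubeconfig found; kubectl will fall back to localhost:8080"
    fi
}

kubeconfig_status
```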

@bpradipt @wainersm

bpradipt commented 8 months ago

Hmm, I have used run-local.sh only for tests. However, looking at the code, I suspect there needs to be an extra config step if you want to use the cluster deployed by run-local.sh. See the following line - https://github.com/confidential-containers/operator/blob/main/tests/e2e/run-local.sh#L95

export KUBECONFIG=/etc/kubernetes/admin.conf

You can try setting this explicitly and running a basic kubectl command to verify:

kubectl get nodes

If you hit any permissions issue, you can copy the kubeconfig to $HOME/.kube/config and change its owner before running kubectl commands.

@wainersm I think you are an active user of run-local.sh :-). Any insights?

bpradipt commented 8 months ago

I spun up an env with run-local.sh, and you can use either of the following approaches to work with the cluster. Note that when using run-local.sh, you don't need to install the operator again: run-local.sh already sets up everything based on the latest code. I'll create a PR to make this explicit in the readme.

sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf <cmds>

or

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then you can run kubectl as a regular user

kubectl <cmds>

Complete examples

$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
NAME       STATUS   ROLES           AGE   VERSION
fedora39   Ready    control-plane   19m   v1.24.0

$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -A
NAMESPACE                        NAME                                             READY   STATUS    RESTARTS        AGE
confidential-containers-system   cc-operator-controller-manager-ccbbcfdf7-h9j4n   2/2     Running   0               8m17s
confidential-containers-system   cc-operator-daemon-install-psqpd                 1/1     Running   2 (7m54s ago)   7m59s
confidential-containers-system   cc-operator-pre-install-daemon-c6rvl             1/1     Running   0               8m4s
kube-flannel                     kube-flannel-ds-fz495                            1/1     Running   0               18m
kube-system                      coredns-6d4b75cb6d-chsqz                         1/1     Running   0               18m
kube-system                      coredns-6d4b75cb6d-hnzqb                         1/1     Running   0               18m
kube-system                      etcd-fedora39                                    1/1     Running   0               19m
kube-system                      kube-apiserver-fedora39                          1/1     Running   0               19m
kube-system                      kube-controller-manager-fedora39                 1/1     Running   0               19m
kube-system                      kube-proxy-l627b                                 1/1     Running   0               18m
kube-system                      kube-scheduler-fedora39                          1/1     Running   0               19m

or after copying the kubeconfig file to $HOME/.kube/config

$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
fedora39   Ready    control-plane   19m   v1.24.0

bpradipt commented 8 months ago

@tylerfanelli for now you can use the following to deploy just the cluster using the helper scripts in the operator repo.

Assuming you are in "$HOME/operator/tests/e2e", running the following will set up the cluster:

ansible-playbook -i localhost, -c local --tags untagged ansible/main.yml
export "PATH=$PATH:/usr/local/bin"
sudo -E PATH="$PATH" bash -c './cluster/up.sh'

On successful cluster setup, you'll see the instructions to set up the kubeconfig, i.e.

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

There is a good explanation of run-local.sh and the kubeconfig setup in the operator development guide.
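Putting the steps above together, the full sequence from a fresh checkout looks roughly like this. This is a sketch under the same assumptions as above (a host with ansible and sudo, checkout at $HOME/operator), not something to run blindly:

```shell
# Sketch only: provision the cluster, then make kubectl usable as a
# regular user. Paths and commands as given in the instructions above.
cd "$HOME/operator/tests/e2e"
ansible-playbook -i localhost, -c local --tags untagged ansible/main.yml
export PATH="$PATH:/usr/local/bin"
sudo -E PATH="$PATH" bash -c './cluster/up.sh'

# Apply the kubeconfig instructions printed on successful setup.
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Verify as a regular user; this should list the control-plane node.
kubectl get nodes
```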

@fitzthum @wainersm, for subsequent releases, I think it would be good to clarify the usage of run-local.sh, or remove it altogether from the quickstart, to avoid confusion.

fitzthum commented 8 months ago

Yeah, maybe we should add a note about

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

to the quickstart guide. I think we mention it somewhere else.

ldoktor commented 8 months ago

@fitzthum Patches are welcome, I have kcli related recommendations pending as well.