Schille opened 1 year ago
@schwobaseggl Here's my PoC with k3d.
1) Create cluster
k3d cluster create mycluster --agents 1 -p 8080:80@agent:0 -p 31820:31820/UDP@agent:0
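To verify the cluster is up (an optional check, not part of the original steps; by default k3d switches the kubectl context to k3d-mycluster automatically):
kubectl get nodes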
2) Run a Pod with Ubuntu in mycluster
apiVersion: v1
kind: Pod
metadata:
name: ubuntu
spec:
containers:
- name: ubuntu
image: ubuntu
command:
- sleep
- infinity
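Assuming the manifest is saved locally (the filename ubuntu-pod.yaml is illustrative):
kubectl apply -f ubuntu-pod.yaml
kubectl wait --for=condition=Ready pod/ubuntu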
3) Retrieve Kubernetes certificate and SA token from ubuntu
cd /var
tar -czf sa.tar.gz run/secrets/kubernetes.io
kubectl cp ubuntu:var/sa.tar.gz ./sa.tar.gz
Install curl in the Pod (it is needed for the API calls below):
apt update && apt install -y curl
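The in-Pod commands above can be run via kubectl exec (assumed here; the original steps don't show how the shell is obtained):
kubectl exec -it ubuntu -- bash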
4) Setup a Gefyra container
gefyra up
gefyra run --image ubuntu --name ubuntu --rm -c "sleep infinity"
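To confirm the local Gefyra container is running (optional):
docker ps --filter name=ubuntu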
5) Copy the Kubernetes certificate and SA token from the current directory on the host to the Gefyra container
docker cp sa.tar.gz ubuntu:var/sa.tar.gz
6) Prepare the Gefyra container with curl and the copied data
docker exec -it ubuntu bash
cd /var
tar -xzvf sa.tar.gz
apt update && apt install -y curl
In both shells (the Pod running in k3d and the local Gefyra container), this should work:
APISERVER=https://kubernetes
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
# Explore the API with TOKEN
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "172.20.0.2:6443"
}
]
}
Here's the PoC for a specific ServiceAccount (when given with gefyra run --sa mysa) after setting everything up from above.
1) Create a service account, ClusterRole, and ClusterRoleBinding
apiVersion: v1
kind: ServiceAccount
metadata:
name: mysa
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-manager
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: manage-pods
subjects:
- kind: ServiceAccount
name: mysa
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-manager
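Assuming the manifest is saved locally (rbac.yaml is an illustrative name):
kubectl apply -f rbac.yaml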
2) Retrieve ServiceAccount token
SECRET=$(kubectl get serviceaccount mysa -o json | jq -Mr '.secrets[].name | select(contains("token"))')
TOKEN=$(kubectl get secret ${SECRET} -o json | jq -Mr '.data.token' | base64 -d)
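Note: this Secret-based lookup relies on auto-created ServiceAccount token Secrets, which k3s 1.23 still provides. On Kubernetes 1.24 and later, .secrets is typically empty and a short-lived token can be requested instead:
TOKEN=$(kubectl create token mysa)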
3) Put the token into the local Gefyra container
docker exec ubuntu bash -c "echo '$TOKEN' > /var/run/secrets/kubernetes.io/serviceaccount/token"
4) curl the API server from the local Gefyra container (set APISERVER, SERVICEACCOUNT and CACERT again in this new shell, as above)
docker exec -it ubuntu bash
TOKEN=$(cat ${SERVICEACCOUNT}/token)
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/default/pods/
{ "kind": "PodList", "apiVersion": "v1", "metadata": { "resourceVersion": "2498" }, "items": [ { "metadata": { "name": "ubuntu", "namespace": "default", "uid": "4cd343e6-7854-4d63-a21c-c4efe479341b", "resourceVersion": "877", "creationTimestamp": "2023-03-17T16:26:40Z", "annotations": { [...] "startTime": "2023-03-17T16:26:40Z", "containerStatuses": [ { "name": "ubuntu", "state": { "running": { "startedAt": "2023-03-17T16:27:09Z" } }, "lastState": {
},
"ready": true,
"restartCount": 0,
"image": "docker.io/library/ubuntu:latest",
"imageID": "docker.io/library/ubuntu@sha256:67211c14fa74f070d27cc59d69a7fa9aeff8e28ea118ef3babc295a0428a6d21",
"containerID": "containerd://5c7bae9481439bd5e35f08caf7446e47b0f30a963ec06cc5d66f9c445a5b919a",
"started": true
}
],
"qosClass": "BestEffort"
}
}
] }
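An optional extra check from the host that the RBAC binding works (not part of the original steps):
kubectl auth can-i list pods --as=system:serviceaccount:default:mysa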
I'd say, for getting the env from a Gefyra phantom Pod in the cluster during gefyra run, we should create a busybox Pod:
apiVersion: v1
kind: Pod
metadata:
name: my-phantom-run-1
spec:
containers:
- name: busybox
image: busybox
command:
- sleep
- infinity
And once the Pod is running:
kubectl exec my-phantom-run-1 -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=my-phantom-run-1
KUBERNETES_SERVICE_HOST=10.43.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.43.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
HOME=/root
While copying the environment to the local Gefyra container, we should modify the HOSTNAME
value to match the local container's name.
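A minimal sketch of that copy step, assuming kubectl exec for reading the env and docker run for the local side (the names my-phantom-run-1 and local-ubuntu are illustrative):
# dump the phantom Pod's environment, dropping HOSTNAME
kubectl exec my-phantom-run-1 -- env | grep -v '^HOSTNAME=' > phantom.env
# start the local container with that env file; Docker derives the HOSTNAME
# variable from --hostname, so it matches the local container's name
docker run -d --name local-ubuntu --hostname local-ubuntu --env-file phantom.env ubuntu sleep infinity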
@Schille Reproduction with the exact steps above.
Environment:
- Ubuntu 20.04 LTS (amd64)
- Gefyra 1.0.4
- Docker Engine 23.0.1 (API version 1.42, Go 1.19.5, containerd 1.6.18, runc 1.1.4, docker-init 0.19.0)
- k3d 5.4.3 (k3s 1.23.6)
- kubectl client 1.24.1
1. Create cluster
k3d cluster create mycluster --agents 1 -p 8080:80@agent:0 -p 31820:31820/UDP@agent:0
Cluster spins up without issues
2. Run Pod with the exact config from your comment
$ kubectl apply -f po.yaml
$ kubectl get po
NAME READY STATUS RESTARTS AGE
ubuntu 1/1 Running 0 53s
All good
... all steps through 5. with no issues
6. API curl. In the k3d Pod, the request succeeds:
{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "172.21.0.2:6443"
}
]
}
In the Gefyra container, the same request fails:
curl: (60) SSL certificate problem: self-signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.
7. API curl with dedicated service account
Same symptoms: in the k3d Pod I get the PodList; in the Gefyra Docker container I get the error from above.
8. One notable difference seems to be that the API server sends different certificates to the Pod and the container, respectively:
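The session dumps below look like openssl s_client output; they can presumably be reproduced with something like the following (assumed, the exact invocation is not shown):
openssl s_client -connect kubernetes:443 -showcerts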
In the k3d Pod:
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_128_GCM_SHA256
    Session-ID: 7BADB03C1BA4B4A861429A86A1BA5191CFDD3F8AE91EBB96F2C50F37905F3AE0
    Session-ID-ctx:
    Resumption PSK: FB38CB1E6181712B29AB55A159283B5E4AFA06F8011C4E2321424A1D5BCC7DD0
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 604800 (seconds)
    TLS session ticket:
    0000 - fc a6 ec 25 45 77 5f 32-62 b8 92 45 7a 7c 6e 36   ...%Ew_2b..Ez|n6
    0010 - ae 1c 1f 27 d9 52 ea ab-be 6a 9f 92 90 59 3b 69   ...'.R...j...Y;i
    0020 - 9a 41 5b 87 38 c4 aa 75-55 fa ea 47 2c 27 64 9b   .A[.8..uU..G,'d.
    0030 - 49 ba e9 03 1a ce 0d 74-fe b4 48 d8 50 2e 33 0d   I......t..H.P.3.
    0040 - 6f 2f 0e cc a8 0d 33 c4-a6 fb c1 ec 58 63 5b 5e   o/....3.....Xc[^
    0050 - 3c f4 12 57 de 61 e9 d6-46 3c 5f f2 fc d7 63 f9   <..W.a..F<_...c.
    0060 - ae 6d 7c de 86 ac 76 69-09 a1 40 2e fc 15 4b 62   .m|...vi..@...Kb
    0070 - 1f                                                .

    Start Time: 1679653722
    Timeout   : 7200 (sec)
    Verify return code: 21 (unable to verify the first certificate)
    Extended master secret: no
    Max Early Data: 0
read R BLOCK
DONE
In the Gefyra Docker container:
Post-Handshake New Session Ticket arrived:
SSL-Session:
    Protocol  : TLSv1.3
    Cipher    : TLS_AES_128_GCM_SHA256
    Session-ID: CA378D595E4D76C414C28123694911AF76E0070CC228DAE3AA60F64F70E05FE9
    Session-ID-ctx:
    Resumption PSK: 4CA4356C57C0A29F9367CAED2067A61EA16ACD211A827EA6978F04F3AB39FB44
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 604800 (seconds)
    TLS session ticket:
    0000 - a5 31 e1 7c 09 37 6a 8b-92 3c 3c 7c 27 94 8b d6   .1.|.7j..<<|'...
    0010 - 5e 51 96 4d 67 6b b3 c7-36 f1 da c5 83 fe 2c c5   ^Q.Mgk..6.....,.
    0020 - bd c9 a9 93 b7 e6 cb 2c-aa fa 38 19 59 d8 66 57   .......,..8.Y.fW
    0030 - 4f da 79 85 3c 3e 0b fc-7e cb 35 ba 5a a9 92 4a   O.y.<>..~.5.Z..J
    0040 - 54 f2 23 e0 ab e1 f5 a7-42 93 25 d9 41 3f 06 70   T.#.....B.%.A?.p
    0050 - c6 10 7f 53 cc 23 f5 b1-f2 13 f9 91 84 bf 01 8f   ...S.#..........
    0060 - 9f 83 c8 9a 90 fb d0 61-20 bf 6c f7 20 23 7a e2   .......a .l. #z.
    0070 - e8                                                .

    Start Time: 1679653803
    Timeout   : 7200 (sec)
    Verify return code: 19 (self-signed certificate in certificate chain)
    Extended master secret: no
    Max Early Data: 0
read R BLOCK
DONE
@Schille With the new laptop and a clean environment, it now works for me, too. Shall we keep this shelved or move forward with implementing it along these lines?
What is the new feature about?
At the least, these two environment variables are required in order to connect a locally running container with the remote K8s API through an internal path:
- KUBERNETES_SERVICE_HOST - the API server IP
- KUBERNETES_SERVICE_PORT - the API server port

In addition, it would be nice to be able to get/assign a ServiceAccount for the local container; see the sketch after this list.
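For illustration, with those two variables set and the ServiceAccount files in place, the standard in-cluster pattern from the PoC above works unchanged (a sketch, not Gefyra's implementation):
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert ${SA}/ca.crt \
  --header "Authorization: Bearer $(cat ${SA}/token)" \
  "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api"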
Why would such a feature be important to you?
When writing applications that communicate with the K8s API server, it is important to make the API server address available to the locally running container.
Anything else we need to know?
No response