kuzaxak closed this issue 2 years ago
Sorry for the misleading report; after running vcluster in dev mode I got the correct error from the HTTP requester:
```
HTTP/1.1 403 Forbidden
Cache-Control: no-cache, private
Content-Type: application/json
X-Content-Type-Options: nosniff
Date: Sun, 19 Dec 2021 12:34:53 GMT
Content-Length: 415

{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"ip-10-67-49-115.eu-west-1.compute.internal\" is forbidden: User \"system:serviceaccount:t-509a3300-567c-4139-abd2-242c09d9bcd5:vc-kube\" cannot create resource \"nodes/proxy\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"name":"ip-10-67-49-115.eu-west-1.compute.internal","kind":"nodes"},"code":403}
```
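For reference, the permission the 403 above complains about corresponds to an RBAC rule like the following. This is only an illustrative sketch; the ClusterRole name is made up, and vcluster's actual chart may grant this differently:

```yaml
# Illustrative only: the rule the 403 above says is missing for the
# vc-kube service account. Name and verbs are assumptions, not chart values.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vc-kube-node-proxy
rules:
- apiGroups: [""]
  resources: ["nodes/proxy"]
  verbs: ["create", "get"]
```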
I think it would be nicer to report that error instead of the hijacking exception.
I reconstructed how it works. From what I found, when you run `kubectl exec pod_name -- cmd`, kubectl issues the following API call to the kube-apiserver:
```
POST https://0.0.0.0:443/api/v1/namespaces/default/pods/test-5f6778868d-gjtt8/exec?command=ls&command=-la&container=nginx&stderr=true&stdout=true
```
The kube-apiserver then tries to reach the kubelet of the fake node where the pod is located with:

```
POST node_ip:10250/exec/default/test-5f6778868d-gjtt8/nginx?command=ls&command=-la&error=1&output=1
```
The `WithFakeKubelet` function redirects it to:

```
100.64.0.1:443/api/v1/nodes/ip-10-67-49-115.eu-west-1.compute.internal/proxy/exec/default/test-5f6778868d-gjtt8/nginx
```
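The rewrite described above is essentially path manipulation; here is a small sketch of it (variable names are mine, the values come from the logs in this thread):

```shell
# Sketch of the fake-kubelet URL rewrite, using names from this report.
NS=default
POD=test-5f6778868d-gjtt8
CONTAINER=nginx
NODE=ip-10-67-49-115.eu-west-1.compute.internal

# Path the kube-apiserver sends to the (fake) kubelet on port 10250:
KUBELET_PATH="/exec/${NS}/${POD}/${CONTAINER}"

# Path it gets rewritten to, going through the host apiserver's node proxy
# subresource -- which is why the nodes/proxy RBAC permission is needed:
PROXY_PATH="/api/v1/nodes/${NODE}/proxy${KUBELET_PATH}"

echo "${PROXY_PATH}"
```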
There it fails with an HTTP 400 error:
```
HTTP/1.1 400 Bad Request
Content-Length: 52
Content-Type: text/plain; charset=utf-8
Date: Sun, 19 Dec 2021 15:01:28 GMT
```
Never mind, I had used the kube-api service instead of the vcluster service as the API endpoint for kubectl.
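A quick way to catch that kind of mix-up is to check which endpoint the kubeconfig's `server:` field points at. A minimal illustration (the kubeconfig below is made up; only the server field matters):

```shell
# Illustration only: write a dummy kubeconfig and inspect its server endpoint.
# In the real case, this must be the vcluster service, not the host kube-api.
cat > /tmp/demo-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: my-vcluster
  cluster:
    server: https://localhost:8443
EOF

grep 'server:' /tmp/demo-kubeconfig
```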
Thank you, I re-downloaded v0.5.0-beta.0 and used the following command to create the vcluster:

```
vcluster create my-vcluster -n my-vcluster --distro k8s --kubernetes-version v1.20.13
```
It worked, but a small question: vcluster used `k8s.gcr.io/kube-apiserver:v1.20.12`, not v1.20.13.
When I try to run `kubectl exec` against a vcluster pod, I receive an error. In the vcluster logs:
I'm not quite sure whether this is expected behaviour or not. Background info:
Version: `loftsh/vcluster:0.5.0-beta.0`
Distro: k8s
Arguments for vcluster: