It was brought up in another issue that Deno's WebSocket client does not support attaching arbitrary HTTP headers such as Authorization when opening a connection: https://github.com/denoland/deno/discussions/9891#discussioncomment-1010206

I also suspect that WebSocket doesn't allow passing a Deno.HttpClient, which complicates HTTPS validation. So it's not clear how Deno would be able to connect to basically any Kubernetes control plane with a WebSocket.
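For reference, a minimal sketch of the gap, assuming the standard in-cluster DNS name (the pod name is made up): the WebSocket constructor only accepts a URL and an optional list of subprotocols, so there is nowhere to put an Authorization header or a Deno.HttpClient.

```ts
// Sketch of the limitation: the WebSocket constructor takes (url, protocols?) and
// nothing else, so there is no option for an Authorization header or a Deno.HttpClient
// carrying the cluster's CA bundle.
const execUrl =
  "wss://kubernetes.default.svc/api/v1/namespaces/default/pods/example/exec" +
  "?command=sh&stdin=true&stdout=true";

// The control plane expects `Authorization: Bearer <service-account token>` on the
// upgrade request, but there is nowhere to attach it here:
const ws = new WebSocket(execUrl);
ws.onerror = (err) => console.error("upgrade rejected:", err);
```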
Discord says that WebSocketStream could be in Deno 1.13 and allow for Deno.HttpClient and/or custom headers.
Regarding kubectl proxy and WebSocket:
$ kubectl proxy --help | grep exec
--reject-paths='^/api/.*/pods/.*/exec,^/api/.*/pods/.*/attach': Regular expression for paths that the proxy should
reject. Paths specified here will be rejected even accepted by --accept-paths.
The pod/exec and pod/attach APIs are explicitly blocked by default by kubectl proxy. If you pass a different regex then you'll be able to open a pod/exec WebSocket just fine through kubectl proxy.
Given that Deno cannot do WebSocket with an InClusterConfig / token auth until arbitrary ws headers are added, kubectl proxy --reject-paths='^-$' seems like the only way to actually make pod/exec or pod/attach API calls. Perhaps a KubectlProxyRestClient could launch the proxy process and then wrap KubeConfigRestClient.
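A rough sketch of that idea (KubectlProxyRestClient is hypothetical, and the port, readiness wait, and exec path are all illustrative): spawn kubectl proxy with the reject-paths override, then point plain WebSocket calls at the local plaintext port, where no auth headers are needed.

```ts
// Hypothetical KubectlProxyRestClient sketch: lift the exec/attach block via
// --reject-paths, then talk plain HTTP/WebSocket to the local proxy port.
class KubectlProxyRestClient {
  #proc?: Deno.ChildProcess;
  constructor(private port = 8001) {}

  start() {
    this.#proc = new Deno.Command("kubectl", {
      args: ["proxy", `--port=${this.port}`, "--reject-paths=^-$"],
      stdout: "null",
      stderr: "inherit",
    }).spawn();
    // crude readiness wait; a real client would retry until the port answers
    return new Promise((ok) => setTimeout(ok, 1000));
  }

  openExecSocket(namespace: string, pod: string, command: string[]): WebSocket {
    const qs = command.map((c) => `command=${encodeURIComponent(c)}`).join("&");
    const path =
      `/api/v1/namespaces/${namespace}/pods/${pod}/exec?${qs}&stdin=true&stdout=true`;
    // localhost + plaintext, so Deno's header-less WebSocket is enough here
    return new WebSocket(`ws://127.0.0.1:${this.port}${path}`);
  }

  stop() {
    this.#proc?.kill();
  }
}
```

The tradeoff is an extra child process and an unauthenticated local port, but it works with today's WebSocket API.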
~And maybe that should be done over UNIX sockets.~ (oops: deno doesn't do fetch/ws over unix sockets, deno#8821)
So, given that Deno's WebSocket is quite limited and Kubernetes doesn't actually like it (preferring SPDY and eventually h2), it seems best to hide the middleman and support the k8s 'tunnel' primitive directly. The only other reason to open a websocket would be a /proxy/ API, which is really workload-specific and doesn't benefit much from this library. (And maybe the port-forward API; I haven't looked into that one yet.)
editor's note: tunnels are apparently totally different between SPDY and WS, because in WS the streams are tagged with just a uint8, while in SPDY they can be dynamically created/closed and are named with individual headers. So in order to have transport transparency, the library needs to know ahead of time how the streams will be addressed. And it seems like under SPDY, portforward supports dynamically managing concurrent forwards, while under WS, portforward is literally just one pipe per port per entire websocket. Pretty rough all around.
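To make the uint8 tagging concrete, here is a rough demux/mux sketch for the exec/attach channel layout under the WebSocket framing (channel numbers per v4.channel.k8s.io; the console wiring is only for illustration):

```ts
// Each binary WebSocket message is prefixed with one byte naming its stream:
// 0 stdin, 1 stdout, 2 stderr, 3 error/status, 4 tty resize.
async function handleFrame(frame: Uint8Array) {
  const channel = frame[0];
  const payload = frame.subarray(1);
  switch (channel) {
    case 1: await Deno.stdout.write(payload); break;
    case 2: await Deno.stderr.write(payload); break;
    case 3: console.error("status:", new TextDecoder().decode(payload)); break;
  }
}

// Outbound data uses the same tagging: stdin bytes go out on channel 0, and a
// JSON body like {"Width":80,"Height":24} goes out on channel 4 to resize the tty.
function tagFrame(channel: number, data: Uint8Array): Uint8Array {
  const framed = new Uint8Array(data.length + 1);
  framed[0] = channel;
  framed.set(data, 1);
  return framed;
}
```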
So I'll make this issue here more specific to tunnels, which will then enable kubernetes_apis to return tunnels for attach and exec.
Tunnels will also be easier to polyfill with 'kubectl exec' because there's less pretending to do. Unfortunately things like sizing the remote tty won't be possible in the polyfill.
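A sketch of what that polyfill could look like, just shelling out and exposing the child-process pipes (the function name and return shape here are made up):

```ts
// Hypothetical kubectl-exec polyfill: no WebSocket involved, and no channel for
// tty resizing -- just the child process pipes.
function polyfillExec(namespace: string, pod: string, command: string[]) {
  const child = new Deno.Command("kubectl", {
    args: ["exec", "-i", "-n", namespace, pod, "--", ...command],
    stdin: "piped",
    stdout: "piped",
    stderr: "piped",
  }).spawn();
  return {
    stdin: child.stdin,   // WritableStream<Uint8Array>
    stdout: child.stdout, // ReadableStream<Uint8Array>
    stderr: child.stderr, // ReadableStream<Uint8Array>
    status: child.status, // Promise<Deno.CommandStatus>
  };
}
```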
Tunnels seem to be little more than a bundle of channels: stdin and sizing from the client, then stdout/stderr/status from the server. It should be pretty clean to return a bundle of W3C streams, and that will be good preparation for WebSocketStream once Deno ships it with header/httpclient flexibility.
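One possible shape for that bundle, using web streams; the type and field names are illustrative rather than the library's settled API:

```ts
// A tunnel as a bundle of W3C streams: two writable channels from the client,
// two readable channels plus a final status from the server.
interface ExecTunnel {
  stdin: WritableStream<Uint8Array>;
  resize: WritableStream<{ Width: number; Height: number }>; // tty sizing messages
  stdout: ReadableStream<Uint8Array>;
  stderr: ReadableStream<Uint8Array>;
  status: Promise<{ status: string; message?: string }>; // v1.Status from the error channel
}
```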
Turns out that Kubernetes allows for sending bearer-auth tokens via fake WebSocket subprotocols: https://github.com/kubernetes/kubernetes/pull/47740
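That sidesteps the header restriction (though not the Deno.HttpClient/CA question above). A sketch of the mechanism per that PR, with a made-up exec URL: the token is base64url-encoded without padding, prefixed, and offered alongside a real channel protocol.

```ts
// Smuggle the bearer token through Sec-WebSocket-Protocol, as accepted by the
// apiserver since kubernetes/kubernetes#47740.
function bearerProtocols(token: string): string[] {
  const b64 = btoa(token)
    .replaceAll("+", "-")
    .replaceAll("/", "_")
    .replace(/=+$/, ""); // base64url, no padding
  return [
    `base64url.bearer.authorization.k8s.io.${b64}`,
    "v4.channel.k8s.io", // the protocol the server will actually select
  ];
}

const token = await Deno.readTextFile(
  "/var/run/secrets/kubernetes.io/serviceaccount/token",
);
const ws = new WebSocket(
  "wss://kubernetes.default.svc/api/v1/namespaces/default/pods/example/exec" +
    "?command=sh&stdin=true&stdout=true",
  bearerProtocols(token.trim()),
);
```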
Marking this closed since 0.6.0 and 0.7.0 have added a workable amount of tunnel support.
Apparently the pod-exec API is WebSocket-based, so WebSockets need to be usable in order to make that API work. https://github.com/cloudydeno/deno-kubernetes_apis/issues/2
The performRequest() interface allows for request and response streaming, so the primary change here might simply be adding an upgrade flag.

The larger work will be supporting kubectl API emulation, because it will be API-specific. How the websocket and kubectl both behave will impact how easily the websocket can be emulated on a dev laptop. This will likely be a second pass, but it might get bundled into the same version if the pod-exec API goes smoothly.
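To illustrate, a guess at where such a flag could sit; the surrounding option names only approximate the real interface, and expectWebSocket is the hypothetical addition:

```ts
// Hypothetical sketch of an upgrade flag on the request options.
interface RequestOptions {
  method: string;
  path: string;
  querystring?: URLSearchParams;
  expectJson?: boolean;
  expectStream?: boolean;
  expectWebSocket?: boolean; // new: upgrade the connection instead of returning a body
}

interface RestClient {
  performRequest(opts: RequestOptions): Promise<unknown>;
}

// A transport that can upgrade (e.g. through kubectl proxy) would resolve to a
// WebSocket or a tunnel bundle; one that can't would throw before sending anything.
```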