Closed cedws closed 1 year ago
Wrote a quick test program and can't reproduce the same effect, so it looks like a bug in k9s. Closing.
```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"time"

	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "", "path to the kubeconfig file")
	flag.Parse()

	// Use the current context in kubeconfig
	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}

	// Create a new Kubernetes client
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	for range time.Tick(1 * time.Second) {
		// Retrieve the list of pods in the kube-system namespace
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), v1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}

		// Print the name of each pod
		for _, pod := range pods.Items {
			fmt.Println(pod.Name)
		}
	}
}
```
In our corporate environment, we're only able to access our clusters via an HTTP proxy. We set the `proxy-url` field in our kubeconfigs to do this. It looks something like this:

I've noticed that interacting with the cluster through k9s is much slower when this proxy is in use compared to hitting the API server directly.
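For reference, a `proxy-url` stanza sits in the cluster entry of a kubeconfig and looks roughly like this (server and proxy addresses are made up):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: corp
    cluster:
      server: https://cluster.example:6443 # made-up API server
      proxy-url: http://proxy.example:3128 # made-up HTTP proxy
```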
I did a test on my personal machine with just Rancher Desktop and a basic HTTP CONNECT proxy from a gist.
If `proxy-url` is unset, connections are reused as expected and new connections are not spawned by actions in k9s.

When `proxy-url` is in use, I can see in Wireshark that significantly more TCP connections are opened from the client, as if they aren't being reused at all. Most actions in k9s, like describing a pod, open a new connection. The connections are also left open for a long time.

I'm assuming this is a bug in client-go, but it could also be in k9s — not sure yet. I would like to write a quick test program that retrieves pods or something, since kubectl doesn't run for long enough to demonstrate whether TCP connection pooling is working properly.