kubeshark / kubeshark

The API traffic analyzer for Kubernetes, providing real-time K8s protocol-level visibility, capturing and monitoring all traffic and payloads going in, out and across containers, pods, nodes and clusters. Inspired by Wireshark, purposely built for Kubernetes.
https://kubeshark.co
Apache License 2.0

Title: Connection Issues and Invalid Reference Errors in Kubeshark #1381

Open modigithub opened 1 year ago

modigithub commented 1 year ago

Deployment on Kubespray Cluster

Description:

I am currently deploying Kubeshark in a Kubespray cluster consisting of 2 worker nodes and 1 master node, running Debian. The installation was done via Helm with the following command:

helm install kubeshark kubeshark/kubeshark -n kubeshark --create-namespace --set tap.release.namespace=kubeshark

Only the necessary ports have been opened, and no special configurations have been made. Despite this, I am encountering issues related to connection refused errors and invalid references.
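When debugging errors like these, a useful first step is to confirm that all Kubeshark components actually came up and to pull their logs. These are generic kubectl commands; the deployment name kubeshark-hub is an assumption based on the service names that appear later in this thread:

```shell
# Check that the hub, front and worker pods are all in the Running state
kubectl get pods -n kubeshark -o wide

# Inspect recent logs of a failing component, e.g. the hub
# ("kubeshark-hub" is an assumed name; adjust to whatever
# "kubectl get deploy -n kubeshark" actually reports)
kubectl logs -n kubeshark deployment/kubeshark-hub --tail=50
```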

Here are some sample log messages illustrating these issues:

Set targeted pods request: error="Post \"http://127.0.0.1:8897/pods/set-targeted\": dial tcp 127.0.0.1:8897: connect: connection refused"

error="Put \"http://127.0.0.1:8897/scripts/env\": dial tcp 127.0.0.1:8897: connect: connection refused"

error="invalid reference \"kubeshark/worker:docker.io/kubeshark/worker:latest\""

Environment:

Kubernetes cluster set up using Kubespray
2 worker nodes, 1 master node
Debian OS
Kubeshark installed via Helm
No special configurations have been applied

Steps Taken to Resolve:

I have tried looking into these errors, but I am currently unable to determine their cause or how to solve them. The ports required for Kubeshark have been opened, and the Helm installation command appears to be correct based on the documentation.

Request:

I would greatly appreciate any insights or advice on how to address these issues. Could you please guide me on how I can resolve these connection and invalid reference errors and get Kubeshark running properly on my cluster?

Thank you for your time.

Here are screenshots from the logs: Hub: [screenshot], worker: [screenshot], frontend: [screenshot]

The frontend looks like: [screenshot]

mertyildiran commented 1 year ago

@modigithub your browser (front-end) is unable to connect to the Hub, probably because it's inaccessible.

Create two port-forwards:

kubectl port-forward service/kubeshark-hub 8898:80 & \
kubectl port-forward service/kubeshark-front 8899:80 &

then open the UI:

xdg-open http://localhost:8899  # Linux
open http://localhost:8899      # macOS

Alternatively use the proxy command:

kubeshark proxy

modigithub commented 1 year ago

Thank you once again for your support, @mertyildiran.

I wanted to provide some more context about my cluster configuration. I am currently initiating Kubeshark via the command line interface, which retrieves information through kube-proxy. I've adjusted the config in .kubeshark and changed the namespace to kubeshark.

My setup consists of 3 servers, and due to their configuration, I cannot access them via localhost; I can only access them through serverIP:port. I'm not sure how to approach the situation under this constraint. Here are the services: [screenshot] It's only a test server and will be removed after I understand the problem.

From this point I can open the page: [screenshot]

Here is the port-forward for testing: [screenshot]

And here is the log: [screenshot]

You can see the pods are running: [screenshot]

Moreover, I'd like to mention that the command you suggested (xdg-open http://localhost:8899) does not work in my environment, because it's a console-only environment. Here is the error I received when trying to run it:

root@SrvModiVersionControl:~# xdg-open http://localhost:8899
/usr/bin/xdg-open: 882: www-browser: not found
/usr/bin/xdg-open: 882: links2: not found
/usr/bin/xdg-open: 882: elinks: not found
/usr/bin/xdg-open: 882: links: not found
/usr/bin/xdg-open: 882: lynx: not found
/usr/bin/xdg-open: 882: w3m: not found
xdg-open: no method available for opening 'http://localhost:8899'

Given this, I am hoping for further guidance. How should I adjust my approach under these conditions?

Thank you very much for your ongoing help and patience. I'm looking forward to your response.

leonchik1976 commented 1 year ago

I have exactly the same issue: k8s deployed via kubeadm on CentOS Stream, and Kubeshark deployed via Helm. It only starts working after running the kubectl port-forward command; without it, nothing appears. How can I make this solution permanent rather than running it each time?
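One generic Kubernetes way to avoid re-running kubectl port-forward every time (a sketch, not an official Kubeshark recommendation) is to switch the front-end service from ClusterIP to NodePort. The service name kubeshark-front and port 80 are taken from the port-forward commands earlier in this thread; 30899 is an arbitrary port in the NodePort range (30000-32767):

```shell
# Patch the front-end service to type NodePort so the UI stays reachable
# on every node at http://<node-ip>:30899 without an active port-forward
kubectl patch svc kubeshark-front -n kubeshark \
  -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "nodePort": 30899}]}}'

# Verify the assigned NodePort
kubectl get svc kubeshark-front -n kubeshark
```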

modigithub commented 1 year ago

Dear @mertyildiran,

Firstly, I appreciate the quick response and guidance that you have provided so far. Despite the efforts, I'm still facing some issues that I believe require further clarification.

One of the issues that remains unresolved relates to the Docker errors I'm experiencing. Specifically, I keep encountering an error stating "invalid reference 'kubeshark/worker:docker.io/kubeshark/worker:latest'". Would you be able to provide any insight into why this error is happening and how to mitigate it?

Additionally, port 8897 seems to be causing some trouble. As you can see in the logs I have provided, there are several connection refused errors associated with this port. This happens despite the fact that the services associated with ports 8898 and 8899 have been successfully established as ClusterIPs and are publicly accessible. I'm not sure why the setup for port 8897 should be any different or more problematic. Any advice or guidance on this matter would be greatly appreciated. [screenshot]

Here are my Kubeshark services: [screenshot]

Again, I want to express my gratitude for your assistance and patience in helping me navigate through these issues. I hope that with your expertise, I will be able to successfully deploy Kubeshark in my Kubespray cluster.

Thank you and looking forward to your response.

Best regards, Christoph Höhensteiger

mertyildiran commented 1 year ago

> (quoting @modigithub's previous comment in full)

The errors about failed HTTP requests are related to a temporary state. The default worker address is 127.0.0.1:8897, which is not the case inside a Kubernetes cluster. The Hub asynchronously discovers the Kubeshark workers that are deployed into the cluster and adds them to a list that represents the state.

The Hub refuses to connect to workers that are unauthentic, meaning the worker image comes from a source other than docker.io/kubeshark/worker:latest. It does that by checking the image ID / digest of the worker container. At this moment the digest has to be 755997cf15082314a770491143f444cecc3f6c3e26331448a0d5284530da0df1. The errors starting with invalid reference are related to this check. Did you alter the tap.docker.registry field in the config? Are you working with a custom-built image, or are the images mirrored with the digest altered somehow?

The check explained above is a security measure against in cluster spoofing.
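To compare the deployed worker containers against the expected digest, the image IDs can be listed with a generic kubectl query (nothing Kubeshark-specific is assumed here beyond the kubeshark namespace used in this thread):

```shell
# Print pod name and container image IDs (which include the digest)
# for every pod in the kubeshark namespace
kubectl get pods -n kubeshark \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].imageID}{"\n"}{end}'
```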

Could you please share the network tab of your browser? Such that I can see what requests to the Hub returned what HTTP status codes?

modigithub commented 1 year ago

Dear Mr. Mertyildiran,

Firstly, I would like to express my deep gratitude for your continuous support and instructive directions. Your guidance has been greatly beneficial for me in understanding and tackling the complexities of the project at hand.

To help you understand the situation better, I have reset everything and reproduced the steps. First, I entered the command kubeshark config -r, then adjusted the file so that the namespace is set to kubeshark. [screenshot]

Secondly, I executed kubeshark tap -n modi. These are the only steps I have performed. [screenshot]

I am also curious to know whether the two services that I have created on ports 8899 and 8898 are necessary or not. I eagerly await your expert advice on this matter. [screenshot]

To give you an overview, two errors appear on the browser's network. Here are the details of these errors:

Request URL: http://159.69.35.83:8898/pods/targeted?refresh-token=
Referrer Policy: strict-origin-when-cross-origin
Authorization:
Referer: http://159.69.35.83:31007/
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36

Request URL: https://api.descope.com/v1/auth/refresh
Request Method: POST
Status Code: 400
Remote Address: 104.18.26.223:443
Referer: http://159.69.35.83:31007/
[screenshots]

I hope that these details will provide you with a clear picture of the situation. I am confident that with your support, we can resolve this problem swiftly.

Thank you once again for your invaluable support.

Best regards,

mertyildiran commented 1 year ago

@modigithub what is the IP 159.69.35.83? I guess it's the IP of the VPS on which you're running the Kubeshark CLI. In that case, use kubeshark tap --proxy-host 0.0.0.0 so that both the front-end and the Hub can be served publicly, or;

I think 159.69.35.83:31007 is the result of you port-forwarding the front-end. In that case, do the same thing for the kubeshark-hub service (e.g. 159.69.35.83:31006) and set tap.proxy.hub.port to 31006.

You shouldn't need to create any new services other than the services created by the Kubeshark CLI.
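For reference, the relevant part of ~/.kubeshark/config.yaml might look like the sketch below. The key tap.proxy.hub.port is the one named above; the exact nesting of the host key is an assumption and may differ between Kubeshark versions:

```yaml
tap:
  proxy:
    host: 0.0.0.0     # serve the proxy publicly instead of only on localhost
    hub:
      port: 31006     # must match the port the kubeshark-hub service is exposed on
```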

modigithub commented 1 year ago

Hi,

Thank you for your insights; they helped me narrow down the issue. I have found out that I only need to adjust the NodePorts to the ones specified in ./.kubeshark/config.yaml. This port: [screenshot] and this port (kubectl get svc -n kubeshark): [screenshot] have to be the same.

So, in light of this, I have two questions:

Why isn't the NodePort set to the same port as specified in the Kubeshark config from the beginning? It seems to me that this would prevent a lot of confusion and potential issues.

If I set the services to the same ports as specified in the Kubeshark config for the hub and front, everything works as expected. Could you possibly elaborate on why this is the case?
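The alignment described above can also be enforced from the service side; a minimal sketch, assuming kubeshark-front is (or is being made) a NodePort service, with port 80 from the port-forward commands earlier in the thread and 31007 as the front-end port seen in the config:

```shell
# Pin the service's nodePort to the port in ./.kubeshark/config.yaml (31007),
# so the browser, the CLI and the cluster all agree on the same port
kubectl patch svc kubeshark-front -n kubeshark \
  -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "nodePort": 31007}]}}'
```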

Looking forward to your responses. Thank you.

Best regards, @modigithub