k8spacket / k8spacket

k8spacket - collects TCP traffic and TLS connection metadata in the Kubernetes cluster using eBPF and visualizes in Grafana
Apache License 2.0

dashboards have no page display #38

Closed xzclinux closed 1 year ago

xzclinux commented 1 year ago

The endpoint (/api/graph/data) returns no data after deployment.

k8spacket version: 1.1.1
Grafana version: 9.5.3
Kubernetes version: 1.21

Screenshots attached: endpoint data return, k8spacket log, Grafana dashboard.

k8spacket commented 1 year ago

Hi @xzclinux

Sounds similar to this issue: https://github.com/k8spacket/k8spacket/issues/25#issuecomment-1386837140

Could you share logs of the init-k8spacket container:

k -n k8spacket logs <k8spacket-pod-name> -f -c init-k8spacket 
xzclinux commented 1 year ago

No log output (screenshot attached).

xzclinux commented 1 year ago

Do you have a communication group? I have some other things I'd like to discuss with you.

k8spacket commented 1 year ago

@xzclinux

As I see it, you were too quick for that container; that's probably why there weren't any logs.

Could you show the logs of the running pod, but for the init-k8spacket container? From your screenshot, it will be:

k -n k8spacket logs k8spacket-z52vz -f -c init-k8spacket 

No, we don't have any communication group. But if you have anything not related to an issue, feel free to write an email: k8spacket@gmail.com

xzclinux commented 1 year ago

I checked the yaml file for the new version; the log is written to /dev/termination-log. See the attached screenshots if that is what you need.

Kernel version: 4.18.0-147
System version: EulerOS 2.0

I don't know if the latest version of k8spacket supports this kernel/OS.

k8spacket commented 1 year ago

Hi @xzclinux You are using the Huawei cloud. Please see another issue related to EulerOS: https://github.com/k8spacket/k8spacket-helm-chart/issues/6#issuecomment-1217968681

The problem is probably caused by the wrong command being used to retrieve the network interfaces.

Go through the related issue and change the command in helm values: https://github.com/k8spacket/k8spacket-helm-chart/blob/master/charts/k8spacket/values.yaml#L71
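
For orientation only, a rough sketch of what such an override might look like in your values file; the key path and placeholder command below are assumptions, so take the real key (and the command that fits your OS) from the linked values.yaml and the related issue:

    # hypothetical key path -- confirm against the linked values.yaml before using
    k8spacket:
      interfaces:
        # shell pipeline k8spacket runs to list the interfaces it should capture on;
        # replace the placeholder with the command suggested in the related issue
        command: "ip address | <filter suited to your OS>"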

Additionally, you can share the response from this command here:

kubectl run -i --restart=Never --rm k8s-packet-debug --image=k8spacket/k8spacket:1.1.1 --overrides='{"kind":"Pod", "apiVersion":"v1", "spec": {"hostNetwork": true}}' -- ip address

Then I'll adjust the command to your needs.

xzclinux commented 1 year ago

It works normally now. Would you mind adding this command to the chart annotations/comments? Thank you very much.

andyzheung commented 1 year ago

I have the same problem... Daemonset logs:

kubectl logs -n k8spacket k8spacket-9vftq

    2023/08/05 05:43:07 Refreshing interfaces for capturing... exit status 1
    2023/08/05 05:43:17 Refreshing interfaces for capturing... exit status 1
    2023/08/05 05:43:27 Refreshing interfaces for capturing... exit status 1
    2023/08/05 05:43:37 Refreshing interfaces for capturing... exit status 1

ip address:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:50:56:96:b0:8c brd ff:ff:ff:ff:ff:ff
        inet 10.250.68.99/22 brd 10.250.71.255 scope global dynamic noprefixroute ens160
           valid_lft 58512sec preferred_lft 58512sec
        inet 10.250.68.220/32 scope global ens160
           valid_lft forever preferred_lft forever
        inet6 fe80::4ccd:6dd3:e5d7:35b5/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    3: tunl0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
    4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:f9:28:d0:e8 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:f9ff:fe28:d0e8/64 scope link
           valid_lft forever preferred_lft forever
    7: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
        link/ether 66:a7:26:f7:25:dd brd ff:ff:ff:ff:ff:ff
        inet 192.168.70.64/32 scope global vxlan.calico
           valid_lft forever preferred_lft forever
        inet6 fe80::64a7:26ff:fef7:25dd/64 scope link
           valid_lft forever preferred_lft forever
    8: cali26afe4700b2@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
        link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet6 fe80::ecee:eeff:feee:eeee/64 scope link
           valid_lft forever preferred_lft forever
    9: cali7fbbc57996d@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
        link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
        inet6 fe80::ecee:eeff:feee:eeee/64 scope link
           valid_lft forever preferred_lft forever
    10: cali0d3ecae97c2@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
        link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
        inet6 fe80::ecee:eeff:feee:eeee/64 scope link
           valid_lft forever preferred_lft forever

@k8spacket

k8spacket commented 1 year ago

Hi @andyzheung

Hmm, I don't understand this part of your logs: exit status 1 🤔

Anyway: what OS are you using under the K8s cluster? Could you check whether the plugins were downloaded properly with the command below?

k -n k8spacket get pods k8spacket-9vftq -oyaml | yq .status.initContainerStatuses

If you don't have the yq tool, then share the .status.initContainerStatuses[0].state.terminated.message part of the response here.
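
(A minimal alternative without yq, using kubectl's built-in jsonpath support; the pod name below is the one from your logs:)

    kubectl -n k8spacket get pod k8spacket-9vftq \
      -o jsonpath='{.status.initContainerStatuses[0].state.terminated.message}'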

It should look similar to:

        Connecting to github.com (***:443)
        wget: note: TLS certificate validation not implemented
        Connecting to objects.githubusercontent.com (***:443)
        saving to 'nodegraph-x86_64.so'
        nodegraph-x86_64.so   20% |******                          | 13.2M  0:00:03 ETA
        nodegraph-x86_64.so   74% |***********************         | 47.0M  0:00:00 ETA
        nodegraph-x86_64.so  100% |********************************| 63.3M  0:00:00 ETA
        'nodegraph-x86_64.so' saved
        Connecting to github.com (***:443)
        Connecting to objects.githubusercontent.com (***:443)
        saving to 'tls-parser-x86_64.so'
        tls-parser-x86_64.so   0% |                                |  176k  0:06:09 ETA
        tls-parser-x86_64.so  43% |*************                   | 27.4M  0:00:02 ETA
        tls-parser-x86_64.so  95% |******************************  | 61.0M  0:00:00 ETA
        tls-parser-x86_64.so 100% |********************************| 63.7M  0:00:00 ETA
        'tls-parser-x86_64.so' saved
andyzheung commented 1 year ago

I use Ubuntu 18.04 LTS.

    state:
      terminated:
        containerID: docker://da436081a8bc80be33ce5ef9688f3800d1fe9f21f92f4419934edd386538d02a
        exitCode: 0
        finishedAt: "2023-08-05T02:50:29Z"
        message: |
          Connecting to github.com (20.205.243.166:443)
          wget: note: TLS certificate validation not implemented
          Connecting to objects.githubusercontent.com (185.199.109.133:443)
          saving to 'nodegraph-x86_64.so'
          nodegraph-x86_64.so    0% |     | 15545  1:12:06 ETA
          nodegraph-x86_64.so    5% |*    | 3340k  0:00:36 ETA
          nodegraph-x86_64.so   16% |*    | 10.3M  0:00:15 ETA
          nodegraph-x86_64.so   29% |*****| 18.8M  0:00:09 ETA
          nodegraph-x86_64.so   39% |**** | 24.9M  0:00:07 ETA
          nodegraph-x86_64.so   48% |*    | 31.0M  0:00:06 ETA
          nodegraph-x86_64.so   58% |**   | 37.3M  0:00:04 ETA
          nodegraph-x86_64.so   69% |**   | 43.8M  0:00:03 ETA
          nodegraph-x86_64.so   79% |***  | 50.6M  0:00:02 ETA
          nodegraph-x86_64.so   90% |**** | 57.3M  0:00:01 ETA
          nodegraph-x86_64.so  100% |*****| 63.3M  0:00:00 ETA
          'nodegraph-x86_64.so' saved
          Connecting to github.com (20.205.243.166:443)
          Connecting to objects.githubusercontent.com (185.199.111.133:443)
          saving to 'tls-parser-x86_64.so'
          tls-parser-x86_64.so   0% |     | 85696  0:13:00 ETA
          tls-parser-x86_64.so   6% |*    | 3997k  0:00:30 ETA
          tls-parser-x86_64.so  21% |**   | 13.4M  0:00:11 ETA
          tls-parser-x86_64.so  34% |*    | 22.1M  0:00:07 ETA
          tls-parser-x86_64.so  45% |**   | 29.0M  0:00:05 ETA
          tls-parser-x86_64.so  56% |**   | 35.9M  0:00:04 ETA
          tls-parser-x86_64.so  68% |***  | 43.3M  0:00:03 ETA
          tls-parser-x86_64.so  79% |*    | 50.9M  0:00:02 ETA
          tls-parser-x86_64.so  92% |*****| 58.8M  0:00:00 ETA
          tls-parser-x86_64.so 100% |*****| 63.7M  0:00:00 ETA
          'tls-parser-x86_64.so' saved
        reason: Completed
        startedAt: "2023-08-05T02:50:05Z"

@k8spacket

k8spacket commented 1 year ago

Hi @andyzheung

That's great, the plugins are installed properly. Based on your answer https://github.com/k8spacket/k8spacket/issues/38#issuecomment-1666400407, I think the best command for you will be:

command: "ip address | grep @if | sed -E 's/.* (\\w+)@if.*/\\1/' | tr '\\n' ',' | sed 's/.$//'"

Change it in values.yaml and reinstall k8spacket.
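
(A minimal sketch of the reinstall step; the release name, namespace, and repo alias below are assumptions, so adjust them to how you originally installed the chart:)

    # assumption: release "k8spacket" in namespace "k8spacket", chart from a repo added as "k8spacket"
    helm upgrade --install k8spacket k8spacket/k8spacket \
      --namespace k8spacket \
      -f values.yaml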

andyzheung commented 1 year ago

I reinstalled k8spacket. The daemonset logs look like this:

    2023/08/10 12:57:58 [nodegraph plugin] Connection: src=10.250.68.105 srcName=pod.vsphere-csi-node-gwtls srcPort=48744 srcNS=vmware-system-csi dst=192.168.77.191 dstName=pod.guestbook-demo-helm-guestbook-7964ff57f6-szcnn dstPort=80 dstNS=argocd-demo closed=true bytesSent=104 bytesReceived=1871 duration=1.6129e-05
    2023/08/10 12:58:00 [nodegraph plugin] Connection: src=10.250.68.105 srcName=pod.vsphere-csi-node-gwtls srcPort=41658 srcNS=vmware-system-csi dst=192.168.77.135 dstName=pod.tekton-triggers-core-interceptors-84874b55bb-8mbhq dstPort=8082 dstNS=tekton-pipelines closed=true bytesSent=111 bytesReceived=94 duration=6.192e-06
    2023/08/10 12:58:04 [nodegraph plugin] Connection: src=10.250.68.105 srcName=pod.vsphere-csi-node-gwtls srcPort=60044 srcNS=vmware-system-csi dst=192.168.77.183 dstName=pod.argocd-notifications-controller-6f99749496-997hb dstPort=9001 dstNS=argocd closed=true bytesSent=0 bytesReceived=0 duration=2.845e-06
    2023/08/10 12:58:04 [nodegraph plugin] Connection: src=10.250.68.105 srcName=pod.vsphere-csi-node-gwtls srcPort=55574 srcNS=vmware-system-csi dst=192.168.77.181 dstName=pod.argocd-repo-server-7cc94f47b7-khbfw dstPort=8084 dstNS=argocd closed=true bytesSent=113 bytesReceived=138 duration=5.252e-06
    2023/08/10 12:58:04 [nodegraph plugin] Connection: src=10.250.68.105 srcName=pod.vsphere-csi-node-gwtls srcPort=38622 srcNS=vmware-system-csi dst=192.168.77.187 dstName=pod.argocd-application-controller-0 dstPort=8082 dstNS=argocd closed=true bytesSent=113 bytesReceived=138 duration=5.579e-06

But the node graph panel is still empty (screenshots attached).

@k8spacket

k8spacket commented 1 year ago

Hi @andyzheung

Could you check the response of the endpoint where Grafana expects data?

kubectl run -it --rm --image=curlimages/curl curly -- http://k8spacket.k8spacket.svc.cluster.local:8080/nodegraph/api/graph/data

Additionally, could you check the Grafana datasource for the Node Graph API? Its URL should be set to:

http://k8spacket.k8spacket.svc.cluster.local:8080/nodegraph
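
(For illustration, a provisioning-style sketch of how that datasource could be defined; the plugin type id below is an assumption based on the Node Graph API plugin, so verify it against your Grafana instance:)

    # hypothetical Grafana datasource provisioning entry (verify the plugin type id)
    apiVersion: 1
    datasources:
      - name: Node Graph API
        type: hamedkarbasi93-nodegraphapi-datasource   # assumption: Node Graph API plugin id
        access: proxy
        url: http://k8spacket.k8spacket.svc.cluster.local:8080/nodegraph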
k8spacket commented 1 year ago

Hi @andyzheung, I will close this issue since the thread has been inactive for more than 10 days. Please feel free to reopen it if you have more questions.