derailed / k9s

🐶 Kubernetes CLI To Manage Your Clusters In Style!
https://k9scli.io
Apache License 2.0

Stream closed EOF for logs #1399

Open aleksrosz opened 2 years ago

aleksrosz commented 2 years ago

Describe the bug A clear and concise description of what the bug is.

To Reproduce Steps to reproduce the behavior:

  1. Go to pod
  2. Click on k9s "l" to check logs
  3. See error "Stream closed EOF for pod-xxx (pod-zzz)"
  4. While writing this issue I noticed that after a few minutes a few lines of log messages appeared, but at the top it still says "Stream closed EOF"

Expected behavior: logs from the pod. The issue occurs with a few applications; most of them work properly.

Versions (please complete the following information):

derailed commented 2 years ago

@aleksrosz Thanks for the issue. I think it will vary depending on the pod and its status, i.e. are the pod and its containers ready and running? Are there init containers, etc.? I'll double check, but in the meantime, please add details here. Tx!!
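
For reference, one way to gather the details asked for here with kubectl (the pod name is a placeholder):

# Show the init containers of a pod and their current state
kubectl get pod <pod-name> -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'

# Or inspect the full picture: init containers, readiness, restarts, recent events
kubectl describe pod <pod-name>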

aleksrosz commented 2 years ago

The pod has been running for a few days: 1/1 Running 0 7d23h. There is an init container.

bdols commented 2 years ago

I also see this in v0.25.17: the pods are running but they do not have any log activity in the selected timeframe. The "Stream closed EOF" message is in yellow.

derailed commented 2 years ago

@aleksrosz Given the presence of the init container, that message is correct. Since you are looking at logs at the pod level, all container logs are streamed, and because the init container is done you will see EOF for that init container's log stream. Does this make sense??
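
For reference, the same behavior can be reproduced with kubectl, which makes the per-container distinction explicit (pod and container names are placeholders):

# Pod-level logs: streams every container, including finished init containers
kubectl logs <pod-name> --all-containers=true

# A single running app container: the stream stays open and keeps following
kubectl logs -f <pod-name> -c <app-container>

# The finished init container: a closed stream, which ends with EOF
kubectl logs <pod-name> -c <init-container>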

bdols commented 2 years ago

maybe the init container logs could be rendered in a split pane above the active pod logs for reference. I'm currently working with a pod that has 3 init containers and 4 active/sidecar containers. Seeing the yellow error message is briefly confusing.

aleksrosz commented 2 years ago

@derailed Hi, sorry for the late answer. Yes, it makes sense; I tested it and this EOF is from the init container, but now I see another issue. Without an init container and this message, I can scroll the logs and check logs from the past. In this case I can only wait for new logs from the main container.

Edit: I checked this on v0.25.18 too.

emmanueljourdan commented 2 years ago

This looks like a regression: I can't reproduce it with v0.24.15, but it's definitely there with v0.25.18, on both arm64 (M1) and x64.

GeorgKunk commented 2 years ago

I have seen the same problem. Pressing "0" to switch log range from "1m" to "tail" fixed this. Perhaps this should be the default mode.
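
If pressing "0" every time gets tedious, the default log range can presumably be changed in the k9s config (~/.config/k9s/config.yaml); the exact keys vary between versions, so treat this as a sketch rather than a verified setting:

# Sketch of the logger section in k9s config.yaml; key names may differ by version
k9s:
  logger:
    tail: 200          # lines shown when the log view opens
    buffer: 5000       # lines kept in the log buffer
    sinceSeconds: -1   # -1 is meant to behave like "tail" rather than a 1m/5m window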

DerTiedemann commented 2 years ago

Can confirm it happening on Manjaro (Arch):

Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  4
  On-line CPU(s) list:   0-3
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz

Refreshing the logs, e.g. by closing and reopening the view, gets the most recent logs, though there are no live updates inside the view.

pmareke commented 2 years ago

Hi everyone!

Do we have any news regarding this issue? It's weird because in Lens the same logs are available, but not in k9s.

Thanks!

RammusXu commented 2 years ago

> I have seen the same problem. Pressing "0" to switch log range from "1m" to "tail" fixed this. Perhaps this should be the default mode.

I'm not sure if it's a bug or a feature 🤣. But after pressing "0" I can see the logs.

[screenshot: 2022-06-22, 1:39 PM]

AuditeMarlow commented 1 year ago

Unfortunately this is still an issue on the latest version (currently v0.27.3). While pressing "0" does work to show the most recent logs, I end up having to press "0" again every time I want the view to update to the latest logs.

probably-not commented 1 year ago

For anyone having this issue, try running the corresponding kubectl logs -f <pod-name> command to follow the pod logs directly.

I was having this issue on a k3s cluster in my homelab, and it turns out it was not an issue with k9s, but instead an issue where my home cluster's machines had too many open files. When I ran kubectl logs -f <pod-name> directly, I got the following error: failed to create fsnotify watcher: too many open files, which was easily solved by editing my sysctl settings to allow for a higher open file limit and a higher fs.inotify.max_user_instances limit.
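
Before changing anything, the current limits on the affected node can be checked first; a minimal sketch:

# Current inotify limits on the node; values that are too low trigger
# "failed to create fsnotify watcher: too many open files"
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches

# Per-process open file descriptor limit for the current shell
ulimit -n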

yi-ge-dian commented 1 year ago

@probably-not I think you are very right. If others have the same situation, you can fix it with the following commands:

# Edit /etc/sysctl.conf (e.g. with vim) ...
vim /etc/sysctl.conf

# ... and add these limits:
fs.inotify.max_user_instances=512
fs.inotify.max_user_watches=262144

# Then reload the settings:
sysctl -p

happy coding!

ndevenish commented 11 months ago

I get this message when trying to open "previous" logs. It makes it look like k9s is broken. Running kubectl logs --previous shows the logs fine. It looks like the default is to "follow", not tail; that doesn't seem to make sense for --previous pods, which will always have a closed stream?

(I abandoned k9s when I tried it before because of this issue; this time I was reminded about Kubernetes TUIs, thought I would see if the bug was fixed, and actually searched for the issue.)
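
For comparison, the previous container's logs are always a closed stream, so there is nothing to follow (pod and container names are placeholders):

# Logs of the previous, terminated instance of a container; the stream is
# already closed, so following it makes no sense
kubectl logs <pod-name> -c <container-name> --previous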

MrunmayeeManmadkar commented 8 months ago

I still get this error with a newer version, does anyone have any idea?

vonhutrong commented 6 months ago

I still get this error.

OS: Windows amd64
K9s Rev: v0.32.3
K8s Rev: v1.26.14

SudeepGowda55 commented 3 months ago

If anyone still gets that error message, it's not because of k9s: either the CPUs or the memory allotted to the k8s cluster is not sufficient to run the replicas/pods. Run this command and check the events:

$ kubectl get events --sort-by=.metadata.creationTimestamp
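
Node capacity and current usage can also be checked directly; note that kubectl top requires metrics-server to be installed in the cluster:

# Allocatable resources and the requests/limits already placed on each node
kubectl describe nodes | grep -A 5 "Allocated resources"

# Live CPU/memory usage (requires metrics-server)
kubectl top nodes
kubectl top pods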

risinek commented 2 months ago

> If anyone still gets that error message, it's not because of k9s. Either the CPUs or the memory allotted to the k8s cluster is not sufficient to run the replicas/pods, run this cmd and check the logs $ kubectl get events --sort-by=.metadata.creationTimestamp

Or check the events directly in k9s with :events, or at the very bottom of the deployment description (:deployment, then press d).

uozalp commented 2 months ago

I have a similar issue. I use Pinniped for authentication, and I get a “stream canceled” error every 5 minutes when tailing logs.

In my case, it happens every 5 minutes when the mTLS certificate expires.

It must be because of the way k9s is tailing the logs.

It does not happen when using kubectl logs --follow.

cat ~/.config/pinniped/credentials.yaml
apiVersion: config.supervisor.pinniped.dev/v1alpha1
credentials:
- creationTimestamp: "2024-06-13T07:46:07Z"
  credential:
    clientCertificateData: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    clientKeyData: |
      -----BEGIN PRIVATE KEY-----
      -----END PRIVATE KEY-----
    expirationTimestamp: "2024-06-13T07:51:07Z"
  key: e8571687627a8ad811771615a815627264bfb85515ca7208ef5f6eb2aba5b4ab
  lastUsedTimestamp: "2024-06-13T07:49:24Z"
kind: CredentialCache
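
One rough way to confirm the 5-minute expiry from the cache file shown above (assuming mikefarah's yq v4 is installed; the path and field names are as in the cat output):

# Print when the cached Pinniped client credential expires
yq '.credentials[0].credential.expirationTimestamp' ~/.config/pinniped/credentials.yaml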
