mindw opened this issue 5 years ago
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
I am experiencing the same thing with the kube-probe logs for liveness and readiness probes.
We're seeing the same thing in our logs. The probe polls the root endpoint (and receives a redirect) at a rather rapid interval, yielding quite a few log lines (thousands per hour).
10.12.28.58 - - [19/Mar/2020:13:34:02 +0000] "GET / HTTP/1.1" 302 138 "-" "kube-probe/1.14+"
10.12.28.58 - - [19/Mar/2020:13:34:03 +0000] "GET /expired HTTP/1.1" 200 43714 "http://10.12.7.211:80/" "kube-probe/1.14+"
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
Hi people, has anyone found a solution to disable this INFO log? Maybe some flag or config on the kubelet?
Disable access_log in your nginx config:
server {
...
access_log off;
...
}
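If you only want to drop the probe requests rather than all access logging, nginx can also log conditionally on the user agent. This is only a minimal sketch, assuming your image lets you override the nginx config, that nginx is 1.7.0+ (for the if= parameter of access_log), and that /var/log/nginx/access.log is the log path used by the image:

# in the http block: mark kube-probe requests as not loggable
map $http_user_agent $loggable {
    ~^kube-probe  0;
    default       1;
}

server {
...
    # only write access log entries for non-probe requests
    access_log /var/log/nginx/access.log combined if=$loggable;
...
}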
Is there any update on this? How are others tackling this?
We dropped the dashboard from the cluster. It was too difficult to pass security review 🙁
Did anyone solve this?
Disable access_log in your nginx config: server { ... access_log off; ... }

Thanks! Any ideas on where in the pod's filesystem this is? Then we could override it with a replacement from a ConfigMap etc. ;-)
@mikementzmaersk @sxwebdev's comment seems to be a red herring, as it refers to the nginx access log. If you're looking to filter out the logs, then adding custom rules to your k8s log collector would be one way to do it (e.g. fluentd/fluent-bit/beats etc.).
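For example, with fluent-bit one way to do it is a grep filter that drops probe traffic before it is shipped. This is a sketch only, assuming container logs are tagged kube.* by your tail input and the message ends up in the log field:

[FILTER]
    Name     grep
    Match    kube.*
    # drop any record whose log field matches kube-probe (liveness/readiness user agent)
    Exclude  log kube-probe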
Environment
Steps to reproduce
Run the sidecar inside the dashboard pod:
Get logs:
kubectl -n kube-system logs svc/kubernetes-dashboard -c dashboard-metrics-scraper
Observed result
Expected result
An empty log except for warning messages.
Comments