kubernetes-sigs / dashboard-metrics-scraper

Container to scrape, store, and retrieve a window of time from the Metrics Server.
Apache License 2.0

How to disable the access log? #21

Open mindw opened 5 years ago

mindw commented 5 years ago
Environment
Installation method: kubectl apply
Kubernetes version: 1.14
Dashboard version: 2.0.0-b4
Operating system: Linux
Steps to reproduce

Run the sidecar inside the dashboard pod:

        args:
          - --metric-resolution=30s
          - --log-level=warn
        ports:
        - containerPort: 8000
          name: http
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: http
          initialDelaySeconds: 30
          timeoutSeconds: 30

Get logs: kubectl -n kube-system logs svc/kubernetes-dashboard -c dashboard-metrics-scraper

Observed result
10.0.7.85 - - [20/Sep/2019:23:47:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.13"
10.0.7.85 - - [20/Sep/2019:23:47:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.13"
127.0.0.1 - - [20/Sep/2019:23:47:31 +0000] "GET /healthz?timeout=32s HTTP/1.1" 200 25 "" "dashboard/v0.0.0 (linux/amd64) kubernetes/$Format"
Expected result

An empty log except for warning messages.

Comments
fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

mindw commented 4 years ago

/remove-lifecycle stale

danksim commented 4 years ago

I am experiencing the same thing with the kube-probe logs for liveness and readiness probes.

MarcelTon commented 4 years ago

We're seeing the same thing in our logs. The probe polls the root endpoint (and receives a redirect) at a rather rapid interval, yielding quite a few log lines (thousands per hour).

10.12.28.58 - - [19/Mar/2020:13:34:02 +0000] "GET / HTTP/1.1" 302 138 "-" "kube-probe/1.14+"
10.12.28.58 - - [19/Mar/2020:13:34:03 +0000] "GET /expired HTTP/1.1" 200 43714 "http://10.12.7.211:80/" "kube-probe/1.14+"
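
(A side note, not from the thread: the frequency of those kube-probe lines is governed by the probe's periodSeconds, which defaults to 10s; raising it thins out the access log but does not silence it. A minimal sketch against the livenessProbe from the issue description, with an assumed value of 60s:)

        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: http
          initialDelaySeconds: 30
          timeoutSeconds: 30
          periodSeconds: 60   # probe once a minute instead of the 10s default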
fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

mindw commented 4 years ago

/remove-lifecycle stale

maciaszczykm commented 4 years ago

/lifecycle frozen

GrigorievNick commented 2 years ago

Hi, has anyone found a solution to disable this INFO log? Maybe some flag or config on the kubelet?

sxwebdev commented 2 years ago

Disable access_log in your nginx config

server {
   ...
   access_log off;
   ...
}
nikhilagrawal577 commented 2 years ago

Is there any update on this?

How are others tackling this?

mindw commented 2 years ago

We dropped the dashboard from the cluster. It was too difficult to pass security review 🙁

tooptoop4 commented 1 year ago

Did anyone solve this?

mikementzmaersk commented 1 year ago

Disable access_log in your nginx config

server {
   ...
   access_log off;
   ...
}

Thanks! Any idea where in the pod's filesystem this is? Then we could override it with a replacement from a ConfigMap etc. ;-)

mindw commented 1 year ago

Disable access_log in your nginx config

server {
   ...
   access_log off;
   ...
}

Thanks! Any idea where in the pod's filesystem this is? Then we could override it with a replacement from a ConfigMap etc. ;-)

@mikementzmaersk @sxwebdev's comment seems to be a red herring, as it refers to the nginx access log. If you're looking to filter out these log lines, adding custom rules to your Kubernetes log collector would be one way to do it (e.g. fluentd/fluent-bit/Beats).
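
(For anyone taking that log-collector route, a minimal sketch of a fluent-bit grep filter that drops the kube-probe access-log lines at collection time; the Match pattern and the record key name (log) depend on your pipeline and are assumptions here:)

[FILTER]
    # drop any record whose "log" field matches "kube-probe"
    Name     grep
    Match    kube.*
    Exclude  log kube-probe

A similar effect is possible with fluentd's grep filter plugin (an exclude block) or a drop_event processor in Filebeat.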