Open spiffxp opened 3 years ago
https://github.com/kubernetes/k8s.io/pull/2133 removed export of logs for k8s-infra-e2e projects as a start
Should survey remaining log churn in audit PRs to track down what can be done
Log noise that seems like it shouldn't be present:
projects/{project}/logs/cloudaudit.googleapis.com%2Fsystem_event
gcloud logging logs list --help says "Only logs that contain log entries are listed."
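So a rough way to survey what's left is to list the non-empty logs per project, e.g. (project ID here is just one of the e2e projects mentioned later in this issue):
# Sketch: show only the logs that currently contain entries in a given project.
$ gcloud logging logs list --project=k8s-infra-e2e-boskos-010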
Every project has at least these two buckets
$ gcloud logging buckets list
LOCATION  BUCKET_ID  RETENTION_DAYS  LIFECYCLE_STATE  LOCKED  CREATE_TIME  UPDATE_TIME
global    _Default   30              ACTIVE
global    _Required  400             ACTIVE           True
And at least these two sinks that route to them
$ gcloud logging sinks list --format=yaml
---
destination: logging.googleapis.com/projects/spiffxp-gke-dev/locations/global/buckets/_Required
filter: LOG_ID("cloudaudit.googleapis.com/activity") OR LOG_ID("externalaudit.googleapis.com/activity")
OR LOG_ID("cloudaudit.googleapis.com/system_event") OR LOG_ID("externalaudit.googleapis.com/system_event")
OR LOG_ID("cloudaudit.googleapis.com/access_transparency") OR LOG_ID("externalaudit.googleapis.com/access_transparency")
name: _Required
---
destination: logging.googleapis.com/projects/spiffxp-gke-dev/locations/global/buckets/_Default
filter: NOT LOG_ID("cloudaudit.googleapis.com/activity") AND NOT LOG_ID("externalaudit.googleapis.com/activity")
AND NOT LOG_ID("cloudaudit.googleapis.com/system_event") AND NOT LOG_ID("externalaudit.googleapis.com/system_event")
AND NOT LOG_ID("cloudaudit.googleapis.com/access_transparency") AND NOT LOG_ID("externalaudit.googleapis.com/access_transparency")
name: _Default
So are we losing system_event logs because nothing has happened to generate a log entry there in 400 days?
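One rough way to check (a sketch, not something the audit job runs today) is to ask Cloud Logging whether any system_event entry exists within the _Required bucket's 400-day retention window; an empty result would support that theory. The project ID below is illustrative.
# Sketch: look for any system_event entry inside the 400-day retention window
# of the _Required bucket. No output would suggest the log dropped out of
# `logs list` simply because it has no entries left.
$ gcloud logging read \
    'logName="projects/k8s-infra-e2e-boskos-010/logs/cloudaudit.googleapis.com%2Fsystem_event"' \
    --project=k8s-infra-e2e-boskos-010 \
    --freshness=400d --limit=1 --format='value(timestamp)'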
/remove-priority important-longterm
/priority backlog
It's annoying, but it's not really creating a lot of additional review burden for me at this point.
/milestone clear
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
https://github.com/kubernetes/k8s.io/pull/2102 introduced export of logging resources to the audit script. Based on review of the first audit job PR that used this (https://github.com/kubernetes/k8s.io/pull/2094), there is some noise we should filter out to ease review burden.

empty metrics.json
Currently there are lots of services/logging/metrics.json files with content []. If there are no metrics, we shouldn't export them.
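A minimal sketch of how the export step could skip these, assuming it gathers metrics with gcloud logging metrics list --format=json; the surrounding variable names and output path below are illustrative, not the real audit script layout:
# Assumed shape of the export step; ${project} and the output path are placeholders.
metrics_json="$(gcloud logging metrics list --project="${project}" --format=json)"
if [ "${metrics_json}" != "[]" ]; then
  echo "${metrics_json}" > "audit/projects/${project}/services/logging/metrics.json"
else
  # No metrics: export nothing, and clean up any stale file from a previous run.
  rm -f "audit/projects/${project}/services/logging/metrics.json"
fi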
e2e test logs
Logs appear to be showing up for all pods used in e2e tests. For example, audit/projects/k8s-infra-e2e-boskos-010/services/logging/logs.json has a diff that looks like:

We should either choose to ignore/filter these out, or determine how to configure our e2e tests to not send any logs. I swear we had done this a while ago, but we only ever verified by way of costs going down.
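If we go the "don't send any logs" route, one possible approach (an assumption on my part, not something the project does today) is a project-level logging exclusion so e2e pod logs are never ingested; the exclusion name and filter are illustrative and the flag names should be checked against current gcloud docs.
# Illustrative only: exclude Kubernetes container logs from ingestion in an e2e project.
$ gcloud logging exclusions create exclude-e2e-pod-logs \
    --project=k8s-infra-e2e-boskos-010 \
    --log-filter='resource.type="k8s_container"'
The alternative is to ignore/filter these paths in the audit script itself, which keeps the projects' logging config untouched but leaves the churn in Cloud Logging.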
/wg k8s-infra
/area infra/auditing
/priority important-longterm
/milestone v1.22