alejorod18 opened this issue 3 years ago
Just to clarify, you are talking about access logs generated per request/response to ESPv2, right?
If you want to prevent the access logs from being generated in the first place, it depends on how you configured health checks:
1) Is ESPv2 configured to respond to the health check (via the --healthz flag)? If so, ESPv2 will never write access logs for the configured healthz endpoint.
2) Is ESPv2 forwarding healthz to your backend, which your backend then responds to? If so, then yes, ESPv2 will always write access logs for them. We do not have any way to disable this behavior today. Any request that reaches your backend will have an access log associated with it, including health checks. We can consider changing this if it's really needed.
If you just want a simpler view and are not worried about storage costs, you can always filter out the health check access logs using the Cloud Logging query language: https://cloud.google.com/logging/docs/view/logging-query-language
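For example (a minimal sketch; the exact field to match depends on which logs you are viewing, and /healthz is assumed to be your configured health check path), a query like this excludes health check entries:

NOT (httpRequest.requestUrl=~"/healthz" OR textPayload=~"/healthz")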
Thanks for your quick response. It is case one (1); I use the --healthz (-z) flag:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: NAMESPACE_NAME
  name: cloud-endpoints-esp
spec:
  selector:
    matchLabels:
      app: cloud-endpoints-esp
  replicas: 2
  template:
    metadata:
      labels:
        app: cloud-endpoints-esp
    spec:
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:2
          args: [
            "-s", "HOST",
            "--rollout_strategy", "managed",
            "-z", "healthz",
            "--access_log=/dev/stdout",
            "--underscores_in_headers",
            "--cors_preset=basic",
            "--cors_allow_headers=*",
            "--http_request_timeout_s=600"
          ]
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
But the requests to /healthz from the Google load balancer (GoogleHC) and Kubernetes (kube-probe) are showing up in the access logs.
I see, you are looking at a different set of access logs. ESPv2 generates access logs to Cloud Endpoints with some rich information per request. You can view them by following this method: https://cloud.google.com/endpoints/docs/openapi/monitoring-your-api#logs
I am not sure which access logs you are viewing in that screenshot. Is it from your backend container? ESPv2 container? Some load balancer?
Can you expand one of the logs and send me the full JSON so I can identify where those logs are coming from?
I doubt we can change this; that access logging is not owned by our team.
These logs are from the ESPv2 container, inside the pod of the cloud-endpoints-esp Kubernetes deployment; the logs appear in the Workloads > Logs section. From Cloud Logging the logs are the same, with the query:
resource.type="k8s_container"
resource.labels.project_id="cargamos-kubernetes"
resource.labels.location="us-east1"
resource.labels.cluster_name="cargamos-kubernetes"
resource.labels.namespace_name="cargamos-dev-1"
labels.k8s-pod/app="cloud-endpoints-esp"
textPayload=~"/healthz"
This is the exported JSON from Cloud Logging: downloaded-logs-20210714-132057.txt
Ok, so you are viewing logs for your GKE container. https://cloud.google.com/stackdriver/docs/solutions/gke/using-logs#resource_types
Unfortunately, I am not aware of any way to disable this Kubernetes logging. It really has nothing to do with Cloud Endpoints or ESPv2; it will log every request by default. I doubt it is configurable.
I suggest you rely on Cloud Endpoints access logs instead, as they are richer (more L7 HTTP/gRPC level information).
Hi,
Is there any update on this problem?
Hi, not from our end. As mentioned in my previous comments, the logs you are viewing are GKE access logs, not Cloud Endpoints logs. You should reach out to the GKE team if you want to request that health checks are not logged.
Another option is to use a Cloud Logging filter to always hide the health check logs. Logs are very cheap and don't use much storage, and Cloud Logging filtering is fairly efficient, so I suggest just using this approach.
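For example, taking the query you posted above and negating the /healthz match should hide those entries in the logs view (a sketch built only from the labels in your own query):

resource.type="k8s_container"
resource.labels.project_id="cargamos-kubernetes"
resource.labels.location="us-east1"
resource.labels.cluster_name="cargamos-kubernetes"
resource.labels.namespace_name="cargamos-dev-1"
labels.k8s-pod/app="cloud-endpoints-esp"
NOT textPayload=~"/healthz"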
Hi, those logs are access logs from the esp:2 container running on Kubernetes. They are generated when Kubernetes and the ingress service send requests to /healthz to know the status of the esp:2 container. That is, the logs are generated internally in the ESPv2 container.
Please don't close the issue.
Yes, looking at the log's resource labels, it does indeed look like they're coming from the ESPv2 container:
"resource": {
"type": "k8s_container",
"labels": {
"project_id": "cargamos-kubernetes",
"pod_name": "cloud-endpoints-esp-6cd8885cfb-vtcjt",
"container_name": "esp",
"cluster_name": "cargamos-kubernetes",
"namespace_name": "cargamos-dev-1",
"location": "us-east1"
}
},
But I know for a fact that ESPv2 does not log these:
1) The log format is very different from ESPv2 log format.
2) ESPv2 does not log request data unless you turn on the --enable_debug startup option. This is disabled by default.
I also checked our internal testing project, and I could not find any such "GET /healthz" logs in the past day of our e2e test runs.
Is it possible you have some other load balancer deployed in front of ESPv2 that is creating these logs and assigning them to the ESPv2 container?
For example, Cloud Run automatically creates Request Logs for any request to your Cloud Run app, and correlates them with the standard logs your app produces. I know you are using GKE, but perhaps there is another component in play that is doing the same?
These logs are written because of "--access_log=/dev/stdout". By default there are no access logs in Cloud Logging at all, but with this flag all access logs are written to local stdout and then picked up by Kubernetes and delivered to Cloud Logging.
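If you do not need ESPv2's local access logs at all, one option (a sketch based on the deployment posted earlier in this thread; note this removes every access log line, not just the health check entries) is to drop that flag from the container args:

containers:
  - name: esp
    image: gcr.io/endpoints-release/endpoints-runtime:2
    args: [
      "-s", "HOST",
      "--rollout_strategy", "managed",
      "-z", "healthz",
      # "--access_log=/dev/stdout" removed: ESPv2 then writes no per-request
      # access log to stdout, so nothing is forwarded to Cloud Logging
      "--underscores_in_headers",
      "--cors_preset=basic",
      "--cors_allow_headers=*",
      "--http_request_timeout_s=600"
    ]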
Hi, I have ESPv2 deployed on a GKE cluster, and in my use case I have a load balancer at the edge of the topology. This load balancer performs health checks on the ESPv2 pod, and the pod also has liveness and readiness probes, which generate yet more health check entries among the ESPv2 requests.
All this configuration generates a large number of health check access log records. Is it possible to hide this type of record and show only the access records for requests other than health checks?
I also take the opportunity to thank and congratulate you for the excellent support you provide for this wonderful tool.