endorama opened this issue 2 years ago
I've made a dashboard like this for this purpose.
@endorama is that ^^ of any relevance for you?
@felix-lessoer sorry for the late reply, I completely missed your comment! This is very interesting and I think it's great to have for any customer that leverages Dataflow ingestion.
I'd be happy to include this in the GCP package
I prepped the slide above so that it should work for every user IF they collect Google Cloud metrics via Metricbeat for all the necessary services and use log ingestion via Dataflow.
This version shows the stream for Audit, Firewall, VPC Flow and DNS logs. It uses global variables for the setup. For the audit stream, for example, it looks for objects that include the word "audit", which means the log sink, the topic, the subscription and the Dataflow job all need to have this in their names. There are also global variables to change the index of the data; it defaults to metric and log at the moment.
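For illustration only, and assuming the matching is purely name-based as described above, a set of hypothetical resource names that the audit stream would pick up could look like this (none of these names come from the workpad itself):

```yaml
# Hypothetical resource names; any names work as long as each one contains "audit"
log_sink: audit-logs-sink
pubsub_topic: audit-logs-topic
pubsub_subscription: audit-logs-sub
dataflow_job: audit-logs-to-elasticsearch
```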
I made all expressions aware of missing data. However, it still shows error messages when the index pattern does not find any index.
@endorama Please integrate and test it. Also let me know if it works like this. I can also do the same for the other cloud providers.
Google Cloud Log Data Collection Canvas.zip
This is how it looks using the Workpad above
This is the Metricbeat gcp module config for the log sink and Dataflow metrics. I also use the default pubsub metricset (a sketch of that is shown after the config below).
```yaml
- module: gcp
  metricsets:
    - metrics
  project_id: "xxx"
  credentials_file_path: "metricbeat-service-account.json"
  exclude_labels: false
  period: 1m
  metrics:
    - aligner: ALIGN_NONE
      service: dataflow
      metric_types:
        - "job/backlog_elements"
        - "job/status"
        - "job/per_stage_system_lag"
        - "job/pubsub/read_count"
        - "job/pubsub/write_count"
- module: gcp
  metricsets:
    - metrics
  project_id: "xxx"
  credentials_file_path: "metricbeat-service-account.json"
  exclude_labels: false
  period: 1m
  metrics:
    - aligner: ALIGN_NONE
      service: logging
      metric_types:
        - "exports/log_entry_count"
        - "exports/error_count"
        - "exports/byte_count"
```
@andresrc FYI
Hi! We just realized that we haven't looked into this issue in a while. We're sorry! We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1:. Thank you for your contribution!
@SubhrataK @lalit-satapathy can you take a look at this issue? Thanks
Hi! We just realized that we haven't looked into this issue in a while. We're sorry! We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1:. Thank you for your contribution!
From the GCP Logging sink docs:

The `gcp` integration supports log ingestion, and we provide Dataflow templates for a cloud-native and scalable ingestion experience in Elasticsearch. Through the GCP logging metrics we can provide a view of ingested GCP logs that can be used for monitoring GCP log ingestion into Elasticsearch. To help our customers and users we should provide a dashboard to monitor this use case out of the box.
The use case would be:
This can be even more useful when paired with Dataflow, as ingestion lag would depend on an external platform and I would want to monitor that value.