aws / amazon-managed-service-for-prometheus-roadmap

Amazon Managed Service for Prometheus Public Roadmap

Vended CloudWatch metrics #1

Closed mhausenblas closed 2 years ago

mhausenblas commented 2 years ago

Customers have highlighted that they need visibility into their workspace usage relative to the quotas applied, so they can preemptively increase quotas before getting throttled.

With this feature, we plan to expose the following as vended metrics in Amazon CloudWatch:

Further, we plan to vend the following metrics as service usage metrics:

rma-stripe commented 2 years ago

We are supportive of this feature request, but would like to hear more about the proposal for "service usage metrics". What is a service usage metric, and how is it different from a vended metric that lives within CloudWatch?

- DiscardedSamples with a reason dimension per workspace
- Throttled Alertmanager notifications per workspace
- Alertmanager failed to send total per workspace
- Alertmanager alerts received per workspace

We think it's useful to determine the ways that workspaces can break down, but it would be great to hear what service quotas these failure modes are connected to.

DiscardedSamples appears to correlate with the active series and active series per metric name limits, per the Cortex OSS implementation.

On Amazon Managed Service for Prometheus service quotas, however, I do not see any quotas related to notifications. Is there a limit on the number of notifications a workspace can send out in a given amount of time? At what point will we be "throttled"?

How should we distinguish, conceptually, between Alertmanager "receiving", "sending", and "notifying"? Are there other parts of the pipeline we should be aware of?

Under what circumstances would Alertmanager fail to send? If an alert fails, for example, because its query reaches the 12M Query samples limit, would this constitute a "failure to send"?

ampabhi-aws commented 2 years ago

Here is some additional information about DiscardedSamples with the various reasons that will be provided as a dimension:

| Reason | Meaning |
| --- | --- |
| greater_than_max_sample_age | Samples are older than the maximum allowed sample age and are discarded |
| new-value-for-timestamp | Samples are sent with the same timestamp as a previously recorded sample but with a different value |
| per_metric_series_limit | User has hit the active series per metric limit |
| per_user_series_limit | User has hit the total number of active series limit |
| rate_limited | Ingestion is rate limited |
| sample_out_of_order | Samples are sent with out-of-order timestamps and cannot be processed by AMP |
| label_value_too_long | Label value is longer than the allowed character limit |
| max_label_names_per_series | User has hit the maximum number of label names per series limit |
| missing_metric_name | Metric name is not provided |
| metric_name_invalid | Invalid metric name provided |
| label_invalid | Invalid label provided |
| duplicate_label_names | Duplicate label names provided |

The idea behind discarded samples is to show you how much data has been throttled or dropped and the reason associated with it, so you can react with the right limit increase or configuration change.
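
To illustrate how this could be consumed once vended, here is a minimal sketch that pulls DiscardedSamples for one workspace and one reason from CloudWatch with boto3. The AWS/Prometheus namespace, the Workspace and Reason dimension names, and the workspace ID are assumptions to check against the vended-metrics documentation.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

# Sum of samples discarded for a single reason over the last hour.
# Namespace and dimension names are assumptions; verify them against the
# vended-metrics documentation for your account.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Prometheus",
    MetricName="DiscardedSamples",
    Dimensions=[
        {"Name": "Workspace", "Value": "ws-example-1234"},  # hypothetical workspace ID
        {"Name": "Reason", "Value": "per_user_series_limit"},
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```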

The Amazon Managed Service for Prometheus (AMP) Alert Manager metrics for "failed to send" and "received" are there to help you track the performance of the AMP Alert Manager rather than to correspond to any quota. We declare a notification as failing to send if the AMP Alert Manager is unable, after all retries, to deliver the notification to the downstream receiver, in this case SNS. Customers we've spoken with have highlighted that this is a useful metric for spotting a misconfiguration in their notification pipeline, such as a misconfigured access policy for SNS.

The AMP Alert Manager doesn't evaluate any queries; that is done by the alerting rule run by the ruler. The most common failure conditions for the AMP Alert Manager are therefore failures that occur when resolving the Alert Manager template or when sending the notification to downstream receivers, such as SNS.
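
As a concrete illustration of the SNS misconfiguration case: the receiving topic's access policy has to allow the AMP Alert Manager to publish. Below is a minimal sketch using boto3; the topic ARN and account ID are hypothetical, and the aps.amazonaws.com service principal plus the SourceAccount condition are assumptions to verify against the AMP documentation.

```python
import json

import boto3

# Hypothetical topic used as the Alert Manager receiver.
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:amp-alerts"

sns = boto3.client("sns", region_name="us-east-1")

# Access policy allowing the AMP Alert Manager (service principal assumed to be
# aps.amazonaws.com) to publish notifications to the topic.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAmpAlertManagerPublish",
            "Effect": "Allow",
            "Principal": {"Service": "aps.amazonaws.com"},
            "Action": "sns:Publish",
            "Resource": TOPIC_ARN,
            # Restrict to workspaces in this account (assumption; tighten as needed).
            "Condition": {"StringEquals": {"aws:SourceAccount": "111122223333"}},
        }
    ],
}

sns.set_topic_attributes(
    TopicArn=TOPIC_ARN,
    AttributeName="Policy",
    AttributeValue=json.dumps(policy),
)
```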

With regards to modeling the pipeline, I'd model it as follows (a configuration sketch follows the list):

  1. The ruler runs your alerting rule and evaluates the result.
  2. If the result matches the alert condition, the ruler sends an alert to the AMP Alert Manager to process.
  3. The AMP Alert Manager modifies the incoming payload based on the templates and routing rules configured.
  4. Based on the routes, it sends the notification to a downstream receiver (currently SNS).
  5. If the downstream receiver successfully receives the message, it forwards the message to whatever component sits at the other end, usually PagerDuty or Slack.
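
To make steps 3 and 4 concrete, here is a minimal sketch of uploading an Alert Manager definition that routes every alert to an SNS receiver using boto3. The workspace ID, topic ARN, and region are hypothetical placeholders, and the definition layout should be checked against the AMP Alert Manager documentation.

```python
import boto3

# Hypothetical workspace and topic; replace with your own values.
WORKSPACE_ID = "ws-example-1234"

# Alert Manager definition: a single route that delivers every alert to SNS.
ALERTMANAGER_DEFINITION = """\
alertmanager_config: |
  route:
    receiver: default
  receivers:
    - name: default
      sns_configs:
        - topic_arn: arn:aws:sns:us-east-1:111122223333:amp-alerts
          sigv4:
            region: us-east-1
"""

amp = boto3.client("amp", region_name="us-east-1")
amp.create_alert_manager_definition(
    workspaceId=WORKSPACE_ID,
    data=ALERTMANAGER_DEFINITION.encode("utf-8"),
)
```
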
Beardface123 commented 2 years ago

I'd like to suggest that CloudWatch receive vended metrics for data dropped by the dedupe mechanism around the cluster and __replica__ labels. There is currently no way to validate in AMP that deduplication is happening appropriately, so seeing some sort of counter or indication that a number of metric sources are being dropped due to the behavior listed here would be very helpful.
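
For context on the dedupe mechanism being referenced: high-availability Prometheus replicas attach cluster and __replica__ external labels so that AMP can keep samples from one replica per cluster and drop the rest. The sketch below only renders the relevant prometheus.yml fragment for two replicas; the cluster and replica names are illustrative, not tied to any real setup.

```python
# Render the external_labels stanza each HA replica would carry when
# remote-writing to AMP. Replica "replica-0" and "replica-1" scrape the same
# targets; AMP deduplicates their samples based on these two labels.
PROMETHEUS_FRAGMENT = """\
global:
  external_labels:
    cluster: {cluster}
    __replica__: {replica}
"""

for replica in ("replica-0", "replica-1"):
    print(f"# prometheus.yml fragment for {replica}")
    print(PROMETHEUS_FRAGMENT.format(cluster="prod-us-east-1", replica=replica))
```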

mhausenblas commented 2 years ago

ICYMI: https://aws.amazon.com/blogs/mt/introducing-vended-metrics-for-amazon-managed-service-for-prometheus/

ampabhi-aws commented 2 years ago

We recently launched CloudWatch usage metrics (on 5/9); you can learn more about them here.
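
Usage metrics pair naturally with CloudWatch's SERVICE_QUOTA() metric math, so you can alarm before hitting a limit. The sketch below assumes the usage metric is published as ResourceCount in the AWS/Usage namespace with Service=Prometheus and an ActiveSeries resource; verify the exact namespace, metric, and dimension names against the usage-metrics documentation.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when active series usage exceeds 80% of the applied service quota.
# Namespace, metric, and dimension values are assumptions; adjust them to
# match the usage metrics actually published for your account.
cloudwatch.put_metric_alarm(
    AlarmName="amp-active-series-near-quota",
    EvaluationPeriods=3,
    DatapointsToAlarm=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    Metrics=[
        {
            "Id": "usage",
            "ReturnData": False,
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "ResourceCount",
                    "Dimensions": [
                        {"Name": "Service", "Value": "Prometheus"},
                        {"Name": "Type", "Value": "Resource"},
                        {"Name": "Resource", "Value": "ActiveSeries"},
                        {"Name": "Class", "Value": "None"},
                    ],
                },
                "Period": 300,
                "Stat": "Maximum",
            },
        },
        {
            "Id": "pct",
            "Expression": "(usage / SERVICE_QUOTA(usage)) * 100",
            "Label": "Active series quota utilization (%)",
            "ReturnData": True,
        },
    ],
)
```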

Additional metrics such as:

will be available in a future update to vended metrics, and for now have been migrated to issue #12.