We are supportive of this feature request, but would like to hear more about the proposal for "service usage metrics". What is a service usage metric, and how is it different from a vended metric that lives within CloudWatch?
- DiscardedSamples with a reason dimension, per workspace
- Throttled Alertmanager notifications, per workspace
- Alertmanager failed-to-send total, per workspace
- Alertmanager alerts received, per workspace
We think it's useful to determine the ways that workspaces can break down, but it would be great to hear what service quotas these failure modes are connected to.
`DiscardedSamples` appears to correlate with active series, and active series per metric name, per the Cortex OSS implementation.
On the Amazon Managed Service for Prometheus service quotas page, however, I do not see any quotas related to notifications. Is there a limit on the number of notifications a workspace can send out in a given amount of time? At what point will we be "throttled"?
How should we distinguish, conceptually, between Alertmanager "receiving", "sending", and "notifying"? Are there other parts of the pipeline we should be aware of?
Under what circumstances would Alertmanager fail to send? If an alert fails, for example, because its query reaches the 12M Query samples limit, would this constitute a "failure to send"?
Here is some additional information about DiscardedSamples with the various reasons that will be provided as a dimension:
| Reason | Meaning |
| ------ | ------- |
The idea behind DiscardedSamples is to show you the amount of data that has been throttled or dropped and the reason associated with it, so you can react with the right limit increase or configuration change.
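For illustration, here is a minimal sketch (Python/boto3) of reacting to this metric with a CloudWatch alarm. The namespace, dimension names, and workspace ID below are assumptions for the sake of the example, not confirmed values; check the vended-metric documentation once it ships.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Sketch only: alarm whenever a workspace starts discarding samples, so the
# team can follow up with a limit increase or a configuration change.
# The namespace and dimension names are assumptions, and "ws-EXAMPLE" is a
# hypothetical workspace ID.
cloudwatch.put_metric_alarm(
    AlarmName="amp-discarded-samples-example",
    Namespace="AWS/Usage",                            # assumed namespace
    MetricName="DiscardedSamples",
    Dimensions=[
        {"Name": "Service", "Value": "Prometheus"},   # assumed dimension value
        {"Name": "Resource", "Value": "ws-EXAMPLE"},  # hypothetical workspace ID
    ],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```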
The Amazon Managed Service for Prometheus (AMP) Alert Manager metrics for failed-to-send and received are there to help you track the performance of the AMP Alert Manager rather than to correspond to any quota. We declare a notification as failing to send if the AMP Alert Manager is unable, after all retries, to deliver it to the downstream receiver, in this case SNS. Customers we've spoken with have highlighted that this is a useful metric for spotting a misconfiguration in their notification pipeline, such as a misconfigured access policy for SNS. The AMP Alert Manager doesn't evaluate any queries; that is done by the alerting rule run by the ruler. So the most common failure conditions for the AMP Alert Manager are failures that occur when resolving the Alert Manager template or when sending the notification to downstream receivers such as SNS.
With regard to modeling the pipeline, I'd model it as follows:
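(A rough sketch; the stage names are shorthand for the description above, not the service's own terminology.)

```
alerting rule evaluated by the ruler
  -> alert fires and is handed to the AMP Alert Manager        => counted as "received"
  -> grouping / deduplication / template resolution            (template errors => "failed to send")
  -> notification delivered to the downstream receiver (SNS)   (exhausted retries => "failed to send")
```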
I'd like to suggest that CloudWatch receive vended metrics related to metrics dropped due to the dedupe mechanisms around the `cluster` and `__replica__` labels. There is currently no way to validate in AMP that deduplication is happening appropriately, so seeing some sort of counter or indication that a number of metric sources are being dropped due to the behavior listed here would be very helpful.
We launched CloudWatch usage metrics on 5/9; you can learn more about them here.
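As an example of using them, here is a minimal sketch (Python/boto3) that compares a usage metric against its applied quota with CloudWatch's SERVICE_QUOTA() metric math function. The Service and Resource dimension values below are assumptions, not confirmed names; list the metrics in the AWS/Usage namespace in your account to see how the AMP usage metrics are actually named.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

# Sketch only: express current usage as a percentage of the applied quota.
# The Service/Resource dimension values are assumptions.
response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "usage",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "ResourceCount",
                    "Dimensions": [
                        {"Name": "Service", "Value": "Prometheus"},    # assumed
                        {"Name": "Resource", "Value": "ActiveSeries"}, # assumed
                    ],
                },
                "Period": 300,
                "Stat": "Maximum",
            },
            "ReturnData": False,
        },
        {
            "Id": "pct_of_quota",
            "Expression": "usage / SERVICE_QUOTA(usage) * 100",
            "Label": "Active series as % of quota",
        },
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
)
print(response["MetricDataResults"])
```

The same pct_of_quota expression can back a CloudWatch alarm, which is one way to get ahead of throttling by requesting a quota increase before the limit is hit.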
Additional metrics, such as those requested above, will be available in a future update to vended metrics, and for now have been migrated to issue #12.
Customers have highlighted that they need visibility into their workspace usage relative to the quotas applied, so they can preemptively increase quotas before getting throttled.
With this feature, we plan to expose the following as vended metrics in Amazon CloudWatch:
Further, we plan to vend the following metrics as service usage metrics: