The CloudWatch target has a default `period` of `1m`, but the CloudWatch datasource computes a dynamic default for the period. This causes strange behaviour when selecting long time ranges, or when working with e.g. S3 metrics, which are only recorded daily. It probably also makes CloudWatch queries more expensive, as they return more datapoints than necessary?
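A rough sketch of why the two defaults diverge. The period steps, datapoint limit, and function names here are assumptions for illustration, not Grafana's actual logic:

```typescript
// Hypothetical candidate periods, in seconds (1m, 5m, 15m, 1h, 6h, 1d).
const PERIODS_SECONDS = [60, 300, 900, 3600, 21600, 86400];

// A dynamic default might pick the smallest period that keeps the number
// of returned datapoints under an assumed per-request limit.
function dynamicPeriod(rangeSeconds: number, maxDatapoints = 1440): number {
  for (const p of PERIODS_SECONDS) {
    if (rangeSeconds / p <= maxDatapoints) {
      return p;
    }
  }
  return PERIODS_SECONDS[PERIODS_SECONDS.length - 1];
}

// The target's fixed default is always 1m, regardless of range.
const targetDefaultPeriod = 60;

// Over a 30-day range the two diverge badly: a 1m period implies
// ~43200 datapoints, while the dynamic default would settle on 1h.
const rangeSeconds = 30 * 24 * 3600;
console.log(targetDefaultPeriod, dynamicPeriod(rangeSeconds)); // 60 3600
```

For daily S3 metrics the mismatch is worse still: a fixed `1m` period asks CloudWatch for 1440 datapoints per day when only one exists.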