grafana/cloudcost-exporter

Prometheus Exporter for Cloud Provider agnostic cost metrics
Apache License 2.0

Make bucket-to-region matching happen in cloudcost-exporter instead of at the PromQL level #77

Open · the-it opened 10 months ago

the-it commented 10 months ago

We currently do something like this in our recording rules:

    # get the usage metric per bucket, converted to GiB
    max by (bucket_name, location) (
        last_over_time((stackdriver_gcs_bucket_storage_googleapis_com_storage_total_bytes > 0)[3h:1m])
    )
    / 1024^3
    # join the storage class onto each bucket
    * on (bucket_name) group_left (storage_class) (
        max by (bucket_name, storage_class) (
            label_replace(gcp_gcs_bucket_info, "storage_class", "REGIONAL", "storage_class", "STANDARD")
        )
    )
    # multiply by the location/storage_class cost metric
    * on (location, storage_class) group_left
        max by (location, storage_class) (
            last_over_time(gcp_gcs_storage_hourly_cost[15m])
            * on (location, storage_class)
            (1 - gcp_gcs_storage_discount)
        ) / 60  # hourly_cost -> cost_per_minute

The metrics gcp_gcs_bucket_info and gcp_gcs_storage_hourly_cost are emitted by cloudcost-exporter. Would it be better to emit a cost metric per bucket, so the join already happens inside cloudcost-exporter? That would simplify the PromQL to:

    # get the usage metric per bucket, converted to GiB
    max by (bucket_name) (
        last_over_time((stackdriver_gcs_bucket_storage_googleapis_com_storage_total_bytes > 0)[3h:1m])
    )
    / 1024^3
    # multiply by the per-bucket cost metric
    * on (bucket_name) group_left
        max by (bucket_name) (
            last_over_time(gcp_gcs_storage_hourly_cost[15m])
            * on (bucket_name)
            (1 - gcp_gcs_storage_discount)
        ) / 60  # hourly_cost -> cost_per_minute
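
Doing the join inside the exporter would mean looking up the (location, storage_class) price for each bucket at collection time and emitting a pre-joined series per bucket. Below is a minimal sketch of that idea; the type, field, and metric names (including gcp_gcs_bucket_hourly_cost) are hypothetical and not taken from cloudcost-exporter's actual code:

    package collector

    import "github.com/prometheus/client_golang/prometheus"

    // bucketInfo mirrors what gcp_gcs_bucket_info exposes today (hypothetical type).
    type bucketInfo struct {
        Name         string
        Location     string
        StorageClass string
    }

    // priceKey indexes the price table behind gcp_gcs_storage_hourly_cost.
    type priceKey struct {
        Location     string
        StorageClass string
    }

    // Hypothetical per-bucket metric the exporter would emit instead of
    // leaving the join to PromQL.
    var bucketHourlyCostDesc = prometheus.NewDesc(
        "gcp_gcs_bucket_hourly_cost",
        "Hourly storage cost per GiB for a bucket, discount applied.",
        []string{"bucket_name", "location", "storage_class"},
        nil,
    )

    // collectBucketCosts performs the join the recording rule does today:
    // map STANDARD to REGIONAL for the price lookup, find the
    // (location, storage_class) price, apply the discount, and emit one
    // series per bucket.
    func collectBucketCosts(ch chan<- prometheus.Metric, buckets []bucketInfo, prices map[priceKey]float64, discount float64) {
        for _, b := range buckets {
            class := b.StorageClass
            if class == "STANDARD" {
                class = "REGIONAL" // same mapping the label_replace above performs
            }
            price, ok := prices[priceKey{Location: b.Location, StorageClass: class}]
            if !ok {
                continue // no price for this combination; skip rather than emit 0
            }
            ch <- prometheus.MustNewConstMetric(
                bucketHourlyCostDesc,
                prometheus.GaugeValue,
                price*(1-discount),
                b.Name, b.Location, b.StorageClass,
            )
        }
    }
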
Pokom commented 10 months ago

I'm in favor of this simplification, with one caveat: the few times I've gone down this path, I've struggled with mapping stackdriver_exporter labels to cloudcost-exporter ones. We just need to tread carefully, ensure the joins work as expected, and validate that the data is the same.
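
One way to build that confidence is to run the old query-time join and the new per-bucket metric against the same Prometheus and diff them per bucket. A rough validation sketch follows; it assumes gcp_gcs_bucket_info carries location and storage_class labels, and reuses the hypothetical gcp_gcs_bucket_hourly_cost metric from the sketch above:

    package main

    import (
        "context"
        "fmt"
        "math"
        "time"

        "github.com/prometheus/client_golang/api"
        v1 "github.com/prometheus/client_golang/api/prometheus/v1"
        "github.com/prometheus/common/model"
    )

    const (
        // Today's approach: join the discounted price onto bucket info at query time.
        oldExpr = `
            max by (bucket_name) (
                label_replace(gcp_gcs_bucket_info, "storage_class", "REGIONAL", "storage_class", "STANDARD")
                * on (location, storage_class) group_left
                max by (location, storage_class) (
                    gcp_gcs_storage_hourly_cost
                    * on (location, storage_class)
                    (1 - gcp_gcs_storage_discount)
                )
            )`
        // Proposed approach: the exporter emits the joined value directly (hypothetical metric).
        newExpr = `max by (bucket_name) (gcp_gcs_bucket_hourly_cost)`
    )

    // queryByBucket evaluates expr and returns a bucket_name -> value map.
    func queryByBucket(ctx context.Context, promAPI v1.API, expr string) (map[string]float64, error) {
        res, _, err := promAPI.Query(ctx, expr, time.Now())
        if err != nil {
            return nil, err
        }
        vec, ok := res.(model.Vector)
        if !ok {
            return nil, fmt.Errorf("expected vector result, got %v", res.Type())
        }
        out := make(map[string]float64, len(vec))
        for _, s := range vec {
            out[string(s.Metric["bucket_name"])] = float64(s.Value)
        }
        return out, nil
    }

    func main() {
        client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
        if err != nil {
            panic(err)
        }
        promAPI := v1.NewAPI(client)
        ctx := context.Background()

        oldVals, err := queryByBucket(ctx, promAPI, oldExpr)
        if err != nil {
            panic(err)
        }
        newVals, err := queryByBucket(ctx, promAPI, newExpr)
        if err != nil {
            panic(err)
        }
        // Report buckets where the two approaches disagree.
        for bucket, want := range oldVals {
            got, ok := newVals[bucket]
            if !ok {
                fmt.Printf("bucket %q missing from per-bucket metric\n", bucket)
                continue
            }
            if math.Abs(got-want) > 1e-9 {
                fmt.Printf("bucket %q: join=%g exporter=%g\n", bucket, want, got)
            }
        }
    }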