backube / snapscheduler

Scheduled snapshots for Kubernetes persistent volumes
https://backube.github.io/snapscheduler/
GNU Affero General Public License v3.0

Snapshot metrics #121

Open JohnStrunk opened 3 years ago

JohnStrunk commented 3 years ago

Describe the feature you'd like to have. Currently, snapscheduler doesn't provide any metrics related to the snapshots attempted/created. It would be good to provide some stats that could be monitored and alerted on.

What is the value to the end user? (why is it a priority?) Users who depend on snapshots to protect their data should have a way to monitor whether those snapshots are being created successfully.

How will we know we have a good solution? (acceptance criteria)

Additional context cc: @prasanjit-enginprogam

prasanjit-enginprogam commented 3 years ago

@JohnStrunk: Here are the additional stats we are requesting:

1) readyToUse boolean flag, based on our Helm-chart-based YAML files:

snapschedule.yaml

apiVersion: snapscheduler.backube/v1
kind: SnapshotSchedule
metadata:
  name: consul-snapshot
  namespace: {{ .Values.namespace }}
spec:
  disabled: {{ .Values.snapshotDisabledFlag }}
  claimSelector:
    matchLabels:
      {{- range $key, $value := .Values.selector }}
        {{ $key }}: {{ $value | quote }}
      {{- end }}
  retention:
    expires: {{ .Values.snapshotExpiry }}
    maxCount: {{ .Values.maxCount }}
  schedule: {{ .Values.schedule }}
  snapshotTemplate:
    labels:
      {{- range $key, $value := .Values.selector }}
        {{ $key }}: {{ $value | quote }}
      {{- end }}
    snapshotClassName: {{ .Values.snapshotClassName }}

snapshotquota.yaml

apiVersion: v1
kind: ResourceQuota
metadata:
  name: volumesnapshotsquota
  namespace: {{ .Values.namespace }}
spec:
  hard:
    count/volumesnapshots.snapshot.storage.k8s.io: {{ .Values.snapshotQuota | quote }}
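
For reference, here is a minimal sketch of a values.yaml that would feed the two templates above. Every value shown is a hypothetical example (the namespace, selector, schedule, class name, and quota are not taken from this thread):

# Hypothetical values.yaml for the two templates above (example values only)
namespace: consul                      # namespace for the schedule and quota
snapshotDisabledFlag: false            # maps to spec.disabled
selector:                              # labels used to match PVCs and label snapshots
  app.kubernetes.io/name: consul
snapshotExpiry: 168h                   # maps to spec.retention.expires
maxCount: 15                           # maps to spec.retention.maxCount
schedule: "0 6 * * *"                  # cronspec, maps to spec.schedule
snapshotClassName: csi-aws-ebs         # VolumeSnapshotClass to use
snapshotQuota: 50                      # maps to the ResourceQuota hard limit
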
prasanjit-enginprogam commented 3 years ago

@JohnStrunk : Let me know if this looks okay to you

JohnStrunk commented 3 years ago

I think I'd like to limit the metrics to objects that SnapScheduler actually manages (i.e., not report on all snapshots, just those created from a schedule). Perhaps:

The trick is to get metrics that are useful, not too difficult to implement, and don't have terribly high cardinality for Prometheus.

prasanjit-enginprogam commented 3 years ago

"(i.e., not report on all snapshots, just those created from a schedule)." -- agreed can we report if the snapshot is successful? i think point 1 is really important to us.

"readyToUse boolean flag"

JohnStrunk commented 3 years ago

I was hoping the ready_total vs total would be sufficient for that use case.

Could you explain a bit more about the need for match labels and VSC in the metrics? I'm particularly concerned about encoding the labels. If the labels and the VSC are determined by the SnapshotSchedule object, wouldn't its name/namespace be sufficient?

prasanjit-enginprogam commented 3 years ago

@JohnStrunk: Here is our use case: we are backing up a few StatefulSet services under a specific namespace, and they are currently identified by the "app" label. The ask is to be notified if there is a backup failure so that the Ops team can take a look and fix the issue. We are using Prometheus to scrape the "metrics" endpoint ---> Alertmanager ---> PagerDuty and Slack notifications.
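
As an illustration of that pipeline, here is a minimal Alertmanager configuration sketch that routes alerts to PagerDuty and Slack. The receiver name, integration key, webhook URL, and channel are placeholders rather than values from this thread:

route:
  receiver: backup-oncall              # send all alerts to the on-call receiver
receivers:
  - name: backup-oncall
    pagerduty_configs:
      - routing_key: <pagerduty-events-v2-integration-key>
    slack_configs:
      - api_url: <slack-incoming-webhook-url>
        channel: '#backup-alerts'
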

Currently, there is a single VSC tied to "ebs.csi.aws.com", but later we want to connect to different drivers such as EFS and create a separate VSC for each, so a 1:1 mapping.

$ kubectl get SnapshotSchedule -n NAMESPACE -l'app.kubernetes.io/name=ABC'
NAME           SCHEDULE    MAX AGE   MAX NUM   DISABLED   NEXT SNAPSHOT          LABELS
app-snapshot   0 6 * * *   168h      15        false      2021-04-13T06:00:00Z   app.kubernetes.io/managed-by=spinnaker,app.kubernetes.io/name=ABC
$

Now, this snapshot schedule covers 3 different EBS volumes for the "app" cluster.

We want to get notified if:

  1. One of these 3 EBS volumes fails to get backed up.
  2. All EBS volumes fail to get backed up.
  3. The backup didn't run for some reason.

JohnStrunk commented 3 years ago

My thought here is that you'd monitor the "app-snapshot" schedule (by filtering on schedule_name) and expect 3 new ready snaps every day. So, it would probably be good to add a corresponding snapshots_ready_total counter as well. The failure of the snapshotting flow itself would have to be detected by it never becoming ready, but there's also a case to be made for adding an error counter, too. That could be incremented if the operator is unable to create the VolumeSnapshot object itself (e.g., quota or rbac problems).
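
To make that concrete, here is a sketch of alerting rules for this approach, written as a Prometheus Operator PrometheusRule. It assumes the metrics land as a snapscheduler_snapshots_ready_total counter labeled by schedule/namespace plus an error counter named snapscheduler_snapshot_errors_total; the actual metric names have not been settled in this thread, so treat them as placeholders.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: snapscheduler-alerts
  namespace: monitoring
spec:
  groups:
    - name: snapscheduler
      rules:
        # Fewer than 3 snapshots became ready for this schedule over the last day
        # (the schedule covers 3 PVCs and runs once per day in the example above).
        - alert: SnapshotScheduleMissedSnapshots
          expr: increase(snapscheduler_snapshots_ready_total{schedule="app-snapshot"}[24h]) < 3
          for: 1h
          labels:
            severity: critical
          annotations:
            summary: Fewer than 3 ready snapshots were created for app-snapshot in the last 24h
        # The operator failed to create VolumeSnapshot objects (e.g., quota or RBAC problems).
        - alert: SnapshotScheduleErrors
          expr: increase(snapscheduler_snapshot_errors_total{schedule="app-snapshot"}[1h]) > 0
          labels:
            severity: warning
          annotations:
            summary: Errors creating snapshots for schedule app-snapshot
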

prasanjit-enginprogam commented 3 years ago

@JohnStrunk: Agreed. So far the plan looks good. Let me know once the implementation is done. I can test and let you know how it goes.

prasanjit-enginprogam commented 3 years ago

@JohnStrunk: Just a gentle reminder: are there any updates? For us, having observability into backups is a high priority. At least alerting on failures based on some filters would be a good enough starting point.

JohnStrunk commented 3 years ago

While it's on my list of items I'd like to add, I don't have a timeline for you. I'd be happy to provide guidance if you or one of your colleagues would like to work on a PR for it.

shomeprasanjit commented 2 years ago

Any updates yet, @JohnStrunk?

neema80 commented 1 year ago

Seems like this is abandoned.

JohnStrunk commented 1 year ago

"Seems like this is abandoned."

As I said before... I'd be happy to provide guidance if someone wants to contribute a PR. However, there doesn't seem to be sufficient interest in this feature for anyone to make it happen.

KyriosGN0 commented 1 month ago

Hi @JohnStrunk, I would like to try to implement this. As I understand it, there are 4 required metrics: snapshots_ready_total, current_snapshots_total, current_snapshots_ready_total, and snapshots_total. Is there anything else I should tackle while doing this?

JohnStrunk commented 1 month ago

@KyriosGN0 That seems like a good summary. Thanks for offering to take a look!

mnacharov commented 5 days ago

I hope a more general metrics solution (kube-state-metrics in my case) will finally add support for VolumeSnapshot and VolumeSnapshotContent metrics, and backube/snapscheduler will just continue to create VolumeSnapshots.
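
For completeness, kube-state-metrics does ship a custom-resource-state feature that could expose something along these lines without changes to snapscheduler. Below is a rough sketch of such a config; the field names shown (labelsFromPath, path, nilIsZero, and so on) are my reading of that feature's configuration format and should be checked against the kube-state-metrics documentation before use.

kind: CustomResourceStateMetrics
spec:
  resources:
    - groupVersionKind:
        group: snapshot.storage.k8s.io
        version: v1
        kind: VolumeSnapshot
      labelsFromPath:
        name: [metadata, name]
        namespace: [metadata, namespace]
      metrics:
        - name: volumesnapshot_ready_to_use
          help: Whether the VolumeSnapshot reports status.readyToUse
          each:
            type: Gauge
            gauge:
              # boolean readyToUse exposed as 1/0; treat a missing status as 0
              path: [status, readyToUse]
              nilIsZero: true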