carvel-dev / ytt

YAML templating tool that works on YAML structure instead of text
https://carvel.dev/ytt
Apache License 2.0

ytt: Error: - yamlfragment.SetKey: Not implemented #759

Closed. mq2195 closed this issue 2 years ago.

mq2195 commented 2 years ago

I am trying to set a value inside a yamlfragment (a YAML map). Big picture: I am templating a Prometheus configuration file. Different alert groups are defined as functions (returning YAML fragments) in .lib.yml files. Some of the values from those YAML fragments need to be replaced with new values (the new values are defined in data.values.*).

alertrules_prometheus() and alertrules_host() return yamlfragments (alert groups for the Prometheus configuration).

I am using: ytt version 0.43.0 on Fedora 36

Is there another way? Does the overlay module work on yamlfragments?

load("@ytt:data","data")
load("@ytt:assert", "assert")
load("alertrules_prometheus.lib.yml", "alertrules_prometheus")
load("alertrules_host.lib.yml", "alertrules_host")

alertruless = data.values.alertrules.enabled        #! list of alertrules to include
alerts = data.values.alertrules.alerts              #! list of alert names to overwrite one of its nodes (severity)

def alert_all():
  x = []
  for alertrule in alertruless:
    if alertrule == "alertrules_prometheus":
      x.extend(alertrules_prometheus())
    elif alertrule == "alertrules_host":
      x.extend(alertrules_host())
    else:
      assert.fail("undefined alertrules: {}".format(alertrule))
    end
  end

  for alert in alerts:
    print("alert:", alert.name, alert.severity)    #! desired outcome (alert name + new value)
    for group in x:
      print("group name: ", group["name"])  #! alert group - processing
      for i in group["rules"]:
        print("processing:", i["alert"]) #! alert within a group - processing
        if i["alert"] == alert.name:     
          print("bingo", i["labels"]["severity"])    #! we found alert to replace one of its nodes + current value
          i["labels"]["severity"] = alert.severity    <-- problem here
        end
      end
    end
  end

  return x
end
mamachanko commented 2 years ago

@mq2195 ty for your question!

Although I am not completely sure I follow.

Could you attach an example output and how it falls short of what's desired?

In the meantime, a YAML fragment can contain one of three things: a map, a doc set, or an array (or null). Once you unpack its value you can work with the underlying data structure. That includes programmatic use of the overlay module.
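
For illustration, here is a minimal (untested) sketch of that unpacking; the helper name unpack_group is made up, and it assumes the fragment wraps a map like one of your alert groups:

def unpack_group(group_fragment):
  # a fragment wrapping a map can be copied into a plain, mutable dict
  group = dict(**group_fragment)
  new_rules = []
  # a fragment wrapping an array can simply be iterated
  for rule_fragment in group["rules"]:
    new_rules.append(dict(**rule_fragment))
  end
  group["rules"] = new_rules
  return group
end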

mamachanko commented 2 years ago

actually reads the title 🤦‍♂️

I get it!

Right, assigning to YAML fragments does not seem to work, but you can (as mentioned ^) extract their contents and then work with them.

On my phone rn. Will provide better help once at a workstation.

mq2195 commented 2 years ago

Initially I was considering a few approaches:

  1. yaml.encode/decode - but if I understand this correctly, that would mean templating on text - and that is what I am running away from... (for reference, a rough sketch of this approach is included after this list)
  2. template/overlay - but that means more effort to prepare the template files with extra annotations (e.g. when adding new alerts) - I prefer low maintenance
  3. dynamically parse/loop through the YAML file and change what is needed on the fly. I was aware of YAML fragments and the possibility of mapping them to a dict; what I was not aware of were the YAML fragments nested inside...
    rule_dic: {"alert": "H002_NodeDown", "expr": "up{job=\"node_exporter\"} == 0", "for": "3m", "labels": yamlfragment(*yamlmeta.Map), "annotations": yamlfragment(*yamlmeta.Map)}
  4. implement the missing functionality - since this is my first contact with Python/Starlark, and I need this sooner rather than later, this might be too ambitious...
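
For reference, this is roughly what I had in mind for approach 1 (an untested sketch; the helper names alertrules_host_plain and set_severity are made up):

load("@ytt:yaml", "yaml")
load("alertrules_host.lib.yml", "alertrules_host")

def alertrules_host_plain():
  # round-trip through text: encode the fragment to a YAML string,
  # then decode it back into plain Starlark lists/dicts
  return yaml.decode(yaml.encode(alertrules_host()))
end

def set_severity(groups, alert_name, new_severity):
  # assumes the decoded values are plain, mutable Starlark structures
  for group in groups:
    for rule in group["rules"]:
      if rule["alert"] == alert_name:
        rule["labels"]["severity"] = new_severity
      end
    end
  end
  return groups
end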

So, this is how I resolved it. It is not pretty, but it works (the Prometheus config file structure does not change). It should be possible to optimize this with a recursive approach. I am including all of the files - maybe this will help someone with a similar issue. (Now I am beginning to understand why this functionality is missing :) )

./ytt-linux-amd64 -f ./templates/ -f ./alertrules/ -f ./schema/ -f ./env/ST.yaml --output-files out

./alertrules/alert_all.star

load("@ytt:data","data")
load("@ytt:assert", "assert")
load("alertrules_prometheus.lib.yml", "alertrules_prometheus")
load("alertrules_host.lib.yml", alertrules_host="alertrules_host")

alertruless = data.values.alertrules.enabled
alerts = data.values.alertrules.alerts

def alert_all():
  x = []
  for alertrule in alertruless:
    if alertrule == "alertrules_prometheus":
      x.extend(alertrules_prometheus())
    elif alertrule == "alertrules_host":
      x.extend(alertrules_host())
    else:
      assert.fail("undefined alertrules: {}".format(alertrule))
    end
  end

  # we are going to copy items from x to new_groups, and modify them if needed
  new_groups = []

  # for each group
  for alertgroup in x:
    alertgroup_dic = dict(**alertgroup)
    print("alertgroup_dic:", alertgroup_dic)
    new_rules = []

    # for each rule (each alert in a list)
    for rule in alertgroup_dic.get("rules"):
      rule_dic = dict(**rule)
      print("rule_dic:", rule_dic)

      # detect if modification is required
      if update_required(rule_dic.get("alert")):
        new_rule = dict()
        print("update required")

        # find reference to data - what to update (we already know it is there)
        alert_ref = None
        for alert in data.values.alertrules.alerts:
          if alert.name == rule_dic.get("alert"):
            alert_ref = alert
            break
          end
        end
        alert_ref_dic = dict(**alert_ref)
        print("alert_ref_dic:", alert_ref_dic)

        # we will not modify name
        alert_ref_dic.pop("name")

        # display which tags/nodes will be updated
        for update_data in alert_ref_dic.keys():
          print("need to update:", update_data)
        end

        # lets recreate the rule
        for key in rule_dic.keys():
          print("processing key:", key)
          new_key_value = alert_ref_dic.get(key)

          # check if there is new value
          if new_key_value != None:
            print("updating:", key, "new_value:", new_key_value)

            # direct subnodes
            if key == "for" or key == "expr":
              new_rule.update({key: new_key_value})

            # complex subnodes
            elif key == "labels" or key == "annotations":
              new_nodes = dict()

              existing_data_dict = dict(**rule_dic.get(key))
              data_to_update = dict(**new_key_value)
              print("existing_data_dict:", existing_data_dict)
              print("data_to_update:", data_to_update)

              # iterate through existing nodes
              for node_key in existing_data_dict.keys():
                print("processing node key:", node_key)

                # set new value if exists
                new_label_value = data_to_update.get(node_key)
                if new_label_value != None:
                  new_nodes.update({node_key: data_to_update.get(node_key)})
                else:
                  new_nodes.update({node_key: existing_data_dict.get(node_key)})
                end
              end
              new_rule.update({key: new_nodes})
            end
          else:
            new_rule.update({key: rule_dic.get(key)})
          end
        end

        print("new_rule:", new_rule)
        new_rules.append(new_rule)

      else:
        print("update not required")
        new_rules.append(rule_dic)
      end

    end
    alertgroup_dic.pop("rules")
    alertgroup_dic.update(rules=new_rules)

    new_groups.append(alertgroup_dic)
  end

  return new_groups
end

def update_required(alert_name):
  result = False
#  print("searching for:", alert_name)
  for alert in data.values.alertrules.alerts:
#    print("processing:", alert.name)
    if alert.name == alert_name:
#      print("match found")
      result = True
      break
    end
  end
  return result
end

./alertrules/alertrules_host.lib.yml

#@ load("@ytt:data","data")

#@ def alertrules_host():
- name: host
  rules:
  - alert: H001_NodeDown
    expr: up{job="node_exporter"} == 0
    for: 3m
    labels:
      severity: critical
      environment: #@ data.values.env
    annotations:
      title: Node {{ $labels.instance }} is down
      description: Failed to scrape {{ $labels.job }}
  - alert: H002_NodeDown
    expr: up{job="node_exporter"} == 0
    for: 3m
    labels:
      severity: critical
      environment: #@ data.values.env
    annotations:
      title: Node {{ $labels.instance }} is down
      description: Failed to scrape {{ $labels.job }}
#@ end

./alertrules/alertrules_prometheus.lib.yml

#@ load("@ytt:data","data")

#@ def alertrules_prometheus():
- name: monitoring
  rules:
  - alert: MON001_NodeDown
    expr: up{job="node_exporter"} == 0
    for: 3m
    labels:
      severity: critical
      environment: #@ data.values.env
    annotations:
      title: Node {{ $labels.instance }} is down
      description: Failed to scrape {{ $labels.job }}
  - alert: MON002_NodeDown
    expr: up{job="node_exporter"} == 0
    for: 3m
    labels:
      severity: critical
      environment: #@ data.values.env
    annotations:
      title: Node {{ $labels.instance }} is down
      description: Failed to scrape {{ $labels.job }}
#@ end

./env/ST.yaml

#@data/values
---
env: ST
app_name: APP_Test
node_exporter_urls:
  - https://192.168.6.100:10080
  - https://192.168.6.101:10080
  - https://192.168.6.102:10080
mq_exporter_urls: 
  - https://192.168.6.100:10080
alertrules:
  enabled:
    - alertrules_prometheus
    - alertrules_host
  alerts:
  - name: MON001_NodeDown
    labels:
      severity: warning
  - name: H001_NodeDown
    for: 5m
    labels:
      severity: info
  - name: H002_NodeDown
    annotations:
      title: new title

./templates/alertrules.yaml

#@ load("alert_all.star", "alert_all")

groups: #@ alert_all()

./schema/schema.yaml

#@data/values-schema
---
env: ""
app_name: ""
#@schema/nullable
node_exporter_urls: 
  - ""
#@schema/nullable
mq_exporter_urls: 
  - ""
#@schema/nullable
alertrules:
  enabled:
    - ""
  #@schema/nullable
  #@schema/type any=True
  alerts:
    - ""

and the result: ./out/alertrules.yaml

groups:
- name: monitoring
  rules:
  - alert: MON001_NodeDown
    expr: up{job="node_exporter"} == 0
    for: 3m
    labels:
      severity: warning                                                        <= changed
      environment: ST
    annotations:
      title: Node {{ $labels.instance }} is down
      description: Failed to scrape {{ $labels.job }}
  - alert: MON002_NodeDown
    expr: up{job="node_exporter"} == 0
    for: 3m
    labels:
      severity: critical
      environment: ST
    annotations:
      title: Node {{ $labels.instance }} is down
      description: Failed to scrape {{ $labels.job }}
- name: host
  rules:
  - alert: H001_NodeDown
    expr: up{job="node_exporter"} == 0
    for: 5m                                                        <= changed
    labels:
      severity: info                                                        <= changed
      environment: ST
    annotations:
      title: Node {{ $labels.instance }} is down
      description: Failed to scrape {{ $labels.job }}
  - alert: H002_NodeDown
    expr: up{job="node_exporter"} == 0
    for: 3m
    labels:
      severity: critical
      environment: ST
    annotations:
      title: new title                                                        <= changed
      description: Failed to scrape {{ $labels.job }}
mamachanko commented 2 years ago

@mq2195 I am glad to hear that you were able to achieve the desired outcome!

You may be interested in how you could work with the overlay module programmatically. For example, consider @jtigger's deep merge example on the ytt playground.
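
To sketch the idea (untested, and the file name severity_patch.lib.yml plus the function severity_patch are made up for illustration), an overlay defined in a .lib.yml could patch a single rule's severity without rebuilding the groups by hand:

#! severity_patch.lib.yml (hypothetical)
#@ load("@ytt:overlay", "overlay")

#@ def severity_patch(alert_name, new_severity):
#@overlay/match by=overlay.all, expects="1+"
- rules:
    #@overlay/match by=overlay.subset({"alert": alert_name}), expects="0+"
    - labels:
        severity: #@ new_severity
#@ end

and applied in the template:

#@ load("@ytt:overlay", "overlay")
#@ load("alert_all.star", "alert_all")
#@ load("severity_patch.lib.yml", "severity_patch")

groups: #@ overlay.apply(alert_all(), severity_patch("MON001_NodeDown", "warning"))

The overlay matches every group (overlay.all), then only the rules whose alert equals the given name, and replaces just labels.severity.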