ansible-collections / netapp.ontap

Ansible collection to support NetApp ONTAP configuration.
https://galaxy.ansible.com/netapp/ontap
GNU General Public License v3.0

Add expected_iops_allocation & peak_iops_allocation to na_ontap_qos_policy_group #175

Closed — vrd83 closed this 8 months ago

vrd83 commented 1 year ago

Summary

The na_ontap_qos_policy_group module does not allow us to specify expected_iops_allocation or peak_iops_allocation when creating Keystone adaptive QoS policy groups.

The older ZAPI based na_ontap_qos_adaptive_policy_group allowed us to specify the peak_iops_allocation, but not expected_iops_allocation: https://docs.ansible.com/ansible/latest/collections/netapp/ontap/na_ontap_qos_adaptive_policy_group_module.html

The ask is to support both expected_iops_allocation and peak_iops_allocation in na_ontap_qos_policy_group.

Links to relevant Keystone documentation:

- https://docs.netapp.com/us-en/keystone-staas/concepts/qos.html#settings-for-extreme-service-level
- https://docs.netapp.com/us-en/keystone-staas/concepts/qos.html#settings-for-premium-service-level
- https://docs.netapp.com/us-en/keystone-staas/concepts/qos.html#settings-for-performance-service-level
- https://docs.netapp.com/us-en/keystone-staas/concepts/qos.html#settings-for-standard-service-level
- https://docs.netapp.com/us-en/keystone-staas/concepts/qos.html#settings-for-value-service-level

Component Name

na_ontap_qos_policy_group

Additional Information

---
- hosts: localhost
  collections:
    - netapp.ontap
  gather_facts: false
  vars:
    login: &login
      hostname: "10.10.10.10"
      username: "admin"
      password: "password"
    svm_name: "test-svm"
    aqos_policy_groups:
      keystone-staas-extreme:
        absolute_min_iops: 1000IOPS
        block_size: 32K
        expected_iops_allocation: allocated_space
        expected_iops: 6144IOPS/TB
        peak_iops_allocation: used_space
        peak_iops: 12288IOPS/TB
      keystone-staas-premium:
        absolute_min_iops: 500IOPS
        block_size: 32K
        expected_iops_allocation: allocated_space
        expected_iops: 2048IOPS/TB
        peak_iops_allocation: used_space
        peak_iops: 4096IOPS/TB
      keystone-staas-performance:
        absolute_min_iops: 250IOPS
        block_size: 32K
        expected_iops_allocation: allocated_space
        expected_iops: 1024IOPS/TB
        peak_iops_allocation: used_space
        peak_iops: 2048IOPS/TB
      keystone-staas-standard:
        absolute_min_iops: 75IOPS
        block_size: 32K
        expected_iops_allocation: allocated_space
        expected_iops: 256IOPS/TB
        peak_iops_allocation: used_space
        peak_iops: 512IOPS/TB
      keystone-staas-value:
        absolute_min_iops: 75IOPS
        block_size: 32K
        expected_iops_allocation: allocated_space
        expected_iops: 64IOPS/TB
        peak_iops_allocation: used_space
        peak_iops: 128IOPS/TB
  name: aQoS Configuration
  tasks:
    - name: Create adaptive QoS policy group in REST
      with_dict: "{{ aqos_policy_groups }}"
      loop_control:
        label: "{{ item.key }}"
      na_ontap_qos_policy_group:
        <<: *login
        name: "{{ item.key }}"
        state: present
        use_rest: always
        vserver: "{{ svm_name }}"
        adaptive_qos_options:
          absolute_min_iops: "{{ item.value.absolute_min_iops }}"
          block_size: "{{ item.value.block_size }}"
          expected_iops_allocation: "{{ item.value.expected_iops_allocation }}"
          expected_iops: "{{ item.value.expected_iops }}"
          peak_iops_allocation: "{{ item.value.peak_iops_allocation }}"
          peak_iops: "{{ item.value.peak_iops }}"
carchi8py commented 11 months ago

@vrd83 sorry for the delay. I've added a story (DEVOPS-6488) to support peak-iops-allocation

For expected-iops-allocation there is currently no REST equivalent. You can ask the REST API team on Discord (netapp.io) in the ontap-api channel about adding this to the REST API.

https://docs.netapp.com/us-en/ontap-restmap-9131/qos.html#qos-adaptive-policy-group-create
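Until the module supports peak_iops_allocation, one possible interim workaround is to call the ONTAP REST endpoint directly with ansible.builtin.uri. This is only a sketch: the adaptive field names and accepted values are taken from the ONTAP REST documentation for /api/storage/qos/policies and should be verified against your ONTAP version; the hostname, credentials, SVM, and numbers are placeholders.

```yaml
- name: Create adaptive QoS policy group via raw REST (sketch, fields per ONTAP REST docs)
  ansible.builtin.uri:
    url: "https://{{ hostname }}/api/storage/qos/policies"
    method: POST
    url_username: "{{ username }}"
    url_password: "{{ password }}"
    force_basic_auth: true
    validate_certs: false
    body_format: json
    body:
      name: keystone-staas-extreme
      svm:
        name: "{{ svm_name }}"
      adaptive:
        absolute_min_iops: 1000
        expected_iops: 6144
        peak_iops: 12288
        block_size: 32k
        peak_iops_allocation: used_space
    status_code: [201, 202]
```

Note that this bypasses the module's idempotency, so a second run would fail with a "duplicate entry" error unless you first check for the policy's existence.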

carchi8py commented 8 months ago

Support for this was added in 22.8.0
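For readers landing here later, a minimal single-policy sketch of the new options, assuming netapp.ontap 22.8.0 or later (hostname, credentials, and values are placeholders; whether expected_iops_allocation is honored also depends on your ONTAP version exposing it over REST):

```yaml
- name: Create adaptive QoS policy group (requires netapp.ontap >= 22.8.0)
  netapp.ontap.na_ontap_qos_policy_group:
    hostname: "10.10.10.10"
    username: "admin"
    password: "password"
    use_rest: always
    state: present
    vserver: test-svm
    name: keystone-staas-extreme
    adaptive_qos_options:
      absolute_min_iops: 1000
      expected_iops: 6144
      peak_iops: 12288
      block_size: 32k
      expected_iops_allocation: allocated_space
      peak_iops_allocation: used_space
```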