ansible-collections / community.vmware

Ansible Collection for VMware
GNU General Public License v3.0

VMware Tanzu* Modules #428

Open scottd018 opened 3 years ago

scottd018 commented 3 years ago
SUMMARY

A new VMware community collection for Tanzu modules, separate from the core community.vmware.

ISSUE TYPE

Feature Idea

COMPONENT NAME
ADDITIONAL INFORMATION

We have an internal initiative for VMware Tanzu products that will drive us to either:

A) Using the URI module to make API calls to Tanzu SaaS services (e.g. Tanzu Mission Control)

-OR-

B) Develop new modules for these SaaS Services

We'd like to be able to use option B. However, this issue is about deciding whether the VMware Tanzu modules need their own collection or whether they belong in the core community.vmware. We can obviously start them here and move them later if needed.
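
For concreteness, option A would boil down to raw uri tasks along these lines (a rough sketch only; the Tanzu Mission Control hostname, API path and token variable here are placeholder assumptions, not verified endpoint details):

- name: List clusters via the Tanzu Mission Control API (option A style)
  uri:
    # hypothetical org name and endpoint path -- check the TMC API docs for the real ones
    url: https://{{ tmc_org }}.tmc.cloud.vmware.com/v1alpha1/clusters
    method: GET
    headers:
      Authorization: "Bearer {{ tmc_api_token }}"  # CSP access token (assumed auth scheme)
    return_content: yes
    status_code: 200
  register: tmc_clusters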

Please discuss. :)

mariolenz commented 3 years ago

A) Using the URI module to make API calls to Tanzu SaaS services (e.g. Tanzu Mission Control)

Sounds like an accident waiting to happen :-)=)

B) Develop new modules for these SaaS Services

We'd like to be able to use option B.

In my opinion, that would be way better.

However, this issue is about deciding whether the VMware Tanzu modules need their own collection or whether they belong in the core community.vmware.

Well, it depends on what you mean by "Tanzu". As far as I understand, this is a rather complex portfolio of products including Cloud Foundry, CodeStream, the stuff VMware got when they took over Bitnami, and other things (PKS -> TKGi).

When we're talking about VCF / vSphere with Tanzu (a.k.a. Project Pacific), I think this repo or vmware.vmware_rest is the right place because it's part of vSphere, even if you have to buy it separately. For everything else, I'm not so sure...

I think it would be best if we keep this collection focused on vSphere.

Actually, VMware has so many products now that I think this collection should be called community.vsphere instead of community.vmware. This name would make it clear what it's about. After all, there's not only the Tanzu portfolio but also vRO, vROps, vRA, Log Insight and a lot of other stuff. I think it'll be hard to maintain modules for all these products in one collection.

scottd018 commented 3 years ago

A) Using the URI module to make API calls to Tanzu SaaS services (e.g. Tanzu Mission Control)

Sounds like an accident waiting to happen :-)=)

You'd be shocked at how efficient I've gotten using uri in lieu of missing Ansible modules. :) haha

This name would make it clear what it's about. After all, there's not only the Tanzu portfolio but also vRO, vROps, vRA, Log Insight and a lot of other stuff. I think it'll be hard to maintain modules for all these products in one collection.

100% agree there. That's kinda the reason I was thinking there needs to be a new collection for all Tanzu products. It doesn't make sense to put the Tanzu portfolio (generally VMware's Kubernetes-specific products) underneath this repo.

Is there a formal process to get a new Ansible community started? Our internal initiative will drive us towards a good start for these modules. We can begin development and come back when we have some things done. Just wondered what that process looked like and who would be responsible for carving out the new "community". We could also host it in a separate repo and migrate it to whichever new community is appropriate once it gets large enough if that's the better option.

Thanks for the discussion! Appreciate the feedback.

mariolenz commented 3 years ago

A) Using the URI module to make API calls to Tanzu SaaS services (e.g. Tanzu Mission Control)

Sounds like an accident waiting to happen :-)=)

You'd be shocked at how efficient I've gotten using uri in lieu of missing Ansible modules. :) haha

I'm not shocked that easily ;-) Nevertheless, "that way madness lies" imho.

Is there a formal process to get a new Ansible community started?

I don't know. @Akasurde @goneri Is there a formal process to start a new community?

Our internal initiative will drive us towards a good start for these modules. We can begin development and come back when we have some things done. Just wondered what that process looked like and who would be responsible for carving out the new "community". We could also host it in a separate repo and migrate it to whichever new community is appropriate once it gets large enough if that's the better option.

Well, you don't have to have your collection under ansible-collections. Dell and CheckPoint don't, for example. If you're from VMware, you should just create it under VMware.

If you're not from VMware, I can understand why you want to make this an "official" ansible-collections collection. On the other hand, it would be quite good PR for you if you could make your collection the collection for VMware Tanzu modules ;-)

goneri commented 3 years ago

Hi @scottd018,

The new vmware.vmware_rest modules are auto-generated from the API definition (a Swagger 2.0 file). For now we focus on the vCenter endpoints (vcenter.json). Do you have a Swagger 2.0 or an OpenAPI 3.0 JSON documenting these Tanzu* endpoints? It may be possible to auto-generate those modules too.

The tool that we use is here: https://github.com/ansible-collections/vmware_rest_code_generator
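
For comparison, the generated modules behave like any other Ansible module. A minimal example using one of the existing vmware.vmware_rest modules (assuming the collection is installed; the vcenter_* connection parameters can also be picked up from environment variables):

- name: Gather cluster info with a generated module
  vmware.vmware_rest.vcenter_cluster_info:
    vcenter_hostname: "{{ vcenter_hostname }}"
    vcenter_username: "{{ vcenter_username }}"
    vcenter_password: "{{ vcenter_password }}"
    vcenter_validate_certs: no
  register: clusters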

Akasurde commented 3 years ago

@scottd018 Thanks for providing this feature idea. I think we can create a separate collection for the VMware Tanzu product line since it covers a different set of requirements and targeted users.

Let us know about the OpenAPI JSON (if there is one) and we can automagically generate modules.

Thanks.

mariolenz commented 3 years ago

@scottd018 @goneri @Akasurde It's still a very early version (0.0.1) and it's YAML instead of JSON, but on developer.vmware.com I found a Swagger/OpenAPI specification for Tanzu Mission Control.

Hope this helps.

Akasurde commented 3 years ago

@mariolenz Great, thanks for providing this information. It will really help.

JoschuaA4 commented 3 years ago

@Akasurde @mariolenz @goneri @scottd018 I've found the API for the Kubernetes deployment via Tanzu: https://developer.vmware.com/docs/vsphere-automation/latest/vcenter/api/vcenter/namespace-management/clusters/clusteractionenable/post/

mzlumin commented 3 years ago

I was able to install Tanzu with NSX-T 3.1 based on this API.

It's far from polished, but I wanted to share the playbook I used. I added the variables by hand for now and haven't had time to look into parsing them (see the parsing sketch after the output below).

---
- hosts: localhost
  name: Create vSphere tag-based storage policy for TKG 
  gather_facts: false
  vars_files: ../answerfile.yml

  tasks:

    - name: Login to get Session ID
      uri:
        url: https://{{ vcenter_hostname }}.{{ domain }}/api/session
        user: '{{ vcenter_username }}'
        password: '{{ vcenter_password }}'
        validate_certs: no
        method: POST
        #body: "{{ lookup('file','issue.json') }}"
        force_basic_auth: yes
        status_code: 201
        body_format: json
      register: this

    - name: Get Cluster from vCenter
      uri:
        validate_certs: no
        url: https://{{ vcenter_hostname }}.{{ domain }}/api/vcenter/cluster
        method: GET
        return_content: yes
        headers:
          vmware-api-session-id: "{{ this.vmware_api_session_id }}"
      register: cluster_info    # capture the cluster list so the cluster ID can be parsed later

    - name: Get Networks from vCenter
      uri:
        validate_certs: no
        url: https://{{ vcenter_hostname }}.{{ domain }}/api/vcenter/network
        method: GET
        return_content: yes
        headers:
          vmware-api-session-id: "{{ this.vmware_api_session_id }}"
      register: network_info    # capture the network list so the portgroup ID can be parsed later

    - name: Get Storage Policies from vCenter
      uri:
        validate_certs: no
        url: https://{{ vcenter_hostname }}.{{ domain }}/api/vcenter/storage/policies
        method: GET
        return_content: yes
        headers:
          vmware-api-session-id: "{{ this.vmware_api_session_id }}"
      register: storage_policy_info    # capture the policy list so the policy ID can be parsed later

    - name: Get VDS from NSX        # the VDS UUID is needed for cluster_distributed_switch below
      uri:
        url: https://10.0.10.40/api/v1/fabric/virtual-switches
        user: 'admin'
        password: '{{ vcenter_password }}'
        validate_certs: no
        method: GET
        #body: "{{ lookup('file','issue.json') }}"
        force_basic_auth: yes
        status_code: 200
        body_format: json
      register: nsx_vds_info

    - name: Get Edge-Cluster from NSX        # the edge cluster ID is needed for nsx_edge_cluster below
      uri:
        url: https://10.0.10.40/api/v1/edge-clusters
        user: 'admin'
        password: '{{ vcenter_password }}'
        validate_certs: no
        method: GET
        #body: "{{ lookup('file','issue.json') }}"
        force_basic_auth: yes
        status_code: 200
        body_format: json
      register: nsx_edge_info

    - name: Enable TKG on Cluster
      uri:
        validate_certs: no
#        url: https://{{ vcenter_hostname }}.{{ domain }}/api/vcenter/namespace-management/clusters/{{cluster_name_compute}}?action=enable
        url: https://{{ vcenter_hostname }}.{{ domain }}/api/vcenter/namespace-management/clusters/domain-c1008?action=enable
        method: POST
        body_format: json
        status_code: [204]
        return_content: true
        headers:
          vmware-api-session-id: "{{ this.vmware_api_session_id }}"
          Content-Type: application/json
        body: 
          ephemeral_storage_policy: "1031a289-e08a-483a-9713-574fd0b07238"
          image_storage:
            storage_policy: "1031a289-e08a-483a-9713-574fd0b07238"
          master_management_network:
            address_range:
              address_count: 5
              gateway: "10.0.10.253"
              starting_address: "10.0.10.190"
              subnet_mask: "255.255.255.0"
            mode: "STATICRANGE"
            network: "dvportgroup-1025"
          master_DNS:
            - "10.0.10.202"
          master_NTP_servers:
            - "pool.ntp.org"
          master_storage_policy: "1031a289-e08a-483a-9713-574fd0b07238"
          network_provider: "NSXT_CONTAINER_PLUGIN"
          ncp_cluster_network_spec:
            cluster_distributed_switch: "50 0f ac 8c 54 ff 20 31-ab c9 02 30 f8 3a 25 ef"
            egress_cidrs:
              - address: "10.30.20.0"
                prefix: "24"
            ingress_cidrs:
               - address: "10.30.10.0"
                 prefix: "24"
            nsx_edge_cluster: "bc7fa4d6-2c5a-4cd3-a133-77b70d374ce7" #"Edge-Cluster-01"
            pod_cidrs: 
               - address: "10.244.0.0"
                 prefix: "21"
          service_cidr:
            address: "10.64.96.0"
            prefix: 24
          size_hint: "TINY"
          default_kubernetes_service_content_library: "Tanzu-TKG-Library" 

Output:

TASK [Enable TKG on Cluster] ***************************************************************************************************************
task path: /home/mzulmin/lab-images/ANSIBLE-vSphere-VCSA-OVA-Deploy/playbooks/13-enable-workload-management.yml:92
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: mzulmin
<127.0.0.1> EXEC /bin/sh -c 'echo ~mzulmin && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/mzulmin/.ansible/tmp `"&& mkdir "` echo /home/mzulmin/.ansible/tmp/ansible-tmp-1621882904.459102-11020-95431964083019 `" && echo ansible-tmp-1621882904.459102-11020-95431964083019="` echo /home/mzulmin/.ansible/tmp/ansible-tmp-1621882904.459102-11020-95431964083019 `" ) && sleep 0'
Using module file /usr/local/lib/python3.8/dist-packages/ansible/modules/uri.py
<127.0.0.1> PUT /home/mzulmin/.ansible/tmp/ansible-local-10855z_09tgdf/tmp9_x0gkf5 TO /home/mzulmin/.ansible/tmp/ansible-tmp-1621882904.459102-11020-95431964083019/AnsiballZ_uri.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/mzulmin/.ansible/tmp/ansible-tmp-1621882904.459102-11020-95431964083019/ /home/mzulmin/.ansible/tmp/ansible-tmp-1621882904.459102-11020-95431964083019/AnsiballZ_uri.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/mzulmin/.ansible/tmp/ansible-tmp-1621882904.459102-11020-95431964083019/AnsiballZ_uri.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/mzulmin/.ansible/tmp/ansible-tmp-1621882904.459102-11020-95431964083019/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "connection": "close",
    "content": "",
    "cookies": {},
    "cookies_string": "",
    "date": "Mon, 24 May 2021 19:01:44 GMT",
    "elapsed": 3,
    "invocation": {
        "module_args": {
            "attributes": null,
            "body": {
                "default_kubernetes_service_content_library": "Tanzu-TKG-Library",
                "ephemeral_storage_policy": "1031a289-e08a-483a-9713-574fd0b07238",
                "image_storage": {
                    "storage_policy": "1031a289-e08a-483a-9713-574fd0b07238"
                },
                "master_DNS": [
                    "10.0.10.202"
                ],
                "master_NTP_servers": [
                    "pool.ntp.org"
                ],
                "master_management_network": {
                    "address_range": {
                        "address_count": 5,
                        "gateway": "10.0.10.253",
                        "starting_address": "10.0.10.190",
                        "subnet_mask": "255.255.255.0"
                    },
                    "mode": "STATICRANGE",
                    "network": "dvportgroup-1025"
                },
                "master_storage_policy": "1031a289-e08a-483a-9713-574fd0b07238",
                "ncp_cluster_network_spec": {
                    "cluster_distributed_switch": "50 0f ac 8c 54 ff 20 31-ab c9 02 30 f8 3a 25 ef",
                    "egress_cidrs": [
                        {
                            "address": "10.30.20.0",
                            "prefix": "24"
                        }
                    ],
                    "ingress_cidrs": [
                        {
                            "address": "10.30.10.0",
                            "prefix": "24"
                        }
                    ],
                    "nsx_edge_cluster": "bc7fa4d6-2c5a-4cd3-a133-77b70d374ce7",
                    "pod_cidrs": [
                        {
                            "address": "10.244.0.0",
                            "prefix": "21"
                        }
                    ]
                },
                "network_provider": "NSXT_CONTAINER_PLUGIN",
                "service_cidr": {
                    "address": "10.64.96.0",
                    "prefix": 24
                },
                "size_hint": "TINY"
            },
            "body_format": "json",
            "ca_path": null,
            "client_cert": null,
            "client_key": null,
            "creates": null,
            "dest": null,
            "follow_redirects": "safe",
            "force": false,
            "force_basic_auth": false,
            "group": null,
            "headers": {
                "Content-Type": "application/json",
                "vmware-api-session-id": "31361c6ca70fcdfcbed20a4a40779918"
            },
            "http_agent": "ansible-httpget",
            "method": "POST",
            "mode": null,
            "owner": null,
            "remote_src": false,
            "removes": null,
            "return_content": true,
            "selevel": null,
            "serole": null,
            "setype": null,
            "seuser": null,
            "src": null,
            "status_code": [
                204
            ],
            "timeout": 30,
            "unix_socket": null,
            "unsafe_writes": false,
            "url": "https://srv-vcenter-01.megasp.net/api/vcenter/namespace-management/clusters/domain-c1008?action=enable",
            "url_password": null,
            "url_username": null,
            "use_gssapi": false,
            "use_proxy": true,
            "validate_certs": false
        }
    },
    "msg": "OK (unknown bytes)",
    "redirected": false,
    "server": "envoy",
    "status": 204,
    "url": "https://srv-vcenter-01.megasp.net/api/vcenter/namespace-management/clusters/domain-c1008?action=enable",
    "x_envoy_upstream_service_time": "3058"
}
Read vars_file '../answerfile.yml'
META: ran handlers
Read vars_file '../answerfile.yml'
META: ran handlers

PLAY RECAP *********************************************************************************************************************************
localhost                  : ok=8    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
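
As a follow-up on the "parsing them" part: since the GET tasks above now register their responses, the hard-coded IDs could be derived instead of pasted in by hand. An untested sketch (the lookup names workload_portgroup_name and storage_policy_name are illustrative and would live in the answer file alongside cluster_name_compute; the /api/... endpoints return plain JSON lists, available via the registered result's json attribute):

    - name: Derive managed object IDs from the registered responses
      set_fact:
        # /api/vcenter/cluster items look like {"cluster": "domain-c1008", "name": "...", ...}
        compute_cluster_id: "{{ (cluster_info.json | selectattr('name', 'equalto', cluster_name_compute) | first).cluster }}"
        # /api/vcenter/network items look like {"network": "dvportgroup-1025", "name": "...", "type": "..."}
        workload_network_id: "{{ (network_info.json | selectattr('name', 'equalto', workload_portgroup_name) | first).network }}"
        # /api/vcenter/storage/policies items look like {"policy": "<uuid>", "name": "...", ...}
        tkg_storage_policy_id: "{{ (storage_policy_info.json | selectattr('name', 'equalto', storage_policy_name) | first).policy }}"
        # NSX list endpoints wrap items in "results"; edge cluster items carry "id" and "display_name"
        nsx_edge_cluster_id: "{{ (nsx_edge_info.json.results | selectattr('display_name', 'equalto', 'Edge-Cluster-01') | first).id }}"

These facts could then replace the hard-coded domain-c1008, dvportgroup-1025 and storage policy UUIDs in the "Enable TKG on Cluster" task.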