saltstack / salt

Software to automate the management and configuration of any infrastructure or application at scale. Get access to the Salt software package repository here:
https://repo.saltproject.io/
Apache License 2.0

salt-cloud -a start not working for vmware cloud provider #56343

Open jorisdejosselin opened 4 years ago

jorisdejosselin commented 4 years ago

Description of Issue

salt-cloud -a start does not seem to render the map file correctly when using the vmware cloud provider on the latest version; as a result, no machines from the map file are targeted. Curiously, salt-cloud -a stop does work correctly, which makes it tedious to start the machines back up again in VMware.
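
As a possible workaround sketch (untested as a fix here, and assuming the instances already exist under the vmware provider), the action interface also accepts instance names directly, which bypasses the map file entirely; the names below are the ones from the rendered map later in this report:

salt-cloud -a start kubemaster80 kubemaster81 kubemaster82 kubenode83 kubenode84 kubenode85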

Steps to Reproduce Issue

salt-cloud -a start -m /etc/salt/cloud.maps.d/kubernetes_cluster_linux.map.jinja -l debug
[DEBUG   ] Reading configuration from /etc/salt/cloud
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: salt
[DEBUG   ] Missing configuration file: /etc/salt/cloud.providers
[DEBUG   ] Including configuration from '/etc/salt/cloud.providers.d/vmware.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.providers.d/vmware.conf
[DEBUG   ] Missing configuration file: /etc/salt/cloud.profiles
[DEBUG   ] Including configuration from '/etc/salt/cloud.profiles.d/base_linux.conf'
[DEBUG   ] Reading configuration from /etc/salt/cloud.profiles.d/base_linux.conf
[DEBUG   ] Configuration file path: /etc/salt/cloud
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO    ] salt-cloud starting
[DEBUG   ] Marking 'base64_encode' as a jinja filter
[DEBUG   ] Marking 'base64_decode' as a jinja filter
[DEBUG   ] Marking 'md5' as a jinja filter
[DEBUG   ] Marking 'sha1' as a jinja filter
[DEBUG   ] Marking 'sha256' as a jinja filter
[DEBUG   ] Marking 'sha512' as a jinja filter
[DEBUG   ] Marking 'hmac' as a jinja filter
[DEBUG   ] Marking 'hmac_compute' as a jinja filter
[DEBUG   ] Marking 'random_hash' as a jinja filter
[DEBUG   ] Marking 'rand_str' as a jinja filter
[DEBUG   ] Marking 'file_hashsum' as a jinja filter
[DEBUG   ] Marking 'http_query' as a jinja filter
[DEBUG   ] Marking 'strftime' as a jinja filter
[DEBUG   ] Marking 'date_format' as a jinja filter
[DEBUG   ] Marking 'yaml_dquote' as a jinja filter
[DEBUG   ] Marking 'yaml_squote' as a jinja filter
[DEBUG   ] Marking 'yaml_encode' as a jinja filter
[DEBUG   ] Marking 'raise' as a jinja global
[DEBUG   ] Marking 'match' as a jinja test
[DEBUG   ] Marking 'equalto' as a jinja test
[DEBUG   ] Marking 'skip' as a jinja filter
[DEBUG   ] Marking 'sequence' as a jinja filter
[DEBUG   ] Marking 'to_bool' as a jinja filter
[DEBUG   ] Marking 'tojson' as a jinja filter
[DEBUG   ] Marking 'quote' as a jinja filter
[DEBUG   ] Marking 'regex_escape' as a jinja filter
[DEBUG   ] Marking 'regex_search' as a jinja filter
[DEBUG   ] Marking 'regex_match' as a jinja filter
[DEBUG   ] Marking 'regex_replace' as a jinja filter
[DEBUG   ] Marking 'uuid' as a jinja filter
[DEBUG   ] Marking 'unique' as a jinja filter
[DEBUG   ] Marking 'min' as a jinja filter
[DEBUG   ] Marking 'max' as a jinja filter
[DEBUG   ] Marking 'avg' as a jinja filter
[DEBUG   ] Marking 'union' as a jinja filter
[DEBUG   ] Marking 'intersect' as a jinja filter
[DEBUG   ] Marking 'difference' as a jinja filter
[DEBUG   ] Marking 'symmetric_difference' as a jinja filter
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
[DEBUG   ] Reading configuration from /etc/salt/cloud
[DEBUG   ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: salt
[DEBUG   ] Grains refresh requested. Refreshing grains.
[DEBUG   ] Reading configuration from /etc/salt/cloud
[DEBUG   ] pyVmomi not loaded: Incompatible versions of Python. See Issue #29537.
[DEBUG   ] LazyLoaded zfs.is_supported
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] compile template: /etc/salt/cloud.maps.d/kubernetes_cluster_linux.map.jinja
[DEBUG   ] Jinja search path: [u'/var/cache/salt/cloud/files/base']
[DEBUG   ] Updating roots fileserver cache
[PROFILE ] Time (in seconds) to render '/etc/salt/cloud.maps.d/kubernetes_cluster_linux.map.jinja' using 'jinja' renderer: 0.0621020793915
[DEBUG   ] Rendered data from file: /etc/salt/cloud.maps.d/kubernetes_cluster_linux.map.jinja:

base_linux:

  - kubemaster80:
        grains:
            hostname: kubemaster80
            ip: 192.168.1.80

  - kubemaster81:
        grains:
            hostname: kubemaster81
            ip: 192.168.1.81

  - kubemaster82:
        grains:
            hostname: kubemaster82
            ip: 192.168.1.82

  - kubenode83:
        grains:
            hostname: kubenode83
            ip: 192.168.1.83

  - kubenode84:
        grains:
            hostname: kubenode84
            ip: 192.168.1.84

  - kubenode85:
        grains:
            hostname: kubenode85
            ip: 192.168.1.85

[DEBUG   ] Results of YAML rendering:
OrderedDict([(u'base_linux', [OrderedDict([(u'kubemaster80', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubemaster80'), (u'ip', u'192.168.1.80')]))]))]), OrderedDict([(u'kubemaster81', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubemaster81'), (u'ip', u'192.168.1.81')]))]))]), OrderedDict([(u'kubemaster82', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubemaster82'), (u'ip', u'192.168.1.82')]))]))]), OrderedDict([(u'kubenode83', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubenode83'), (u'ip', u'192.168.1.83')]))]))]), OrderedDict([(u'kubenode84', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubenode84'), (u'ip', u'192.168.1.84')]))]))]), OrderedDict([(u'kubenode85', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubenode85'), (u'ip', u'192.168.1.85')]))]))])])])
[PROFILE ] Time (in seconds) to render '/etc/salt/cloud.maps.d/kubernetes_cluster_linux.map.jinja' using 'yaml' renderer: 0.00189900398254
[INFO    ] Applying map from '/etc/salt/cloud.maps.d/kubernetes_cluster_linux.map.jinja'.
[DEBUG   ] Reading configuration from /etc/salt/cloud
[DEBUG   ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG   ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: salt
[DEBUG   ] Grains refresh requested. Refreshing grains.
[DEBUG   ] Reading configuration from /etc/salt/cloud
[DEBUG   ] LazyLoaded zfs.is_supported
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] LazyLoaded jinja.render
[DEBUG   ] LazyLoaded yaml.render
[DEBUG   ] compile template: /etc/salt/cloud.maps.d/kubernetes_cluster_linux.map.jinja
[DEBUG   ] Jinja search path: [u'/var/cache/salt/cloud/files/base']
[PROFILE ] Time (in seconds) to render '/etc/salt/cloud.maps.d/kubernetes_cluster_linux.map.jinja' using 'jinja' renderer: 0.00166201591492
[DEBUG   ] Rendered data from file: /etc/salt/cloud.maps.d/kubernetes_cluster_linux.map.jinja:

base_linux:

  - kubemaster80:
        grains:
            hostname: kubemaster80
            ip: 192.168.1.80

  - kubemaster81:
        grains:
            hostname: kubemaster81
            ip: 192.168.1.81

  - kubemaster82:
        grains:
            hostname: kubemaster82
            ip: 192.168.1.82

  - kubenode83:
        grains:
            hostname: kubenode83
            ip: 192.168.1.83

  - kubenode84:
        grains:
            hostname: kubenode84
            ip: 192.168.1.84

  - kubenode85:
        grains:
            hostname: kubenode85
            ip: 192.168.1.85

[DEBUG   ] Results of YAML rendering:
OrderedDict([(u'base_linux', [OrderedDict([(u'kubemaster80', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubemaster80'), (u'ip', u'192.168.1.80')]))]))]), OrderedDict([(u'kubemaster81', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubemaster81'), (u'ip', u'192.168.1.81')]))]))]), OrderedDict([(u'kubemaster82', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubemaster82'), (u'ip', u'192.168.1.82')]))]))]), OrderedDict([(u'kubenode83', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubenode83'), (u'ip', u'192.168.1.83')]))]))]), OrderedDict([(u'kubenode84', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubenode84'), (u'ip', u'192.168.1.84')]))]))]), OrderedDict([(u'kubenode85', OrderedDict([(u'grains', OrderedDict([(u'hostname', u'kubenode85'), (u'ip', u'192.168.1.85')]))]))])])])
[PROFILE ] Time (in seconds) to render '/etc/salt/cloud.maps.d/kubernetes_cluster_linux.map.jinja' using 'yaml' renderer: 0.00219583511353
[DEBUG   ] Could not LazyLoad vmware.optimize_providers: 'vmware.optimize_providers' is not available.
[DEBUG   ] The 'vmware' cloud driver is unable to be optimized.
[DEBUG   ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG   ] LazyLoaded parallels.avail_locations
[DEBUG   ] LazyLoaded proxmox.avail_sizes
The following virtual machines are set to be actioned with "start":

Proceed? [N/y] 
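Note that the list of virtual machines above is empty: the map file renders to the expected YAML (see the dump above), but no machines end up selected for the start action.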

Versions Report

Salt Version:
          Salt: 3000

Dependency Versions:
          cffi: Not Installed
      cherrypy: Not Installed
      dateutil: Not Installed
     docker-py: Not Installed
         gitdb: Not Installed
     gitpython: Not Installed
        Jinja2: 2.7.2
       libgit2: Not Installed
      M2Crypto: Not Installed
          Mako: Not Installed
  msgpack-pure: Not Installed
msgpack-python: 0.6.2
  mysql-python: Not Installed
     pycparser: Not Installed
      pycrypto: 2.6.1
  pycryptodome: Not Installed
        pygit2: Not Installed
        Python: 2.7.5 (default, Aug 7 2019, 00:51:29)
  python-gnupg: Not Installed
        PyYAML: 3.11
         PyZMQ: 15.3.0
         smmap: Not Installed
       timelib: Not Installed
       Tornado: 4.5.3
           ZMQ: 4.1.4

System Versions:
          dist: centos 7.7.1908 Core
        locale: UTF-8
       machine: x86_64
       release: 3.10.0-1062.12.1.el7.x86_64
        system: Linux
       version: CentOS Linux 7.7.1908 Core

Ch3LL commented 4 years ago

Was this working previously? And can you share a sanitized version of your map file?

jorisdejosselin commented 4 years ago

This has never worked for me; it is my first time working with salt-cloud and the VMware provider. Below are the map file and the Jinja code used to generate it. Hopefully this is enough information to find the problem.

Full map file:

base_linux:

  - kubemaster80:
        grains:
            hostname: kubemaster80
            ip: 192.168.1.80

  - kubemaster81:
        grains:
            hostname: kubemaster81
            ip: 192.168.1.81

  - kubemaster82:
        grains:
            hostname: kubemaster82
            ip: 192.168.1.82

  - kubenode83:
        grains:
            hostname: kubenode83
            ip: 192.168.1.83 

  - kubenode84:
        grains:
            hostname: kubenode84
            ip: 192.168.1.84 

  - kubenode85:
        grains:
            hostname: kubenode85
            ip: 192.168.1.85 

Full map file with jinja:

base_linux:
{% for i in range(80, 83, 1) %}
  - kubemaster{{ i }}:
        grains:
            hostname: kubemaster{{ i }}
            ip: 192.168.1.{{ i }}
{% endfor %}
{% for i in range(83, 86, 1) %}
  - kubenode{{ i }}:
        grains:
            hostname: kubenode{{ i }}
            ip: 192.168.1.{{ i }} 
{% endfor %}
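
For reference, Jinja's range behaves like Python's: the stop value is exclusive, so range(80, 83, 1) yields 80, 81, 82, and the two loops together produce exactly the six entries of the full map file above. A minimal illustration of the expansion:

{# range(80, 83, 1) yields 80, 81, 82; the stop value is exclusive #}
{% for i in range(80, 83, 1) %}kubemaster{{ i }} {% endfor %}
{# renders as: kubemaster80 kubemaster81 kubemaster82 #}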
jorisdejosselin commented 4 years ago

To make the rendered map more readable, I removed the empty lines created by Jinja using the code below (see the note after the output for what the {%- syntax does). This did not solve the issue, however.

base_linux:
{%- for i in range(80, 83, 1) %}
  - kubemaster{{ i }}:
        grains:
            hostname: kubemaster{{ i }}
            ip: 192.168.1.{{ i }}
{%- endfor %}
{%- for i in range(83, 86, 1) %}
  - kubenode{{ i }}:
        grains:
            hostname: kubenode{{ i }}
            ip: 192.168.1.{{ i }} 
{%- endfor %}
  - kube-apiserver-load-balancer:
        grains:
            hostname: kube-apiserver_load-balancer
            ip: 192.168.1.100

Output:

base_linux:
  - kubemaster80:
        grains:
            hostname: kubemaster80
            ip: 192.168.1.80
  - kubemaster81:
        grains:
            hostname: kubemaster81
            ip: 192.168.1.81
  - kubemaster82:
        grains:
            hostname: kubemaster82
            ip: 192.168.1.82
  - kubenode83:
        grains:
            hostname: kubenode83
            ip: 192.168.1.83
  - kubenode84:
        grains:
            hostname: kubenode84
            ip: 192.168.1.84
  - kubenode85:
        grains:
            hostname: kubenode85
            ip: 192.168.1.85
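
For reference (plain Jinja behavior, nothing Salt-specific): the dash in {%- tells Jinja to strip all whitespace, including the newline, immediately before the tag, which is why the blank lines between entries disappear in the output above. A minimal before/after:

{% for i in range(2) %}
x{{ i }}
{% endfor %}
{# renders with a blank line before each x0/x1 line #}

{%- for i in range(2) %}
x{{ i }}
{%- endfor %}
{# renders x0 and x1 with no blank lines between them #}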
Ch3LL commented 4 years ago

Thanks for the added information. It looks like I'm able to replicate this even as far back as version 2018.3.4, so this is not a regression.

Ping @saltstack/team-cloud: any ideas here? I've actually never used actions in conjunction with map files. Is that combination expected to work?