jorisdejosselin opened this issue
Was this working previously? And can you share a sanitized version of your map file?
This has not worked for me before; it is my first time working with salt-cloud and the VMware provider. Below are the map file and the Jinja code used to generate it. Hopefully this is enough information to find the problem.
Full map file:
base_linux:
  - kubemaster80:
      grains:
        hostname: kubemaster80
        ip: 192.168.1.80
  - kubemaster81:
      grains:
        hostname: kubemaster81
        ip: 192.168.1.81
  - kubemaster82:
      grains:
        hostname: kubemaster82
        ip: 192.168.1.82
  - kubenode83:
      grains:
        hostname: kubenode83
        ip: 192.168.1.83
  - kubenode84:
      grains:
        hostname: kubenode84
        ip: 192.168.1.84
  - kubenode85:
      grains:
        hostname: kubenode85
        ip: 192.168.1.85
Full map file with Jinja:
base_linux:
{% for i in range(80, 83, 1) %}
  - kubemaster{{ i }}:
      grains:
        hostname: kubemaster{{ i }}
        ip: 192.168.1.{{ i }}
{% endfor %}
{% for i in range(83, 86, 1) %}
  - kubenode{{ i }}:
      grains:
        hostname: kubenode{{ i }}
        ip: 192.168.1.{{ i }}
{% endfor %}
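To double-check what this template renders to outside of salt-cloud, it can be run through plain Jinja2 and PyYAML; a minimal sketch, assuming the map is saved as kube.map (salt's own jinja|yaml renderer may differ in details):

import jinja2
import yaml

# Render the map's Jinja to approximate salt's default jinja|yaml pipeline;
# the file name "kube.map" is an assumption.
with open("kube.map") as f:
    rendered = jinja2.Template(f.read()).render()

print(rendered)                   # inspect the rendered text, blank lines included
print(yaml.safe_load(rendered))   # confirm it still parses into the expected structure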
To make it more readable, I removed the empty lines created by Jinja using whitespace control ({%- ... %}), as shown below. This did not solve the issue, however.
base_linux:
{%- for i in range(80, 83, 1) %}
  - kubemaster{{ i }}:
      grains:
        hostname: kubemaster{{ i }}
        ip: 192.168.1.{{ i }}
{%- endfor %}
{%- for i in range(83, 86, 1) %}
  - kubenode{{ i }}:
      grains:
        hostname: kubenode{{ i }}
        ip: 192.168.1.{{ i }}
{%- endfor %}
  - kube-apiserver-load-balancer:
      grains:
        hostname: kube-apiserver_load-balancer
        ip: 192.168.1.100
Output:
base_linux:
  - kubemaster80:
      grains:
        hostname: kubemaster80
        ip: 192.168.1.80
  - kubemaster81:
      grains:
        hostname: kubemaster81
        ip: 192.168.1.81
  - kubemaster82:
      grains:
        hostname: kubemaster82
        ip: 192.168.1.82
  - kubenode83:
      grains:
        hostname: kubenode83
        ip: 192.168.1.83
  - kubenode84:
      grains:
        hostname: kubenode84
        ip: 192.168.1.84
  - kubenode85:
      grains:
        hostname: kubenode85
        ip: 192.168.1.85
Thanks for the added information. It looks like I'm able to replicate this all the way back to version 2018.3.4, so this is not a regression.
Ping @saltstack/team-cloud: any ideas here? I've actually never used actions in conjunction with map files. Is that combination expected to work?
Description of Issue
salt-cloud -a start does not seem to render the map file correctly when using the VMware cloud provider with the latest version; as a result, no machines in the map file are targeted. Curiously, salt-cloud -a stop does seem to work correctly, which makes starting the machines back up in VMware tedious.
Steps to Reproduce Issue
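The exact commands aren't captured above; presumably, with the VMware provider and base_linux profile configured and the map file from the comments saved somewhere like /etc/salt/cloud.maps.d/kube.map (the path is an assumption), the combination in question is:

salt-cloud -m /etc/salt/cloud.maps.d/kube.map -a stop    # works: the machines in the map are targeted
salt-cloud -m /etc/salt/cloud.maps.d/kube.map -a start   # fails: no machines are targeted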
Versions Report
Salt Version:
           Salt: 3000

Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: Not Installed
      docker-py: Not Installed
          gitdb: Not Installed
      gitpython: Not Installed
         Jinja2: 2.7.2
        libgit2: Not Installed
       M2Crypto: Not Installed
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.6.2
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.5 (default, Aug 7 2019, 00:51:29)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 15.3.0
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.5.3
            ZMQ: 4.1.4

System Versions:
           dist: centos 7.7.1908 Core
         locale: UTF-8
        machine: x86_64
        release: 3.10.0-1062.12.1.el7.x86_64
         system: Linux
        version: CentOS Linux 7.7.1908 Core