saltstack / salt

Software to automate the management and configuration of any infrastructure or application at scale. Get access to the Salt software package repository here:
https://repo.saltproject.io/
Apache License 2.0

[BUG][DEBUG] Passing on saltutil error. Key 'u'retcode' missing from client return. This may be an error in the client. #56724

Closed: lsambolino closed this issue 1 year ago

lsambolino commented 4 years ago

Description When launching `salt 'ourdockernodehostname' state.apply haproxy-docker/haproxyv2 test=True`, the salt-master node enters the following loop and never exits:

[DEBUG   ] Checking whether jid 20200420094156723578 is still running
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/master', u'salt.dev.ourdomain_master', u'tcp://127.0.0.1:4506', u'clear')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://127.0.0.1:4506
[DEBUG   ] Trying to connect to: tcp://127.0.0.1:4506
[DEBUG   ] Passing on saltutil error. Key 'u'retcode' missing from client return. This may be an error in the client.

Before entering the loop, the output is:

[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Including configuration from '/etc/salt/master.d/f_defaults.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/f_defaults.conf
[DEBUG   ] Including configuration from '/etc/salt/master.d/reactor.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/reactor.conf
[DEBUG   ] Changed git to gitfs in master opts' fileserver_backend list
[DEBUG   ] Changed minion to minionfs in master opts' fileserver_backend list
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: salt.dev.ourdomain
[DEBUG   ] Missing configuration file: /root/.saltrc
[DEBUG   ] Configuration file path: /etc/salt/master
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Including configuration from '/etc/salt/master.d/f_defaults.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/f_defaults.conf
[DEBUG   ] Including configuration from '/etc/salt/master.d/reactor.conf'
[DEBUG   ] Reading configuration from /etc/salt/master.d/reactor.conf
[DEBUG   ] Changed git to gitfs in master opts' fileserver_backend list
[DEBUG   ] Changed minion to minionfs in master opts' fileserver_backend list
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: salt.dev.ourdomain
[DEBUG   ] Missing configuration file: /root/.saltrc
[DEBUG   ] MasterEvent PUB socket URI: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/master', u'salt.dev.ourdomain_master', u'tcp://127.0.0.1:4506', u'clear')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://127.0.0.1:4506
[DEBUG   ] Trying to connect to: tcp://127.0.0.1:4506
[DEBUG   ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pub.ipc
[DEBUG   ] LazyLoaded local_cache.get_load
[DEBUG   ] Reading minion list from /var/cache/salt/master/jobs/d5/93725603162a6463d72585c85baa617c2d75925b5f8858d8418144efcbc2e2/.minions.p
[DEBUG   ] get_iter_returns for jid 20200420094156723578 sent to set(['docker3.dev.ourdomain']) will timeout at 09:42:01.729167
[DEBUG   ] Checking whether jid 20200420094156723578 is still running
[DEBUG   ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/master', u'salt.dev.ourdomain_master', u'tcp://127.0.0.1:4506', u'clear')
[DEBUG   ] Connecting the Minion to the Master URI (for the return server): tcp://127.0.0.1:4506
[DEBUG   ] Trying to connect to: tcp://127.0.0.1:4506
[DEBUG   ] Passing on saltutil error. Key 'u'retcode' missing from client return. This may be an error in the client.
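For context, this message is emitted by the master-side client while it polls `saltutil.find_job` to check whether the job is still running: when a minion's reply lacks the `retcode` key, the client logs the error and keeps polling, which is what produces the endless "Checking whether jid ... is still running" loop. A simplified sketch of that check (hypothetical function name, not Salt's actual source):

```python
import logging

log = logging.getLogger(__name__)

def check_find_job_return(minion_return):
    """Sketch of the master-side polling check: every find_job reply is
    expected to carry a 'retcode' key. When the key is missing, the error
    seen in the log above is emitted and polling continues."""
    try:
        retcode = minion_return['retcode']
    except KeyError as exc:
        log.debug(
            "Passing on saltutil error. Key '%s' missing from client "
            "return. This may be an error in the client.", exc
        )
        return None  # treated as "job still running", so the loop repeats
    return retcode
```

In that reading, a minion that replies without `retcode` never looks finished to the master, matching the behavior reported here.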

We are trying to apply the haproxyv2.sls state file, which should update the HAProxy configuration of the selected HAProxy instances running on the dockerhostname node. We put the desired HAProxy configuration in a Jinja template, which is then loaded by the SLS state.

Setup The SLS file we are trying to apply is as follows:

#!pyobjects
from salt://services/configuration.sls import configuration, iterate_instances, iterate_services
from salt://common/init.sls import domain, ourdomainenv, kube_host
import salt.ext.six as six
import socket

def _add_slash_url(url):
    url = url + '/' if url[-1] != '/' else url
    return url

# We are dynamically adding ACLs based on Kubernetes Services states
def add_service_acls(service, acls):
    try:
        aliases = ' '.join(configuration.envs[subenv]['aliases'])
        salt.log.debug('aliases: {}'.format(aliases))
        acl_subenv = "acl subenv-{}-v1 hdr(host) -i {}".format(subenv, aliases)
        salt.log.debug('ACL: {}'.format(acl_subenv))
        acls.add(acl_subenv)
    except KeyError:
        salt.test.exception('subenv {} no alias defined!'.format(subenv))

# Section still related to the dynamic ACLs in haproxy
def add_instance_acls(instance_conf, acls):
    if sorted(instance_conf.hostname) != sorted(default_host):
        acl_instance_host = 'acl service-{}-host hdr(host) -i {}'.format(instance_conf.id,
                                                                         ' '.join(instance_conf.hostname))
        acls.add(acl_instance_host)
    if domain == 'prod.ourdomain' and instance_conf.id == 'ourservicename-ourappname-ourappname-default':
        acls.add('acl service-{id}-v1 path_beg /ourservicename/api/v1/orderstatus/'
                 .format(id=instance_conf.id))
    acl_instance_url = 'acl service-{}-url-v2 path_beg {}'\
                       .format(instance_conf.id,
                               _add_slash_url(instance_conf.url))

    acls.add(acl_instance_url)
    if service.sls_version == 'v1':
        acls.add('acl service-{id}-v1 path_beg /{service_name}/'
                 .format(id=instance_conf.id,
                         service_name=service.service_name))
# Dynamically setting backends in haproxy config
def add_use_backends(instance_conf, use_backends):
    if sorted(instance_conf.hostname) != sorted(default_host):
        host_acl = 'service-{id}-host'.format(id=instance_conf.id)
    else:
        host_acl = 'routing_v2'
    # this is a  v1 example https://haproxy-ourservicename.ourdomain/flower
    # this is a  v2 + sls_v1 example https://services.prod.ourdomain/ourservicename/flower
    # this is a  v2 + sls_v2 example https://services.prod.ourdomain/v2/ourservicename/ourservicename/flower
    if service.sls_version == 'v1':
        if instance_conf.haproxy['serve_from_slash']:
            use_backends.append('use_backend {id} if '
                                'subenv-{subenv}-v1'.format(id=instance_conf.id,
                                                                   subenv=subenv))
        else:
            use_backends.append('use_backend {id} if {host_acl} service-{id}-url-v2 or '
                                'subenv-{subenv}-v1 service-{id}-v1'.format(id=instance_conf.id,
                                                                            host_acl=host_acl,
                                                                            subenv=subenv))
    elif service.sls_version == 'v2':
        if domain == 'prod.ourdomain' and instance_conf.id == 'ourservicename-ourappname-ourappname-default':
            use_backends.append('use_backend {id} if {host_acl} service-{id}-url-v2 or '
                                'subenv-{subenv}-v1 service-{id}-v1'.format(id=instance_conf.id,
                                                                            host_acl=host_acl,
                                                                            subenv=subenv))
        else:
            use_backends.append('use_backend {id} if {host_acl} service-{id}-url-v2'
                            .format(id=instance_conf.id,
                                    host_acl=host_acl))

# Adding kubeserver as backend nodes entries
def add_kube_server(instance_conf, backends, ssl_options):
    salt.log.debug('adding kube servers for {}'.format(instance_conf.id))
    try:
        kube_service = mine(kube_host, '{}-service'.format(instance_conf.id.replace('_','--')))[kube_host]
    except KeyError:
        backends.append('# No service {} in mine'.format(instance_conf.id))
        return
    ports = kube_service['spec']['ports']
    node_port = None
    for port in ports:
        if port['name'] == instance_conf.protocol:
            node_port = port['node_port']
    if node_port is None:
        salt.test.exception('Cannot find node_port for {}'.format(instance_conf.id))
    if instance_conf.kubernetes['ingress']['enabled']:
        backends.append('http-request set-header Host {}'.format(instance_conf.kubernetes['ingress']['hostname']))
    elif instance_conf.haproxy['keep_host_header']:
        backends.append('acl existing-x-forw-host req.hdr(X-Forwarded-Host) -m found')
        backends.append('http-request set-header Host %[req.hdr(X-Forwarded-Host)] if existing-x-forw-host')
    for kube_node in instance_conf.kubernetes['haproxy_hosts']:
        ip = socket.gethostbyname(kube_node)
        if instance_conf.kubernetes['ingress']['enabled']:
            name = instance_conf.kubernetes['ingress']['hostname']
            ssl_options = "send-proxy ssl verify none sni str({})".format(name)
            node_port=31444
        backends.append('server {name} {ip}:{port} check weight 100 agent-port 8080 agent-check {ssl}'
                        .format(name=kube_node,
                                ip=ip,
                                port=node_port,
                                ssl=ssl_options
                                ))
    backends.append('rspadd X-ourdomain-Orchestrator:\ Kubernetes')

def add_backend(instance_conf, backends):

    def add_haproxy_options(instance_conf,param,backends,default=None):
        value = None
        if hasattr(instance_conf, param):
            value = getattr(instance_conf,param)
        elif default is not None:
            value = default
        if value is not None:
            backends.append("{} {}".format(param.replace("_", " "), value))

    def add_haproxy_connection_options(instance_conf,param,backends,default=None):
        value = getattr(instance_conf, 'haproxy_{}'.format(param))
        if value:
            backends.append("option {}".format(param))

    allowed_ip = ' '.join(instance_conf.haproxy['whitelist'])
    if len(allowed_ip) > 0:
        backends.append('acl network_allowed src {}'.format(allowed_ip))
        backends.append("block if !network_allowed")
    ssl_options = "ssl verify none" if instance_conf.protocol == 'https' else ""
    if hasattr(instance_conf, 'cookie_affinity'):
        has_cookie_affinity = True
        cookie_options = "cookie {} prefix nocache".format(instance_conf.cookie_affinity)
    else:
        has_cookie_affinity = False
        cookie_options = None

    add_kube_server(instance_conf, backends, ssl_options)
    # Add backend configurations
    if cookie_options is not None:
        backends.append(cookie_options)
    add_haproxy_options(instance_conf, 'balance', backends)
    add_haproxy_options(instance_conf, 'timeout_tunnel', backends)
    add_haproxy_options(instance_conf, 'timeout_server', backends, default="110s")
    add_haproxy_connection_options(instance_conf, 'forceclose', backends)
    # health check option
    backends.append("option tcp-check")
    # rewrites
    if service.sls_version == 'v1':
        backends.append('acl be-service-{}-url-v2 path_beg {}'
                        .format(instance_conf.id, _add_slash_url(instance_conf.url)))
        backends.append('reqrep ^([^\\ ]*)\\ /' + service_name + '/(.*)'
                        '     \\1\ /\\2 unless be-service-{}-url-v2'
                        .format(instance_conf.id))
    if domain == 'prod.ourdomain' and instance_conf.id == 'ourservicename-ourappname-ourappname-default':
        backends.append('reqrep ^([^\\ ]*)\\ /ourservicename/api/v1/orderstatus/(.*)'
                        '     \\1\ /v2/ourservicename/ourappname/ourappname/v1/lambdas/calls/ourservicename_b2b.service.order.status\\2')

    # Rewrite the URL unless we are instructed not to
    if instance_conf.haproxy_normalize_url:
        backends.append('reqrep ^([^\\ ]*)\\ ' + instance_conf.url + '/(.*)     \\1\ /\\2')
    # add request header
    for key, value in six.iteritems(instance_conf.http_req_headers):
        backends.append('reqadd {}:\ {}'.format(key,value))
    for key, value in six.iteritems(instance_conf.http_req_headers_if_not_present):
        backends.append('acl {hdr}-exists req.hdr({hdr}) -m found'
                        .format(hdr=key))
        backends.append('reqadd {hdr}:\ {value} unless {hdr}-exists'
                        .format(hdr=key, value=value))

    backends.append('rspadd X-ourdomain-Backend:\ {}'.format(instance_conf.id))

acls = set()
use_backends = []
backends = {}
default_host = ['services', 'services.{}'.format(domain)]
acls.add("acl routing_v2 hdr(host) -i {}".format(' '.join(default_host)))

# Iterate over the service list
for service in iterate_services():
    salt.log.debug('Checking: {}'.format(service.service_id))
    subenv = service.subenv
    service_name = service.service_name
    add_service_acls(service, acls)
    for service_instance in iterate_instances(service):
        if service_instance.haproxy['exposed'] or service.external_access['has_frontend']:
            salt.log.debug("Doing {}".format(service_instance.id))
            add_instance_acls(service_instance, acls)
            add_use_backends(service_instance, use_backends)
            backends[service_instance.id]=[]
            add_backend(service_instance, backends[service_instance.id])
# Here we reference the cfg and Jinja files, which are never effectively loaded
File.managed("/etc/haproxy/haproxy.cfg",
             source="salt://haproxy-docker/files/haproxyv2.jinja",
             template='jinja',
             context={'acls': acls,
                      'use_backends': use_backends,
                      'backends': backends},
             watch_in=[{'service': 'haproxy.service'}])

The top SLS file is composed as follows:

base:
  '*':
    - base
    - salt/pkgrepo
    - salt/minion
    - postfix
    - openssh/known_hosts
    - openldap/client
    - unbound
    - filebeat
    - prometheus/node_exporter
    - auditbeat
  'G@roles:redis':
    - match: compound
    - redis
  'G@roles:redis-sentinel':
    - match: compound
    - redis/sentinel
  'G@roles:rate_limiter':
    - match: compound
    - redis
  'G@roles:sentry':
    - match: compound
    - sentry
  'G@roles:jumpbox':
    - match: compound
    - postgresql/client
    - ansible
  'G@roles:elasticsearch':
    - match: compound
    - elasticsearch
  'G@roles:monitoring-lb':
    - match: compound
    - haproxy
    - haproxy/sysctl
    - keepalived
    - keepalived-exporter
  'G@roles:elasticsearch-curator':
    - match: compound
    - elasticsearch/curator
  'G@roles:logstash':
    - match: compound
    - logstash
  'G@roles:logbackup':
    - logstash/logbackup
  'G@roles:dmz_services':
    - match: compound
    - squid
    - apt-cacher.ng.server
    - keepalived
    - keepalived-exporter
  'G@roles:kibana':
    - match: compound
    - kibana
  'G@roles:db_backup_barman':
    - match: compound
    - postgresql/barman
  'G@roles:grafana':
    - match: compound
    - grafana
  'I@roles:openvpn':
    - match: compound
    - openvpn
  'G@roles:master':
    - match: compound
    - salt/master
    - salt/salt-api
    - rabbitmq/rabbitmqadmin
  'I@roles:db':
    - match: compound
    - postgresql/install
    - postgresql/postgres_exporter
  'I@roles:rabbitmq':
    - match: compound
    - rabbitmq
  'G@roles:etcd':
    - match: compound
    - etcd
  'I@roles:pushgateway':
    - match: compound
    - prometheus/pushgateway
  'I@roles:prometheus':
    - match: compound
    - prometheus
    - prometheus/blackbox_exporter
    - prometheus/nginx
  'I@roles:ca':
    - match: compound
    - docker/ca
    - postgresql/ca
    - logstash/ca
    - salt/ca_signing
    - front-end/external-client-crt
  'I@roles:swarm':
    - match: compound
    - docker
    - docker/cadvisor
  'G@roles:alertmanager':
    - match: compound
    - prometheus/alertmanager
  'I@roles:docker':
    - match: compound
    - docker
    - docker/cadvisor
    - docker/rsyslog
  'I@roles:fe-gateway':
    - match: compound
    - front-end
  'G@roles:haproxy-docker':
    - match: compound
    - haproxy-docker
    - haproxy-docker/haproxyv2
  'G@roles:keepalived':
    - match: compound
    - keepalived
    - keepalived-exporter
  'G@roles:lb_fe_prod':
    - match: compound
    - haproxy
    - haproxy/sysctl
    - keepalived
    - keepalived-exporter
  'G@roles:lb_fe':
    - match: compound
    - keepalived
    - keepalived-exporter
    - haproxy
    - haproxy/sysctl
  'G@roles:lb_db':
    - match: compound
    - keepalived
    - keepalived-exporter
    - haproxy
    - haproxy/sysctl
  'G@roles:lb_fe_int':
    - match: compound
    - keepalived
    - keepalived-exporter
    - haproxy
    - haproxy/sysctl
  'G@roles:nfs':
    - match: compound
    - nfs
  'gitlab*':
    - gitlab
    - gitlab/cert
  'G@roles:openldap':
    - match: compound
    - openldap/server
  'G@roles:ldap-webapps':
    - match: compound
    - openldap/fusiondirectory
    - ltb
  'G@roles:jenkins':
    - match: compound
    - base/debian/i386
    - java/jdk
    - elixir
    - jenkins
    - jenkins/cert
    - jenkins/nginx
  'artifactory.dev.ourdomain':
    - artifactory/pro
    - artifactory/cert
  'G@roles:name_server':
    - match: compound
    - nsd
  'G@roles:webprober':
    - match: compound
    - webprober
  'G@roles:db_haproxy':
    - keepalived
    - keepalived-exporter
    - haproxy
    - haproxy/sysctl
  'I@roles:kubernetes-node':
    - kubernetes/node
  'I@roles:kubernetes-master':
    - kubernetes/master
  'G@ourdomainenv:ourdomaincloud-test or G@ourdomainenv:ourdomaincloud-prod':
    - match: compound
    - vmware
  'G@ourdomainenv:ec2':
    - match: compound
    - hostname
    - dhcp-options
  'G@ourdomainenv:ec2 not I@roles:kube*':
    - match: compound
    - swap
  'I@roles:harbor':
    - match: compound
    - harbor/install

Our Salt master configuration is mostly the default:

##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Master
# Values that are commented out but have no space after the comment are
# defaults that need not be set in the config. If there is a space after the
# comment, the value is presented as an example and is not the default.

# Per default, the master will automatically include all config files
# from master.d/*.conf (master.d is a directory in the same directory
# as the main master config file).
#default_include: master.d/*.conf

# The address of the interface to bind to:
#interface: 0.0.0.0

# Whether the master should listen for IPv6 connections. If this is set to True,
# the interface option must be adjusted, too. (For example: "interface: '::'")
#ipv6: False

# The tcp port used by the publisher:
#publish_port: 4505

# The user under which the salt master will run. Salt will update all
# permissions to allow the specified user to run the master. The exception is
# the job cache, which must be deleted if this user is changed. If the
# modified files cause conflicts, set verify_env to False.
#user: root

# Max open files
#
# Each minion connecting to the master uses AT LEAST one file descriptor, the
# master subscription connection. If enough minions connect you might start
# seeing on the console (and then salt-master crashes):
#   Too many open files (tcp_listener.cpp:335)
#   Aborted (core dumped)
#
# By default this value will be the one of `ulimit -Hn`, ie, the hard limit for
# max open files.
#
# If you wish to set a different value than the default one, uncomment and
# configure this setting. Remember that this value CANNOT be higher than the
# hard limit. Raising the hard limit depends on your OS and/or distribution,
# a good way to find the limit is to search the internet. For example:
#   raise max open files hard limit debian
#
#max_open_files: 100000

# The number of worker threads to start. These threads are used to manage
# return calls made from minions to the master. If the master seems to be
# running slowly, increase the number of threads.
#worker_threads: 5

# The port used by the communication interface. The ret (return) port is the
# interface used for the file server, authentication, job returns, etc.
#ret_port: 4506

# Specify the location of the daemon process ID file:
#pidfile: /var/run/salt-master.pid

# The root directory prepended to these options: pki_dir, cachedir,
# sock_dir, log_file, autosign_file, autoreject_file, extension_modules,
# key_logfile, pidfile:
#root_dir: /

# Directory used to store public key data:
#pki_dir: /etc/salt/pki/master

# Directory to store job and cache data:
#cachedir: /var/cache/salt/master

# Directory for custom modules. This directory can contain subdirectories for
# each of Salt's module types such as "runners", "output", "wheel", "modules",
# "states", "returners", etc.
#extension_modules: <no default>

# Verify and set permissions on configuration directories at startup:
#verify_env: True

# Set the number of hours to keep old job information in the job cache:
#keep_jobs: 24

# Set the default timeout for the salt command and api. The default is 5
# seconds.
#timeout: 5

# The loop_interval option controls the seconds for the master's maintenance
# process check cycle. This process updates file server backends, cleans the
# job cache and executes the scheduler.
#loop_interval: 60

# Set the default outputter used by the salt command. The default is "nested".
#output: nested

# Return minions that timeout when running commands like test.ping
#show_timeout: True

# Display the jid when a job is published
#show_jid: False

# By default, output is colored. To disable colored output, set the color value
# to False.
#color: True

# Do not strip off the colored output from nested results and state outputs
# (true by default).
#strip_colors: False

# Set the directory used to hold unix sockets:
#sock_dir: /var/run/salt/master

# The master can take a while to start up when lspci and/or dmidecode is used
# to populate the grains for the master. Enable if you want to see GPU hardware
# data for your master.
#enable_gpu_grains: False

# The master maintains a job cache. While this is a great addition, it can be
# a burden on the master for larger deployments (over 5000 minions).
# Disabling the job cache will make previously executed jobs unavailable to
# the jobs system and is not generally recommended.
#job_cache: True

# Cache minion grains and pillar data in the cachedir.
#minion_data_cache: True

# Store all returns in the given returner.
# Setting this option requires that any returner-specific configuration also
# be set. See various returners in salt/returners for details on required
# configuration values. (See also, event_return_queue below.)
#
#event_return: mysql

# On busy systems, enabling event_returns can cause a considerable load on
# the storage system for returners. Events can be queued on the master and
# stored in a batched fashion using a single transaction for multiple events.
# By default, events are not queued.
#event_return_queue: 0

# Only events returns matching tags in a whitelist
# event_return_whitelist:
#   - salt/master/a_tag
#   - salt/master/another_tag

# Store all event returns _except_ the tags in a blacklist
# event_return_blacklist:
#   - salt/master/not_this_tag
#   - salt/master/or_this_one

# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# master event bus. The value is expressed in bytes.
#max_event_size: 1048576

# By default, the master AES key rotates every 24 hours. The next command
# following a key rotation will trigger a key refresh from the minion which may
# result in minions which do not respond to the first command after a key refresh.
#
# To tell the master to ping all minions immediately after an AES key refresh, set
# ping_on_rotate to True. This should mitigate the issue where a minion does not
# appear to initially respond after a key is rotated.
#
# Note that ping_on_rotate may cause high load on the master immediately after
# the key rotation event as minions reconnect. Consider this carefully if this
# salt master is managing a large number of minions.
#
# If disabled, it is recommended to handle this event by listening for the
# 'aes_key_rotate' event with the 'key' tag and acting appropriately.
#ping_on_rotate: False

# By default, the master deletes its cache of minion data when the key for that
# minion is removed. To preserve the cache after key deletion, set
# 'preserve_minion_cache' to True.
#
# WARNING: This may have security implications if compromised minions auth with
# a previous deleted minion ID.
#preserve_minion_cache: False

# If max_minions is used in large installations, the master might experience
# high-load situations because of having to check the number of connected
# minions for every authentication. This cache provides the minion-ids of
# all connected minions to all MWorker-processes and greatly improves the
# performance of max_minions.
#con_cache: False

# The master can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main master configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option, then the master will log a warning message.
#
# Include a config file from some other path:
# include: /etc/salt/extra_config
#
# Include config from several files and directories:
# include:
#   - /etc/salt/extra_config
#include: []

#####        Security settings       #####
##########################################
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False

# Enable auto_accept, this setting will automatically accept all incoming
# public keys from the minions. Note that this is insecure.
#auto_accept: False

# Time in minutes that an incoming public key with a matching name found in
# pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys
# are removed when the master checks the minion_autosign directory.
# 0 equals no timeout
#autosign_timeout: 120

# If the autosign_file is specified, incoming keys specified in the
# autosign_file will be automatically accepted. This is insecure.  Regular
# expressions as well as globing lines are supported.
#autosign_file: /etc/salt/autosign.conf

# Works like autosign_file, but instead allows you to specify minion IDs for
# which keys will automatically be rejected. Will override both membership in
# the autosign_file and the auto_accept setting.
#autoreject_file: /etc/salt/autoreject.conf

# Enable permissive access to the salt keys. This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir. To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure. If an autosign_file
# is specified, enabling permissive_pki_access will allow group access to that
# specific file.
#permissive_pki_access: False

# Allow users on the master access to execute specific commands on minions.
# This setting should be treated with care since it opens up execution
# capabilities to non root users. By default this capability is completely
# disabled.
#client_acl:
#  larry:
#    - test.ping
#    - network.*

# Blacklist any of the following users or modules
#
# This example would blacklist all non sudo users, including root from
# running any commands. It would also blacklist any use of the "cmd"
# module. This is completely disabled by default.
#

#client_acl_blacklist:
#  users:
#    - root
#    - '^(?!sudo_).*$'   #  all non sudo users
#  modules:
#    - cmd

# Enforce client_acl & client_acl_blacklist when users have sudo
# access to the salt command.
#
#sudo_acl: False

# The external auth system uses the Salt auth modules to authenticate and
# validate users to access areas of the Salt system.
#external_auth:
#  pam:
#    fred:
#      - test.*
external_auth:
  pam:
    'qa%':
      - 'I@roles:swarm':
          - state.apply:
              args:
                - services
          - swarmng.ps
          - swarmng.restart
          - postresql/users
      - 'salt*':
          - git.pull
      - 'I@roles:fe-gateway':
          - state.apply:
              args:
                - front-end
    'sysadm%':
      - '.*'
      - '@runner'
      - '@wheel'
    forzali:
      - '.*'
      - '@runner'
      - '@wheel'
    'devs%':
      - 'I@roles:swarm':
          - state.apply:
              args:
                - 'services|postgresql/users'
          - swarmng.ps
          - swarmng.restart
      - 'salt*':
          - git.pull
      - 'I@roles:fe-gateway':
          - state.apply:
              args:
                - front-end

# Time (in seconds) for a newly generated token to live. Default: 12 hours
#token_expire: 43200

# Allow minions to push files to the master. This is disabled by default, for
# security purposes.
file_recv: True

# Set a hard-limit on the size of the files that can be pushed to the master.
# It will be interpreted as megabytes. Default: 100
#file_recv_max_size: 100

# Signature verification on messages published from the master.
# This causes the master to cryptographically sign all messages published to its event
# bus, and minions then verify that signature before acting on the message.
#
# This is False by default.
#
# Note that to facilitate interoperability with masters and minions that are different
# versions, if sign_pub_messages is True but a message is received by a minion with
# no signature, it will still be accepted, and a warning message will be logged.
# Conversely, if sign_pub_messages is False, but a minion receives a signed
# message it will be accepted, the signature will not be checked, and a warning message
# will be logged. This behavior went away in Salt 2014.1.0; these two situations
# now cause the minion to throw an exception and drop the message.
#sign_pub_messages: False

#master_sign_pubkey: False

#####    Master Module Management    #####
##########################################
# Manage how master side modules are loaded.

# Add any additional locations to look for master runners:
#runner_dirs: []

# Enable Cython for master side modules:
#cython_enable: False

#####      State System settings     #####
##########################################
# The state system uses a "top" file to tell the minions what environment to
# use and what modules to use. The state_top file is defined relative to the
# root of the base environment as defined in "File Server settings" below.
#state_top: top.sls

# The master_tops option replaces the external_nodes option by creating
# a plugable system for the generation of external top data. The external_nodes
# option is deprecated by the master_tops option.
#
# To gain the capabilities of the classic external_nodes system, use the
# following configuration:
# master_tops:
#   ext_nodes: <Shell command which returns yaml>
#

# The external_nodes option allows Salt to gather data that would normally be
# placed in a top file. The external_nodes option is the executable that will
# return the ENC data. Remember that Salt will look for external nodes AND top
# files and combine the results if both are enabled!
#external_nodes: None

# The renderer to use on the minions to render the state data
renderer: jinja | yaml | gpg

# The Jinja renderer can strip extra carriage returns and whitespace
# See http://jinja.pocoo.org/docs/api/#high-level-api
#
# If this is set to True the first newline after a Jinja block is removed
# (block, not variable tag!). Defaults to False, corresponds to the Jinja
# environment init variable "trim_blocks".
#jinja_trim_blocks: False
#
# If this is set to True leading spaces and tabs are stripped from the start
# of a line to a block. Defaults to False, corresponds to the Jinja
# environment init variable "lstrip_blocks".
#jinja_lstrip_blocks: False

# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution, defaults to False
#failhard: False

# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False, when set to False
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True

# The state_output setting changes if the output is the full multi line
# output for each changed state if set to 'full', but if set to 'terse'
# the output will be shortened to a single line.  If set to 'mixed', the output
# will be terse unless a state failed, in which case that output will be full.
# If set to 'changes', the output will be full unless the state didn't change.
#state_output: full

# Automatically aggregate all states that have support for mod_aggregate by
# setting to True. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
#   - pkg
#
#state_aggregate: False

#####      File Server settings      #####
##########################################
# Salt runs a lightweight file server written in zeromq to deliver files to
# minions. This file server is built into the master daemon and does not
# require a dedicated port.

# The file server works on environments passed to the master, each environment
# can have multiple root directories, the subdirectories in the multiple file
# roots cannot match, otherwise the downloaded files will not be able to be
# reliably ensured. A base environment is required to house the top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states

#file_roots:
#  base:
#    - /srv/salt/common/states
#  ec2-ourdomain:
#    - /srv/salt/ec2-ourdomain/states
#    - /srv/salt/common/states

# The hash_type is the hash to use when discovering the hash of a file on
# the master server. The default is md5, but sha1, sha224, sha256, sha384
# and sha512 are also supported.
#
# Prior to changing this value, the master should be stopped and all Salt
# caches should be cleared.
#hash_type: md5

# The buffer size in the file server can be adjusted here:
#file_buffer_size: 1048576

# A regular expression (or a list of expressions) that will be matched
# against the file path before syncing the modules and states to the minions.
# This includes files affected by the file.recurse state.
# For example, if you manage your custom modules and states in subversion
# and don't want all the '.svn' folders and content synced to your minions,
# you could set this to '/\.svn($|/)'. By default nothing is ignored.

#file_ignore_regex:
#  - '/\.svn($|/)'
#  - '/\.git($|/)'

# A file glob (or list of file globs) that will be matched against the file
# path before syncing the modules and states to the minions. This is similar
# to file_ignore_regex above, but works on globs instead of regex. By default
# nothing is ignored.

# file_ignore_glob:
#  - '*.pyc'
#  - '*/somefolder/*.bak'
#  - '*.swp'

# File Server Backend
#
# Salt supports a modular fileserver backend system, this system allows
# the salt master to link directly to third party systems to gather and
# manage the files available to minions. Multiple backends can be
# configured and will be searched for the requested file in the order in which
# they are defined here. The default setting only enables the standard backend
# "roots" which uses the "file_roots" option.
#fileserver_backend:
#  - roots
#
# To use multiple backends list them in the order they are searched:
#fileserver_backend:
#  - git
#  - roots
fileserver_backend:
  - roots
  - minion
#
# Uncomment the line below if you do not want the file_server to follow
# symlinks when walking the filesystem tree. This is set to True
# by default. Currently this only applies to the default roots
# fileserver_backend.
#fileserver_followsymlinks: False
#
# Uncomment the line below if you do not want symlinks to be
# treated as the files they are pointing to. By default this is set to
# False. By uncommenting the line below, any detected symlink while listing
# files on the Master will not be returned to the Minion.
#fileserver_ignoresymlinks: True
#
# By default, the Salt fileserver recurses fully into all defined environments
# to attempt to find files. To limit this behavior so that the fileserver only
# traverses directories with SLS files and special Salt directories like _modules,
# enable the option below. This might be useful for installations where a file root
# has a very large number of files and performance is impacted. Default is False.
#fileserver_limit_traversal: False
#
# The fileserver can fire events off every time the fileserver is updated,
# these are disabled by default, but can be easily turned on by setting this
# flag to True
#fileserver_events: False

# Git File Server Backend Configuration
#
# Gitfs can be provided by one of two python modules: GitPython or pygit2. If
# using pygit2, both libgit2 and git must also be installed.
#gitfs_provider: gitpython
#
# When using the git fileserver backend at least one git remote needs to be
# defined. The user running the salt master will need read access to the repo.
#
# The repos will be searched in order to find the file requested by a client
# and the first repo to have the file will return it.
# When using the git backend branches and tags are translated into salt
# environments.
# Note:  file:// repos will be treated as a remote, so refs you want used must
# exist in that repo as *local* refs.

#gitfs_remotes:
#  - git://github.com/saltstack/salt-states.git
#  - file:///var/git/saltmaster
#
# The gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate but
# keep in mind that setting this flag to anything other than the default of True
# is a security concern, you may want to try using the ssh transport.
#gitfs_ssl_verify: True
#
# The gitfs_root option gives the ability to serve files from a subdirectory
# within the repository. The path is defined relative to the root of the
# repository and defaults to the repository root.
#gitfs_root: somefolder/otherfolder

#####         Pillar settings        #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.
#pillar_roots:
#  base:
#    - /srv/salt/common/pillar/
#  ec2-ourdomain:
#    - /srv/salt/ec2-ourdomain/pillar/
#    - /srv/salt/common/pillar/
#

#ext_pillar:
#  - hiera: /etc/hiera.yaml
#  - cmd_yaml: cat /etc/salt/yaml

# The ext_pillar_first option allows for external pillar sources to populate
# before file system pillar. This allows for targeting file system pillar from
# ext_pillar.
#ext_pillar_first: False

# The pillar_gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the pillar gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate but
# keep in mind that setting this flag to anything other than the default of True
# is a security concern, you may want to try using the ssh transport.
#pillar_gitfs_ssl_verify: True

# The pillar_opts option adds the master configuration file data to a dict in
# the pillar called "master". This is used to set simple configurations in the
# master config file that can then be used on minions.
#pillar_opts: True

# The pillar_source_merging_strategy option allows you to configure merging strategy
# between different sources. It accepts four values: recurse, aggregate, overwrite,
# or smart. Recurse will merge recursively mapping of data. Aggregate instructs
# aggregation of elements between sources that use the #!yamlex renderer. Overwrite
# will overwrite elements according to the order in which they are processed. This is
# the behavior of the 2014.1 branch and earlier. Smart guesses the best strategy based
# on the "renderer" setting and is the default value.
pillar_source_merging_strategy: smart

#####          Syndic settings       #####
##########################################
# The Salt syndic is used to pass commands through a master from a higher
# master. Using the syndic is simple, if this is a master that will have
# syndic server(s) below it set the "order_masters" setting to True, if this
# is a master that will be running a syndic daemon for passthrough the
# "syndic_master" setting needs to be set to the location of the master server
# to receive commands from.

# Set the order_masters setting to True if this master will command lower
# masters' syndic interfaces.
#order_masters: False

# If this master will be running a salt syndic daemon, syndic_master tells
# this master where to receive commands from.
#syndic_master: masterofmaster

# This is the 'ret_port' of the MasterOfMaster:
#syndic_master_port: 4506

# PID file of the syndic daemon:
#syndic_pidfile: /var/run/salt-syndic.pid

# LOG file of the syndic daemon:
#syndic_log_file: syndic.log

#####      Peer Publish settings     #####
##########################################
# Salt minions can send commands to other minions, but only if the minion is
# allowed to. By default "Peer Publication" is disabled, and when enabled it
# is enabled for specific minions and specific commands. This allows secure
# compartmentalization of commands based on individual minions.

# The configuration uses regular expressions to match minions and then a list
# of regular expressions to match functions. The following will allow the
# minion authenticated as foo.example.com to execute functions from the test
# and pkg modules.
#peer:
#  foo.example.com:
#    - test.*
#    - pkg.*
#
# This will allow all minions to execute all commands:
#peer:
#  .*:
#    - .*
#
# This is not recommended, since it would allow anyone who gets root on any
# single minion to instantly have root on all of the minions!

peer:
  .*:
    - x509.sign_remote_certificate

# Minions can also be allowed to execute runners from the salt master.
# Since executing a runner from the minion could be considered a security risk,
# it needs to be enabled. This setting functions just like the peer setting
# except that it opens up runners instead of module functions.
#
# All peer runner support is turned off by default and must be enabled before
# using. This will enable all peer runners for all minions:
#peer_run:
#  .*:
#    - .*
#
# To enable just the manage.up runner for the minion foo.example.com:
#peer_run:
#  foo.example.com:
#    - manage.up

#####         Mine settings     #####
##########################################
# Restrict mine.get access from minions. By default any minion has a full access
# to get all mine data from the master cache. In the ACL definition below, only PCRE matches
# are allowed.
# mine_get:
#   .*:
#     - .*
#
# The example below enables minion foo.example.com to get 'network.interfaces' mine
# data only, minions web* to get all network.* and disk.* mine data and all other
# minions won't get any mine data.
# mine_get:
#   foo.example.com:
#     - network.interfaces
#   web.*:
#     - network.*
#     - disk.*

#####         Logging settings       #####
##########################################
# The location of the master log file
# The master log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/master
#log_file: file:///dev/log
#log_file: udp://loghost:10514

log_file: file:///dev/log
#key_logfile: /var/log/salt/key

# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
log_level: warning

# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
log_level_logfile: trace

# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'

# This can be used to control logging levels more specifically. This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
#   log_granular_levels:
#     'salt': 'warning'
#     'salt.modules': 'debug'
#

#log_granular_levels: {}

#####         Node Groups           #####
##########################################
# Node groups allow for logical groupings of minion nodes. A group consists of a group
# name and a compound target.
#nodegroups:
#  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
#  group2: 'G@os:Debian and foo.domain.com'

#####     Range Cluster settings     #####
##########################################
# The range server (and optional port) that serves your cluster information
# https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
#
#range_server: range:80

#####     Windows Software Repo settings #####
##############################################
# Location of the repo on the master:
#win_repo: /srv/salt/win/repo

# Location of the master's repo cache file:
#win_repo_mastercachefile: /srv/salt/win/repo/winrepo.p

# List of git repositories to include with the local repo:

#win_gitrepos:
#  - 'https://github.com/saltstack/salt-winrepo.git'

#####      Returner settings          ######
############################################
# Which returner(s) will be used for minion's result:
#return: mysql

Steps to Reproduce the behavior (Include debug logs if possible and relevant)

  1. Create a salt cluster composed of at least a master and a minion worker, Salt version 2018.3.4
  2. Create a top file as mentioned
  3. Create a haproxyv2 file as mentioned
  4. Try to apply the haproxyv2 file, referring to the docker node with a compound match

Expected behavior We expect the state to be applied flawlessly.

Screenshots Posted code, no screenshots needed at this time.

Versions Report

salt --versions-report

```
Salt Version:
           Salt: 2018.3.4

Dependency Versions:
           cffi: 1.12.3
       cherrypy: unknown
       dateutil: 2.5.3
      docker-py: Not Installed
          gitdb: 2.0.0
      gitpython: 2.1.1
          ioflo: Not Installed
         Jinja2: 2.9.4
        libgit2: 0.27.7
        libnacl: Not Installed
       M2Crypto: 0.24.0
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.8
   mysql-python: 1.3.7
      pycparser: 2.19
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: 0.27.2
         Python: 2.7.13 (default, Sep 26 2018, 18:42:22)
   python-gnupg: 0.3.9
         PyYAML: 3.12
          PyZMQ: 16.0.2
           RAET: Not Installed
          smmap: 2.0.1
        timelib: Not Installed
        Tornado: 4.4.3
            ZMQ: 4.2.1

System Versions:
           dist: debian 9.9
         locale: UTF-8
        machine: x86_64
        release: 4.9.0-9-amd64
         system: Linux
        version: debian 9.9
```

Additional context I am quite new to Saltstack, apologies in advance for missing information

alexey-zhukovin commented 4 years ago

@lsambolino Could you provide the log file from the master?

alexey-zhukovin commented 4 years ago

Find the following information in the log file. Example:

```
[DEBUG] Event dispatch: tag = salt/job/20200424155920653826/ret/alpha; data = {'cmd': '_return', 'id': 'alpha', 'success': True, 'return': {'pid': 25266, 'fun': 'test.sleep', 'arg': [10], 'tgt': 'alpha', 'jid': '20200424155915592808', 'ret': '', 'tgt_type': 'glob', 'user': 'dimm'}, 'retcode': 0, 'jid': '20200424155920653826', 'fun': 'saltutil.find_job', 'fun_args': ['20200424155915592808'], '_stamp': '2020-04-24T15:59:20.740296'}
```

Keywords: `tag = salt/job/.*/ret/` and `'fun': 'saltutil.find_job'`
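The keywords above can be searched for with a small script. Below is a minimal sketch (a hypothetical helper, not part of Salt) that scans master log lines for `saltutil.find_job` return events and flags any whose data dict is missing the `retcode` key, which is the symptom in this issue:

```python
import ast
import re

# Matches the "Event dispatch" debug lines for job returns, e.g.:
#   [DEBUG] Event dispatch: tag = salt/job/<jid>/ret/<minion>; data = {...}
EVENT_RE = re.compile(r"Event dispatch: tag = (salt/job/\d+/ret/\S+); data = (\{.*\})")

def find_job_returns_missing_retcode(log_lines):
    """Yield (tag, data) for saltutil.find_job returns lacking 'retcode'."""
    for line in log_lines:
        match = EVENT_RE.search(line)
        if not match:
            continue
        tag, raw = match.groups()
        # The data dict is logged with Python repr, so literal_eval can parse it.
        data = ast.literal_eval(raw)
        if data.get("fun") == "saltutil.find_job" and "retcode" not in data:
            yield tag, data

sample = [
    "[DEBUG] Event dispatch: tag = salt/job/20200424155920653826/ret/alpha; "
    "data = {'cmd': '_return', 'id': 'alpha', 'fun': 'saltutil.find_job', 'return': {}}",
]
for tag, data in find_job_returns_missing_retcode(sample):
    print(tag, data["id"])
```

Any hit would point at the minion whose `find_job` return reached the master without a `retcode`, which is what triggers the "Key 'retcode' missing from client return" loop.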

Ch3LL commented 1 year ago

Closing due to inactivity. If you are still seeing this on the latest version of Salt, please open a new issue with the details and information requested.