saltstack / salt

Software to automate the management and configuration of any infrastructure or application at scale. Get access to the Salt software package repository here:
https://repo.saltproject.io/

Minion not connecting to saltmaster #24676

Closed: tganzeboom closed this issue 9 years ago

tganzeboom commented 9 years ago

Salt Master


[root@cubietruck ~]# salt --versions-report
           Salt: 2015.5.2
         Python: 2.7.10 (default, May 30 2015, 02:17:57)
         Jinja2: 2.7.3
       M2Crypto: 0.22
 msgpack-python: 0.4.6
   msgpack-pure: Not Installed
       pycrypto: 2.6.1
        libnacl: Not Installed
         PyYAML: 3.11
          ioflo: Not Installed
          PyZMQ: 14.6.0
           RAET: Not Installed
            ZMQ: 4.1.1
           Mako: Not Installed
[root@cubietruck ~]# uname -a
Linux cubietruck 4.0.5-1-ARCH #1 SMP Mon Jun 8 19:03:28 MDT 2015 armv7l GNU/Linux
[root@cubietruck ~]#

Minion


root@machine:~# salt-call --versions-report
                  Salt: 2015.5.0
                Python: 2.7.9 (default, Apr  2 2015, 15:33:21)
                Jinja2: 2.7.3
              M2Crypto: 0.21.1
        msgpack-python: 0.4.2
          msgpack-pure: Not Installed
              pycrypto: 2.6.1
               libnacl: Not Installed
                PyYAML: 3.11
                 ioflo: Not Installed
                 PyZMQ: 14.4.1
                  RAET: Not Installed
                   ZMQ: 4.0.5
                  Mako: 1.0.0
 Debian source package: 2015.5.0+ds-1utopic1
root@machine:~# uname -a
Linux machine 3.19.0-20-generic #20-Ubuntu SMP Fri May 29 10:10:47 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
root@machine:~#

From the minion:

root@machine:~# salt-minion -l debug
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: machine
[DEBUG   ] Configuration file path: /etc/salt/minion
[INFO    ] Setting up the Salt Minion "machine"
[DEBUG   ] Created pidfile: /var/run/salt-minion.pid
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Attempting to authenticate with the Salt Master at 192.168.1.87
[DEBUG   ] Initializing new SAuth for ('/etc/salt/pki/minion', 'machine', 'tcp://192.168.1.87:4506')
[INFO    ] SaltReqTimeoutError: after 60 seconds. (Try 1 of 7)
[INFO    ] SaltReqTimeoutError: after 60 seconds. (Try 2 of 7)
[INFO    ] SaltReqTimeoutError: after 60 seconds. (Try 3 of 7)
^C[WARNING ] Stopping the Salt Minion
[WARNING ] Exiting on Ctrl-c
[INFO    ] The salt minion is shut down
root@machine:~#

salt-key is not showing the new minion.

However, with tcpdump I do see traffic entering port 4506 on the salt master.

Is this the regression I read about some time back, or is it something else?
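
A minimal next check, sketched with the 192.168.1.87 address and the minion id "machine" from the reports above: capture port 4506 on the master and see whether any reply packets leave at all (not just arrive), look for a pending key, and drive one authentication attempt by hand.

# On the master: a capture of port 4506 shows both directions, so missing outbound
# packets would point at the master-side process; a pending key would appear under
# "Unaccepted Keys"
[root@cubietruck ~]# tcpdump -nn -i any port 4506
[root@cubietruck ~]# salt-key -L

# On the minion: salt-call uses the same authentication path and prints the failure in detail
root@machine:~# salt-call -l debug test.ping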

jfindlay commented 9 years ago

@tganzeboom, is there a firewall on the master that is blocking traffic from the minion?

tganzeboom commented 9 years ago

@jfindlay No, there is not:

[root@cubietruck ~]# iptables -L -n
iptables v1.4.21: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
[root@cubietruck ~]#

Thanks for the console thingy.
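
The iptables error above actually means the filter table is not loaded at all on the Arch master, so iptables is not filtering anything there. As an extra sanity check (just a sketch, nothing Salt-specific), it may be worth confirming that no other netfilter frontend is active and that both Salt ports are bound on all interfaces:

[root@cubietruck ~]# lsmod | grep -E 'ip_tables|nf_tables'
[root@cubietruck ~]# netstat -tulpn | grep -E ':450[56]'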

basepi commented 9 years ago

@tganzeboom can you telnet into port 4506 from the minion?

It seems to me there must be something environment-specific going on, because there are many thousands of minions running on 2015.5.2 these days without problems.

tganzeboom commented 9 years ago

@basepi Yes, I can:

me@minion:~$ telnet 192.168.1.87 4506
Trying 192.168.1.87...
Connected to 192.168.1.87.
Escape character is '^]'.
^]
telnet> q
Connection closed.
me@minion:~$

I also reinstalled the master and the minion and removed the directories in /var/ & /etc/ manually. That didn't have any effect.

I could run more tests if you have any; it's not a prod environment.
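
One test that may be worth running is the usual clean-slate key reset. A sketch, assuming the minion id "machine" from the logs above: clear the minion's key pair and its cached copy of the master key, delete any stale key on the master, then let the minion regenerate and resubmit its key.

root@machine:~# service salt-minion stop
root@machine:~# rm -f /etc/salt/pki/minion/minion.pem /etc/salt/pki/minion/minion.pub /etc/salt/pki/minion/minion_master.pub
[root@cubietruck ~]# salt-key -d machine
root@machine:~# service salt-minion start
[root@cubietruck ~]# salt-key -L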

basepi commented 9 years ago

That is....very strange. Can you please include your minion and master config?

tganzeboom commented 9 years ago

@basepi Sure.

BTW, maybe redundant, but the minion is on 2015.5.0 and the master is on 2015.5.2.
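
Mixed 2015.5.x versions are generally expected to interoperate, but upgrading the minion to match the master would rule that variable out. A sketch for the Ubuntu box, assuming its configured Salt repository already carries 2015.5.2:

root@machine:~# apt-get update
root@machine:~# apt-get install --only-upgrade salt-minion salt-common
root@machine:~# salt-minion --version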

Minion


me@minion:~$ cat /etc/salt/minion
##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Minion.
# With the exception of the location of the Salt Master Server, values that are
# commented out but have an empty line after the comment are defaults that need
# not be set in the config. If there is no blank line after the comment, the
# value is presented as an example and is not the default.

# Per default the minion will automatically include all config files
# from minion.d/*.conf (minion.d is a directory in the same directory
# as the main minion config file).
#default_include: minion.d/*.conf

# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
master: 192.168.1.87

# If multiple masters are specified in the 'master' setting, the default behavior
# is to always try to connect to them in the order they are listed. If random_master is
# set to True, the order will be randomized instead. This can be helpful in distributing
# the load of many minions executing salt-call requests, for example, from a cron job.
# If only one master is listed, this setting is ignored and a warning will be logged.
#random_master: False

# Set whether the minion should connect to the master via IPv6:
#ipv6: False

# Set the number of seconds to wait before attempting to resolve
# the master hostname if name resolution fails. Defaults to 30 seconds.
# Set to zero if the minion should shutdown and not retry.
# retry_dns: 30

# Set the port used by the master reply and authentication server.
#master_port: 4506

# The user to run salt.
#user: root

# Specify the location of the daemon process ID file.
#pidfile: /var/run/salt-minion.pid

# The root directory prepended to these options: pki_dir, cachedir, log_file,
# sock_dir, pidfile.
#root_dir: /

# The directory to store the pki information in
#pki_dir: /etc/salt/pki/minion

# Explicitly declare the id for this minion to use, if left commented the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids it is possible to run multiple minions on the
# same machine but with different ids, this can be useful for salt compute
# clusters.
#id:

# Append a domain to a hostname in the event that it does not exist.  This is
# useful for systems where socket.getfqdn() does not actually result in a
# FQDN (for instance, Solaris).
#append_domain:

# Custom static grains for this minion can be specified here and used in SLS
# files just like all other grains. This example sets 4 custom grains, with
# the 'roles' grain having two values that can be matched against.
#grains:
#  roles:
#    - webserver
#    - memcache
#  deployment: datacenter4
#  cabinet: 13
#  cab_u: 14-15
#
# Where cache data goes.
#cachedir: /var/cache/salt/minion

# Verify and set permissions on configuration directories at startup.
#verify_env: True

# The minion can locally cache the return data from jobs sent to it, this
# can be a good way to keep track of jobs the minion has executed
# (on the minion side). By default this feature is disabled, to enable, set
# cache_jobs to True.
#cache_jobs: False

# Set the directory used to hold unix sockets.
#sock_dir: /var/run/salt/minion

# Set the default outputter used by the salt-call command. The default is
# "nested".
#output: nested
#
# By default output is colored. To disable colored output, set the color value
# to False.
#color: True

# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False

# Backup files that are replaced by file.managed and file.recurse under
# 'cachedir'/file_backups relative to their original location and appended
# with a timestamp. The only valid setting is "minion". Disabled by default.
#
# Alternatively this can be specified for each file in state files:
# /etc/ssh/sshd_config:
#   file.managed:
#     - source: salt://ssh/sshd_config
#     - backup: minion
#
#backup_mode: minion

# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the time, in
# seconds, between those reconnection attempts.
#acceptance_wait_time: 10

# If this is nonzero, the time between reconnection attempts will increase by
# acceptance_wait_time seconds per iteration, up to this maximum. If this is
# set to zero, the time between reconnection attempts will stay constant.
#acceptance_wait_time_max: 0

# If the master rejects the minion's public key, retry instead of exiting.
# Rejected keys will be handled the same as waiting on acceptance.
#rejected_retry: False

# When the master key changes, the minion will try to re-auth itself to receive
# the new master key. In larger environments this can cause a SYN flood on the
# master because all minions try to re-auth immediately. To prevent this and
# have a minion wait for a random amount of time, use this optional parameter.
# The wait-time will be a random number of seconds between 0 and the defined value.
#random_reauth_delay: 60

# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the timeout value,
# in seconds, for each individual attempt. After this timeout expires, the minion
# will wait for acceptance_wait_time seconds before trying again. Unless your master
# is under unusually heavy load, this should be left at the default.
#auth_timeout: 60

# Number of consecutive SaltReqTimeoutError that are acceptable when trying to
# authenticate.
#auth_tries: 7

# If authentication fails due to SaltReqTimeoutError during a ping_interval,
# cause sub minion process to restart.
#auth_safemode: False

# Ping Master to ensure connection is alive (minutes).
#ping_interval: 0

# To auto recover minions if master changes IP address (DDNS)
#    auth_tries: 10
#    auth_safemode: False
#    ping_interval: 90
#
# Minions won't know the master is missing until a ping fails. After the ping fails,
# the minion will attempt authentication, likely fail, and cause a restart.
# When the minion restarts it will resolve the master's IP and attempt to reconnect.

# If you don't have any problems with syn-floods, don't bother with the
# three recon_* settings described below, just leave the defaults!
#
# The ZeroMQ pull-socket that binds to the masters publishing interface tries
# to reconnect immediately, if the socket is disconnected (for example if
# the master processes are restarted). In large setups this will have all
# minions reconnect immediately which might flood the master (the ZeroMQ-default
# is usually a 100ms delay). To prevent this, these three recon_* settings
# can be used.
# recon_default: the interval in milliseconds that the socket should wait before
#                trying to reconnect to the master (1000ms = 1 second)
#
# recon_max: the maximum time a socket should wait. each interval the time to wait
#            is calculated by doubling the previous time. if recon_max is reached,
#            it starts again at recon_default. Short example:
#
#            reconnect 1: the socket will wait 'recon_default' milliseconds
#            reconnect 2: 'recon_default' * 2
#            reconnect 3: ('recon_default' * 2) * 2
#            reconnect 4: value from previous interval * 2
#            reconnect 5: value from previous interval * 2
#            reconnect x: if value >= recon_max, it starts again with recon_default
#
# recon_randomize: generate a random wait time on minion start. The wait time will
#                  be a random value between recon_default and recon_default +
#                  recon_max. Having all minions reconnect with the same recon_default
#                  and recon_max value kind of defeats the purpose of being able to
#                  change these settings. If all minions have the same values and your
#                  setup is quite large (several thousand minions), they will still
#                  flood the master. The desired behavior is to have a timeframe within
#                  which all minions try to reconnect.
#
# Example on how to use these settings. The goal: have all minions reconnect within a
# 60 second timeframe on a disconnect.
# recon_default: 1000
# recon_max: 59000
# recon_randomize: True
#
# Each minion will have a randomized reconnect value between 'recon_default'
# and 'recon_default + recon_max', which in this example means between 1000ms
# and 60000ms (or between 1 and 60 seconds). The generated random value will be
# doubled after each attempt to reconnect. Let's say the generated random
# value is 11 seconds (or 11000ms).
# reconnect 1: wait 11 seconds
# reconnect 2: wait 22 seconds
# reconnect 3: wait 33 seconds
# reconnect 4: wait 44 seconds
# reconnect 5: wait 55 seconds
# reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
# reconnect 7: wait 11 seconds
# reconnect 8: wait 22 seconds
# reconnect 9: wait 33 seconds
# reconnect x: etc.
#
# In a setup with ~6000 hosts these settings would average the reconnects
# to about 100 per second and all hosts would be reconnected within 60 seconds.
# recon_default: 100
# recon_max: 5000
# recon_randomize: False
#
#
# The loop_interval sets how long in seconds the minion will wait between
# evaluating the scheduler and running cleanup tasks. This defaults to a
# sane 60 seconds, but if the minion scheduler needs to be evaluated more
# often lower this value
#loop_interval: 60

# The grains_refresh_every setting allows for a minion to periodically check
# its grains to see if they have changed and, if so, to inform the master
# of the new grains. This operation is moderately expensive, therefore
# care should be taken not to set this value too low.
#
# Note: This value is expressed in __minutes__!
#
# A value of 10 minutes is a reasonable default.
#
# If the value is set to zero, this check is disabled.
#grains_refresh_every: 1

# Cache grains on the minion. Default is False.
#grains_cache: False

# Grains cache expiration, in seconds. If the cache file is older than this
# number of seconds then the grains cache will be dumped and fully re-populated
# with fresh data. Defaults to 5 minutes. Will have no effect if 'grains_cache'
# is not enabled.
# grains_cache_expiration: 300

# Windows platforms lack posix IPC and must rely on slower TCP based inter-
# process communications. Set ipc_mode to 'tcp' on such systems
#ipc_mode: ipc

# Overwrite the default tcp ports used by the minion when in tcp mode
#tcp_pub_port: 4510
#tcp_pull_port: 4511

# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# minion event bus. The value is expressed in bytes.
#max_event_size: 1048576

# To detect failed master(s) and fire events on connect/disconnect, set
# master_alive_interval to the number of seconds to poll the masters for
# connection events.
#
#master_alive_interval: 30

# The minion can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main minion configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option then the minion will log a warning message.
#
# Include a config file from some other path:
# include: /etc/salt/extra_config
#
# Include config from several files and directories:
#include:
#  - /etc/salt/extra_config
#  - /etc/roles/webserver
#
#
#
#####   Minion module management     #####
##########################################
# Disable specific modules. This allows the admin to limit the level of
# access the master has to the minion.
#disable_modules: [cmd,test]
#disable_returners: []
#
# Modules can be loaded from arbitrary paths. This enables the easy deployment
# of third party modules. Modules for returners and minions can be loaded.
# Specify a list of extra directories to search for minion modules and
# returners. These paths must be fully qualified!
#module_dirs: []
#returner_dirs: []
#states_dirs: []
#render_dirs: []
#utils_dirs: []
#
# A module provider can be statically overwritten or extended for the minion
# via the providers option, in this case the default module will be
# overwritten by the specified module. In this example the pkg module will
# be provided by the yumpkg5 module instead of the system default.
#providers:
#  pkg: yumpkg5
#
# Enable Cython modules searching and loading. (Default: False)
#cython_enable: False
#
# Specify a max size (in bytes) for modules on import. This feature is currently
# only supported on *nix operating systems and requires psutil.
# modules_max_memory: -1

#####    State Management Settings    #####
###########################################
# The state management system executes all of the state templates on the minion
# to enable more granular control of system state management. The type of
# template and serialization used for state management needs to be configured
# on the minion, the default renderer is yaml_jinja. This is a yaml file
# rendered from a jinja template, the available options are:
# yaml_jinja
# yaml_mako
# yaml_wempy
# json_jinja
# json_mako
# json_wempy
#
#renderer: yaml_jinja
#
# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution. Defaults to False.
#failhard: False
#
# autoload_dynamic_modules turns on automatic loading of modules found in the
# environments on the master. This is turned on by default. To turn off
# autoloading modules when states run, set this value to False.
#autoload_dynamic_modules: True
#
# clean_dynamic_modules keeps the dynamic modules on the minion in sync with
# the dynamic modules on the master, this means that if a dynamic module is
# not on the master it will be deleted from the minion. By default, this is
# enabled and can be disabled by changing this value to False.
#clean_dynamic_modules: True
#
# Normally, the minion is not isolated to any single environment on the master
# when running states, but the environment can be isolated on the minion side
# by statically setting it. Remember that the recommended way to manage
# environments is to isolate via the top file.
#environment: None
#
# If using the local file directory, then the state top file name needs to be
# defined, by default this is top.sls.
#state_top: top.sls
#
# Run states when the minion daemon starts. To enable, set startup_states to:
# 'highstate' -- Execute state.highstate
# 'sls' -- Read in the sls_list option and execute the named sls files
# 'top' -- Read top_file option and execute based on that file on the Master
#startup_states: ''
#
# List of states to run when the minion starts up if startup_states is 'sls':
#sls_list:
#  - edit.vim
#  - hyper
#
# Top file to execute if startup_states is 'top':
#top_file: ''

# Automatically aggregate all states that have support for mod_aggregate by
# setting to True. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
#   - pkg
#
#state_aggregate: False

#####     File Directory Settings    #####
##########################################
# The Salt Minion can redirect all file server operations to a local directory,
# this allows for the same state tree that is on the master to be used if
# copied completely onto the minion. This is a literal copy of the settings on
# the master but used to reference a local directory on the minion.

# Set the file client. The client defaults to looking on the master server for
# files, but can be directed to look at the local file directory setting
# defined below by setting it to local.
#file_client: remote

# The file directory works on environments passed to the minion, each environment
# can have multiple root directories, the subdirectories in the multiple file
# roots cannot match, otherwise the downloaded files will not be able to be
# reliably ensured. A base environment is required to house the top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
#
#file_roots:
#  base:
#    - /srv/salt

# By default, the Salt fileserver recurses fully into all defined environments
# to attempt to find files. To limit this behavior so that the fileserver only
# traverses directories with SLS files and special Salt directories like _modules,
# enable the option below. This might be useful for installations where a file root
# has a very large number of files and performance is negatively impacted. Default
# is False.
#fileserver_limit_traversal: False

# The hash_type is the hash to use when discovering the hash of a file in
# the local fileserver. The default is md5, but sha1, sha224, sha256, sha384
# and sha512 are also supported.
#
# Warning: Prior to changing this value, the minion should be stopped and all
# Salt caches should be cleared.
#hash_type: md5

# The Salt pillar is searched for locally if file_client is set to local. If
# this is the case, and pillar data is defined, then the pillar_roots need to
# also be configured on the minion:
#pillar_roots:
#  base:
#    - /srv/pillar
#
#
######        Security settings       #####
###########################################
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False

# Enable permissive access to the salt keys.  This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir.  To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure.
#permissive_pki_access: False

# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False, when set to False
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True

# The state_output setting changes if the output is the full multi line
# output for each changed state if set to 'full', but if set to 'terse'
# the output will be shortened to a single line.
#state_output: full

# The state_output_diff setting changes whether or not the output from
# successful states is returned. Useful when even the terse output of these
# states is cluttering the logs. Set it to True to ignore them.
#state_output_diff: False

# Fingerprint of the master public key to double verify the master is valid,
# the master fingerprint can be found by running "salt-key -F master" on the
# salt master.
#master_finger: ''

######         Thread settings        #####
###########################################
# Disable multiprocessing support, by default when a minion receives a
# publication a new process is spawned and the command is executed therein.
#multiprocessing: True

#####         Logging settings       #####
##########################################
# The location of the minion log file
# The minion log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/minion
#log_file: file:///dev/log
#log_file: udp://loghost:10514
#
#log_file: /var/log/salt/minion
#key_logfile: /var/log/salt/key

# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# Default: 'warning'
#log_level: warning

# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
# Default: 'warning'
#log_level_logfile:

# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'

# This can be used to control logging levels more specifically.  This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
#   log_granular_levels:
#     'salt': 'warning'
#     'salt.modules': 'debug'
#
#log_granular_levels: {}

# To diagnose issues with minions disconnecting or missing returns, ZeroMQ
# supports the use of monitor sockets to log connection events. This
# feature requires ZeroMQ 4.0 or higher.
#
# To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a
# debug level or higher.
#
# A sample log event is as follows:
#
# [DEBUG   ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512,
# 'value': 27, 'description': 'EVENT_DISCONNECTED'}
#
# All events logged will include the string 'ZeroMQ event'. A connection event
# should be logged as the minion starts up and initially connects to the
# master. If not, check for debug log level and that the necessary version of
# ZeroMQ is installed.
#
#zmq_monitor: False

######      Module configuration      #####
###########################################
# Salt allows for modules to be passed arbitrary configuration data, any data
# passed here in valid yaml format will be passed on to the salt minion modules
# for use. It is STRONGLY recommended that a naming convention be used in which
# the module name is followed by a . and then the value. Also, all top level
# data must be applied via the yaml dict construct, some examples:
#
# You can specify that all modules should run in test mode:
#test: True
#
# A simple value for the test module:
#test.foo: foo
#
# A list for the test module:
#test.bar: [baz,quo]
#
# A dict for the test module:
#test.baz: {spam: sausage, cheese: bread}
#
#
######      Update settings          ######
###########################################
# Using the features in Esky, a salt minion can both run as a frozen app and
# be updated on the fly. These options control how the update process
# (saltutil.update()) behaves.
#
# The url for finding and downloading updates. Disabled by default.
#update_url: False
#
# The list of services to restart after a successful update. Empty by default.
#update_restart_services: []

######      Keepalive settings        ######
############################################
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
# the OS. If connections between the minion and the master pass through
# a state tracking device such as a firewall or VPN gateway, there is
# the risk that it could tear down the connection between the master and minion
# without informing either party that their connection has been taken away.
# Enabling TCP Keepalives prevents this from happening.

# Overall state of TCP Keepalives, enable (1 or True), disable (0 or False)
# or leave to the OS defaults (-1), on Linux, typically disabled. Default True, enabled.
#tcp_keepalive: True

# How long before the first keepalive should be sent in seconds. Default 300
# to send the first keepalive after 5 minutes, OS default (-1) is typically 7200 seconds
# on Linux see /proc/sys/net/ipv4/tcp_keepalive_time.
#tcp_keepalive_idle: 300

# How many lost probes are needed to consider the connection lost. Default -1
# to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes.
#tcp_keepalive_cnt: -1

# How often, in seconds, to send keepalives after the first one. Default -1 to
# use OS defaults, typically 75 seconds on Linux, see
# /proc/sys/net/ipv4/tcp_keepalive_intvl.
#tcp_keepalive_intvl: -1

######      Windows Software settings ######
############################################
# Location of the repository cache file on the master:
#win_repo_cachefile: 'salt://win/repo/winrepo.p'

######      Returner  settings        ######
############################################
# Which returner(s) will be used for minion's result:
#return: mysql
me@minion:~$

Salt Master


[me@cubietruck ~]$ cat /etc/salt/master
##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Master.
# Values that are commented out but have an empty line after the comment are
# defaults that do not need to be set in the config. If there is no blank line
# after the comment then the value is presented as an example and is not the
# default.

# Per default, the master will automatically include all config files
# from master.d/*.conf (master.d is a directory in the same directory
# as the main master config file).
#default_include: master.d/*.conf

# The address of the interface to bind to:
#interface: 0.0.0.0

# Whether the master should listen for IPv6 connections. If this is set to True,
# the interface option must be adjusted, too. (For example: "interface: '::'")
#ipv6: False

# The tcp port used by the publisher:
#publish_port: 4505

# The user under which the salt master will run. Salt will update all
# permissions to allow the specified user to run the master. The exception is
# the job cache, which must be deleted if this user is changed. If the
# modified files cause conflicts, set verify_env to False.
#user: root

# Max open files
#
# Each minion connecting to the master uses AT LEAST one file descriptor, the
# master subscription connection. If enough minions connect you might start
# seeing on the console (and then salt-master crashes):
#   Too many open files (tcp_listener.cpp:335)
#   Aborted (core dumped)
#
# By default this value will be the one of `ulimit -Hn`, ie, the hard limit for
# max open files.
#
# If you wish to set a different value than the default one, uncomment and
# configure this setting. Remember that this value CANNOT be higher than the
# hard limit. Raising the hard limit depends on your OS and/or distribution,
# a good way to find the limit is to search the internet. For example:
#   raise max open files hard limit debian
#
#max_open_files: 100000

# The number of worker threads to start. These threads are used to manage
# return calls made from minions to the master. If the master seems to be
# running slowly, increase the number of threads.
#worker_threads: 5

# The port used by the communication interface. The ret (return) port is the
# interface used for the file server, authentication, job returns, etc.
#ret_port: 4506

# Specify the location of the daemon process ID file:
#pidfile: /var/run/salt-master.pid

# The root directory prepended to these options: pki_dir, cachedir,
# sock_dir, log_file, autosign_file, autoreject_file, extension_modules,
# key_logfile, pidfile:
#root_dir: /

# Directory used to store public key data:
#pki_dir: /etc/salt/pki/master

# Directory to store job and cache data:
#cachedir: /var/cache/salt/master

# Directory for custom modules. This directory can contain subdirectories for
# each of Salt's module types such as "runners", "output", "wheel", "modules",
# "states", "returners", etc.
#extension_modules: <no default>

# Directory for custom modules. This directory can contain subdirectories for
# each of Salt's module types such as "runners", "output", "wheel", "modules",
# "states", "returners", etc.
# Like 'extension_modules' but can take an array of paths
#module_dirs: <no default>
#   - /var/cache/salt/minion/extmods

# Verify and set permissions on configuration directories at startup:
#verify_env: True

# Set the number of hours to keep old job information in the job cache:
#keep_jobs: 24

# Set the default timeout for the salt command and api. The default is 5
# seconds.
#timeout: 5

# The loop_interval option controls the seconds for the master's maintenance
# process check cycle. This process updates file server backends, cleans the
# job cache and executes the scheduler.
#loop_interval: 60

# Set the default outputter used by the salt command. The default is "nested".
#output: nested

# Return minions that timeout when running commands like test.ping
#show_timeout: True

# By default, output is colored. To disable colored output, set the color value
# to False.
#color: True

# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False

# Set the directory used to hold unix sockets:
#sock_dir: /var/run/salt/master

# The master can take a while to start up when lspci and/or dmidecode is used
# to populate the grains for the master. Enable if you want to see GPU hardware
# data for your master.
# enable_gpu_grains: False

# The master maintains a job cache. While this is a great addition, it can be
# a burden on the master for larger deployments (over 5000 minions).
# Disabling the job cache will make previously executed jobs unavailable to
# the jobs system and is not generally recommended.
#job_cache: True

# Cache minion grains and pillar data in the cachedir.
#minion_data_cache: True

# Store all returns in the given returner.
# Setting this option requires that any returner-specific configuration also
# be set. See various returners in salt/returners for details on required
# configuration values. (See also, event_return_queue below.)
#
#event_return: mysql

# On busy systems, enabling event_returns can cause a considerable load on
# the storage system for returners. Events can be queued on the master and
# stored in a batched fashion using a single transaction for multiple events.
# By default, events are not queued.
#event_return_queue: 0

# Only return events matching tags in a whitelist
# event_return_whitelist:
#   - salt/master/a_tag
#   - salt/master/another_tag

# Store all event returns _except_ the tags in a blacklist
# event_return_blacklist:
#   - salt/master/not_this_tag
#   - salt/master/or_this_one

# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# master event bus. The value is expressed in bytes.
#max_event_size: 1048576

# By default, the master AES key rotates every 24 hours. The next command
# following a key rotation will trigger a key refresh from the minion which may
# result in minions which do not respond to the first command after a key refresh.
#
# To tell the master to ping all minions immediately after an AES key refresh, set
# ping_on_rotate to True. This should mitigate the issue where a minion does not
# appear to initially respond after a key is rotated.
#
# Note that ping_on_rotate may cause high load on the master immediately after
# the key rotation event as minions reconnect. Consider this carefully if this
# salt master is managing a large number of minions.
#
# If disabled, it is recommended to handle this event by listening for the
# 'aes_key_rotate' event with the 'key' tag and acting appropriately.
# ping_on_rotate: False

# By default, the master deletes its cache of minion data when the key for that
# minion is removed. To preserve the cache after key deletion, set
# 'preserve_minion_cache' to True.
#
# WARNING: This may have security implications if compromised minions auth with
# a previously deleted minion ID.
#preserve_minion_cache: False

# If max_minions is used in large installations, the master might experience
# high-load situations because of having to check the number of connected
# minions for every authentication. This cache provides the minion-ids of
# all connected minions to all MWorker-processes and greatly improves the
# performance of max_minions.
# con_cache: False

# The master can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main master configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option, then the master will log a warning message.
#
# Include a config file from some other path:
# include: /etc/salt/extra_config
#
# Include config from several files and directories:
# include:
#   - /etc/salt/extra_config

#####        Security settings       #####
##########################################
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False

# Enable auto_accept, this setting will automatically accept all incoming
# public keys from the minions. Note that this is insecure.
#auto_accept: False

# Time in minutes that an incoming public key with a matching name found in
# pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys
# are removed when the master checks the minion_autosign directory.
# 0 equals no timeout
# autosign_timeout: 120

# If the autosign_file is specified, incoming keys specified in the
# autosign_file will be automatically accepted. This is insecure.  Regular
# expressions as well as globbing lines are supported.
#autosign_file: /etc/salt/autosign.conf

# Works like autosign_file, but instead allows you to specify minion IDs for
# which keys will automatically be rejected. Will override both membership in
# the autosign_file and the auto_accept setting.
#autoreject_file: /etc/salt/autoreject.conf

# Enable permissive access to the salt keys. This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir. To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure. If an autosign_file
# is specified, enabling permissive_pki_access will allow group access to that
# specific file.
#permissive_pki_access: False

# Allow users on the master access to execute specific commands on minions.
# This setting should be treated with care since it opens up execution
# capabilities to non root users. By default this capability is completely
# disabled.
#client_acl:
#  larry:
#    - test.ping
#    - network.*
#
# Blacklist any of the following users or modules
#
# This example would blacklist all non sudo users, including root from
# running any commands. It would also blacklist any use of the "cmd"
# module. This is completely disabled by default.
#
#client_acl_blacklist:
#  users:
#    - root
#    - '^(?!sudo_).*$'   #  all non sudo users
#  modules:
#    - cmd

# Enforce client_acl & client_acl_blacklist when users have sudo
# access to the salt command.
#
#sudo_acl: False

# The external auth system uses the Salt auth modules to authenticate and
# validate users to access areas of the Salt system.
#external_auth:
#  pam:
#    fred:
#      - test.*
#
# Time (in seconds) for a newly generated token to live. Default: 12 hours
#token_expire: 43200

# Allow minions to push files to the master. This is disabled by default, for
# security purposes.
#file_recv: False

# Set a hard-limit on the size of the files that can be pushed to the master.
# It will be interpreted as megabytes. Default: 100
#file_recv_max_size: 100

# Signature verification on messages published from the master.
# This causes the master to cryptographically sign all messages published to its event
# bus, and minions then verify that signature before acting on the message.
#
# This is False by default.
#
# Note that to facilitate interoperability with masters and minions that are different
# versions, if sign_pub_messages is True but a message is received by a minion with
# no signature, it will still be accepted, and a warning message will be logged.
# Conversely, if sign_pub_messages is False, but a minion receives a signed
# message it will be accepted, the signature will not be checked, and a warning message
# will be logged. This behavior went away in Salt 2014.1.0, and these two situations
# will cause the minion to throw an exception and drop the message.
# sign_pub_messages: False

#####     Salt-SSH Configuration     #####
##########################################

# Pass in an alternative location for the salt-ssh roster file
#roster_file: /etc/salt/roster

# Pass in minion option overrides that will be inserted into the SHIM for
# salt-ssh calls. The local minion config is not used for salt-ssh. Can be
# overridden on a per-minion basis in the roster (`minion_opts`)
#ssh_minion_opts:
#  gpg_keydir: /root/gpg

#####    Master Module Management    #####
##########################################
# Manage how master side modules are loaded.

# Add any additional locations to look for master runners:
#runner_dirs: []

# Enable Cython for master side modules:
#cython_enable: False

#####      State System settings     #####
##########################################
# The state system uses a "top" file to tell the minions what environment to
# use and what modules to use. The state_top file is defined relative to the
# root of the base environment as defined in "File Server settings" below.
#state_top: top.sls

# The master_tops option replaces the external_nodes option by creating
# a pluggable system for the generation of external top data. The external_nodes
# option is deprecated by the master_tops option.
#
# To gain the capabilities of the classic external_nodes system, use the
# following configuration:
# master_tops:
#   ext_nodes: <Shell command which returns yaml>
#
#master_tops: {}

# The external_nodes option allows Salt to gather data that would normally be
# placed in a top file. The external_nodes option is the executable that will
# return the ENC data. Remember that Salt will look for external nodes AND top
# files and combine the results if both are enabled!
#external_nodes: None

# The renderer to use on the minions to render the state data
#renderer: yaml_jinja

# The Jinja renderer can strip extra carriage returns and whitespace
# See http://jinja.pocoo.org/docs/api/#high-level-api
#
# If this is set to True the first newline after a Jinja block is removed
# (block, not variable tag!). Defaults to False, corresponds to the Jinja
# environment init variable "trim_blocks".
# jinja_trim_blocks: False
#
# If this is set to True leading spaces and tabs are stripped from the start
# of a line to a block. Defaults to False, corresponds to the Jinja
# environment init variable "lstrip_blocks".
# jinja_lstrip_blocks: False

# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution, defaults to False
#failhard: False

# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False, when set to False
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True

# The state_output setting changes if the output is the full multi line
# output for each changed state if set to 'full', but if set to 'terse'
# the output will be shortened to a single line.  If set to 'mixed', the output
# will be terse unless a state failed, in which case that output will be full.
# If set to 'changes', the output will be full unless the state didn't change.
#state_output: full

# Automatically aggregate all states that have support for mod_aggregate by
# setting to True. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
#   - pkg
#
#state_aggregate: False

#####      File Server settings      #####
##########################################
# Salt runs a lightweight file server written in zeromq to deliver files to
# minions. This file server is built into the master daemon and does not
# require a dedicated port.

# The file server works on environments passed to the master, each environment
# can have multiple root directories, the subdirectories in the multiple file
# roots cannot match, otherwise the downloaded files will not be able to be
# reliably ensured. A base environment is required to house the top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
#
#file_roots:
#  base:
#    - /srv/salt

# The hash_type is the hash to use when discovering the hash of a file on
# the master server. The default is md5, but sha1, sha224, sha256, sha384
# and sha512 are also supported.
#
# Prior to changing this value, the master should be stopped and all Salt
# caches should be cleared.
#hash_type: md5

# The buffer size in the file server can be adjusted here:
#file_buffer_size: 1048576

# A regular expression (or a list of expressions) that will be matched
# against the file path before syncing the modules and states to the minions.
# This includes files affected by the file.recurse state.
# For example, if you manage your custom modules and states in subversion
# and don't want all the '.svn' folders and content synced to your minions,
# you could set this to '/\.svn($|/)'. By default nothing is ignored.
#file_ignore_regex:
#  - '/\.svn($|/)'
#  - '/\.git($|/)'

# A file glob (or list of file globs) that will be matched against the file
# path before syncing the modules and states to the minions. This is similar
# to file_ignore_regex above, but works on globs instead of regex. By default
# nothing is ignored.
# file_ignore_glob:
#  - '*.pyc'
#  - '*/somefolder/*.bak'
#  - '*.swp'

# File Server Backend
#
# Salt supports a modular fileserver backend system, this system allows
# the salt master to link directly to third party systems to gather and
# manage the files available to minions. Multiple backends can be
# configured and will be searched for the requested file in the order in which
# they are defined here. The default setting only enables the standard backend
# "roots" which uses the "file_roots" option.
#fileserver_backend:
#  - roots
#
# To use multiple backends list them in the order they are searched:
#fileserver_backend:
#  - git
#  - roots
#
# Uncomment the line below if you do not want the file_server to follow
# symlinks when walking the filesystem tree. This is set to True
# by default. Currently this only applies to the default roots
# fileserver_backend.
#fileserver_followsymlinks: False
#
# Uncomment the line below if you do not want symlinks to be
# treated as the files they are pointing to. By default this is set to
# False. By uncommenting the line below, any detected symlink while listing
# files on the Master will not be returned to the Minion.
#fileserver_ignoresymlinks: True
#
# By default, the Salt fileserver recurses fully into all defined environments
# to attempt to find files. To limit this behavior so that the fileserver only
# traverses directories with SLS files and special Salt directories like _modules,
# enable the option below. This might be useful for installations where a file root
# has a very large number of files and performance is impacted. Default is False.
# fileserver_limit_traversal: False
#
# The fileserver can fire events off every time the fileserver is updated,
# these are disabled by default, but can be easily turned on by setting this
# flag to True
#fileserver_events: False

# Git File Server Backend Configuration
#
# Gitfs can be provided by one of two python modules: GitPython or pygit2. If
# using pygit2, both libgit2 and git must also be installed.
#gitfs_provider: gitpython
#
# When using the git fileserver backend at least one git remote needs to be
# defined. The user running the salt master will need read access to the repo.
#
# The repos will be searched in order to find the file requested by a client
# and the first repo to have the file will return it.
# When using the git backend branches and tags are translated into salt
# environments.
# Note:  file:// repos will be treated as a remote, so refs you want used must
# exist in that repo as *local* refs.
#gitfs_remotes:
#  - git://github.com/saltstack/salt-states.git
#  - file:///var/git/saltmaster
#
# The gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate but
# keep in mind that setting this flag to anything other than the default of True
# is a security concern, you may want to try using the ssh transport.
#gitfs_ssl_verify: True
#
# The gitfs_root option gives the ability to serve files from a subdirectory
# within the repository. The path is defined relative to the root of the
# repository and defaults to the repository root.
#gitfs_root: somefolder/otherfolder
#
#
#####         Pillar settings        #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.
#pillar_roots:
#  base:
#    - /srv/pillar
#
#ext_pillar:
#  - hiera: /etc/hiera.yaml
#  - cmd_yaml: cat /etc/salt/yaml

# The ext_pillar_first option allows for external pillar sources to populate
# before file system pillar. This allows for targeting file system pillar from
# ext_pillar.
#ext_pillar_first: False

# The pillar_gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the pillar gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate but
# keep in mind that setting this flag to anything other than the default of True
# is a security concern, you may want to try using the ssh transport.
#pillar_gitfs_ssl_verify: True

# The pillar_opts option adds the master configuration file data to a dict in
# the pillar called "master". This is used to set simple configurations in the
# master config file that can then be used on minions.
#pillar_opts: False

# The pillar_safe_render_error option prevents the master from passing pillar
# render errors to the minion. This is set on by default because the error could
# contain templating data which would give that minion information it shouldn't
# have, like a password! When set to True, the error message will only show:
#   Rendering SLS 'my.sls' failed. Please see master log for details.
#pillar_safe_render_error: True

# The pillar_source_merging_strategy option allows you to configure merging strategy
# between different sources. It accepts four values: recurse, aggregate, overwrite,
# or smart. Recurse will recursively merge mappings of data. Aggregate instructs
# aggregation of elements between sources that use the #!yamlex renderer. Overwrite
# will overwrite elements according to the order in which they are processed. This is
# the behavior of the 2014.1 branch and earlier. Smart guesses the best strategy based
# on the "renderer" setting and is the default value.
#pillar_source_merging_strategy: smart

#####          Syndic settings       #####
##########################################
# The Salt syndic is used to pass commands through a master from a higher
# master. Using the syndic is simple, if this is a master that will have
# syndic server(s) below it, set the "order_masters" setting to True. If this
# is a master that will be running a syndic daemon for passthrough, the
# "syndic_master" setting needs to be set to the location of the master server
# to receive commands from.

# Set the order_masters setting to True if this master will command lower
# masters' syndic interfaces.
#order_masters: False

# If this master will be running a salt syndic daemon, syndic_master tells
# this master where to receive commands from.
#syndic_master: masterofmaster

# This is the 'ret_port' of the MasterOfMaster:
#syndic_master_port: 4506

# PID file of the syndic daemon:
#syndic_pidfile: /var/run/salt-syndic.pid

# LOG file of the syndic daemon:
#syndic_log_file: syndic.log

#####      Peer Publish settings     #####
##########################################
# Salt minions can send commands to other minions, but only if the minion is
# allowed to. By default "Peer Publication" is disabled, and when enabled it
# is enabled for specific minions and specific commands. This allows secure
# compartmentalization of commands based on individual minions.

# The configuration uses regular expressions to match minions and then a list
# of regular expressions to match functions. The following will allow the
# minion authenticated as foo.example.com to execute functions from the test
# and pkg modules.
#peer:
#  foo.example.com:
#    - test.*
#    - pkg.*
#
# This will allow all minions to execute all commands:
#peer:
#  .*:
#    - .*
#
# This is not recommended, since it would allow anyone who gets root on any
# single minion to instantly have root on all of the minions!

# Minions can also be allowed to execute runners from the salt master.
# Since executing a runner from the minion could be considered a security risk,
# it needs to be enabled. This setting functions just like the peer setting
# except that it opens up runners instead of module functions.
#
# All peer runner support is turned off by default and must be enabled before
# using. This will enable all peer runners for all minions:
#peer_run:
#  .*:
#    - .*
#
# To enable just the manage.up runner for the minion foo.example.com:
#peer_run:
#  foo.example.com:
#    - manage.up
#
#
#####         Mine settings     #####
##########################################
# Restrict mine.get access from minions. By default any minion has full access
# to all mine data in the master cache. In the ACL definition below, only PCRE matches
# are allowed.
# mine_get:
#   .*:
#     - .*
#
# The example below enables minion foo.example.com to get 'network.interfaces' mine
# data only, minions web* to get all network.* and disk.* mine data and all other
# minions won't get any mine data.
# mine_get:
#   foo.example.com:
#     - network.interfaces
#   web.*:
#     - network.*
#     - disk.*

#####         Logging settings       #####
##########################################
# The location of the master log file
# The master log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/master
#log_file: file:///dev/log
#log_file: udp://loghost:10514

#log_file: /var/log/salt/master
#key_logfile: /var/log/salt/key

# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
#log_level: warning

# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
#log_level_logfile: warning

# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'

# This can be used to control logging levels more specifically.  This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
#   log_granular_levels:
#     'salt': 'warning'
#     'salt.modules': 'debug'
#
#log_granular_levels: {}

#####         Node Groups           #####
##########################################
# Node groups allow for logical groupings of minion nodes. A group consists of a group
# name and a compound target.
#nodegroups:
#  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
#  group2: 'G@os:Debian and foo.domain.com'

#####     Range Cluster settings     #####
##########################################
# The range server (and optional port) that serves your cluster information
# https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
#
#range_server: range:80

#####     Windows Software Repo settings #####
##############################################
# Location of the repo on the master:
#win_repo: '/srv/salt/win/repo'
#
# Location of the master's repo cache file:
#win_repo_mastercachefile: '/srv/salt/win/repo/winrepo.p'
#
# List of git repositories to include with the local repo:
#win_gitrepos:
#  - 'https://github.com/saltstack/salt-winrepo.git'

#####      Returner settings          ######
############################################
# Which returner(s) will be used for the minion's results:
#return: mysql
[me@cubietruck ~]$
basepi commented 9 years ago

Maybe I should have had you grep for uncommented lines. As far as I can see you're not setting any config values in the master, and you're only setting master: in the minion, correct? Now I'm even more mystified.
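
For the record, something like this (plain grep, nothing Salt-specific) prints only the active, uncommented settings from both files:

# show active settings only (skip comments and blank lines)
grep -Ev '^[[:space:]]*(#|$)' /etc/salt/master
grep -Ev '^[[:space:]]*(#|$)' /etc/salt/minion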

tganzeboom commented 9 years ago

@basepi Correct.

tganzeboom commented 9 years ago

@basepi Well, since even you were baffled, I took a closer look, and I found it very strange too.

Before reboot:


[root@cubietruck ~]# ps -ef|grep -i salt
root      2974     1  0 Jun15 ?        00:00:06 /usr/bin/python2 /usr/bin/salt-master
root      2981  2974  2 Jun15 ?        02:32:56 /usr/bin/python2 /usr/bin/salt-master
root      2982  2974  0 Jun15 ?        00:00:00 /usr/bin/python2 /usr/bin/salt-master
root      2983  2974  0 Jun15 ?        00:00:00 /usr/bin/python2 /usr/bin/salt-master
root      2984  2974  0 Jun15 ?        00:00:00 /usr/bin/python2 /usr/bin/salt-master
root      2989  2984  0 Jun15 ?        00:00:09 /usr/bin/python2 /usr/bin/salt-master
root      2990  2984  0 Jun15 ?        00:00:09 /usr/bin/python2 /usr/bin/salt-master
root      2991  2984  0 Jun15 ?        00:00:09 /usr/bin/python2 /usr/bin/salt-master
root      2994  2984  0 Jun15 ?        00:00:09 /usr/bin/python2 /usr/bin/salt-master
root      2997  2984  0 Jun15 ?        00:00:09 /usr/bin/python2 /usr/bin/salt-master
root      3000  2984 99 Jun15 ?        3-20:16:53 /usr/bin/python2 /usr/bin/salt-master
root     16114 16090  0 20:47 pts/0    00:00:00 grep -i salt
[root@cubietruck ~]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      178/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      176/sshd
tcp        0      0 0.0.0.0:4505            0.0.0.0:*               LISTEN      2982/python2
tcp        0      0 0.0.0.0:4506            0.0.0.0:*               LISTEN      3000/python2
tcp6       0      0 :::5355                 :::*                    LISTEN      178/systemd-resolve
tcp6       0      0 :::22                   :::*                    LISTEN      176/sshd
udp        0      0 0.0.0.0:5355            0.0.0.0:*                           178/systemd-resolve
udp6       0      0 :::5355                 :::*                                178/systemd-resolve
[root@cubietruck ~]#

[root@cubietruck master]# strace -fp 3000
Process 3000 attached with 7 threads
[pid  3010] epoll_wait(18,  <unfinished ...>
[pid  3009] epoll_wait(16,  <unfinished ...>
[pid  3008] epoll_wait(14,  <unfinished ...>
[pid  3007] epoll_wait(12,  <unfinished ...>
[pid  3006] epoll_wait(10,  <unfinished ...>
[pid  3005] epoll_wait(8,  <unfinished ...>
[pid  3000] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid  3000] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid  3000] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid  3000] clock_gettime(CLOCK_MONOTONIC, {336708, 972193035}) = 0
[pid  3000] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid  3000] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid  3000] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid  3000] clock_gettime(CLOCK_MONOTONIC, {336708, 974342915}) = 0
[pid  3000] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid  3000] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid  3000] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid  3000] clock_gettime(CLOCK_MONOTONIC, {336708, 977157754}) = 0
<snip>
.......
</snip>
[pid  3000] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid  3000] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid  3000] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
^C
Process 3005 detached
Process 3006 detached
Process 3007 detached
Process 3008 detached
Process 3009 detached
Process 3010 detached
[root@cubietruck master]#

Many more lines like these were spit out in rapid succession.
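
In the ps output above the culprit stands out: PID 3000 has accumulated almost four days of CPU time (3-20:16:53) at ~99%. A generic procps one-liner like the following (again, nothing Salt-specific) makes such runaway workers easy to spot:

# list salt-master processes sorted by CPU usage, highest first
ps -eo pid,ppid,pcpu,cputime,args --sort=-pcpu | grep '[s]alt-master'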

The machine had already been rebooted a few days earlier for a kernel upgrade, but I rebooted it again anyway, since strace should not be showing that kind of tight polling loop. And behold:

After reboot:


[root@cubietruck ~]# ps -ef|grep -i salt
root       175     1  0 21:20 ?        00:00:03 /usr/bin/python2 /usr/bin/salt-master
root       187   175  2 21:20 ?        00:00:31 /usr/bin/python2 /usr/bin/salt-master
root       188   175  0 21:20 ?        00:00:00 /usr/bin/python2 /usr/bin/salt-master
root       189   175  0 21:20 ?        00:00:00 /usr/bin/python2 /usr/bin/salt-master
root       190   175  0 21:20 ?        00:00:00 /usr/bin/python2 /usr/bin/salt-master
root       195   190  0 21:20 ?        00:00:09 /usr/bin/python2 /usr/bin/salt-master
root       196   190  0 21:20 ?        00:00:09 /usr/bin/python2 /usr/bin/salt-master
root       197   190  0 21:20 ?        00:00:09 /usr/bin/python2 /usr/bin/salt-master
root       200   190  0 21:20 ?        00:00:09 /usr/bin/python2 /usr/bin/salt-master
root       203   190  0 21:20 ?        00:00:09 /usr/bin/python2 /usr/bin/salt-master
root       206   190  0 21:20 ?        00:00:00 /usr/bin/python2 /usr/bin/salt-master
root       939   880  0 21:39 pts/0    00:00:00 grep -i salt
[root@cubietruck ~]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      176/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      174/sshd
tcp        0      0 0.0.0.0:4505            0.0.0.0:*               LISTEN      188/python2
tcp        0      0 0.0.0.0:4506            0.0.0.0:*               LISTEN      206/python2
tcp6       0      0 :::5355                 :::*                    LISTEN      176/systemd-resolve
tcp6       0      0 :::22                   :::*                    LISTEN      174/sshd
udp        0      0 0.0.0.0:5355            0.0.0.0:*                           176/systemd-resolve
udp6       0      0 :::5355                 :::*                                176/systemd-resolve
[root@cubietruck ~]# strace -fp 206|more
Process 206 attached with 7 threads
[pid   216] epoll_wait(18,  <unfinished ...>
[pid   215] epoll_wait(16,  <unfinished ...>
[pid   214] epoll_wait(14,  <unfinished ...>
[pid   213] epoll_wait(12,  <unfinished ...>
[pid   212] epoll_wait(10,  <unfinished ...>
[pid   211] epoll_wait(8,  <unfinished ...>
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, -1 <unfinished ...>
[pid   212] <... epoll_wait resumed> {{EPOLLIN, {u32=2993685608, u64=2993685608}}}, 256, -1) = 1
[pid   212] accept(21, {sa_family=AF_INET, sin_port=htons(56488), sin_addr=inet_addr("192.168.1.85")}, [16]) = 23
[pid   212] fcntl64(23, F_SETFD, FD_CLOEXEC) = 0
[pid   212] setsockopt(23, SOL_TCP, TCP_NODELAY, [1], 4) = 0
[pid   212] fcntl64(23, F_GETFL)        = 0x2 (flags O_RDWR)
[pid   212] fcntl64(23, F_SETFL, O_RDWR|O_NONBLOCK) = 0
[pid   212] getpeername(23, {sa_family=AF_INET, sin_port=htons(56488), sin_addr=inet_addr("192.168.1.85")}, [16]) = 0
[pid   212] write(11, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] <... epoll_wait resumed> {{EPOLLIN, {u32=3082969728, u64=3082969728}}}, 256, -1) = 1
[pid   212] write(9, "\1\0\0\0\0\0\0\0", 8 <unfinished ...>
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0 <unfinished ...>
[pid   212] <... write resumed> )       = 8
[pid   213] <... poll resumed> )        = 1 ([{fd=11, revents=POLLIN}])
[pid   212] epoll_wait(10, {{EPOLLIN, {u32=3082969672, u64=3082969672}}}, 256, -1) = 1
[pid   213] read(11,  <unfinished ...>
[pid   212] poll([{fd=9, events=POLLIN}], 1, 0) = 1 ([{fd=9, revents=POLLIN}])
[pid   212] read(9,  <unfinished ...>
[pid   213] <... read resumed> "\1\0\0\0\0\0\0\0", 8) = 8
[pid   212] <... read resumed> "\1\0\0\0\0\0\0\0", 8) = 8
[pid   212] poll([{fd=9, events=POLLIN}], 1, 0 <unfinished ...>
[pid   213] write(19, "\1\0\0\0\0\0\0\0", 8 <unfinished ...>
[pid   212] <... poll resumed> )        = 0 (Timeout)
[pid   212] epoll_wait(10,  <unfinished ...>
[pid   213] <... write resumed> )       = 8
[pid   206] <... poll resumed> )        = 1 ([{fd=19, revents=POLLIN}])
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 1 ([{fd=19, revents=POLLIN}])
[pid   206] read(19, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0 <unfinished ...>
[pid   213] epoll_ctl(12, EPOLL_CTL_ADD, 23, {0, {u32=2991621432, u64=2991621432}}) = 0
[pid   213] clock_gettime(CLOCK_MONOTONIC, {2284, 681151270}) = 0
[pid   213] epoll_ctl(12, EPOLL_CTL_MOD, 23, {EPOLLIN, {u32=2991621432, u64=2991621432}}) = 0
[pid   213] epoll_ctl(12, EPOLL_CTL_MOD, 23, {EPOLLIN|EPOLLOUT, {u32=2991621432, u64=2991621432}}) = 0
[pid   213] recv(23, "\377\0\0\0\0\0\0\0\1\177", 12, 0) = 10
[pid   213] recv(23, 0xb2700542, 2, 0)  = -1 EAGAIN (Resource temporarily unavailable)
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   213] clock_gettime(CLOCK_MONOTONIC, {2284, 682670687}) = 0
[pid   213] epoll_wait(12, {{EPOLLOUT, {u32=2991621432, u64=2991621432}}}, 256, 29999) = 1
[pid   213] send(23, "\377\0\0\0\0\0\0\0\1\177\3", 11, 0) = 11
[pid   213] epoll_ctl(12, EPOLL_CTL_MOD, 23, {EPOLLIN, {u32=2991621432, u64=2991621432}}) = 0
[pid   213] clock_gettime(CLOCK_MONOTONIC, {2284, 684289740}) = 0
[pid   213] epoll_wait(12, {{EPOLLIN, {u32=2991621432, u64=2991621432}}}, 256, 29997) = 1
[pid   213] recv(23, "\3\0", 2, 0)      = 2
[pid   213] epoll_ctl(12, EPOLL_CTL_MOD, 23, {EPOLLIN|EPOLLOUT, {u32=2991621432, u64=2991621432}}) = 0
[pid   213] recv(23, "NULL\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 52, 0) = 52
[pid   213] recv(23, 0xb250a640, 8192, 0) = -1 EAGAIN (Resource temporarily unavailable)
[pid   213] epoll_wait(12, {{EPOLLOUT, {u32=2991621432, u64=2991621432}}}, 256, -1) = 1
[pid   213] send(23, "\0NULL\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 53, 0) = 53
[pid   213] epoll_wait(12, {{EPOLLIN|EPOLLOUT, {u32=2991621432, u64=2991621432}}}, 256, -1) = 1
[pid   213] send(23, "\4)\5READY\vSocket-Type\0\0\0\6ROUTER\10I"..., 43, 0) = 43
[pid   213] recv(23, "\4&\5READY\vSocket-Type\0\0\0\3REQ\10Iden"..., 8192, 0) = 546
[pid   213] write(19, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] epoll_wait(12, {{EPOLLOUT, {u32=2991621432, u64=2991621432}}}, 256, -1) = 1
[pid   213] epoll_ctl(12, EPOLL_CTL_MOD, 23, {EPOLLIN, {u32=2991621432, u64=2991621432}}) = 0
[pid   213] epoll_wait(12,  <unfinished ...>
[pid   206] <... poll resumed> )        = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, -1) = 1 ([{fd=19, revents=POLLIN}])
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 1 ([{fd=19, revents=POLLIN}])
[pid   206] read(19, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   206] write(11, "\1\0\0\0\0\0\0\0", 8 <unfinished ...>
[pid   213] <... epoll_wait resumed> {{EPOLLIN, {u32=3082969728, u64=3082969728}}}, 256, -1) = 1
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 1 ([{fd=11, revents=POLLIN}])
[pid   213] read(11, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   213] epoll_wait(12,  <unfinished ...>
[pid   206] <... write resumed> )       = 8
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] write(11, "\1\0\0\0\0\0\0\0", 8 <unfinished ...>
[pid   213] <... epoll_wait resumed> {{EPOLLIN, {u32=3082969728, u64=3082969728}}}, 256, -1) = 1
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 1 ([{fd=11, revents=POLLIN}])
[pid   213] read(11, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   213] epoll_wait(12,  <unfinished ...>
[pid   206] <... write resumed> )       = 8
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] clock_gettime(CLOCK_MONOTONIC, {2284, 693797374}) = 0
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] write(13, "\1\0\0\0\0\0\0\0", 8 <unfinished ...>
[pid   214] <... epoll_wait resumed> {{EPOLLIN, {u32=3082908800, u64=3082908800}}}, 256, -1) = 1
[pid   214] poll([{fd=13, events=POLLIN}], 1, 0) = 1 ([{fd=13, revents=POLLIN}])
[pid   214] read(13, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   214] epoll_ctl(14, EPOLL_CTL_MOD, 24, {EPOLLIN|EPOLLOUT, {u32=2992670184, u64=2992670184}}) = 0
[pid   214] send(24, "\1\5\0k\213F\301\1\0\2\0\0\0\0\0\0\1\357\202\244load\203\243cmd\245_a"..., 513, 0) = 513
[pid   206] <... write resumed> )       = 8
[pid   214] poll([{fd=13, events=POLLIN}], 1, 0 <unfinished ...>
[pid   206] clock_gettime(CLOCK_MONOTONIC,  <unfinished ...>
[pid   214] <... poll resumed> )        = 0 (Timeout)
[pid   206] <... clock_gettime resumed> {2284, 700603947}) = 0
[pid   214] epoll_wait(14,  <unfinished ...>
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0 <unfinished ...>
[pid   214] <... epoll_wait resumed> {{EPOLLOUT, {u32=2992670184, u64=2992670184}}}, 256, -1) = 1
[pid   206] <... poll resumed> )        = 0 (Timeout)
[pid   214] epoll_ctl(14, EPOLL_CTL_MOD, 24, {EPOLLIN, {u32=2992670184, u64=2992670184}} <unfinished ...>
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0 <unfinished ...>
[pid   214] <... epoll_ctl resumed> )   = 0
[pid   206] <... poll resumed> )        = 0 (Timeout)
[pid   214] epoll_wait(14,  <unfinished ...>
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, -1 <unfinished ...>
[pid   214] <... epoll_wait resumed> {{EPOLLIN, {u32=2992670184, u64=2992670184}}}, 256, -1) = 1
[pid   214] recv(24, "\1\5\0k\213F\301\1\0\0\26\202\244load\201\243ret\303\243enc\245clea"..., 8192, 0) = 33
[pid   214] write(20, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   214] epoll_wait(14,  <unfinished ...>
[pid   206] <... poll resumed> )        = 1 ([{fd=20, revents=POLLIN}])
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 1 ([{fd=20, revents=POLLIN}])
[pid   206] read(20, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] clock_gettime(CLOCK_MONOTONIC, {2284, 711442263}) = 0
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] write(11, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] <... epoll_wait resumed> {{EPOLLIN, {u32=3082969728, u64=3082969728}}}, 256, -1) = 1
[pid   206] clock_gettime(CLOCK_MONOTONIC, {2284, 713058360}) = 0
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, -1 <unfinished ...>
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 1 ([{fd=11, revents=POLLIN}])
[pid   213] read(11, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] epoll_ctl(12, EPOLL_CTL_MOD, 23, {EPOLLIN|EPOLLOUT, {u32=2991621432, u64=2991621432}}) = 0
[pid   213] write(19, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   206] <... poll resumed> )        = 1 ([{fd=19, revents=POLLIN}])
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 1 ([{fd=19, revents=POLLIN}])
[pid   206] read(19, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] send(23, "\1\0\0\26\202\244load\201\243ret\303\243enc\245clear", 26, 0 <unfinished ...>
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0 <unfinished ...>
[pid   213] <... send resumed> )        = 26
[pid   206] <... poll resumed> )        = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, -1 <unfinished ...>
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   213] epoll_wait(12, {{EPOLLIN|EPOLLOUT, {u32=2991621432, u64=2991621432}}}, 256, -1) = 1
[pid   213] epoll_ctl(12, EPOLL_CTL_MOD, 23, {EPOLLIN, {u32=2991621432, u64=2991621432}}) = 0
[pid   213] recv(23, "", 8192, 0)       = 0
[pid   213] write(9, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   212] <... epoll_wait resumed> {{EPOLLIN, {u32=3082969672, u64=3082969672}}}, 256, -1) = 1
[pid   212] poll([{fd=9, events=POLLIN}], 1, 0) = 1 ([{fd=9, revents=POLLIN}])
[pid   212] read(9, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   212] write(11, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   212] poll([{fd=9, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   212] epoll_wait(10,  <unfinished ...>
[pid   213] epoll_ctl(12, EPOLL_CTL_DEL, 23, b2508540) = 0
[pid   213] close(23)                   = 0
[pid   213] epoll_wait(12, {{EPOLLIN, {u32=3082969728, u64=3082969728}}}, 256, -1) = 1
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 1 ([{fd=11, revents=POLLIN}])
[pid   213] read(11, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] write(19, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   213] epoll_wait(12,  <unfinished ...>
[pid   206] <... poll resumed> )        = 1 ([{fd=19, revents=POLLIN}])
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 1 ([{fd=19, revents=POLLIN}])
[pid   206] read(19, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] write(11, "\1\0\0\0\0\0\0\0", 8 <unfinished ...>
[pid   213] <... epoll_wait resumed> {{EPOLLIN, {u32=3082969728, u64=3082969728}}}, 256, -1) = 1
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 1 ([{fd=11, revents=POLLIN}])
[pid   213] read(11, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] write(9, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] write(19, "\1\0\0\0\0\0\0\0", 8) = 8
[pid   213] poll([{fd=11, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   213] epoll_wait(12,  <unfinished ...>
[pid   212] <... epoll_wait resumed> {{EPOLLIN, {u32=3082969672, u64=3082969672}}}, 256, -1) = 1
[pid   206] <... write resumed> )       = 8
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, -1) = 1 ([{fd=19, revents=POLLIN}])
[pid   212] poll([{fd=9, events=POLLIN}], 1, 0 <unfinished ...>
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0 <unfinished ...>
[pid   212] <... poll resumed> )        = 1 ([{fd=9, revents=POLLIN}])
[pid   206] <... poll resumed> )        = 1 ([{fd=19, revents=POLLIN}])
[pid   212] read(9,  <unfinished ...>
[pid   206] read(19,  <unfinished ...>
[pid   212] <... read resumed> "\1\0\0\0\0\0\0\0", 8) = 8
[pid   206] <... read resumed> "\1\0\0\0\0\0\0\0", 8) = 8
[pid   212] poll([{fd=9, events=POLLIN}], 1, 0 <unfinished ...>
[pid   206] poll([{fd=19, events=POLLIN}], 1, 0 <unfinished ...>
[pid   212] <... poll resumed> )        = 0 (Timeout)
[pid   206] <... poll resumed> )        = 0 (Timeout)
[pid   206] poll([{fd=20, events=POLLIN}], 1, 0 <unfinished ...>
[pid   212] epoll_wait(10,  <unfinished ...>
[pid   206] <... poll resumed> )        = 0 (Timeout)
[pid   206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, -1
Process 206 detached
 <detached ...>
Process 211 detached
Process 212 detached
Process 213 detached
Process 214 detached
Process 215 detached
Process 216 detached
[root@cubietruck ~]# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
minion
Rejected Keys:
[root@cubietruck ~]#
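
With the key finally showing up under Unaccepted Keys, the usual next step would be to accept it and confirm the minion responds, e.g.:

salt-key -a minion      # accept this one key (salt-key -A accepts all pending keys)
salt 'minion' test.ping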

So the reboot made everything work again, even though the machine had been rebooted before. I'm not sure what caused the strange strace output prior to the reboot, but I don't think it has anything to do with SaltStack, since everything works correctly now.
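
For what it's worth, a full reboot shouldn't be needed in a case like this; restarting just the master service (assuming the systemd unit shipped with the Arch package) would normally replace a wedged worker process:

systemctl restart salt-master
systemctl status salt-master    # confirm the workers came back up cleanly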

Sorry for any inconvenience.

basepi commented 9 years ago

Very strange! Glad you got it working, though! Keep us posted if it happens again.
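
If it does come back, a useful first data point would be to stop the service and run the master in the foreground at debug level (and re-check with netstat which worker PID owns port 4506, as above):

systemctl stop salt-master
salt-master -l debug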