saltstack / salt

Software to automate the management and configuration of any infrastructure or application at scale. Get access to the Salt software package repository here:
https://repo.saltproject.io/
Apache License 2.0

Severe connection issues on Windows #13069

Closed andrejohansson closed 6 years ago

andrejohansson commented 10 years ago

First of all, I want to say that my experience so far with Salt has been great. The architecture, and the simplicity of things once you get the hang of it, is remarkable!

I've waited a bit to post this issue because it's hard to reproduce consistently, but unfortunately it's a major roadblock to using Salt in production environments on Windows unless I'm missing something big. There are actually two problems; I describe the connection issues here and will open a separate issue for the other one (bootstrapping, see #13070).

We have our servers running in Azure, so everything here applies to Azure environments; it should also give you a good chance to replicate the environment. If not, contact me and you can have access to mine.

The Salt master is running Ubuntu 14.04 LTS (base image from the Azure gallery) with only salt-master installed.

The Salt minion is running Windows Server 2012 R2 64-bit (base image from the Azure gallery) with salt-minion installed, plus whatever software I've tried to install via Salt.

The problem: the connection drops quite often between the master and the minion. I'm working actively with the setup, and maybe 1 out of 4 commands gets "the minion did not respond"/no output on the master. I am running the minion in console mode with -l debug.

Sometimes I see that the command reaches the minion and it works for a bit. But many times it does not even reach the minion.

I get the feeling that there are two problems:

My workarounds so far:

Could #7159 be behind this maybe?

So, the instability of the connection prevents me from trusting that the minions do what I tell them to. It also makes calling salt from scripts for automation a no-go.

I wish I had more concrete information to give, but as I said, the environment is available to use if you want to see it for yourself.

Versions report

root@saltmaster:/srv# salt --versions-report
           Salt: 2014.1.4
         Python: 2.7.6 (default, Mar 22 2014, 22:59:56)
         Jinja2: 2.7.2
       M2Crypto: 0.21.1
 msgpack-python: 0.3.0
   msgpack-pure: Not Installed
       pycrypto: 2.6.1
         PyYAML: 3.10
          PyZMQ: 14.0.1
            ZMQ: 4.0.4
root@saltmaster:/srv# salt rs-sm1 test.versions_report
rs-sm1:
               Salt: 2014.1.4
             Python: 2.7.5 (default, May 15 2013, 22:44:16) [MSC v.1500 64 bit (AMD64)]
             Jinja2: 2.7.1
           M2Crypto: 0.21.1
     msgpack-python: 0.4.2
       msgpack-pure: Not Installed
           pycrypto: 2.6
             PyYAML: 3.11
              PyZMQ: 14.1.1
                ZMQ: 4.0.4
root@saltmaster:/srv#
andrejohansson commented 10 years ago

Some additions: the minion stopped responding (running in console with -l debug) to test.ping calls from the master. I pinged three times without response; then, when I was about to restart it, I pressed CTRL+C on the minion. Interestingly, the minion didn't exit, but some subprocess seemed to, and suddenly the minion started responding again.

So, could it be that the minion gets stuck in some subprocess and, while it is, is not responsive to the master?

rs-sm1:
    Minion did not return
root@saltmaster:/srv# salt rs-sm1 test.ping
rs-sm1:
    Minion did not return
root@saltmaster:/srv# salt rs-sm1 test.ping
rs-sm1:
    Minion did not return
root@saltmaster:/srv# salt rs-sm1 test.ping
rs-sm1:
    True

Logs for the same time:

[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20140528121010049293', 'tgt': 'rs-sm1', 'ret': '', 'user': 'sudo_andre', 'arg': [], 'fun': 'pkg.refresh_db'}
[INFO    ] User sudo_andre Executing command saltutil.find_job with jid 20140528121020069146
[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20140528121020069146', 'tgt': 'rs-sm1', 'ret': '', 'user': 'sudo_andre', 'arg': ['20140528121010049293'], 'fun': 'saltutil.find_job'}
[INFO    ] User sudo_andre Executing command test.ping with jid 20140528121205730532
[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20140528121205730532', 'tgt': 'rs-sm1', 'ret': '', 'user': 'sudo_andre', 'arg': [], 'fun': 'test.ping'}
[INFO    ] Returning information for job: 20140528121020069146
[INFO    ] User sudo_andre Executing command saltutil.find_job with jid 20140528121215753513
[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20140528121215753513', 'tgt': 'rs-sm1', 'ret': '', 'user': 'sudo_andre', 'arg': ['20140528121205730532'], 'fun': 'saltutil.find_job'}
[INFO    ] Returning information for job: 20140528121205730532
[INFO    ] User sudo_andre Executing command test.ping with jid 20140528121222612574
[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20140528121222612574', 'tgt': 'rs-sm1', 'ret': '', 'user': 'sudo_andre', 'arg': [], 'fun': 'test.ping'}
[INFO    ] Returning information for job: 20140528121215753513
[INFO    ] User sudo_andre Executing command saltutil.find_job with jid 20140528121232637500
[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20140528121232637500', 'tgt': 'rs-sm1', 'ret': '', 'user': 'sudo_andre', 'arg': ['20140528121222612574'], 'fun': 'saltutil.find_job'}
[INFO    ] Returning information for job: 20140528121222612574
[INFO    ] Returning information for job: 20140528121232637500
[DEBUG   ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[DEBUG   ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[INFO    ] Returning information for job: 20140528121010049293
[INFO    ] User sudo_andre Executing command test.ping with jid 20140528121249147699
[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20140528121249147699', 'tgt': 'rs-sm1', 'ret': '', 'user': 'sudo_andre', 'arg': [], 'fun': 'test.ping'}
[INFO    ] Returning information for job: 20140528121249147699

It was just before the line

[INFO    ] Returning information for job: 20140528121222612574

that I pressed CTRL+C.

basepi commented 10 years ago

Thanks for the detailed report! I thought we had squelched most of the Windows connection problems with the newer ZMQ version, but apparently not. We'll investigate this.

UtahDave commented 10 years ago

I wonder if your firewall is closing down the connection abruptly. It looks like some people have had some success here: https://github.com/saltstack/salt/issues/6231#issuecomment-29878818

Can you try a couple of the suggestions in the above link?

andrejohansson commented 10 years ago

I read the thread above and checked my system. I left the master/minion idle overnight, then checked with tcpview and saw that, just like for another user, there was still a connection on 4505 from the minion to the master in state ESTABLISHED. Sending commands to the minion didn't work.

Closing the connection (with tcpview, so no restart of the minion) helped and the next command from the master to the minion worked as expected.

basepi commented 10 years ago

Yep, definitely sounds like the same problem. I still can't believe that one side can see a tcp connection as ESTABLISHED and the other side can have no idea about it. Weird one.

I really thought that ZMQ 4.0.4 fixed this..... =\

UtahDave commented 10 years ago

@andrejohansson did you try setting up a recurring test.ping?

andrejohansson commented 10 years ago

@UtahDave No, not yet. I don't see that as a solution, and since we don't have anything in production yet, I haven't seen the need, other than for investigating the problem of course.

Another thing I've noticed (though I may be imagining this) is that sometimes a minion hangs and doesn't respond to commands, except for the test.ping command! After the ping command runs I can run other commands (like state.highstate) again. Is there some logic to this, or am I mistaken completely?

andrejohansson commented 10 years ago

@UtahDave I have now tried to add a recurring ping using the scheduler functionality, but I am a bit unclear on how it should work.

I added

schedule:
  keepalivejob:
    function: test.ping
    seconds: 60

to the /etc/salt/master config file and then restarted the master. But I can't see any pings coming in to the minion at all.

How can I check if the schedule is running correctly?

Edit: the above didn't work; I got an error in the log:

2014-06-02 09:21:54,233 [salt.utils.schedule][INFO    ] Invalid function: keepalivejob in job test.ping. Ignoring.

I've tried adding the following to my crontab to see if it helps:

0 */1 * * * salt '*' test.ping > /dev/null 2>&1
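For reference, the scheduler can also be configured on the minion side, so each minion runs test.ping locally instead of the master publishing it. This is only a sketch based on the documented minion schedule syntax, and it may behave differently on the 2014.x versions in this thread; the job name keepalivejob is arbitrary:

```yaml
# Hedged sketch for the minion config (or minion.d/*.conf), not the master:
# run test.ping on the minion every 60 seconds as a keepalive.
schedule:
  keepalivejob:
    function: test.ping
    seconds: 60
```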
olliewalsh commented 10 years ago

Could #7159 be behind this maybe?

No, I don't think it has anything to do with dropped events (jobs are not events).

andrejohansson commented 10 years ago

Just to report back: I have been running about 30 instances in Azure now (Windows Server 2008 and 2012) with a 1-minute scheduled test.ping, and it seems to work fine. The only trouble is that it fills the logs and the Halite UI.

UtahDave commented 10 years ago

Thanks for the report!

UtahDave commented 10 years ago

I'm going to close this. Please feel free to comment if you'd like to reopen this.

andrejohansson commented 10 years ago

Well, like I said, it fills the logs. Maybe it would be possible to add an option to the master config, e.g. "silent_keepalive_ping: 60", which would execute a test.ping every x seconds. The difference from the solution above being that it should not show up in job lists and logs, with no need to edit cron jobs manually. Would that be a possible solution? Of course, it would be even better to figure out how one side can see the connection as established while the other side doesn't. But I understand that's a pretty tough one and may be out of scope for this project.

UtahDave commented 10 years ago

@andrejohansson that's a good point. Let me think about a good way to do this.

andrejohansson commented 10 years ago

I noticed the "Keepalive settings" section on minions today; is this a new section? Are there recommended values/tweaks to use for Windows? Most things mentioned in comments concern Linux.
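The options in that section are the tcp_keepalive* settings that also appear in the minion config pasted further down. A sketch of enabling them, with illustrative values rather than Windows-specific recommendations:

```yaml
# Hedged sketch for the minion config: enable TCP keepalives so an idle
# connection to port 4505 is probed before a stateful firewall/NAT
# (such as the Azure fabric) silently drops it. Tune the idle and
# interval values to be below the idle timeout of the device in between.
tcp_keepalive: True
tcp_keepalive_idle: 60     # seconds of idle time before the first probe
tcp_keepalive_intvl: 60    # seconds between subsequent probes
```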

andrejohansson commented 10 years ago

Here come some more reports: since I've been running a test.ping every minute, it seems jobs have filled up the cache and other things. Today I could not run any salt commands on the master anymore; I entered a command like

salt * test.ping

and it just hung there. I restarted the salt-master, but then I got

root@saltmaster:~# salt 'rs-sm1' test.ping
Failed to connect to the Master, is the Salt Master running?

So things were quite out of order; even using tab completion on the server gave an error:

bash: cannot create temp file for here-document: No space left on device 

So, disk full? First I suspected logs filling the disk, but I couldn't find any large logs. I couldn't even run df -h. I searched and found issues #10404 and #12396, and it seems things aren't cleaned up as they should be. Running

service salt-master stop
rm -rf /var/cache/salt/master/jobs/*
service salt-master start

to clean up seemed to work. I think this issue should be reopened until there is a solid way to keep minions alive without causing the master to go down.
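One knob that may help here (assuming it is available in the versions in this thread): the master's keep_jobs option controls how many hours the job cache under /var/cache/salt/master/jobs is retained before being pruned, so a per-minute keepalive need not fill the disk:

```yaml
# Hedged sketch for /etc/salt/master: prune the job cache more aggressively.
# keep_jobs is expressed in hours; the default retention is much longer.
keep_jobs: 1
```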

UtahDave commented 10 years ago

Hm. OK. I agree. Good point. I'll reopen this.

@andrejohansson, have you had any success with the "Keepalive settings"?

mechalaris commented 9 years ago

I would like to add my 2 cents to this discussion. We are using Salt in Azure too (both master and minion are on the public-facing internet) and are having connectivity issues. test.ping fails after a couple of minutes, even when continually running test.ping every minute. The Linux minions work fine, as do Windows minions on our corporate network. It is most definitely down to the Azure fabric dropping connections, and I understand the same issue happens with AWS, so it's a pretty far-reaching issue. We're running both Windows 2008 R2 and 2012 R2.

In my case the minion is stuck in a SYN_SENT state with the master but the master sees the connection as ESTABLISHED. Simply restarting the master service fixes everything for a couple of minutes.

I wonder if any progress has been made in this area, perhaps around the keepalive settings? I also want to reference this issue, which may be the same: https://github.com/saltstack/salt/issues/6231#issuecomment-29878818 As mentioned, a cron test.ping does not work for me.

Thanks

mechalaris commented 9 years ago

Hi,

Any thoughts? I am keen to do a POC on nearly 100 Windows servers in Azure.

basepi commented 9 years ago

@mechalaris What version have you been testing? I think the keepalive has been much better recently, but I'm not 100% sure.

andrejohansson commented 9 years ago

The description that @mechalaris gives is still valid for us too. A reboot of the master makes things work for a while, then it just dies; note that merely restarting the salt-master service does not help. I've tried:

Interestingly enough, when I ran test.ping just now, it looked like all my local machines and all Amazon machines responded, but no Azure machine did. The master itself is on Azure.

I've read about people in other issues getting responses from hundreds of machines in milliseconds/seconds. I don't think I've ever gotten a response that fast. A command to a Windows machine takes at least ten seconds, if it answers at all.

Any more suggestions? Can you see something weird in my configs?

Versions report from master:

root@saltmaster:~# salt-master --versions-report
           Salt: 2014.7.0
         Python: 2.7.6 (default, Mar 22 2014, 22:59:56)
         Jinja2: 2.7.2
       M2Crypto: 0.21.1
 msgpack-python: 0.3.0
   msgpack-pure: Not Installed
       pycrypto: 2.6.1
        libnacl: Not Installed
         PyYAML: 3.10
          ioflo: Not Installed
          PyZMQ: 14.0.1
           RAET: Not Installed
            ZMQ: 4.0.4
           Mako: 0.9.1

Running root@saltmaster:~# salt '*' test.versions_report gives a mix of 2014.1.7 and 2014.7.0 clients (the ones that respond):

prd-as-monitor:
               Salt: 2014.1.7
             Python: 2.7.5 (default, May 15 2013, 22:44:16) [MSC v.1500 64 bit (AMD64)]
             Jinja2: 2.7.1
           M2Crypto: 0.21.1
     msgpack-python: 0.4.2
       msgpack-pure: Not Installed
           pycrypto: 2.6
             PyYAML: 3.11
              PyZMQ: 14.1.1
                ZMQ: 4.0.4
hbg-andrej-st:
               Salt: 2014.7.0
             Python: 2.7.5 (default, May 15 2013, 22:44:16) [MSC v.1500 64 bit (AMD64)]
             Jinja2: 2.7.1
           M2Crypto: 0.21.1
     msgpack-python: 0.4.2
       msgpack-pure: Not Installed
           pycrypto: 2.6
            libnacl: Not Installed
             PyYAML: 3.11
              ioflo: Not Installed
              PyZMQ: 14.1.1
               RAET: Not Installed
                ZMQ: 4.0.4
               Mako: Not Installed

Minion config

##### Primary configuration settings #####
##########################################

ipc_mode: tcp

# Make Salt Minion behave
recon_default: 1000
recon_max: 59000
recon_randomize: True
acceptance_wait_time: 10
random_reauth_delay: 60
auth_timeout: 60

# Per default the minion will automatically include all config files
# from minion.d/*.conf (minion.d is a directory in the same directory
# as the main minion config file).
#default_include: minion.d/*.conf

# Set the location of the salt master server, if the master server cannot be
# resolved, then the minion will fail to start.
# test
master: saltmaster.cloudapp.net

# Set the number of seconds to wait before attempting to resolve
# the master hostname if name resolution fails. Defaults to 30 seconds.
# Set to zero if the minion should shutdown and not retry.
# retry_dns: 30

# Set the port used by the master reply and authentication server
#master_port: 4506

# The user to run salt
#user: root

# Specify the location of the daemon process ID file
#pidfile: /var/run/salt-minion.pid

# The root directory prepended to these options: pki_dir, cachedir, log_file,
# sock_dir, pidfile.
# root_dir: c:\salt

# The directory to store the pki information in
#pki_dir: /etc/salt/pki/minion
pki_dir: /conf/pki/minion

# Explicitly declare the id for this minion to use, if left commented the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids it is possible to run multiple minions on the
# same machine but with different ids, this can be useful for salt compute
# clusters.

# if we have mapped the hostname to be translated to another
# minion id, use it, otherwise use the hostname as minion id
{% if salt['pillar.get']('hostname_id_map:' + grains['host'], 'notmapped') == 'notmapped'  %}
id: {{ grains['host'] }}
{% else %}
id: {{ pillar['hostname_id_map'][grains['host']] }}
{% endif %}

# Append a domain to a hostname in the event that it does not exist.  This is
# useful for systems where socket.getfqdn() does not actually result in a
# FQDN (for instance, Solaris).
#append_domain:

# Custom static grains for this minion can be specified here and used in SLS
# files just like all other grains. This example sets 4 custom grains, with
# the 'roles' grain having two values that can be matched against:
grains:
  owner:
    - readsoft
{% if pillar.get('roles') %}
  roles:
{% for role in pillar.get('roles', {}) %}
    - {{ role }}
{% endfor %}
{% endif %}
{% if pillar.get('region') %}
  region:
{% for region in pillar.get('region', {}) %}
    - {{ region }}
{% endfor %}
{% endif %}

# Where cache data goes
#cachedir: /var/cache/salt/minion

# Verify and set permissions on configuration directories at startup
#verify_env: True

# The minion can locally cache the return data from jobs sent to it, this
# can be a good way to keep track of jobs the minion has executed
# (on the minion side). By default this feature is disabled, to enable
# set cache_jobs to True
#cache_jobs: False

# set the directory used to hold unix sockets
#sock_dir: /var/run/salt/minion

# Backup files that are replaced by file.managed and file.recurse under
# 'cachedir'/file_backups relative to their original location and appended
# with a timestamp. The only valid setting is "minion". Disabled by default.
#
# Alternatively this can be specified for each file in state files:
#
# /etc/ssh/sshd_config:
#   file.managed:
#     - source: salt://ssh/sshd_config
#       - backup: minion
#
#backup_mode: minion

# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the time, in
# seconds, between those reconnection attempts.
#acceptance_wait_time: 10

# If this is set, the time between reconnection attempts will increase by 
# acceptance_wait_time seconds per iteration, up to this maximum. If this
# is not set, the time between reconnection attempts will stay constant.
#acceptance_wait_time_max: None

# When healing, a dns_check is run. This is to make sure that the originally
# resolved dns has not changed. If this is something that does not happen in
# your environment, set this value to False.
#dns_check: True

# Windows platforms lack posix IPC and must rely on slower TCP based inter-
# process communications. Set ipc_mode to 'tcp' on such systems
#ipc_mode: ipc
#
# Overwrite the default tcp ports used by the minion when in tcp mode
#tcp_pub_port: 4510
#tcp_pull_port: 4511

# The minion can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main minion configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option then the minion will log a warning message.
#
#
# Include a config file from some other path:
# include: /etc/salt/extra_config
#
# Include config from several files and directories:
# include:
#  - /etc/salt/extra_config
#  - /etc/roles/webserver

#####   Minion module management     #####
##########################################
# Disable specific modules. This allows the admin to limit the level of
# access the master has to the minion
#disable_modules: [cmd,test]
#disable_returners: []
#
# Modules can be loaded from arbitrary paths. This enables the easy deployment
# of third party modules. Modules for returners and minions can be loaded.
# Specify a list of extra directories to search for minion modules and
# returners. These paths must be fully qualified!
#module_dirs: []
#returner_dirs: []
#states_dirs: []
#render_dirs: []
#
# A module provider can be statically overwritten or extended for the minion
# via the providers option, in this case the default module will be
# overwritten by the specified module. In this example the pkg module will
# be provided by the yumpkg5 module instead of the system default.
#
# providers:
#   pkg: yumpkg5
#
# Enable Cython modules searching and loading. (Default: False)
#cython_enable: False
#

#####    State Management Settings    #####
###########################################
# The state management system executes all of the state templates on the minion
# to enable more granular control of system state management. The type of
# template and serialization used for state management needs to be configured
# on the minion, the default renderer is yaml_jinja. This is a yaml file
# rendered from a jinja template, the available options are:
# yaml_jinja
# yaml_mako
# yaml_wempy
# json_jinja
# json_mako
# json_wempy
#
#renderer: yaml_jinja
#
# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution, defaults to False
#failhard: False
#
# autoload_dynamic_modules Turns on automatic loading of modules found in the
# environments on the master. This is turned on by default; to turn off
# autoloading modules when states run, set this value to False
#autoload_dynamic_modules: True
#
# clean_dynamic_modules keeps the dynamic modules on the minion in sync with
# the dynamic modules on the master, this means that if a dynamic module is
# not on the master it will be deleted from the minion. By default this is
# enabled and can be disabled by changing this value to False
#clean_dynamic_modules: True
#
# Normally the minion is not isolated to any single environment on the master
# when running states, but the environment can be isolated on the minion side
# by statically setting it. Remember that the recommended way to manage
# environments is to isolate via the top file.
#environment: prod
#
# If using the local file directory, then the state top file name needs to be
# defined, by default this is top.sls.
#state_top: top.sls
#
# Run states when the minion daemon starts. To enable, set startup_states to:
# 'highstate' -- Execute state.highstate
# 'sls' -- Read in the sls_list option and execute the named sls files
# 'top' -- Read top_file option and execute based on that file on the Master 
startup_states: 'highstate'
#
# list of states to run when the minion starts up if startup_states is 'sls'
#sls_list: 
#  - edit.vim
#  - hyper
#
# top file to execute if startup_states is 'top'
#top_file: ''

#####     File Directory Settings    #####
##########################################
# The Salt Minion can redirect all file server operations to a local directory,
# this allows for the same state tree that is on the master to be used if
# copied completely onto the minion. This is a literal copy of the settings on
# the master but used to reference a local directory on the minion.

# Set the file client, the client defaults to looking on the master server for
# files, but can be directed to look at the local file directory setting
# defined below by setting it to local.
#file_client: remote

# The file directory works on environments passed to the minion, each environment
# can have multiple root directories, the subdirectories in the multiple file
# roots cannot match, otherwise the downloaded files will not be able to be
# reliably ensured. A base environment is required to house the top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
#
# Default:
#file_roots:
#  base:
#    - /srv/salt

# The hash_type is the hash to use when discovering the hash of a file in
# the minion directory, the default is md5, but sha1, sha224, sha256, sha384
# and sha512 are also supported.
#hash_type: md5

# The Salt pillar is searched for locally if file_client is set to local. If
# this is the case, and pillar data is defined, then the pillar_roots need to
# also be configured on the minion:
#pillar_roots:
#  base:
#    - /srv/pillar

######        Security settings       #####
###########################################
# Enable "open mode", this mode still maintains encryption, but turns off
# authentication, this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False

# Enable permissive access to the salt keys.  This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir.  To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure.
#permissive_pki_access: False

# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False, when set to False
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True
#
# The state_output setting changes if the output is the full multi line
# output for each changed state if set to 'full', but if set to 'terse'
# the output will be shortened to a single line.
#state_output: full
#
# Fingerprint of the master public key to double verify the master is valid,
# the master fingerprint can be found by running "salt-key -F master" on the
# salt master.
#master_finger: ''

######         Thread settings        #####
###########################################
# Disable multiprocessing support, by default when a minion receives a
# publication a new process is spawned and the command is executed therein.
#multiprocessing: True
multiprocessing: False

######         Logging settings       #####
###########################################
# The location of the minion log file.
# This can be a path for the log file, or, this can be, since 0.11.0, a system
# logger address, for example:
#   tcp://localhost:514/LOG_USER
#   tcp://localhost/LOG_DAEMON
#   udp://localhost:5145/LOG_KERN
#   udp://localhost
#   file:///dev/log
#   file:///dev/log/LOG_SYSLOG
#   file:///dev/log/LOG_DAEMON
#
# The above examples are self explanatory, but:
#   <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#
# Make sure you have a properly configured syslog or you won't get any warnings
#
#log_file: /var/log/salt/minion
#
#
# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', info', 'warning', 'error', 'critical'.
# Default: 'warning'
#log_level: warning
#
# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', info', 'warning', 'error', 'critical'.
# Default: 'warning'
#log_level_logfile:
#
# The date and time format used in log messages. Allowed date/time formating
# can be seen on http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
#
# The format of the console logging messages. Allowed formatting options can
# be seen on http://docs.python.org/library/logging.html#logrecord-attributes
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
#
# Logger levels can be used to tweak specific loggers logging levels.
# For example, if you want to have the salt library at the 'warning' level,
# but you still wish to have 'salt.modules' at the 'debug' level:
#   log_granular_levels: {
#     'salt': 'warning',
#     'salt.modules': 'debug'
#   }
#
#log_granular_levels: {}

######      Module configuration      #####
###########################################
# Salt allows for modules to be passed arbitrary configuration data, any data
# passed here in valid yaml format will be passed on to the salt minion modules
# for use. It is STRONGLY recommended that a naming convention be used in which
# the module name is followed by a . and then the value. Also, all top level
# data must be applied via the yaml dict construct, some examples:
#
# You can specify that all modules should run in test mode:
#test: True
#
# A simple value for the test module:
#test.foo: foo
#
# A list for the test module:
#test.bar: [baz,quo]
#
# A dict for the test module:
#test.baz: {spam: sausage, cheese: bread}

######      Update settings          ######
###########################################
# Using the features in Esky, a salt minion can both run as a frozen app and
# be updated on the fly. These options control how the update process
# (saltutil.update()) behaves.
#
# The url for finding and downloading updates. Disabled by default.
#update_url: False
#
# The list of services to restart after a successful update. Empty by default.
#update_restart_services: []

######      Keepalive settings        ######
############################################
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
# the OS. If connections between the minion and the master pass through
# a state tracking device such as a firewall or VPN gateway, there is
# the risk that it could tear down the connection between the master and minion
# without informing either party that their connection has been taken away.
# Enabling TCP Keepalives prevents this from happening.
#
# Overall state of TCP Keepalives, enable (1 or True), disable (0 or False) 
# or leave to the OS defaults (-1), on Linux, typically disabled. Default True, enabled.
tcp_keepalive: True
#
# How long before the first keepalive should be sent in seconds. Default 300
# to send the first keepalive after 5 minutes, OS default (-1) is typically 7200 seconds 
# on Linux see /proc/sys/net/ipv4/tcp_keepalive_time.
tcp_keepalive_idle: 60
#
# How many lost probes are needed to consider the connection lost. Default -1
# to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes.
#tcp_keepalive_cnt: -1
#
# How often, in seconds, to send keepalives after the first one. Default -1 to 
# use OS defaults, typically 75 seconds on Linux, see 
# /proc/sys/net/ipv4/tcp_keepalive_intvl.
tcp_keepalive_intvl: 60

######      Windows Software settings ######
############################################
# Location of the repository cache file on the master
# win_repo_cachefile: 'salt://win/repo/winrepo.p'

Master config

##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Master
# Values that are commented out but have no space after the comment are
# defaults that need not be set in the config. If there is a space after the
# comment that the value is presented as an example and is not the default.

# Per default, the master will automatically include all config files
# from master.d/*.conf (master.d is a directory in the same directory
# as the main master config file)
#default_include: master.d/*.conf

# The address of the interface to bind to
#interface: 0.0.0.0

# Whether the master should listen for IPv6 connections. If this is set to True,
# the interface option must be adjusted too (for example: "interface: '::'")
#ipv6: False

# The tcp port used by the publisher
#publish_port: 4505

# The user under which the salt master will run. Salt will update all
# permissions to allow the specified user to run the master. The exception is
# the job cache, which must be deleted if this user is changed.  If the
# modified files cause conflicts set verify_env to False.
#user: root

# Max open files
# Each minion connecting to the master uses AT LEAST one file descriptor for
# the master subscription connection. If enough minions connect, you might
# start seeing errors on the console (and then salt-master crashes):
#   Too many open files (tcp_listener.cpp:335)
#   Aborted (core dumped)
#
# By default this value will be the one of `ulimit -Hn`, i.e., the hard limit for
# max open files.
#
# If you wish to set a different value than the default one, uncomment and
# configure this setting. Remember that this value CANNOT be higher than the
# hard limit. Raising the hard limit depends on your OS and/or distribution;
# a good way to find the limit is to search the internet for (for example):
#   raise max open files hard limit debian
#
#max_open_files: 100000

# The number of worker threads to start. These threads are used to manage
# return calls made from minions to the master; if the master seems to be
# running slowly, increase the number of threads.
worker_threads: 16

# The port used by the communication interface. The ret (return) port is the
# interface used for the file server, authentication, job returns, etc.
#ret_port: 4506

# Specify the location of the daemon process ID file
#pidfile: /var/run/salt-master.pid

# The root directory prepended to these options: pki_dir, cachedir,
# sock_dir, log_file, autosign_file, autoreject_file, extension_modules,
# key_logfile, pidfile.
#root_dir: /

# Directory used to store public key data
#pki_dir: /etc/salt/pki/master

# Directory to store job and cache data
#cachedir: /var/cache/salt/master

# Verify and set permissions on configuration directories at startup
#verify_env: True

# Set the number of hours to keep old job information in the job cache
#keep_jobs: 24

# Set the default timeout for the salt command and API; the default is 5
# seconds.
timeout: 120
show_timeout: True

# The loop_interval option controls the seconds for the master's maintenance
# process check cycle. This process updates file server backends, cleans the
# job cache and executes the scheduler.
#loop_interval: 60

# Set the default outputter used by the salt command. The default is "nested"
#output: nested

# By default output is colored, to disable colored output set the color value
# to False
#color: True

# Set the directory used to hold unix sockets
#sock_dir: /var/run/salt/master

# The master can take a while to start up when lspci and/or dmidecode are used
# to populate the grains for the master. Enable if you want to see GPU hardware
# data for your master.
#
# enable_gpu_grains: False

# The master maintains a job cache. While this is a great addition, it can be
# a burden on the master for larger deployments (over 5000 minions).
# Disabling the job cache will make previously executed jobs unavailable to
# the jobs system and is not generally recommended.
#
#job_cache: True

# Cache minion grains and pillar data in the cachedir.
#minion_data_cache: True

# The master can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main master configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option then the master will log a warning message.
#
#
# Include a config file from some other path:
#include: /etc/salt/extra_config
#
# Include config from several files and directories:
#include:
#  - /etc/salt/extra_config

#####        Security settings       #####
##########################################
# Enable "open mode". This mode still maintains encryption but turns off
# authentication; this is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open mode
# you do so at your own risk!
#open_mode: False

# Enable auto_accept. This setting will automatically accept all incoming
# public keys from the minions. Note that this is insecure.
#auto_accept: False

# If the autosign_file is specified, incoming keys specified in the
# autosign_file will be automatically accepted. This is insecure.  Regular
# expressions as well as globbing lines are supported.
#autosign_file: /etc/salt/autosign.conf

# Works like autosign_file, but instead allows you to specify minion IDs for
# which keys will automatically be rejected. Will override both membership in
# the autosign_file and the auto_accept setting.
#autoreject_file: /etc/salt/autoreject.conf

# Enable permissive access to the salt keys.  This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir.  To make the access explicit, root must belong to the group
# you've given access to.  This is potentially quite insecure.
# If an autosign_file is specified, enabling permissive_pki_access will allow group access
# to that specific file.
#permissive_pki_access: False

# Allow users on the master access to execute specific commands on minions.
# This setting should be treated with care since it opens up execution
# capabilities to non-root users. By default this capability is completely
# disabled.
#
#client_acl:
#  larry:
#    - test.ping
#    - network.*
#

# Blacklist any of the following users or modules
#
# This example would blacklist all non-sudo users, including root, from
# running any commands. It would also blacklist any use of the "cmd"
# module.
# This is completely disabled by default.
#
#client_acl_blacklist:
#  users:
#    - root
#    - '^(?!sudo_).*$'   #  all non-sudo users
#  modules:
#    - cmd

# The external auth system uses the Salt auth modules to authenticate and
# validate users to access areas of the Salt system.
#
external_auth:
  pam:
    andre:
      - .*
      - '@runner'
      - '@wheel'
    johan:
      - .*
      - '@runner'
      - '@wheel'
    marcus:
      - .*
      - '@runner'
      - '@wheel'
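
# Granting '.*' plus runner and wheel access to every user above is very
# broad. A tighter sketch (same usernames; the module restrictions are
# suggestions, not Salt defaults):
#
#external_auth:
#  pam:
#    andre:
#      - test.*
#      - network.*
#      - '@runner'
#
# With eauth enabled, a user authenticates on the CLI with, for example,
# `salt -a pam '*' test.ping`, which prompts for the PAM credentials.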

# Time (in seconds) for a newly generated token to live. Default: 12 hours
#token_expire: 43200

# Allow minions to push files to the master. This is disabled by default, for
# security purposes.
#file_recv: False

# Set a hard-limit on the size of the files that can be pushed to the master.
# It will be interpreted as megabytes.
# Default: 100
#file_recv_max_size: 100

# Signature verification on messages published from the master.
# This causes the master to cryptographically sign all messages published to its event
# bus, and minions then verify that signature before acting on the message.
#
# This is False by default.
#
# Note that to facilitate interoperability with masters and minions that are different
# versions, if sign_pub_messages is True but a message is received by a minion with
# no signature, it will still be accepted, and a warning message will be logged.
# Conversely, if sign_pub_messages is False, but a minion receives a signed
# message it will be accepted, the signature will not be checked, and a warning message
# will be logged.  This behavior will go away in Salt 0.17.6 (or Hydrogen RC1, whichever
# comes first) and these two situations will cause the minion to throw an exception and
# drop the message.
#
# sign_pub_messages: False

#####    Master Module Management    #####
##########################################
# Manage how master side modules are loaded

# Add any additional locations to look for master runners
#runner_dirs: []

# Enable Cython for master side modules
#cython_enable: False

#####      State System settings     #####
##########################################
# The state system uses a "top" file to tell the minions what environment to
# use and what modules to use. The state_top file is defined relative to the
# root of the base environment as defined in "File Server settings" below.
#state_top: top.sls

# The master_tops option replaces the external_nodes option by creating
# a pluggable system for the generation of external top data. The external_nodes
# option is deprecated by the master_tops option.
# To gain the capabilities of the classic external_nodes system, use the
# following configuration:
# master_tops:
#   ext_nodes: <Shell command which returns yaml>
#
#master_tops: {}

# The external_nodes option allows Salt to gather data that would normally be
# placed in a top file. The external_nodes option is the executable that will
# return the ENC data. Remember that Salt will look for external nodes AND top
# files and combine the results if both are enabled!
#external_nodes: None

# The renderer to use on the minions to render the state data
#renderer: yaml_jinja

# The Jinja renderer can strip extra carriage returns and whitespace
# See http://jinja.pocoo.org/docs/api/#high-level-api
#
# If this is set to True the first newline after a Jinja block is removed
# (block, not variable tag!). Defaults to False, corresponds to the Jinja
# environment init variable "trim_blocks".
# jinja_trim_blocks: False
#
# If this is set to True leading spaces and tabs are stripped from the start
# of a line to a block. Defaults to False, corresponds to the Jinja
# environment init variable "lstrip_blocks".
# jinja_lstrip_blocks: False

# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution, defaults to False
#failhard: False

# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is printed.
# The state_verbose setting can be set to True or False, when set to False
# all data that has a result of True and no changes will be suppressed.
#state_verbose: True

# The state_output setting controls how output is displayed: 'full' shows the
# complete multi-line output for each changed state, while 'terse' shortens
# the output to a single line. If set to 'mixed', the output will be terse
# unless a state failed, in which case that output will be full.
#state_output: full

#####      File Server settings      #####
##########################################
# Salt runs a lightweight file server written in zeromq to deliver files to
# minions. This file server is built into the master daemon and does not
# require a dedicated port.

# The file server works with environments passed to the master; each
# environment can have multiple root directories, but the subdirectories in
# the multiple file roots cannot match, otherwise the downloaded files cannot
# be reliably guaranteed. A base environment is required to house the top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt
#   dev:
#     - /srv/salt/dev
#   prod:
#     - /srv/salt/prod

file_roots:
  base:
    - /srv/salt
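
# The base environment above must contain a top file. A minimal sketch —
# 'winbasics' is a hypothetical state name, not something from this setup:
#
# /srv/salt/top.sls:
#
#base:
#  '*':
#    - winbasics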

# The hash_type is the hash to use when discovering the hash of a file on
# the master server. The default is md5, but sha1, sha224, sha256, sha384
# and sha512 are also supported.
#hash_type: md5

# The buffer size in the file server can be adjusted here:
#file_buffer_size: 1048576

# A regular expression (or a list of expressions) that will be matched
# against the file path before syncing the modules and states to the minions.
# This includes files affected by the file.recurse state.
# For example, if you manage your custom modules and states in subversion
# and don't want all the '.svn' folders and content synced to your minions,
# you could set this to '/\.svn($|/)'. By default nothing is ignored.
#
#file_ignore_regex:
#  - '/\.svn($|/)'
#  - '/\.git($|/)'

# A file glob (or list of file globs) that will be matched against the file
# path before syncing the modules and states to the minions. This is similar
# to file_ignore_regex above, but works on globs instead of regex. By default
# nothing is ignored.
#
# file_ignore_glob:
#  - '*.pyc'
#  - '*/somefolder/*.bak'
#  - '*.swp'

# File Server Backend
# Salt supports a modular fileserver backend system. This system allows
# the salt master to link directly to third party systems to gather and
# manage the files available to minions. Multiple backends can be
# configured and will be searched for the requested file in the order in which
# they are defined here. The default setting only enables the standard backend
# "roots" which uses the "file_roots" option.
#
fileserver_backend:
  - git
  - roots
#
# To use multiple backends, list them in the order they are searched:
#
#fileserver_backend:
#  - git
#  - roots
#
# Uncomment the line below if you do not want the file_server to follow
# symlinks when walking the filesystem tree. This is set to True
# by default. Currently this only applies to the default roots
# fileserver_backend.
#
#fileserver_followsymlinks: False
#
# Uncomment the line below if you do not want symlinks to be
# treated as the files they are pointing to. By default this is set to
# False. By uncommenting the line below, any detected symlink while listing
# files on the Master will not be returned to the Minion.
#
#fileserver_ignoresymlinks: True
#
# By default, the Salt fileserver recurses fully into all defined environments
# to attempt to find files. To limit this behavior so that the fileserver only
# traverses directories with SLS files and special Salt directories like _modules,
# enable the option below. This might be useful for installations where a file root
# has a very large number of files and performance is impacted. Default is False.
#
# fileserver_limit_traversal: False
#
# The fileserver can fire events off every time the fileserver is updated.
# These are disabled by default, but can be easily turned on by setting this
# flag to True.
#fileserver_events: False
#
# Git fileserver backend configuration
# When using the git fileserver backend at least one git remote needs to be
# defined. The user running the salt master will need read access to the repo.
#
gitfs_remotes:
  - git@github.com:readsoftab/salt-states.git
# - git://github.com/saltstack/salt-states.git
#  - file:///var/git/saltmaster
#
# The gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the gitfs backend. You might want to set this to
# False if you're using a git backend that uses a self-signed certificate, but
# keep in mind that setting this flag to anything other than the default of
# True is a security concern; you may want to try using the ssh transport instead.
#gitfs_ssl_verify: True
#
# The repos will be searched in order to find the file requested by a client
# and the first repo to have the file will return it.
# When using the git backend branches and tags are translated into salt
# environments.
# Note:  file:// repos will be treated as a remote, so refs you want used must
# exist in that repo as *local* refs.
#
# The gitfs_root option gives the ability to serve files from a subdirectory
# within the repository. The path is defined relative to the root of the
# repository and defaults to the repository root.
#gitfs_root: somefolder/otherfolder

#####         Pillar settings        #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.

#pillar_roots:
#  base:
#    - /srv/pillar/base
#  dev:
#    - /srv/pillar/dev
#  prod:
#    - /srv/pillar/prod

ext_pillar:
  - git: master git@github.com:readsoftab/salt-pillars.git

# The pillar_gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the pillar gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate but
# keep in mind that setting this flag to anything other than the default of True
# is a security concern, you may want to try using the ssh transport.
#pillar_gitfs_ssl_verify: True

# The pillar_opts option adds the master configuration file data to a dict in
# the pillar called "master". This is used to set simple configurations in the
# master config file that can then be used on minions.
#pillar_opts: True

#####          Syndic settings       #####
##########################################
# The Salt syndic is used to pass commands through a master from a higher
# master. Using the syndic is simple: if this is a master that will have
# syndic server(s) below it, set the "order_masters" setting to True; if this
# is a master that will be running a syndic daemon for passthrough, the
# "syndic_master" setting needs to be set to the location of the master server
# to receive commands from.

# Set the order_masters setting to True if this master will command lower
# masters' syndic interfaces.
#order_masters: False

# If this master will be running a salt syndic daemon, syndic_master tells
# this master where to receive commands from.
#syndic_master: masterofmaster

# This is the 'ret_port' of the MasterOfMaster
#syndic_master_port: 4506

# PID file of the syndic daemon
#syndic_pidfile: /var/run/salt-syndic.pid

# LOG file of the syndic daemon
#syndic_log_file: syndic.log

#####      Peer Publish settings     #####
##########################################
# Salt minions can send commands to other minions, but only if the minion is
# allowed to. By default "Peer Publication" is disabled, and when enabled it
# is enabled for specific minions and specific commands. This allows secure
# compartmentalization of commands based on individual minions.

# The configuration uses regular expressions to match minions and then a list
# of regular expressions to match functions. The following will allow the
# minion authenticated as foo.example.com to execute functions from the test
# and pkg modules.
#
#peer:
#  foo.example.com:
#    - test.*
#    - pkg.*
#
# This will allow all minions to execute all commands:
#
#peer:
#  .*:
#    - .*
#
# This is not recommended, since it would allow anyone who gets root on any
# single minion to instantly have root on all of the minions!

# Minions can also be allowed to execute runners from the salt master.
# Since executing a runner from the minion could be considered a security risk,
# it needs to be enabled. This setting functions just like the peer setting
# except that it opens up runners instead of module functions.
#
# All peer runner support is turned off by default and must be enabled before
# using. This will enable all peer runners for all minions:
#
#peer_run:
#  .*:
#    - .*
#
# To enable just the manage.up runner for the minion foo.example.com:
#
#peer_run:
#  foo.example.com:
#    - manage.up

#####         Mine settings     #####
##########################################
# Restrict mine.get access from minions. By default any minion has full access
# to all mine data in the master cache. In the ACL definition below, only PCRE
# matches are allowed.
#
# mine_get:
#   .*:
#     - .*
#
# The example below enables minion foo.example.com to get 'network.interfaces'
# mine data only, minions web* to get all network.* and disk.* mine data, and
# all other minions to get no mine data.
#
# mine_get:
#   foo.example.com:
#     - network.interfaces
#   web.*:
#     - network.*
#     - disk.*

#####         Logging settings       #####
##########################################
# The location of the master log file
# The master log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
log_file: /var/log/salt/master
#log_file: file:///dev/log
#log_file: udp://loghost:10514

#log_file: /var/log/salt/master
#key_logfile: /var/log/salt/key

# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
#log_level: warning

# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
log_level_logfile: debug

# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'

# This can be used to control logging levels more specifically.  This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
#   log_granular_levels:
#     'salt': 'warning',
#     'salt.modules': 'debug'
#
#log_granular_levels: {}

#####         Node Groups           #####
##########################################
# Node groups allow for logical groupings of minion nodes.
# A group consists of a group name and a compound target.
#
#nodegroups:
#  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
#  group2: 'G@os:Debian and foo.domain.com'

#####     Range Cluster settings     #####
##########################################
# The range server (and optional port) that serves your cluster information
# https://github.com/grierj/range/wiki/Introduction-to-Range-with-YAML-files
#
#range_server: range:80

#####     Windows Software Repo settings #####
##############################################
# Location of the repo on the master
win_repo: '/srv/salt/win/repo'

# Location of the master's repo cache file
win_repo_mastercachefile: '/srv/salt/win/repo/winrepo.p'

# List of git repositories to include with the local repo
win_gitrepos:
  - 'git@github.com:readsoftab/salt-winrepo.git'
  # official winrepo
  #- 'https://github.com/saltstack/salt-winrepo.git'

# Web ui settings
halite:
  level: 'debug'
  server: 'cherrypy'
  host: '0.0.0.0'
  port: '8080'
  cors: False
  tls: True
  certpath: '/etc/pki/tls/certs/localhost.crt'
  keypath: '/etc/pki/tls/certs/localhost.key'
  pempath: '/etc/pki/tls/certs/localhost.pem'
mechalaris commented 9 years ago

Master info below:

Salt: 2014.7.0
Python: 2.7.8 (default, Oct 20 2014, 15:05:19)
Jinja2: 2.7.3
M2Crypto: 0.21.1
msgpack-python: 0.4.2
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.11
ioflo: Not Installed
PyZMQ: 14.3.1
RAET: Not Installed
ZMQ: 4.0.4
Mako: 1.0.0

UtahDave commented 9 years ago

There's a bugfix release for Windows available, 2014.7.1, that fixes a memory leak and connection issues. I'm not sure if it will help with the Azure-specific problem you're seeing, but it should help in general.

andrejohansson commented 9 years ago

The bugfix release doesn't seem to be available for 64 bit windows. I'm looking here http://docs.saltstack.com/en/latest/topics/installation/windows.html

I also noticed today when running 2014.7.0 in console that the minion is continuously running the command saltutil.find_job at maybe a one second interval. This becomes quite spammy and I have the feeling that it slows down the minions. Even if I run a simple "test.ping" command towards a Windows minion I have to wait 15-20 seconds to get a response. What I can see now is that the ping command becomes queued behind maybe 5 find_job commands and hence the response to the master becomes slow.

Could this be because windows is running single threaded too (multiprocessing: false)?

What would you say is the normal response time of a healthy windows minion for the test.ping command?
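
For reference, a sketch of the minion-config knobs that might affect this (the option names are from the default minion config, including the keepalive block quoted at the top of this issue; whether `multiprocessing: True` behaves well on Windows with 2014.7 is exactly the open question here):

```
# c:\salt\conf\minion -- experimental sketch, not a recommended default
multiprocessing: True      # Windows historically defaulted to False
tcp_keepalive: True        # keep the ZeroMQ connection alive through NAT
tcp_keepalive_idle: 60
tcp_keepalive_intvl: 60
```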

Some output from my minion:

[INFO    ] Starting a new job with PID 22124
[INFO    ] Returning information for job: 20150123122638559569
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[INFO    ] User root Executing command saltutil.find_job with jid 20150123122643620029
[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20150123122643620029', 'tgt': '*', 'ret': '', 'user': 'root', 'arg': ['20150123071954328431'], 'fun': 'saltutil.find_job'}
[INFO    ] Starting a new job with PID 22124
[INFO    ] Returning information for job: 20150123122643620029
[INFO    ] User root Executing command saltutil.find_job with jid 20150123122648705465
[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20150123122648705465', 'tgt': '*', 'ret': '', 'user': 'root', 'arg': ['20150123071954328431'], 'fun': 'saltutil.find_job'}
[INFO    ] Starting a new job with PID 22124
[INFO    ] Returning information for job: 20150123122648705465
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[INFO    ] User root Executing command saltutil.find_job with jid 20150123122653743518
[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20150123122653743518', 'tgt': '*', 'ret': '', 'user': 'root', 'arg': ['20150123071954328431'], 'fun': 'saltutil.find_job'}
[INFO    ] Starting a new job with PID 22124
[INFO    ] Returning information for job: 20150123122653743518
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[INFO    ] User root Executing command saltutil.find_job with jid 20150123122658788707
[DEBUG   ] Command details {'tgt_type': 'glob', 'jid': '20150123122658788707', 'tgt': '*', 'ret': '', 'user': 'root', 'arg': ['20150123071954328431'], 'fun': 'saltutil.find_job'}
[INFO    ] Starting a new job with PID 22124
[INFO    ] Returning information for job: 20150123122658788707
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: c:\salt\conf\pki\minion\minion.pem
[INFO    ] User root Executing command saltutil.find_job with jid 20150123122703840746
basepi commented 9 years ago

Hrm, that's strange. Usually find_job is only triggered by a master checking if a job is still running. It may be that your master has some sort of job which is stuck and is, as a result, continually triggering these find_job jobs.

mechalaris commented 9 years ago

Would you be able to provide some more info on the bugfix please?

stale[bot] commented 6 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.