saltstack / salt

Software to automate the management and configuration of any infrastructure or application at scale. Get access to the Salt software package repository here:
https://repo.saltproject.io/
Apache License 2.0

Don't run state.highstate if not all gitfs remotes are available in cache #31913

Closed · retrry closed this issue 6 years ago

retrry commented 8 years ago

Description of Issue/Question

I'm using SaltStack with Vagrant and have hit a problem where, on the first run, state.highstate fails because not all gitfs roots have been pulled from their repositories yet.

My Salt master configuration relies heavily on saltstack-formulas, which I keep in git repositories (I have configured 8 gitfs_remotes), and all of them are used when configuring the development environment. Vagrant runs state.highstate right after installing Salt, but on the first run salt-master needs time to pull everything from the git repositories, so state.highstate fails (see attached log). I think salt-master should wait for all gitfs remotes to be available in the cache on the first run.
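For reference, the relevant part of the master config looks roughly like this; the formula repositories shown are illustrative stand-ins, not my exact list of 8 remotes:

```yaml
# /etc/salt/master (excerpt) -- illustrative gitfs setup, not the exact remotes in use
fileserver_backend:
  - roots
  - gitfs

gitfs_remotes:
  - https://github.com/saltstack-formulas/rabbitmq-formula.git
  - https://github.com/saltstack-formulas/nginx-formula.git
  - https://github.com/saltstack-formulas/uwsgi-formula.git
```

Until the master has fetched each of these remotes at least once, SLS targets such as rabbitmq or nginx.ng simply do not exist in the base environment, which is what the errors in the log below show.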

Copying salt minion config to vm.
Copying salt master config to vm.
Uploading minion keys.
Uploading master keys.
Checking if salt-minion is installed
salt-minion was not found.
Checking if salt-call is installed
salt-call was not found.
Checking if salt-master is installed
salt-master was not found.
Bootstrapping Salt... (this may take a while)
Salt successfully configured and installed!
run_overstate set to false. Not running state.overstate.
Calling state.highstate... (this may take a while)
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

salt '*' state.highstate --verbose --log-level=debug --no-color

Stdout from the command:

Executing job with jid 20160218131848608336
-------------------------------------------

development_environment:
    Data failed to compile:
----------
    No matching sls found for 'rabbitmq' in env 'base'
----------
    No matching sls found for 'nginx.ng' in env 'base'
----------
    No matching sls found for 'uwsgi' in env 'base'
----------
    No matching sls found for 'uwsgi.emperor' in env 'base'

Stderr from the command:

[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: development_environment
[DEBUG   ] Missing configuration file: /root/.saltrc
[DEBUG   ] Configuration file path: /etc/salt/master
[DEBUG   ] Reading configuration from /etc/salt/master
[DEBUG   ] Using cached minion ID from /etc/salt/minion_id: development_environment
[DEBUG   ] Missing configuration file: /root/.saltrc
[DEBUG   ] MasterEvent PUB socket URI: ipc:///var/run/salt/master/master_event_pub.ipc
[DEBUG   ] MasterEvent PULL socket URI: ipc:///var/run/salt/master/master_event_pull.ipc
[DEBUG   ] Sending event - data = {'_stamp': '2016-02-18T13:18:48.601507'}
[DEBUG   ] LazyLoaded local_cache.get_load
[DEBUG   ] get_iter_returns for jid 20160218131848608336 sent to set(['development_environment']) will timeout at 13:18:53.612326
[DEBUG   ] jid 20160218131848608336 return from development_environment
[DEBUG   ] LazyLoaded highstate.output
[DEBUG   ] jid 20160218131848608336 found all minions set(['development_environment'])
ERROR: Minions returned with non-zero exit code

I've opened a feature request in the Vagrant project for the ability to set a delay before running state.highstate, but maybe it is possible to fix this in SaltStack? https://github.com/mitchellh/vagrant/issues/7053

jfindlay commented 8 years ago

@retrry, thanks for reporting. I am not sure there is a reasonable way to do this with salt, since the minion requesting the highstate does not know what state the master is in. Your best option may be to wait for the master to finish bootstrapping before setting up any minions.
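One way such a wait could be scripted on the master side (a rough sketch, not a built-in Salt mechanism; it assumes salt-run is on the PATH, the master is already running, and the expected top-level SLS paths are known up front):

```python
#!/usr/bin/env python
"""Rough sketch: block until the master's fileserver can serve the expected SLS files.

Assumptions (not part of Salt itself): salt-run is on the PATH, the master is
already running, and EXPECTED_SLS lists the top-level SLS paths the highstate needs.
"""
import subprocess
import sys
import time

EXPECTED_SLS = ['rabbitmq/init.sls', 'nginx/ng/init.sls', 'uwsgi/init.sls']
TIMEOUT = 300    # give up after five minutes
INTERVAL = 10    # seconds between checks

deadline = time.time() + TIMEOUT
while time.time() < deadline:
    # Force all fileserver backends (including gitfs) to fetch, then list what is served.
    subprocess.call(['salt-run', 'fileserver.update'])
    listing = subprocess.check_output(['salt-run', 'fileserver.file_list']).decode()
    if all(path in listing for path in EXPECTED_SLS):
        sys.exit(0)          # everything is cached; safe to run state.highstate now
    time.sleep(INTERVAL)

sys.exit('gitfs remotes were still not fully cached after {0} seconds'.format(TIMEOUT))
```

With something like this gating the provisioning step (for example, with Vagrant's salt provisioner configured not to run the highstate itself), state.highstate would only be called once the fileserver can actually serve the formulas.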

stale[bot] commented 6 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

h3 commented 6 years ago

I faced the same issue: I had to call highstate after VM provisioning but couldn't find a way to wait for gitfs to be available.

If it can help someone, my solution turned out to be a custom script that determines when all remotes listed in the master config are in the cache by looking at /var/cache/salt/master/gitfs/remote_map.txt (a simplified sketch of the idea follows the link below).

https://gitlab.com/snippets/1764021
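The core of the idea, as a simplified sketch rather than the exact snippet linked above: it assumes the master config is plain YAML whose gitfs_remotes entries are plain URL strings, and that remote_map.txt lists one cached remote per non-comment line.

```python
#!/usr/bin/env python
"""Simplified sketch of the remote_map.txt check described above.

Assumptions: /etc/salt/master is plain YAML whose gitfs_remotes entries are plain
URL strings, and remote_map.txt lists one cached remote per non-comment line.
"""
import os
import time

import yaml

MASTER_CONFIG = '/etc/salt/master'
REMOTE_MAP = '/var/cache/salt/master/gitfs/remote_map.txt'


def configured_remotes():
    """Return the gitfs remote URLs listed in the master config."""
    with open(MASTER_CONFIG) as cfg:
        conf = yaml.safe_load(cfg) or {}
    # Remotes with per-remote options appear as dicts; this sketch only handles plain URLs.
    return [r for r in conf.get('gitfs_remotes', []) if not isinstance(r, dict)]


def cached_remotes():
    """Return the contents of remote_map.txt minus comment lines."""
    if not os.path.exists(REMOTE_MAP):
        return ''
    with open(REMOTE_MAP) as fp:
        return ''.join(line for line in fp if not line.startswith('#'))


def wait_for_gitfs(timeout=300, interval=5):
    """Poll until every configured remote shows up in the cache map, or time out."""
    remotes = configured_remotes()
    deadline = time.time() + timeout
    while time.time() < deadline:
        cached = cached_remotes()
        if remotes and all(url in cached for url in remotes):
            return True
        time.sleep(interval)
    return False


if __name__ == '__main__':
    raise SystemExit(0 if wait_for_gitfs() else 'gitfs cache is not ready')
```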

rallytime commented 6 years ago

@h3 It might be better to open a new issue with all of your relevant issue information. That way we can get some fresh eyes on it.