multani opened this issue 8 years ago
Ping @terminalmage, can I get your input on this issue?
I am able to replicate this every time I run state.orchestrate.
I see the attempt to update the gitfs cache:
[DEBUG ] pygit2 gitfs_provider enabled
[DEBUG ] LazyLoaded git.envs
[DEBUG ] Updating git fileserver cache
[DEBUG ] Set update lock for gitfs remote 'https://github.com/saltstack-formulas/salt-formula.git'
[DEBUG ] Fetching gitfs remote 'https://github.com/saltstack-formulas/salt-formula.git'
[DEBUG ] gitfs remote 'https://github.com/saltstack-formulas/salt-formula.git' is up-to-date
[DEBUG ] Removed update lock for gitfs remote 'https://github.com/saltstack-formulas/salt-formula.git'
Is there a way to set this so this does not occur each time?
There's no setting that lets you do this. I'll need to investigate.
In the duplicate issue (#35585) I requested that an option be added to disable the refresh of gitfs remotes with salt-call, but thinking about this some more, it might be a better idea to instead implement an option that sets how often the cache is invalidated and updated.
According to the documentation, the fileserver backends are updated every 60 seconds. Even when running in master mode this seems a bit too eager (at least in our environment). It would be good if this setting could be adjusted to any value (with '0' meaning "never"), with the setting being honoured no matter how Salt is invoked (i.e. using either master or standalone mode).
This way, if I know that the salt formulas I'm using change only occasionally, I could set the fileserver backends to update once a day, with the option to invoke runner.fileserver.update manually if I need to force an update of the backends.
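To illustrate, such a knob could look something like this in the master config (the option name and the special '0' value are just my proposal at this point, not an existing setting):

```yaml
# /etc/salt/master (proposed sketch, not a currently existing option):
# poll the fileserver backends at most once a day; 0 would mean "never".
gitfs_update_interval: 86400
```

This would be combined with running salt-run fileserver.update by hand whenever a forced refresh is needed.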
I also thought about it yesterday and, although we are not there yet, the ideal situation would be to be able to completely disable preemptively polling the Git repositories, and to have a way to trigger this polling on demand when a push has been made. As @jerrykan said, 99% of the time there are no changes to pull (in our case at least).
@multani while I agree it would be ideal to only trigger an update once a push has been made, this may not always be possible (i.e. you may not be using a git host that supports triggering such an event). So polling would still be useful, but would ideally be optional (for the use case you mention) and adjustable (so polling doesn't occur too often).
FYI, I'm now running 2016.3.4 and running the orchestration still does a Git fetch on all the repositories configured on the master... BUT, although there were changes to fetch from the Git remotes, these changes were NOT taken into account during the orchestration. Running salt-run fileserver.update first, before the orchestration, does work though.
(To be clear: I saw the "Fetching gitfs remote ..." messages as explained in the first post, but there was no indication of pulled changes. Running salt-run fileserver.update afterwards did indicate that this time it got the changes, but I'm 200% sure they were already on the remote at the time I did the first orchestration.)
@multani correct me if I'm wrong, but this newest issue you just brought up seems to be a separate issue where the orchestration is not updating the gitfs. Would you mind opening another issue with more details if this is the case?
Also, for now I'll label this as a feature until @terminalmage can dive into this more. Thanks.
Hi,
I found the same problem with svnfs:
[DEBUG ] Updating svn fileserver cache
[DEBUG ] Set lock for http://mysvn
[DEBUG ] svnfs is fetching from http://mysvn
Should I open a new issue? Is there a way to prevent this refresh on Salt 2017.1?
Getting subscribed to this... I've seen my calls to things like salt-run manage.status get a lot slower moving from 2016.11 to 2018.3.0 recently, and the gitfs syncing going on during every single salt-run looks like a big part of the time. It looks like I was able to mitigate it a bit by using a blacklist of branches to exclude, but I didn't time a before/after. Is this the same root cause under the hood?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.
This is still an issue that hasn't been addressed.
Thank you for updating this issue. It is no longer marked as stale.
+1, still slow; it tries to update when running state.orch as above.
Same problem with very slow salt-ssh execution when using gitfs repos: it rebuilds them on every execution (even with gitfs_update_interval: 86400). For example, the simple command
time salt-ssh brick-tmp1 test.ping
executes in about 35 seconds!
By adding the -l debug flag I see a lot of strings like:
[DEBUG ] Current http.sslVerify for gitfs remote 'git://github.com/saltstack-formulas/nginx-formula': true (desired: true)
[DEBUG ] Current fetch URL for gitfs remote 'git://github.com/saltstack-formulas/php-formula': git://github.com/saltstack-formulas/php-formula (desired: git://github.com/saltstack-formulas/php-formula)
[DEBUG ] Current refspecs for gitfs remote 'git://github.com/saltstack-formulas/php-formula': ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*'] (desired: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*'])
...
[DEBUG ] Updating gitfs fileserver cache
[DEBUG ] Re-using gitfs object for process 77007
[DEBUG ] Set update lock for gitfs remote 'git://github.com/saltstack-formulas/salt-formula'
[DEBUG ] Fetching gitfs remote 'git://github.com/saltstack-formulas/salt-formula'
[DEBUG ] Popen(['git', 'fetch', '-v', 'origin'], cwd=/var/cache/salt/master/gitfs/3c36313a8faa2a7343c915489b39c009120f2e71e3e41ce151f4d4386d09d7df, universal_newlines=True, shell=None, istream=None)
[DEBUG ] Removed update lock for gitfs remote 'git://github.com/saltstack-formulas/salt-formula'
[DEBUG ] Set update lock for gitfs remote 'git://github.com/saltstack-formulas/docker-formula'
[DEBUG ] Fetching gitfs remote 'git://github.com/saltstack-formulas/docker-formula'
[DEBUG ] Popen(['git', 'fetch', '-v', 'origin'], cwd=/var/cache/salt/master/gitfs/3a51c3f210b582f945814687f90f6c318c2461d03a7d9f7d2134a0ca340b55f9, universal_newlines=True, shell=None, istream=None)
[DEBUG ] Removed update lock for gitfs remote 'git://github.com/saltstack-formulas/docker-formula'
[DEBUG ] Set update lock for gitfs remote 'git://github.com/saltstack-formulas/pam-formula'
...
[DEBUG ] Re-using gitfs object for process 76103
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/salt-formula duration=0.0003561973571777344 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/docker-formula duration=0.0002968311309814453 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/pam-formula duration=0.00031256675720214844 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/systemd-formula duration=0.00028514862060546875 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/users-formula duration=0.00030159950256347656 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/mysql-formula duration=0.0002944469451904297 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/postgres-formula duration=0.004677772521972656 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/prometheus-formula duration=0.00039696693420410156 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/grafana-formula duration=0.00029754638671875 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/nginx-formula duration=0.00029540061950683594 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/php-formula duration=0.0002942085266113281 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/apache-formula duration=0.00032329559326171875 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/node-formula duration=0.0002810955047607422 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/fail2ban-formula duration=0.000293731689453125 seconds
[PROFILE ] gitfs file_name cache rebuild repo=git://github.com/saltstack-formulas/letsencrypt-formula duration=0.0002884864807128906 seconds
[DEBUG ] Re-using gitfs object for process 76103
Version is 3002.6+ds-1
This needs investigation as to what could be done - we will review a PR if anyone on this issue is willing and able to submit one! :)
I've just hit this issue again when trying to set up some CI stuff. After a short bit of time digging through the code with the debugger, I noticed that the __fs_update config option gets set to True once the gitfs remotes are updated. So, for a really hacky workaround, you could place __fs_update: True in the Salt configuration to prevent updating the remotes each time Salt is called. Though if you are considering this sort of workaround, a better option might be to create a localconfig file and only include the setting as necessary.
i.e., create the file /etc/salt/skip_remote_update with the following contents:
__fs_update: True
and include it using the localconfig option when you want to avoid updating the remotes:
salt-call state.apply --output-diff localconfig=/etc/salt/skip_remote_update
This may have other unintended side effects, but it may be a workable workaround until a better solution is implemented in the Salt codebase.
I haven't read the full comment history; I was directed to this by a colleague just after we implemented a workaround for this issue on 3004. We have 3 repos used with gitfs for files and pillars. gitfs was causing pretty much every minion action to be CPU-bound on a single core of the master. With ~24GB / 20,000 files in Git and 1,300 minions on a huge Salt master (64 cores, 128GB RAM), this was completely unusable.
Ended up ripping all of the gitfs out and replacing it with a very small shell script which will pull the updates from our repositories only when our CI pipeline notices a new commit.
This results in an order of magnitude better performance.
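For what it's worth, that kind of script can be sketched roughly like this (the MIRROR_ROOT path and the update_mirror helper are my own illustration, not the actual script, which isn't shown in this thread; CI would invoke it when it notices a new commit):

```shell
#!/bin/sh
# Sketch of an out-of-band mirror updater (illustrative only).
# MIRROR_ROOT is an assumed location for the local bare mirrors.
MIRROR_ROOT=/srv/salt-mirrors

update_mirror() {
    # Create the bare mirror on first run, fetch into it afterwards.
    url=$1
    name=$(basename "$url" .git)
    dir="$MIRROR_ROOT/$name.git"
    mkdir -p "$MIRROR_ROOT"
    if [ -d "$dir" ]; then
        git --git-dir="$dir" fetch --prune --quiet origin
    else
        git clone --mirror --quiet "$url" "$dir"
    fi
}

# Example (run by CI on a new commit):
# update_mirror https://github.com/saltstack-formulas/salt-formula.git
```

The master's gitfs_remotes (or plain file_roots) can then point at the local mirrors, so nothing in the request path ever has to talk to the remote Git hosts.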
Description of Issue/Question
Running the state.orchestrate runner triggers, each time, a refresh of the gitfs fileserver backends configured on the Salt master.

I'm considering cloning the remote Git repositories locally on the Salt master as bare repositories, scheduling a periodic out-of-band synchronization between the actual remotes and the local repositories, and finally configuring the Salt master to use these local repositories instead.
I wonder if there's something wrong or if that should be the way to go.
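For illustration, the local-mirror setup I have in mind would look something like this (the /srv/salt-mirrors path is hypothetical):

```yaml
# /etc/salt/master (sketch): point gitfs at local bare mirrors that are
# synchronised out-of-band, so polling never has to leave the box.
fileserver_backend:
  - gitfs
gitfs_remotes:
  - file:///srv/salt-mirrors/salt-formula.git
```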
Setup
A sample of my Salt master configuration file:
Then, launching an Orchestration does the following:
So, even triggering an orchestration which doesn't actually exist takes 41 seconds, of which 39 seconds are spent updating the Git repositories (which were refreshed only seconds ago).
Steps to Reproduce Issue
(Include debug logs if possible and relevant.)
Versions Report
(Provided by running salt --versions-report. Please also mention any differences in master/minion versions.)

(I don't want to upgrade to 2016 yet, as it still has some more serious issues that are affecting us.)