Closed: bmess closed this issue 10 years ago
Thanks for the report! We'll get this fixed.
Hi @bmess
I've been trying this out, but I'm not able to reproduce the issue (yikes!). Hopefully it's not something I'm missing. Posting my config here:
```
➜ salt git:(53d3943) salt --versions-report
Salt: 2014.7.0rc1
Python: 2.7.6 (default, Jan 22 2014, 18:56:28)
Jinja2: 2.7.2
M2Crypto: 0.22
msgpack-python: 0.4.0
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 14.0.1
RAET: Not Installed
ZMQ: 4.0.3
Mako: Not Installed
➜ salt git:(53d3943)
```
The following external git pillars are configured on my box:
```yaml
ext_pillar:
  - git: master https://github.com/saltstack/pillar1.git
  - git: master git@github.com:saltstack/pillar2.git
```
The output of `pillar.data` is shown below (snipped for brevity):
```
➜ salt git:(53d3943) salt compute.home pillar.data
compute.home:
    ----------
    abc:
        def
    info:
        bar
    ...snip
```
Are you able to cross-check this configuration and let us know how it differs from your setup?
Thank you for reporting this!
Salt version info:
```
salt --versions-report
Salt: 2014.1.5
Python: 2.6.6 (r266:84292, Jul 12 2013, 22:12:29)
Jinja2: unknown
M2Crypto: 0.20.2
msgpack-python: 0.1.9.final
msgpack-pure: Not Installed
pycrypto: 2.6
PyYAML: 3.10
PyZMQ: 2.2.0.1
ZMQ: 3.2.2
```
Snippet from our master config (names changed to protect the innocent):
```yaml
ext_pillar:
  - git: master git@local-github-enterprise:config/pillar-test1.git
  - git: master git@local-github-enterprise:config/pillar-test2.git
```
Snippets of pillar data:

pillar-test1 top.sls:
```yaml
base:
  salt-test-vm.localnetwork:
    - foo
```
pillar-test1 foo.sls:
```yaml
foo: baz
```
pillar-test2 top.sls:
```yaml
base:
  salt-test-vm.localnetwork:
    - bar
```
pillar-test2 bar.sls:
```yaml
bar: qux
```
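Since both top files target the same minion under the `base` environment, the expected result is a single pillar containing keys from both repos. A minimal Python sketch of that expected merge (not Salt's actual implementation, just an illustration of the intended behavior):

```python
# Sketch only: each ext_pillar source yields a dict of pillar data,
# and the results should be merged so keys from every repo survive.
def merge_ext_pillars(sources):
    merged = {}
    for data in sources:
        merged.update(data)  # later sources win on key conflicts
    return merged

pillar_test1 = {"foo": "baz"}  # rendered from pillar-test1's foo.sls
pillar_test2 = {"bar": "qux"}  # rendered from pillar-test2's bar.sls

print(merge_ext_pillars([pillar_test1, pillar_test2]))
# both keys present: {'foo': 'baz', 'bar': 'qux'}
```

The bug described in this issue is that one of the two dicts never makes it into the merge at all.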
OK, so now some output with this configuration:
```
[root@salt-test-vm salt]# salt-call pillar.item foo
local:
    ----------
    foo:
        baz
```
And now bar:
```
salt-call pillar.item bar
local:
    ----------
```
And now, just for fun, flip the order of the external pillars in the master config:
```yaml
ext_pillar:
  - git: master git@local-github-enterprise:config/pillar-test2.git
  - git: master git@local-github-enterprise:config/pillar-test1.git
```
And on our test vm, we get:
```
[root@salt-test-vm salt]# salt-call pillar.item foo
local:
    ----------
[root@salt-test-vm salt]# salt-call pillar.item bar
local:
    ----------
    bar:
        qux
```
Please let me know if you need anything else. We appreciate you looking into it.
@bmess Would you mind testing with the same repos as @pass-by-value? They're public. I just want to see whether our example is missing the pieces needed to reproduce the problem; if you can reproduce it with our data, we'll know something else is going on.
Hey @basepi, no problem. I'm also trying to match his version number locally on a VM today. I hope to have results, but I'm packed with meetings in the afternoon, so I'll do my best.
Just to keep the dialog going:
We have tested your pillar URLs in our current config with our current Salt version, with no success.
@pass-by-value is now checking salt-call vs. salt to see if there are any differences.
Note: we've used both masterless minion and master->minion relationships in testing.
Did you mean to say you've used both masterless minion and master->minion relationships? Just wanted to clarify. Glad @pass-by-value is helping.
@bmess can you test on the latest 2014.1.10?
@basepi I wanted to make sure it was known we have tried both options (masterless as well as master + minion)
@UtahDave will do! Hopefully before the end of the day
Hey guys, using the testing version (2014.1.10) I get an infinite loop:
```
[DEBUG ] Results of YAML rendering:
OrderedDict([('base', OrderedDict([('*', ['data'])]))])
[DEBUG ] Jinja search path: ['/var/cache/salt/minion/pillar_gitfs/1']
[DEBUG ] Rendered data from file: /var/cache/salt/minion/pillar_gitfs/1/data.sls:
info: bar
[DEBUG ] Results of YAML rendering:
OrderedDict([('info', 'bar')])
[DEBUG ] Updating fileserver for git_pillar module
[DEBUG ] Loaded localemod as virtual locale
[DEBUG ] Loaded groupadd as virtual group
[DEBUG ] Loaded rh_service as virtual service
[DEBUG ] Loaded yumpkg as virtual pkg
[DEBUG ] Loaded parted as virtual partition
[DEBUG ] Loaded linux_sysctl as virtual sysctl
[DEBUG ] Loaded mdadm as virtual raid
[DEBUG ] Loaded sysmod as virtual sys
[DEBUG ] Loaded linux_acl as virtual acl
[DEBUG ] Loaded rpm as virtual lowpkg
[DEBUG ] Loaded zcbuildout as virtual buildout
[DEBUG ] Loaded useradd as virtual user
[DEBUG ] Loaded grub_legacy as virtual grub
[DEBUG ] Loaded rh_ip as virtual ip
[DEBUG ] Loaded virtualenv_mod as virtual virtualenv
[DEBUG ] Loaded djangomod as virtual django
[DEBUG ] Loaded cmdmod as virtual cmd
[DEBUG ] Loaded linux_lvm as virtual lvm
[DEBUG ] Loaded git_pillar as virtual git
[DEBUG ] Jinja search path: ['/var/cache/salt/minion/pillar_gitfs/0']
[DEBUG ] Rendered data from file: /var/cache/salt/minion/pillar_gitfs/0/top.sls:
base:
  '*':
    - data
[DEBUG ] Results of YAML rendering:
OrderedDict([('base', OrderedDict([('*', ['data'])]))])
[DEBUG ] Jinja search path: ['/var/cache/salt/minion/pillar_gitfs/0']
[DEBUG ] Rendered data from file: /var/cache/salt/minion/pillar_gitfs/0/data.sls:
info: foo
abc: def
```
@bmess Talked to @pass-by-value, and he's working on getting the fixes cherry-picked to the 2014.1 branch for the 2014.1.11 release.
Added #15067 to cherry-pick git pillar related fixes to 2014.1
Since this was cherry-picked, I'm going to close this issue.
My team can add two repository sources to the config file as follows:
The problem is that we only see one of the repositories. I believe this is an error in the code's logic:
https://github.com/saltstack/salt/blob/develop/salt/pillar/git_pillar.py#L224
It keys the repos by branch, which must be unique, so two different repos on the same branch in the same environment are impossible.
I've tried soliciting input in the IRC channel and have searched existing tickets.
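The suspected collision described above can be sketched in a few lines of Python. This is a hypothetical illustration of the logic, not the actual `git_pillar.py` code: if repos are stored in a dict keyed by branch name, two repos on the same branch overwrite each other and only the last one survives.

```python
# Hypothetical sketch of the suspected bug (not the real git_pillar code):
# storing repos in a dict keyed by branch silently drops duplicates.
ext_pillar = [
    ("master", "git@local-github-enterprise:config/pillar-test1.git"),
    ("master", "git@local-github-enterprise:config/pillar-test2.git"),
]

repos_by_branch = {}
for branch, url in ext_pillar:
    repos_by_branch[branch] = url  # second 'master' entry overwrites the first

print(repos_by_branch)
# only pillar-test2 remains; pillar-test1 has been silently dropped
```

This also explains the order-dependence seen earlier in the thread: whichever repo is listed last for a given branch is the only one whose pillar data appears.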