This is because the master does not dynamically reread the nodegroups either.
The correct answer is https://github.com/saltstack/salt/issues/570: we should reread the configuration on a SIGHUP instead of having to restart.
@gtmanfred Well, it 'sorta' re-reads it, at least for targeting information. The salt-api, however, doesn't even do that.
Oddly enough, the nodegroup configuration is the only setting that I'm aware of that does not require the master to be restarted. I think it is an unintentional feature, due to the salt CLI instantiating LocalClient directly rather than relying on the already-loaded __opts__ in the Salt Master daemon processes.
Now, salt-api currently also instantiates LocalClient per-request, so there's potential to make this work like the CLI by specifying the config path instead of reusing __opts__ via the mopts argument. I don't know what the performance difference between the two approaches is.
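Roughly, the two instantiation styles look like this (a sketch only, not salt-api's actual code):

import salt.client
import salt.config

# Like the `salt` CLI: the master config (including nodegroups) is read
# from disk each time a client is instantiated.
fresh_client = salt.client.LocalClient(c_path='/etc/salt/master')

# Like reusing already-loaded opts via mopts: the config is parsed once
# up front, so later nodegroup edits on disk are never seen.
master_opts = salt.config.client_config('/etc/salt/master')
cached_client = salt.client.LocalClient(mopts=master_opts)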
That said, I wonder if pulling nodegroups from sdb would be a better approach. Since the values are fetched when they are requested, that would allow a very dynamic system without having to worry about whether the config needs to be reloaded at all. The lightest-weight sdb module we currently have is sqlite, but I'd like to see json and yaml sdb modules that just read values from flat files. I will try a POC with nodegroups...
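For illustration, such a yaml sdb module could be as small as the sketch below (hypothetical: the module name and the 'path' profile key are made up here, not an existing API):

# Hypothetical salt/sdb/yaml_file.py -- reads values from a flat YAML file.
# Example profile in the master config:
#   mynodegroups:
#     driver: yaml_file
#     path: /srv/salt/nodegroups.yaml
import yaml

def get(key, profile=None):
    # Re-read the file on every lookup, so edits take effect immediately
    # without restarting the master or salt-api.
    with open(profile['path']) as fh:
        data = yaml.safe_load(fh) or {}
    return data.get(key)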
@whiteinge - I'd be down for doing it via sdb.
We use nodegroups extensively for targeting commands and jobs, so the ability for the master to read these dynamically is a huge feature for us (even if it's unintentional at this point :) ). We have a job that runs every 10 minutes and regenerates nodegroups.conf based off personal nodegroups users keep in their home directories, along with some error-checking/namespace-collision prevention. This allows our users to keep their own nodegroups up to date w/out giving them access to modify the root config.
Now that we've been diving more into using salt-api, its inability to dynamically read nodegroups is becoming an issue, so if we could move to some type of system that would allow these to be read dynamically by both salt-api and salt-master, that'd be fantastic.
Looks like sdb will indeed work for this. Steps:
Master config:
mysqlite:
  driver: sqlite3
  database: /path/to/sdb.sqlite3
  table: sdb
  create_table: True
nodegroups: sdb://mysqlite/nodegroups
Seed values:
% salt-run sdb.set sdb://mysqlite/nodegroups '{jerrys: "jerry*"}'
Use the nodegroup:
% salt -N jerrys test.ping
jerry:
    True
% curl -b /tmp/cookies.txt -sSi localhost:8000 \
-H 'Accept: application/x-yaml' \
-d client=local \
-d tgt=jerrys \
-d expr_form=nodegroup \
-d fun=test.ping
HTTP/1.1 200 OK
return:
- jerry: true
Salt is throwing a warning with that config about nodegroups not being a dictionary, because it doesn't know that the sdb URI (a string) will eventually resolve to a dictionary. But it still works. I'll investigate a fix.
Relevant PR: #37563
@sjmh if the sdb approach above works for you I'd like to close this issue as a dupe of the SIGHUP addition that @gtmanfred linked -- both Salt and salt-api will benefit from that if it's ever implemented (but that's a tough one). The approach of re-reading all the config files on each request has an undesirable performance hit for salt-api, so on reflection I think that is worth avoiding.
@whiteinge sounds good! thanks!
It would appear that the above approach (specifying nodegroups: sdb://name/path in the master config) is broken in 2016.11.5. When using the yaml sdb source from PR #37563 and setting the whole thing up, the nodegroups are loaded as they should be after starting the master. However, upon changing the YAML file that is used as the sdb source, the changes do not propagate to opts['nodegroups'] unless the master is restarted. Only salt-run salt.cmd test.get_opts reflects the changed nodegroups.
@whiteinge Can you tell me if this is indeed broken now, or am I using it wrong? (or expecting the wrong functionality of it)
sdb values are evaluated when accessed, so __opts__['nodegroups'] should only ever be the sdb://... string and not the value. What operation are you doing where you don't see the updated values? I don't know offhand all the mechanisms that trigger that evaluation, but I would not expect referencing the __opts__ dictionary directly to work; instead, I think you'll need to go through an interface such as calling config.get.
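For example (untested), the same salt.cmd runner trick used with test.get_opts above should exercise that interface on the master:

% salt-run salt.cmd config.get nodegroups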
I'm using the file_tree ext_pillar to supply pillar values to nodegroups. When starting the salt-master with a minion listed in a nodegroup, it gets the pillar data present in the matching nodegroup directory, but when I edit the YAML file the nodegroup data lives in, the pillar data does not change. I added debug code (roughly the line sketched below) to display the value of __opts__['nodegroups'] in salt/pillar/file_tree.py:331, but the contents do not change when I update the YAML with the nodegroups.
Edit: Using saltutil.refresh_pillar does not change anything.
Edit2: The debug code sits at line 292.5 in the 2016.11 branch (330.5 in develop).
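For reference, the debug code was along these lines (a sketch, not the exact patch):

# Added inside salt/pillar/file_tree.py at the location noted above;
# `log` is the module's existing logging.getLogger(__name__) logger.
log.debug('nodegroups: %s', __opts__['nodegroups'])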
sdb values are evaluated when accessed so __opts__['nodegroups'] should only ever be the sdb://... string and not the value.
Unfortunately, this is not the case (in 2016.11). In salt/config/__init__.py the function apply_sdb(opts) is used to replace the sdb:// references in the (loaded) config with their evaluated values. This causes all values in __opts__ (including nodegroups) to be replaced by their evaluation whenever opts is constructed, and not, as you mentioned (and I hoped would be true), whenever values in __opts__ are accessed.
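In rough pseudocode, that eager behavior looks like this (a simplified sketch; the real apply_sdb in salt/config/__init__.py also descends into nested values):

import salt.utils.sdb

def apply_sdb(opts):
    # Resolve sdb:// URIs once, at config-load time -- not on access.
    for key, value in opts.items():
        if isinstance(value, str) and value.startswith('sdb://'):
            opts[key] = salt.utils.sdb.sdb_get(value, opts)
    return opts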
Edit: Added the following:
instead I think you'll need to go through an interface such as calling config.get.
Unfortunately, config.get only allows access to the master configuration if you configure pillar_opts, which causes the master config to be added to the pillar. I'm not sure whether the pillar data imported that way would also update when the config is changed at runtime. Also, this would expose the entire master configuration to all minions, which is undesirable from a security point of view.
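For reference, that is a single flag in the master config:

# Merges the whole master config into every minion's pillar -- which is
# exactly the security exposure described above.
pillar_opts: True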
I think the best thing would be to get this working: https://github.com/saltstack/salt/issues/570
Would that work from within an orchestration run? Because that's where I'm running into this problem. The orchestration adds nodes to a (new) nodegroup and then calls highstates to configure the newly added nodes. It currently fails because the salt-master doesn't know about the nodegroup change, so the file_tree ext_pillar doesn't pick it up either and doesn't provide the proper pillar data to the new minions.
Though #570 is a step in the right direction, I'm not entirely convinced that a re-init of a running master would fix this particular case.
This wouldn't be a reinit. The SIGHUP is usually just a signal that is caught; the salt-master then rereads the files on disk and remakes the __opts__ dictionary, which allows the salt-master processes to know what the new nodegroups are.
salt-run state.orchestrate just runs the saltmod function, etc., to call saltutil:
https://github.com/saltstack/salt/blob/2016.11/salt/states/saltmod.py#L256
This execs a client, which can be the ssh client or one of the Python API clients (caller or local): https://github.com/saltstack/salt/blob/2016.11/salt/modules/saltutil.py#L1154
The LocalClient works essentially the same as salt on the command line: it reaches out to the unix socket for the salt-master to tell it what to put on the event bus, and the salt-master process will have the new nodegroup configuration after it has been sighupped.
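If the SIGHUP handling from #570 existed, the flow would presumably look something like this (the pidfile path and the orchestration SLS name here are assumptions):

% kill -HUP $(cat /var/run/salt-master.pid)   # master rereads its config, including nodegroups
% salt-run state.orchestrate orch.deploy      # subsequent LocalClient calls see the new nodegroups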
The salt-api doesn't reread the nodegroups configuration when it's changed. This requires you to restart salt-api every time a nodegroup is changed or added. That wouldn't be too bad, but a restart seems to revoke everyone's API tokens as well, meaning they all have to log in to the API again whenever this happens.