Closed NEOhidra closed 1 year ago
@NEOhidra Thanks for the report. The issue here is not with the schedule but with the state function that you're calling in the scheduler item. The `state.orchestrate` function allows the minion to call the same functions as the `state.orchestrate` runner, but because it's calling the same code as the runner, it's designed to work on minions that are running without a master, so it looks at the minion configuration for the `file_roots` setting.
You can keep the schedule information in the master configuration, since the master doesn't have a concept of pillar values. Alternatively, if you want to keep the schedule item configured on a Salt minion, you can update the minion configuration on the Salt master so that the `file_roots` for the Salt minion matches that of the Salt master.
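For the second option, a minimal sketch of such a minion config override (the `/srv/salt_new/states` path is taken from the setup described later in this thread; the drop-in filename is illustrative):

```yaml
## /etc/salt/minion.d/file_roots.conf  (illustrative filename)
# Make the minion's file_roots match the master's custom location, so that
# state.orchestrate, when executed through the minion, can resolve the sls files.
file_roots:
  base:
    - /srv/salt_new/states
```

After changing this, restart the salt-minion service so it picks up the new configuration.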
Because this is working as expected, we don't consider it a bug, so I'll go ahead and close the issue. Thanks!
> you can update the minion configuration on the Salt master so that the file_roots for the Salt minion matches that of the Salt master.
Thank you for this suggestion!
> Because this is working as expected, we don't consider it a bug, so I'll go ahead and close the issue. Thanks!
OK, I am familiar with such situations, and I get your point of view.
On my side of things I see the same input producing different results depending on some condition, which is confusing.
Here is the part that confuses me:
> the function state.orchestrate allows the minion to call the same functions as the state.orchestrate runner but because it's calling the same code as the runner it's designed to work on minions that are running without a master so it looks at the minion configuration for the file_roots configuration.
So the issue here is that the `file_roots` change is done only in the salt-master config.
The schedule is an issue when placed into the pillar of the salt-minion (despite the salt-master and salt-minion running on the same machine), but it all works when the schedule is also defined in the salt-master configuration.
salt-run does not complain because it does not go through the minion process, but salt-call will fail because it goes directly to the minion process, which is missing some required information in such a setup.
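That difference can be sketched with the two invocations for the state from this report (behaviour as described above, not a verified transcript):

```shell
# Goes through the master process and its updated file_roots:
# finds orch/wait_event.sls and succeeds.
salt-run state.orchestrate orch.wait_event

# Goes through the minion process and its default file_roots:
# fails with "No matching sls found for 'orch.wait_event' in env 'base'".
salt-call state.orchestrate orch.wait_event
```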
You are saying that the salt-minion is executing this state but fails because it does not know where exactly the state file is. If the issue is the salt-minion not having enough information, how does moving the schedule from the pillar (which, as far as I know, is assigned to the salt-minion) to the salt-master configuration file allow the minion to execute it correctly?
I ask because salt-call works just fine for any other states on minions which do not have `file_roots` set to the actual path.
In summary, the source of confusion is:

- the salt-minion keeps the default `file_roots` but has the pillar for the schedule
- the salt-master has the updated `file_roots` and does not have the schedule information (after moving the schedule to the salt-master configuration)

The issue is that the `state.orchestrate` function is designed to run on a master-less minion, where the states and pillar data are expected to be on the minion itself. This is why running the state with salt-run works but using salt-call does not: salt-run uses the Salt master configuration and the updated `file_roots`, while salt-call communicates with the Salt minion, which is using the default `file_roots`. Again, the `state.orchestrate` command assumes that you're running without a Salt master.
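Concretely, moving the schedule out of the pillar and into the master configuration could look like the following sketch (the drop-in filename is illustrative; the schedule item is the one from this report):

```yaml
## /etc/salt/master.d/schedule.conf  (illustrative filename)
schedule:
  wait_event:
    function: state.orchestrate
    args:
      - orch.wait_event
    cron: '4 19 * * *'
```

With the schedule on the master, the job runs with the master's `file_roots` and can find `orch/wait_event.sls` under `/srv/salt_new/states`.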
**Description**
Scheduling an orchestrate state from a pillar will fail with "No matching sls" when a custom location is defined for the file server.
**Setup**
Master running onedir 3006.2

master config
```yaml
## /etc/salt/master.d/overwrite.conf
log_level_logfile: trace

file_roots:
  base:
    - /srv/salt_new/states

pillar_roots:
  base:
    - /srv/salt_new/pillars
```

pillar
```yaml
## /srv/salt_new/pillars/minions/master-1.sls
schedule:
  wait_event:
    function: state.orchestrate
    args:
      - orch.wait_event
    cron: '4 19 * * *'
```

orchestrate state
```yaml
## /srv/salt_new/states/orch/wait_event.sls
orch_wait_event_wol_minion:
  salt.runner:
    - name: network.wolmatch
    - arg:
      - minion-1

orch_wait_event_wait_minion:
  salt.wait_for_event:
    - name: minion_start
    - id_list:
      - minion-1
    - timeout: 120
    - failhard: True

orch_wait_event_shutdown_minion:
  salt.function:
    - name: system.shutdown
    - tgt: minion-1
    - arg:
      - 1
```

croniter installation
I am installing croniter because of:

> 2023-08-17 18:59:10,001 [salt.utils.schedule:1190][ERROR ][3999] Missing python-croniter. Ignoring job wait_event.

```bash
salt-call pip.install croniter
systemctl restart salt-master salt-minion
```

With croniter 1.4.1 present, this message is absent.
**Steps to Reproduce the behavior**
Just wait for the schedule and it will return:
Part of the generated log
```
2023-08-17 19:04:01,224 [salt.fileserver :32 ][TRACE ][27628] Lockfile /var/cache/salt/minion/file_lists/roots/.base.w created
2023-08-17 19:04:01,224 [salt.fileserver :32 ][TRACE ][27628] Lockfile /var/cache/salt/minion/file_lists/roots/.base.w removed
2023-08-17 19:04:01,225 [salt.fileclient :1173][DEBUG ][27628] Could not find file 'salt://orch/wait_event.sls' in saltenv 'base'
2023-08-17 19:04:01,225 [salt.fileclient :1173][DEBUG ][27628] Could not find file 'salt://orch/wait_event/init.sls' in saltenv 'base'
...
2023-08-17 19:04:01,228 [salt.utils.event :823 ][DEBUG ][27628] Sending event: tag = __schedule_return; data = {'cmd': '_return', 'id': 'master-1', 'fun': 'state.orchestrate', 'fun_args': ['orch.wait_event'], 'schedule': 'wait_event', 'jid': 'req', 'pid': 27628, 'return': {'data': {'master-1': ["No matching sls found for 'orch.wait_event' in env 'base'"]}, 'outputter': 'highstate', 'retcode': 1}, 'retcode': 0, 'success': True, '_stamp': '2023-08-17T19:04:01.228711'}
2023-08-17 19:04:01,230 [salt.transport.ipc:372 ][DEBUG ][27628] Closing IPCMessageClient instance
2023-08-17 19:04:01,230 [salt.minion :2692][DEBUG ][26882] Minion of '192.168.12.17' is handling event tag '__schedule_return'
2023-08-17 19:04:01,230 [salt.utils.schedule:963 ][DEBUG ][27628] schedule.handle_func: Removing /var/cache/salt/minion/proc/20230817190400682652
2023-08-17 19:04:01,230 [salt.minion :2192][INFO ][26882] Returning information for job: req
2023-08-17 19:04:01,230 [salt.minion :32 ][TRACE ][26882] Return data: {'cmd': '_return', 'id': 'master-1', 'fun': 'state.orchestrate', 'fun_args': ['orch.wait_event'], 'schedule': 'wait_event', 'jid': 'req', 'pid': 27628, 'return': {'data': {'master-1': ["No matching sls found for 'orch.wait_event' in env 'base'"]}, 'outputter': 'highstate', 'retcode': 1}, 'retcode': 0, 'success': True, '_stamp': '2023-08-17T19:04:01.228711'}
...
2023-08-17 19:04:01,233 [salt.utils.event :771 ][DEBUG ][26882] Sending event: tag = __master_req_channel_payload; data = {'cmd': '_return', 'id': 'master-1', 'fun': 'state.orchestrate', 'fun_args': ['orch.wait_event'], 'schedule': 'wait_event', 'jid': 'req', 'pid': 27628, 'return': {'data': {'master-1': ["No matching sls found for 'orch.wait_event' in env 'base'"]}, 'outputter': 'highstate', 'retcode': 1}, 'retcode': 0, 'success': True, '_stamp': '2023-08-17T19:04:01.233180'}
...
2023-08-17 19:04:01,235 [salt.minion :890 ][DEBUG ][26882] Minion return retry timer set to 8 seconds (randomized)
2023-08-17 19:04:01,236 [salt.channel.client:32 ][TRACE ][26882] ReqChannel send crypt load={'cmd': '_return', 'id': 'master-1', 'fun': 'state.orchestrate', 'fun_args': ['orch.wait_event'], 'schedule': 'wait_event', 'jid': 'req', 'pid': 27628, 'return': {'data': {'master-1': ["No matching sls found for 'orch.wait_event' in env 'base'"]}, 'outputter': 'highstate', 'retcode': 1}, 'retcode': 0, 'success': True, '_stamp': '2023-08-17T19:04:01.233180'}
...
2023-08-17 19:04:01,246 [salt.utils.event :823 ][DEBUG ][26957] Sending event: tag = salt/job/20230817190401244131/ret/master-1; data = {'cmd': '_return', 'id': 'master-1', 'fun': 'state.orchestrate', 'fun_args': ['orch.wait_event'], 'schedule': 'wait_event', 'jid': '20230817190401244131', 'pid': 27628, 'return': {'data': {'master-1': ["No matching sls found for 'orch.wait_event' in env 'base'"]}, 'outputter': 'highstate', 'retcode': 1}, 'retcode': 0, 'success': True, '_stamp': '2023-08-17T19:04:01.246528', 'arg': ['orch.wait_event'], 'tgt_type': 'glob', 'tgt': 'master-1'}
2023-08-17 19:04:01,247 [salt.utils.event :32 ][TRACE ][25172] get_event() received = {'data': {'cmd': '_return', 'id': 'master-1', 'fun': 'state.orchestrate', 'fun_args': ['orch.wait_event'], 'schedule': 'wait_event', 'jid': '20230817190401244131', 'pid': 27628, 'return': {'data': {'master-1': ["No matching sls found for 'orch.wait_event' in env 'base'"]}, 'outputter': 'highstate', 'retcode': 1}, 'retcode': 0, 'success': True, '_stamp': '2023-08-17T19:04:01.246528', 'arg': ['orch.wait_event'], 'tgt_type': 'glob', 'tgt': 'master-1'}, 'tag': 'salt/job/20230817190401244131/ret/master-1'}
```

Despite:
**Expected behavior**
To execute the state file, which is present, just like when the scheduler is defined in the master config or started manually like this:
**Versions Report**

```yaml
Salt Version:
          Salt: 3006.2

Python Version:
        Python: 3.10.12 (main, Aug 3 2023, 21:47:10) [GCC 11.2.0]

Dependency Versions:
          cffi: 1.14.6
      cherrypy: 18.6.1
      dateutil: 2.8.1
     docker-py: Not Installed
         gitdb: Not Installed
     gitpython: Not Installed
        Jinja2: 3.1.2
       libgit2: Not Installed
  looseversion: 1.0.2
      M2Crypto: Not Installed
          Mako: Not Installed
       msgpack: 1.0.2
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     packaging: 22.0
     pycparser: 2.21
      pycrypto: Not Installed
  pycryptodome: 3.9.8
        pygit2: Not Installed
  python-gnupg: 0.4.8
        PyYAML: 6.0.1
         PyZMQ: 23.2.0
        relenv: 0.13.3
         smmap: Not Installed
       timelib: 0.2.4
       Tornado: 4.5.3
           ZMQ: 4.3.4

System Versions:
          dist: fedora 38
        locale: utf-8
       machine: x86_64
       release: 6.2.9-300.fc38.x86_64
        system: Linux
       version: Fedora Linux 38
```

**Additional context**
It looks like the scheduler is looking for the state file in `/srv/salt/orch/`. But then `salt.wait_for_event` will time out: