saltstack / salt

Software to automate the management and configuration of any infrastructure or application at scale. Install Salt from the Salt package repositories here:
https://docs.saltproject.io/salt/install-guide/en/latest/
Apache License 2.0

salt-minion fails to start "ERROR: Unable to look-up config values for /etc/salt" #40529

Closed colinjkevans closed 5 years ago

colinjkevans commented 7 years ago

Description of Issue/Question

I upgraded the salt-master to 2016.11.3, then upgraded salt-minion on the same server. The master is OK, but the minion fails to start with this error:

root@[salt-master]:~$service salt-minion start
ERROR: Unable to look-up config values for /etc/salt

I looked in the init script and that error appears to come from a failure to run this for various config items:

_get_salt_config_value() {
    _su_cmd \
        "$MINION_USER" \
        "\
            \"$SALTCALL\" \
            -c \"$CONFIG_DIR\" \
            --no-color \
            --local config.get \
            \"$1\" \
        " \
        2>$ERROR_TO_DEVNULL \
        | sed -r -e '2!d; s/^\s*//;'
}

If I run salt-call -c /etc/salt/minion --local config.get sock_dir directly I get the following (which is also the output in /var/log/salt/minion when I try to start the service):

root@[salt-master]:~$salt-call -c /etc/salt --local config.get sock_dir
[ERROR   ] An un-handled exception was caught by salt's global exception handler:
KeyError: 'pillar'
Traceback (most recent call last):
  File "/usr/bin/salt-call", line 11, in <module>
    salt_call()
  File "/usr/lib/python2.6/site-packages/salt/scripts.py", line 379, in salt_call
    client.run()
  File "/usr/lib/python2.6/site-packages/salt/cli/call.py", line 48, in run
    caller = salt.cli.caller.Caller.factory(self.config)
  File "/usr/lib/python2.6/site-packages/salt/cli/caller.py", line 79, in factory
    return ZeroMQCaller(opts, **kwargs)
  File "/usr/lib/python2.6/site-packages/salt/cli/caller.py", line 274, in __init__
    super(ZeroMQCaller, self).__init__(opts)
  File "/usr/lib/python2.6/site-packages/salt/cli/caller.py", line 102, in __init__
    self.minion = salt.minion.SMinion(opts)
  File "/usr/lib/python2.6/site-packages/salt/minion.py", line 658, in __init__
    self.gen_modules(initial_load=True)
  File "/usr/lib/python2.6/site-packages/salt/minion.py", line 689, in gen_modules
    pillarenv=self.opts.get('pillarenv'),
  File "/usr/lib/python2.6/site-packages/salt/pillar/__init__.py", line 836, in compile_pillar
    matches = self.top_matches(top)
  File "/usr/lib/python2.6/site-packages/salt/pillar/__init__.py", line 564, in top_matches
    self.opts.get('nodegroups', {}),
  File "/usr/lib/python2.6/site-packages/salt/minion.py", line 2732, in confirm_top
    return getattr(self, funcname)(match)
  File "/usr/lib/python2.6/site-packages/salt/minion.py", line 2965, in compound_match
    str(getattr(self, '{0}_match'.format(engine))(*engine_args, **engine_kwargs))
  File "/usr/lib/python2.6/site-packages/salt/minion.py", line 2826, in pillar_match
    self.opts['pillar'], tgt, delimiter=delimiter
KeyError: 'pillar'

If I simply run /usr/bin/salt-minion directly it connects to the master and I can apply states to it.

Setup

minion config:

master: localhost
id: master

I've tried removing and reinstalling the salt-minion package with yum.

Versions Report

Salt Version:
           Salt: 2016.11.3

Dependency Versions:
           cffi: Not Installed
       cherrypy: 3.2.2
       dateutil: Not Installed
          gitdb: 0.6.4
      gitpython: 2.0.8
          ioflo: Not Installed
         Jinja2: 2.7.3
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: 0.20.2
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
         Python: 2.6.6 (r266:84292, Jul 23 2015, 15:22:56)
   python-gnupg: 0.3.8
         PyYAML: 3.11
          PyZMQ: 14.5.0
           RAET: Not Installed
          smmap: 0.9.0
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.5

System Versions:
           dist: centos 6.7 Final
        machine: x86_64
        release: 2.6.32-573.26.1.el6.x86_64
         system: Linux
        version: CentOS 6.7 Final
gtmanfred commented 7 years ago

What about when you run salt-call -c /etc/salt --local config.get sock_dir?

Configdir should be set to /etc/salt, not /etc/salt/minion.

Thanks, Daniel

colinjkevans commented 7 years ago

Oops, pasted the wrong command. I did run salt-call -c /etc/salt/ --local config.get sock_dir as well as salt-call -c /etc/salt --local config.get sock_dir (without the trailing /). Both gave the result shown in the original message.

(With the trailing /minion the output actually included WARNING: CONFIG '/etc/salt/minion' directory does not exist.)

gtmanfred commented 7 years ago

Can you provide the minion and master configs and the top files for state and pillar data?

Thanks, Daniel

gtmanfred commented 7 years ago

I am unable to reproduce this using the closest setup I can make.

[root@salt ~]# salt-call -c /etc/salt/ --local config.get sock_dir
local:
    /var/run/salt/minion
[root@salt ~]# salt-call --local --versions-report
Salt Version:
           Salt: 2016.11.3

Dependency Versions:
           cffi: Not Installed
       cherrypy: Not Installed
       dateutil: 1.4.1
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.8.1
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: 0.20.2
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.4.6
   mysql-python: Not Installed
      pycparser: Not Installed
       pycrypto: 2.6.1
         pygit2: Not Installed
         Python: 2.6.6 (r266:84292, Aug  9 2016, 06:11:56)
   python-gnupg: Not Installed
         PyYAML: 3.11
          PyZMQ: 14.5.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 4.2.1
            ZMQ: 4.0.5

System Versions:
           dist: redhat 6.9 Santiago
        machine: x86_64
        release: 2.6.32-696.el6.x86_64
         system: Linux
        version: Red Hat Enterprise Linux Server 6.9 Santiago
colinjkevans commented 7 years ago

The minion config is in the original info.

I probably could provide the master configuration you've requested, but I'd have to do some work to remove sensitive information. Doesn't the --local salt-call option mean the master configuration is irrelevant?

gtmanfred commented 7 years ago

Generally I would say yes, but you are still connected to the master?

And it looks like it is trying to match node groups to get pillars pulled down.

Even though it is set to local, I am wondering if the masterless minion still looks at the master config to find nodegroup configurations, because that is the part of the top matching that is failing.

If you could also provide the pillar top file, that would be great.

Thanks, Daniel

colinjkevans commented 7 years ago

master config:

nodegroups:
    site1_prod: I@host_info:environment:location:site1
    site2_prod: I@host_info:environment:location:site2
    site3_prod: I@host_info:environment:location:site3
    stage: I@host_info:liveness:stage
    site1_stage: I@host_info:liveness:stage and I@host_info:location:site1
    site2_stage: I@host_info:liveness:stage and I@host_info:location:site2
    site3_stage: I@host_info:liveness:stage and I@host_info:location:site3
    qa: I@host_info:liveness:qa
    site1_qa: I@host_info:liveness:qa and I@host_info:location:site1
    site2_qa: I@host_info:liveness:qa and I@host_info:location:site2
    local_hsm: G@hsm_installed:True

########################## General config options ###########################
# The user that will run salt and own all salt-related files
user: <redacted>

# Timeout on waiting for acks from minions when running salt cli command
# Note: NOT a timeout for how long a salt command is able to run for
timeout: 30

worker_threads: 8

########################## Job cache ########################################
# TODO add external job cache to keep audit trail indefinitely
keep_jobs: 72

####################### File server backends ################################
# File server backends specify where salt is able to serve files from to send
# to minions. Our implementation uses 2 backends:
#     1. "gitfs" backend that pulls directly from a git repo. This is used for
#        provisioning the salt master
#     2. "roots" to serve files from the salt master local file system. This is
#         used to provide files to the app server minions.

fileserver_backend:
    - git
    - roots

# Locations of the local fileserver roots. Configuration is in /srv/salt and
# builds from jenkins are in a directory specified in the pillar.
file_roots:
  base:
    - /srv/salt
    - /srv/builds

# Configuration for gitfs file server backend.
# GitPython requires that a private key for GitHub access be specified in ~/.ssh/config.
# Fake domain names are used so that the ssh config can pick the correct
# identity, matching that repo's deploy key, based on the domain. The domain
# used to call GitHub is corrected to the real one in the ssh config file.
gitfs_provider: gitpython
gitfs_remotes:
    - <redacted>
    - <redacted>
      - mountpoint: salt://devops_dashboard

#############################################################################

####################### External pillar config ##############################
# The external pillar allows us to access pillar data from somewhere other
# than sls files in the local file system. In our salt implementation we use
# a sqlite3 database (checked in the deploy git repo) to provide details of
# host names, environments, applications that should be installed and paths
# to find secret keys in secure yum repos in data centers.

# By evaluating the external pillar we can use the ext_pillar data to target
# the pillar top.sls file. For example the database will tell us that
# host1.example.com should be running only App1 in a stage environment, and
# we can use that data to make sure the pillar sls file provides only config
# relevant to that app in that environment to that host
ext_pillar_first: True

# Location of the database file. These are actually just the default values,
# given here for clarity.
pillar.sqlite3.database: /var/lib/salt/pillar.db
pillar.sqlite3.timeout: 5.0

# Queries to be performed on the database to populate the pillar data. See
# salt docs for explanation of the format.
ext_pillar:
    - sqlite3:
        - query: |
            <redacted>
        - query: |
            <redacted>
          as_list: True
        - query: |
            <redacted>
        - query: |
            <redacted>

#############################################################################

pillar top file:

base:
  'master':
    - master_vars
    - master_secrets
  '*.<redacted>.com':
    # Covers all application servers (i.e. everything except salt master)
    - global_vars.default.prod_vars
    - global_vars.default.global_secrets
    - users

  # Overrides for global vars in qa and stage
  I@host_info:liveness:qa:
    - global_vars.qa.qa_vars
  I@host_info:liveness:stage:
    - global_vars.stage.stage_vars

  #################
  # Network HSMs  #
  #################
  G@hsm_installed:False and I@host_info:liveness:qa and I@host_info:location:site1:
    - global_vars.qa.site1.hsm_addrs

  G@hsm_installed:False and I@host_info:liveness:qa and I@host_info:location:site2:
    - global_vars.qa.site2.hsm_addrs

  # No QA HSMs available in site3

  G@hsm_installed:False and I@host_info:liveness:stage and I@host_info:location:site1:
    - global_vars.stage.site1.hsm_addrs

  G@hsm_installed:False and I@host_info:liveness:stage and I@host_info:location:site2:
    - global_vars.stage.site2.hsm_addrs

  G@hsm_installed:False and I@host_info:liveness:stage and I@host_info:location:site3:
    - global_vars.stage.site3.hsm_addrs

  G@hsm_installed:False and I@host_info:liveness:prod and I@host_info:location:site1:
    - global_vars.default.site1.hsm_addrs

  G@hsm_installed:False and I@host_info:liveness:prod and I@host_info:location:site2:
    - global_vars.default.site2.hsm_addrs

  G@hsm_installed:False and I@host_info:liveness:prod and I@host_info:location:site3:
    - global_vars.default.site3.hsm_addrs

  #################
  # app1 vars #
  #################
  # Prod
  I@apps:app1-service:
    - application_vars.app1-service.default.prod_vars
    - application_vars.app1-service.default.prod_secrets

  # Prod location overrides
  I@apps:app1-service and I@host_info:location:site1:
    - application_vars.app1-service.default.site1.prod_site1_vars
  I@apps:app1-service and I@host_info:location:site2:
    - application_vars.app1-service.default.site2.prod_site2_vars
  I@apps:app1-service and I@host_info:location:site3:
    - application_vars.app1-service.default.site3.prod_site3_vars

  # Stage overrides
  I@apps:app1-service and I@host_info:liveness:stage:
    - application_vars.app1-service.stage.stage_vars

  # Stage location overrides
  I@apps:app1-service and I@host_info:liveness:stage and I@host_info:location:site1:
    - application_vars.app1-service.stage.site1.stage_site1_vars
  I@apps:app1-service and I@host_info:liveness:stage and I@host_info:location:site2:
    - application_vars.app1-service.stage.site2.stage_site2_vars
  I@apps:app1-service and I@host_info:liveness:stage and I@host_info:location:site3:
    - application_vars.app1-service.stage.site3.stage_site3_vars

  # QA overrides
  I@apps:app1-service and I@host_info:liveness:qa:
    - application_vars.app1-service.qa.qa_vars
    - application_vars.app1-service.qa.qa_secrets

  # QA location overrides
  I@apps:app1-service and I@host_info:liveness:qa and I@host_info:location:site1:
    - application_vars.app1-service.qa.site1.qa_site1_vars
  I@apps:app1-service and I@host_info:liveness:qa and I@host_info:location:site2:
    - application_vars.app1-service.qa.site2.qa_site2_vars

  <... more of the same as above for different apps ...>

state top files:

base:
  'master':
    - master_config

  # Network hsm configured. Only run if a local hsm is not installed.
  '*.com and G@hsm_installed:False':
    - env_states.hsm_config

  # Add hsm library location to .bash_profile, install required java
  # version and add host status logging to crontab
  '*.com':
    # applies to everything except the master
    - env_states.users
    - env_states.env_vars
    - env_states.crontab
    - env_states.java

  # Application install states
  I@apps:app1-service:
    - app1-service
  <... more of the same for different apps ...>
colinjkevans commented 7 years ago

Could it be relevant that the master is set to run as a different user from the minion?

gtmanfred commented 7 years ago

Are both run as non-root, or is the minion run as root and the master run as another user?

colinjkevans commented 7 years ago

Minion is root, master non-root

scbunn commented 7 years ago

I ran into an error similar to this in my testing pipeline and tracked it down to a minion config of color: True, which turns out not to be a valid minion config option. It is possible that there is an error in your minion's configuration.

colinjkevans commented 7 years ago

Thanks for the suggestion. My minion config is very minimal. All it contains is master and id, so I don't think the content of the config file is likely to be a factor.

Grey-Boy commented 7 years ago

head -n2 /usr/bin/salt-minion shows the shebang: #!/usr/bin/python2.6

But ls /usr/bin | grep python finds no python2.6 binary.

Creating a soft link to the python2.6 interpreter makes it work, like this: ln -s /usr/bin/python /usr/bin/python2.6 (note: /usr/bin/python should be the default version, which is 2.6.x).

gtmanfred commented 7 years ago

Sorry it took me so long to get back to this.

The problem is you are targeting with pillars in the pillar top file.

Traceback (most recent call last):
  File "/bin/salt-call", line 8, in <module>
    execfile(__file__)
  File "/root/src/salt/scripts/salt-call", line 11, in <module>
    salt_call()
  File "/root/src/salt/salt/scripts.py", line 372, in salt_call
    client.run()
  File "/root/src/salt/salt/cli/call.py", line 48, in run
    caller = salt.cli.caller.Caller.factory(self.config)
  File "/root/src/salt/salt/cli/caller.py", line 79, in factory
    return ZeroMQCaller(opts, **kwargs)
  File "/root/src/salt/salt/cli/caller.py", line 274, in __init__
    super(ZeroMQCaller, self).__init__(opts)
  File "/root/src/salt/salt/cli/caller.py", line 102, in __init__
    self.minion = salt.minion.SMinion(opts)
  File "/root/src/salt/salt/minion.py", line 629, in __init__
    self.gen_modules(initial_load=True)
  File "/root/src/salt/salt/minion.py", line 660, in gen_modules
    pillarenv=self.opts.get('pillarenv'),
  File "/root/src/salt/salt/pillar/__init__.py", line 851, in compile_pillar
    matches = self.top_matches(top)
  File "/root/src/salt/salt/pillar/__init__.py", line 581, in top_matches
    self.opts.get('nodegroups', {}),
  File "/root/src/salt/salt/minion.py", line 2756, in confirm_top
    return getattr(self, funcname)(match)
  File "/root/src/salt/salt/minion.py", line 2989, in compound_match
    str(getattr(self, '{0}_match'.format(engine))(*engine_args, **engine_kwargs))
  File "/root/src/salt/salt/minion.py", line 2850, in pillar_match
    self.opts['pillar'], tgt, delimiter=delimiter
KeyError: 'pillar'

The only way you can do this is by using ext_pillar. If you aren't using ext_pillar with ext_pillar_first: True set, then it will fail.

Set ext_pillar_first: True in the minion config (since --local is used) and this will resolve the problem.
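
For reference, a minimal sketch of /etc/salt/minion with that workaround applied (the master and id values are the ones from the original report):

master: localhost
id: master

# workaround: compile external pillar first so that pillar-based (I@) targeting
# in the pillar top file has data to match against when running with --local
ext_pillar_first: True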

Thanks, Daniel

mshade commented 7 years ago

@gtmanfred this helped me track down a pillar typo in my own setup with a similar error. Thanks for documenting the cause. Mine was due to a simple syntax error of a trailing ':' left from some editing.

Is there a utility (or other easy method) to parse pillar to test yaml/jinja syntax quickly?
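
(For what it's worth, just compiling the pillar will surface most rendering errors; a rough sketch, with an illustrative minion id:

salt 'test-minion' pillar.items
# or directly on the minion
salt-call pillar.items

Rendering failures show up as errors in that output.)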

MartinEmrich commented 7 years ago

@gtmanfred's solution worked fine here; I added ext_pillar_first: True.

But starting the minion should not depend upon config files meant for the master (i.e. pillar top.sls, etc.)...

gtmanfred commented 7 years ago

@dmurphy18 is there a reason we check the /etc/salt/minion file like this in the init script?

Thanks, Daniel

dmurphy18 commented 7 years ago

@gtmanfred @plastikos the code was added in revision 0efbbcd1 (09-May-2016) by plastikos (thayne@vintagethreads.com).

Reading the config is causing issues; wondering if @plastikos may have a preferred solution?

piersf commented 6 years ago

I've run into this issue today as well. I tried what was suggested above and it didn't work. I completely removed the salt-minion and salt packages, deleted /etc/salt, deleted /etc/init.d/salt-minion, and ran yum clean all && yum install salt-minion, but the error still appears.

Was there any solution to this?

gtmanfred commented 6 years ago

Are you using pillars in a nodegroup on the master?

ttoyoo commented 6 years ago

Got the same issue today; error message:

# yum remove salt-minion-2018.3.2-1.el6.noarch
...
error: %preun(salt-minion-2018.3.2-1.el6.noarch) scriptlet failed, exit status 1
Error in PREUN scriptlet in rpm package salt-minion
salt-minion-2018.3.2-1.el6.noarch was supposed to be removed but is not!
  Verifying  : salt-minion-2018.3.2-1.el6.noarch                                                                                                                                                        1/1 

Failed:
  salt-minion.noarch 0:2018.3.2-1.el6                                                                                                                                                                       

Complete!

The following steps worked for me:

# yum remove salt-2018.3.2-1.el6.noarch
# cd /etc/salt
# rm -rf *

# remove salt-minion key at salt-master

# yum install salt
# yum remove salt-minion
# yum install salt-minion
muslumb commented 6 years ago

Hi, I've faced this issue when I try "service salt-minion start/status/restart": "ERROR: Unable to look-up config values for /etc/salt". Then when I run "salt-call state.highstat" or /usr/bin/salt-minion I get:
[ERROR ] 'NoneType' object is not iterable
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/utils/parsers.py", line 222, in parse_args
    mixin_after_parsed_func(self)
  File "/usr/lib/python2.7/dist-packages/salt/utils/parsers.py", line 852, in __setup_extended_logging
    log.setup_extended_logging(self.config)
  File "/usr/lib/python2.7/dist-packages/salt/log/setup.py", line 735, in setup_extended_logging
    providers = salt.loader.log_handlers(opts)
  File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 552, in log_handlers
    tag='log_handlers',
  File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1092, in __init__
    self.opts = self.__prep_mod_opts(opts)
  File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1383, in __prep_mod_opts
    self.pack['grains'] = salt.utils.context.NamespacedDictWrapper(self.context_dict, 'grains', override_name='grains')
  File "/usr/lib/python2.7/dist-packages/salt/utils/context.py", line 210, in __init__
    super(NamespacedDictWrapper, self).__init__(self._dict())
TypeError: 'NoneType' object is not iterable
Usage: salt-minion [options]

salt-minion: error: Error while processing <bound method Minion.setup_extended_logging of <salt.cli.daemons.Minion object at 0x7f5211de1650>>: Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/utils/parsers.py", line 222, in parse_args
mixin_after_parsed_func(self)
File "/usr/lib/python2.7/dist-packages/salt/utils/parsers.py", line 852, in
setup_extended_logging
log.setup_extended_logging(self.config)
File "/usr/lib/python2.7/dist-packages/salt/log/setup.py", line 735, in setup_extended_logging
providers = salt.loader.log_handlers(opts)
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 552, in log_handlers
tag='log_handlers',
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1092, in init
self.opts = self.prep_mod_opts(opts)
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1383, in
prep_mod_opts
self.pack['grains'] = salt.utils.context.NamespacedDictWrapper(self.context_dict, 'grains', override_name='grains')
File "/usr/lib/python2.7/dist-packages/salt/utils/context.py", line 210, in init
super(NamespacedDictWrapper, self).init(self._dict())
TypeError: 'NoneType' object is not iterable

anyone faced this issue?

Regards

muslumb commented 6 years ago

[root@salt]# salt --versions
Salt Version:
Salt: 2018.3.3

Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: 2.1
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.14 (default, May 2 2018, 18:31:34)
python-gnupg: Not Installed
PyYAML: 3.10
PyZMQ: 14.5.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.0.5

System Versions:
dist:
locale: UTF-8
machine: x86_64
release: 4.14.33-51.37.amzn1.x86_64
system: Linux
version: Not Installed

dmurphy18 commented 6 years ago

@muslumb The original issue is about RHEL 6, whereas your versions report shows Amazon Linux 1. Could you please open a new issue for Amazon Linux 1? I understand that Amazon Linux 1 is based on RHEL 6 and that SaltStack only supports Python 2.7 on both at this point, but the packages are built differently for Amazon, so it would be better to address this in its own issue rather than conflate the two platforms.

dmurphy18 commented 6 years ago

@piersf @ttoyoo In addition to yum remove salt-minion, I usually do the following as well:

yum remove salt-minion salt
rm -fr /var/cache/salt
rm -fr /var/run/salt

as well as the steps outlined previously, to fully remove a salt-minion from a system.
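
Putting that together with the reinstall steps above, a full clean reinstall might look roughly like this (a sketch; the salt-key step runs on the master, and the minion id is a placeholder):

yum remove salt-minion salt
rm -fr /var/cache/salt /var/run/salt /etc/salt

# on the salt-master, delete the stale minion key
salt-key -d <minion-id>

yum install salt salt-minion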

diqing commented 5 years ago

I have the same problem. I only installed salt-minion on the server, like this: yum -y install salt-minion. My minion config is very minimal; all it contains is master and id.

[root@localhost salt]# /etc/init.d/salt-minion start
ERROR: Unable to look-up config values for /etc/salt

Here's how I solved it:

[root@localhost salt]# pip list
Package        Version
pip            18.1
setuptools     40.6.3
wheel          0.32.3

[root@localhost salt]# pip install salt
...

[root@localhost salt]# pip list
Package        Version
backports-abc  0.5
certifi        2018.11.29
chardet        3.0.4
futures        3.2.0
idna           2.8
Jinja2         2.10
MarkupSafe     1.1.0
msgpack        0.6.0
pip            18.1
pycrypto       2.6.1
PyYAML         3.13
pyzmq          17.1.2
requests       2.21.0
salt           2018.3.3
setuptools     40.6.3
singledispatch 3.4.0.3
six            1.12.0
tornado        5.1.1
urllib3        1.24.1
wheel          0.32.3

[root@localhost salt]# /etc/init.d/salt-minion start
Starting salt-minion:root:SN20190111-QA daemon: OK

That's all. My Python version:

[root@localhost salt]# python -V
Python 2.7.15

I hope all this can help you! thank you!

sathish627 commented 5 years ago

Try to debug via sh -x /etc/init.d/salt-minion status and check whether the variables are being set up properly. Check the Python version and run salt-call -V. If the output is not as expected, remove the packages below:

rpm -e libsodium-debuginfo-0.4.5-3.el6.x86_64
rpm -e libsodium-devel-0.4.5-3.el6.x86_64

Also remove python27-libnacl-1.6.1-1.el6.noarch.

This will solve the issue!

dmurphy18 commented 5 years ago

@colinjkevans Wondering if this is still an issue for you, as I am unable to reproduce it; the original issue was on Salt 2016.11.3 with Python 2.6, both of which are end-of-life.

I have tried Salt 2018.3.4 on RHEL 6 with the following:

sh /etc/init.d/salt-minion status
sh /etc/init.d/salt-minion restart
service salt-minion stop
service salt-minion status
service salt-minion restart

And have not found any issue.

dmurphy18 commented 5 years ago

@colinjkevans Wondering if there is any further information on this issue.

colinjkevans commented 5 years ago

This was resolved for me with this fix: https://github.com/saltstack/salt/issues/40529#issuecomment-293953806

I do concur with https://github.com/saltstack/salt/issues/40529#issuecomment-325339283 that ext_pillar_first, which seems to be a master config option, shouldn't be needed in the minion config file for the minion to start.

dmurphy18 commented 5 years ago

@colinjkevans Closing this issue since you have a resolution.