BLePera closed this issue 10 months ago.
This was a bit of a pain, but I figured it out. The CIS image we used from AWS had NFTables installed, with rules that overrode UFW and blocked access to the ports. It was not preinstalled in earlier images. Removing the package from the salt master allowed a successful provision and ping. Closing the issue.
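For anyone landing here with the same symptom, a rough diagnostic sketch (assuming a Debian/Ubuntu-based image; the package name and removal commands are assumptions and vary by distro):

```
# Check whether nftables is active and whether its ruleset touches the
# Salt ports (4505 publish, 4506 return).
sudo systemctl status nftables
sudo nft list ruleset | grep -E '4505|4506|drop'

# Confirm the master is actually listening on the Salt ports.
sudo ss -tlnp | grep -E ':(4505|4506)'

# If nftables shipped with the hardened image and is not wanted, either
# disable it or remove the package (Debian/Ubuntu shown; adjust for your OS).
sudo systemctl disable --now nftables
sudo apt-get remove --purge -y nftables

# Re-test connectivity from the master once the minion key is accepted.
sudo salt '*' test.ping
```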
Hello, I have created a new Salt environment on version 3006.5. The minion is hitting an error in the salt-minion service after salt-cloud initialization:
Minion Log
Setup (Please provide relevant configs and/or SLS files; be sure to remove sensitive info. There is no general set-up of Salt.)
Map File qa12-50-asc-map
profile
profile extended
provider
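The provider, profile, and map contents did not make it into the report above. For reference only, a minimal salt-cloud EC2 layout usually looks like the sketch below; every name, ID, and path in it is a placeholder, not taken from this environment:

```
# /etc/salt/cloud.providers.d/ec2.conf -- provider definition (placeholder values)
my-ec2-provider:
  driver: ec2
  id: 'AKIA...EXAMPLE'
  key: 'EXAMPLE-SECRET-KEY'
  keyname: my-keypair
  private_key: /etc/salt/my-keypair.pem
  location: us-east-1

# /etc/salt/cloud.profiles.d/qa.conf -- profile referencing the provider
qa-ubuntu-profile:
  provider: my-ec2-provider
  image: ami-0123456789abcdef0    # hypothetical CIS-hardened AMI
  size: t3.medium
  ssh_username: ubuntu

# Map file: maps a profile to the minion IDs to create
qa-ubuntu-profile:
  - qa12-node-01
  - qa12-node-02
```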
Please be as specific as possible and give set-up details.
Steps to Reproduce the behavior
`sudo salt-cloud -m /srv/salt-cloud-tst/Nix/SaltMaster/nva-qa/qa12-50-asc-map -P`
Expected behavior
The salt-minion service starts and the minion answers a test.ping from the Salt Master.
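A quick way to check both sides after provisioning (a sketch; the `qa12*` target glob is an assumption based on the map file name):

```
# On the new instance: is the minion service up, and what is it logging?
sudo systemctl status salt-minion
sudo journalctl -u salt-minion --no-pager -n 100
sudo tail -n 100 /var/log/salt/minion

# On the master: was the minion key presented and accepted?
sudo salt-key -L

# Once accepted, the minion should answer a ping.
sudo salt 'qa12*' test.ping
```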
Versions Report
Master Config
```
##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Master.
# Values that are commented out but have an empty line after the comment are
# defaults that do not need to be set in the config. If there is no blank line
# after the comment then the value is presented as an example and is not the
# default.

# Per default, the master will automatically include all config files
# from master.d/*.conf (master.d is a directory in the same directory
# as the main master config file).
#default_include: master.d/*.conf

# The address of the interface to bind to:
#interface: 0.0.0.0

# Whether the master should listen for IPv6 connections. If this is set to True,
# the interface option must be adjusted, too. (For example: "interface: '::'")
#ipv6: False

# The tcp port used by the publisher:
#publish_port: 4505

# The user under which the salt master will run. Salt will update all
# permissions to allow the specified user to run the master. The exception is
# the job cache, which must be deleted if this user is changed. If the
# modified files cause conflicts, set verify_env to False.
#user: root

# Tell the master to also use salt-ssh when running commands against minions.
#enable_ssh_minions: False

# The port used by the communication interface. The ret (return) port is the
# interface used for the file server, authentication, job returns, etc.
#ret_port: 4506

# Specify the location of the daemon process ID file:
#pidfile: /var/run/salt-master.pid

# The root directory prepended to these options: pki_dir, cachedir,
# sock_dir, log_file, autosign_file, autoreject_file, extension_modules,
# key_logfile, pidfile, autosign_grains_dir:
#root_dir: /

# The path to the master's configuration file.
#conf_file: /etc/salt/master

# Directory used to store public key data:
#pki_dir: /etc/salt/pki/master

# Key cache. Increases master speed for large numbers of accepted
# keys. Available options: 'sched'. (Updates on a fixed schedule.)
# Note that enabling this feature means that minions will not be
# available to target for up to the length of the maintanence loop
# which by default is 60s.
#key_cache: ''

# Directory to store job and cache data:
# This directory may contain sensitive data and should be protected accordingly.
#
#cachedir: /var/cache/salt/master

# Directory for custom modules. This directory can contain subdirectories for
# each of Salt's module types such as "runners", "output", "wheel", "modules",
# "states", "returners", "engines", "utils", etc.
#extension_modules: /var/cache/salt/master/extmods

# Directory for custom modules. This directory can contain subdirectories for
# each of Salt's module types such as "runners", "output", "wheel", "modules",
# "states", "returners", "engines", "utils", etc.
# Like 'extension_modules' but can take an array of paths
#module_dirs: []

# Verify and set permissions on configuration directories at startup:
#verify_env: True

# Set the number of hours to keep old job information in the job cache:
keep_jobs: 120

# The number of seconds to wait when the client is requesting information
# about running jobs.
gather_job_timeout: 30

# Set the default timeout for the salt command and api. The default is 5
# seconds.
timeout: 15

# The loop_interval option controls the seconds for the master's maintenance
# process check cycle. This process updates file server backends, cleans the
# job cache and executes the scheduler.
#loop_interval: 60

# Set the default outputter used by the salt command. The default is "nested".
#output: nested

# To set a list of additional directories to search for salt outputters, set the
# outputter_dirs option.
#outputter_dirs: []

# Set the default output file used by the salt command. Default is to output
# to the CLI and not to a file. Functions the same way as the "--out-file"
# CLI option, only sets this to a single file for all salt commands.
#output_file: None

# Return minions that timeout when running commands like test.ping
#show_timeout: True

# Tell the client to display the jid when a job is published.
#show_jid: False

# By default, output is colored. To disable colored output, set the color value
# to False.
#color: True

# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False

# To display a summary of the number of minions targeted, the number of
# minions returned, and the number of minions that did not return, set the
# cli_summary value to True. (False by default.)
# cli_summary: True

# Set the directory used to hold unix sockets:
#sock_dir: /var/run/salt/master

# The master can take a while to start up when lspci and/or dmidecode is used
# to populate the grains for the master. Enable if you want to see GPU hardware
# data for your master.
# enable_gpu_grains: False

# The master maintains a job cache. While this is a great addition, it can be
# a burden on the master for larger deployments (over 5000 minions).
# Disabling the job cache will make previously executed jobs unavailable to
# the jobs system and is not generally recommended.
#job_cache: True

# Cache minion grains, pillar and mine data via the cache subsystem in the
# cachedir or a database.
#minion_data_cache: True

# Cache subsystem module to use for minion data cache.
#cache: localfs

# Enables a fast in-memory cache booster and sets the expiration time.
#memcache_expire_seconds: 0

# Set a memcache limit in items (bank + key) per cache storage (driver + driver_opts).
#memcache_max_items: 1024

# Each time a cache storage got full cleanup all the expired items not just the oldest one.
#memcache_full_cleanup: False

# Enable collecting the memcache stats and log it on `debug` log level.
#memcache_debug: False

# Store all returns in the given returner.
# Setting this option requires that any returner-specific configuration also
# be set. See various returners in salt/returners for details on required
# configuration values. (See also, event_return_queue, and
# event_return_queue_max_seconds below.)
#
# event_return: - kinesis

# On busy systems, enabling event_returns can cause a considerable load on
# the storage system for returners. Events can be queued on the master and
# stored in a batched fashion using a single transaction for multiple events.
# By default, events are not queued.
#event_return_queue: 0

# In some cases enabling event return queueing can be very helpful, but the bus
# may not busy enough to flush the queue consistently. Setting this to a reasonable
# value (1-30 seconds) will cause the queue to be flushed when the oldest event is older
# than `event_return_queue_max_seconds` regardless of how many events are in the queue.
#event_return_queue_max_seconds: 0

# Only return events matching tags in a whitelist, supports glob matches.
#event_return_whitelist:
#  - salt/master/a_tag
#  - salt/run/*/ret

# Store all event returns **except** the tags in a blacklist, supports globs.
event_return_blacklist:
  - minion_ping
  - salt/auth

# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# master event bus. The value is expressed in bytes.
#max_event_size: 1048576

# Windows platforms lack posix IPC and must rely on slower TCP based inter-
# process communications. Set ipc_mode to 'tcp' on such systems
#ipc_mode: ipc

# Overwrite the default tcp ports used by the minion when ipc_mode is set to 'tcp'
#tcp_master_pub_port: 4510
#tcp_master_pull_port: 4511

# By default, the master AES key rotates every 24 hours. The next command
# following a key rotation will trigger a key refresh from the minion which may
# result in minions which do not respond to the first command after a key refresh.
#
# To tell the master to ping all minions immediately after an AES key refresh, set
# ping_on_rotate to True. This should mitigate the issue where a minion does not
# appear to initially respond after a key is rotated.
#
# Note that ping_on_rotate may cause high load on the master immediately after
# the key rotation event as minions reconnect. Consider this carefully if this
# salt master is managing a large number of minions.
#
# If disabled, it is recommended to handle this event by listening for the
# 'aes_key_rotate' event with the 'key' tag and acting appropriately.
# ping_on_rotate: False

# By default, the master deletes its cache of minion data when the key for that
# minion is removed. To preserve the cache after key deletion, set
# 'preserve_minion_cache' to True.
#
# WARNING: This may have security implications if compromised minions auth with
# a previous deleted minion ID.
#preserve_minion_cache: False

# Allow or deny minions from requesting their own key revocation
#allow_minion_key_revoke: True

# If max_minions is used in large installations, the master might experience
# high-load situations because of having to check the number of connected
# minions for every authentication. This cache provides the minion-ids of
# all connected minions to all MWorker-processes and greatly improves the
# performance of max_minions.
# con_cache: False

# The master can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main master configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option, then the master will log a warning message.
#
# Include a config file from some other path:
# include: /etc/salt/extra_config
#
# Include config from several files and directories:
# include:
#   - /etc/salt/extra_config


#####  Large-scale tuning settings  #####
##########################################

# Max open files
#
# Each minion connecting to the master uses AT LEAST one file descriptor, the
# master subscription connection. If enough minions connect you might start
# seeing on the console (and then salt-master crashes):
#   Too many open files (tcp_listener.cpp:335)
#   Aborted (core dumped)
#
# By default this value will be the one of `ulimit -Hn`, ie, the hard limit for
# max open files.
#
# If you wish to set a different value than the default one, uncomment and
# configure this setting. Remember that this value CANNOT be higher than the
# hard limit. Raising the hard limit depends on your OS and/or distribution,
# a good way to find the limit is to search the internet. For example:
#   raise max open files hard limit debian
#
#max_open_files: 100000

# The number of worker threads to start. These threads are used to manage
# return calls made from minions to the master. If the master seems to be
# running slowly, increase the number of threads. This setting can not be
# set lower than 3.
worker_threads: 75

# Set the ZeroMQ high water marks
# http://api.zeromq.org/3-2:zmq-setsockopt

# The listen queue size / backlog
#zmq_backlog: 1000

# The publisher interface ZeroMQPubServerChannel
#pub_hwm: 1000

# The master may allocate memory per-event and not
# reclaim it.
# To set a high-water mark for memory allocation, use
# ipc_write_buffer to set a high-water mark for message
# buffering.
# Value: In bytes. Set to 'dynamic' to have Salt select
# a value for you. Default is disabled.
# ipc_write_buffer: 'dynamic'

# These two batch settings, batch_safe_limit and batch_safe_size, are used to
# automatically switch to a batch mode execution. If a command would have been
# sent to more than
```
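The paste above is cut off mid-comment and is mostly the stock commented-out defaults; the values actually set in this master config appear to be only the following, restated here for readability:

```
keep_jobs: 120
gather_job_timeout: 30
timeout: 15
event_return_blacklist:
  - minion_ping
  - salt/auth
worker_threads: 75
```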