saltstack / salt

Software to automate the management and configuration of any infrastructure or application at scale. Install Salt from the Salt package repositories here:
https://docs.saltproject.io/salt/install-guide/en/latest/

[BUG] salt-cloud delete cannot find instance in different AWS account #60734

Open tyhunt99 opened 3 years ago

tyhunt99 commented 3 years ago

Description

I created an instance in a different AWS account than my salt master using `salt-cloud -p aws-vp-docker-host dockertest`. When I try to delete that minion via `salt-cloud -d dockertest`, it fails to find the minion and exits with the following output: `No machines were found to be destroyed`.

Setup

log output

```
$ salt-cloud --log-level debug -d dockertest
[DEBUG ] Reading configuration from /etc/salt/cloud
[DEBUG ] Reading configuration from /etc/salt/master
[DEBUG ] Including configuration from '/etc/salt/master.d/gitfs.conf'
[DEBUG ] Reading configuration from /etc/salt/master.d/gitfs.conf
[DEBUG ] Including configuration from '/etc/salt/master.d/reactor.conf'
[DEBUG ] Reading configuration from /etc/salt/master.d/reactor.conf
[DEBUG ] Changed git to gitfs in master opts' fileserver_backend list
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: use1-salt01.ipa.prd.localnet.io
[DEBUG ] Missing configuration file: /etc/salt/cloud.providers
[DEBUG ] Including configuration from '/etc/salt/cloud.providers.d/aws_bv.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.providers.d/aws_bv.conf
[DEBUG ] Including configuration from '/etc/salt/cloud.providers.d/aws_vp.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.providers.d/aws_vp.conf
[DEBUG ] Missing configuration file: /etc/salt/cloud.profiles
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/bv.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/bv.conf
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/datadog.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/datadog.conf
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/docker.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/docker.conf
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/mmmsg.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/mmmsg.conf
[DEBUG ] Including configuration from '/etc/salt/cloud.profiles.d/vos_application.conf'
[DEBUG ] Reading configuration from /etc/salt/cloud.profiles.d/vos_application.conf
[DEBUG ] Using pkg_resources to load entry points
[DEBUG ] Override __grains__:
[DEBUG ] Configuration file path: /etc/salt/cloud
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[INFO ] salt-cloud starting
[DEBUG ] Using pkg_resources to load entry points
[DEBUG ] Using pkg_resources to load entry points
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: use1-salt01.ipa.prd.localnet.io
[DEBUG ] Using pkg_resources to load entry points
[DEBUG ] Marking 'base64_encode' as a jinja filter
[DEBUG ] Marking 'base64_decode' as a jinja filter
[DEBUG ] Marking 'md5' as a jinja filter
[DEBUG ] Marking 'sha1' as a jinja filter
[DEBUG ] Marking 'sha256' as a jinja filter
[DEBUG ] Marking 'sha512' as a jinja filter
[DEBUG ] Marking 'hmac' as a jinja filter
[DEBUG ] Marking 'hmac_compute' as a jinja filter
[DEBUG ] Marking 'random_hash' as a jinja filter
[DEBUG ] Marking 'rand_str' as a jinja filter
[DEBUG ] Marking 'file_hashsum' as a jinja filter
[DEBUG ] Marking 'http_query' as a jinja filter
[DEBUG ] Marking 'strftime' as a jinja filter
[DEBUG ] Marking 'date_format' as a jinja filter
[DEBUG ] Marking 'raise' as a jinja global
[DEBUG ] Marking 'match' as a jinja test
[DEBUG ] Marking 'equalto' as a jinja test
[DEBUG ] Marking 'skip' as a jinja filter
[DEBUG ] Marking 'sequence' as a jinja filter
[DEBUG ] Marking 'to_bool' as a jinja filter
[DEBUG ] Marking 'indent' as a jinja filter
[DEBUG ] Marking 'tojson' as a jinja filter
[DEBUG ] Marking 'quote' as a jinja filter
[DEBUG ] Marking 'regex_escape' as a jinja filter
[DEBUG ] Marking 'regex_search' as a jinja filter
[DEBUG ] Marking 'regex_match' as a jinja filter
[DEBUG ] Marking 'regex_replace' as a jinja filter
[DEBUG ] Marking 'uuid' as a jinja filter
[DEBUG ] Marking 'unique' as a jinja filter
[DEBUG ] Marking 'min' as a jinja filter
[DEBUG ] Marking 'max' as a jinja filter
[DEBUG ] Marking 'avg' as a jinja filter
[DEBUG ] Marking 'union' as a jinja filter
[DEBUG ] Marking 'intersect' as a jinja filter
[DEBUG ] Marking 'difference' as a jinja filter
[DEBUG ] Marking 'symmetric_difference' as a jinja filter
[DEBUG ] Marking 'method_call' as a jinja filter
[DEBUG ] Marking 'yaml_dquote' as a jinja filter
[DEBUG ] Marking 'yaml_squote' as a jinja filter
[DEBUG ] Marking 'yaml_encode' as a jinja filter
[DEBUG ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG ] LazyLoaded parallels.avail_locations
[DEBUG ] LazyLoaded proxmox.avail_sizes
[DEBUG ] Using pkg_resources to load entry points
[DEBUG ] Using pkg_resources to load entry points
[DEBUG ] Could not LazyLoad parallels.avail_sizes: 'parallels' __virtual__ returned False
[DEBUG ] LazyLoaded parallels.avail_locations
[DEBUG ] LazyLoaded proxmox.avail_sizes
[DEBUG ] Using AWS endpoint: ec2.us-east-1.amazonaws.com
[DEBUG ] Starting new HTTP connection (1): 169.254.169.254
[DEBUG ] http://169.254.169.254:80 "GET /latest/meta-data/iam/security-credentials/ HTTP/1.1" 200 4
[DEBUG ] Starting new HTTP connection (1): 169.254.169.254
[DEBUG ] http://169.254.169.254:80 "GET /latest/meta-data/iam/security-credentials/salt HTTP/1.1" 200 1310
[INFO ] Assuming the role: arn:aws:iam::561166904391:role/salt
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: use1-salt01.ipa.prd.localnet.io
[DEBUG ] Starting new HTTP connection (1): 169.254.169.254
[DEBUG ] http://169.254.169.254:80 "GET /latest/meta-data/iam/security-credentials/ HTTP/1.1" 200 4
[DEBUG ] Starting new HTTP connection (1): 169.254.169.254
[DEBUG ] http://169.254.169.254:80 "GET /latest/meta-data/iam/security-credentials/salt HTTP/1.1" 200 1310
[DEBUG ] Starting new HTTPS connection (1): sts.amazonaws.com
[DEBUG ] https://sts.amazonaws.com:443 "GET /?Action=AssumeRole&DurationSeconds=3600&Policy=%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Sid%22%3A%22Stmt1%22%2C%20%22Effect%22%3A%22Allow%22%2C%22Action%22%3A%22%2A%22%2C%22Resource%22%3A%22%2A%22%7D%5D%7D&RoleArn=arn%3Aaws%3Aiam%3A%3Annnnnnnnnn%3Arole%2Fsalt&RoleSessionName=use1-salt01.ipa.prd.localnet.io&Version=2011-06-15 HTTP/1.1" 200 899
[DEBUG ] AWS Request: https://ec2.us-east-1.amazonaws.com/?Action=DescribeInstances&Version=2014-10-01
[DEBUG ] Starting new HTTPS connection (1): ec2.us-east-1.amazonaws.com
[DEBUG ] https://ec2.us-east-1.amazonaws.com:443 "GET /?Action=DescribeInstances&Version=2014-10-01 HTTP/1.1" 200 None
[DEBUG ] AWS Response Status Code: 200
No machines were found to be destroyed
```
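The log shows the STS AssumeRole and the DescribeInstances calls both returning 200, yet no matching machine is reported. As a cross-check that the instance really does exist in the account behind `role_arn`, something like the following boto3 snippet can assume the same role by hand and look for the VM by its Name tag. This is only a diagnostic sketch, not part of salt; it assumes boto3 is installed and that salt-cloud tagged the instance with `Name=dockertest`, which I believe is the default:

```python
# Diagnostic sketch: assume the provider's role_arn by hand and look for the
# instance that salt-cloud -d cannot find. The role ARN, region and VM name
# are taken from this report; boto3 must be installed.
import boto3

ROLE_ARN = "arn:aws:iam::xxxxxxxxxx:role/salt"  # role_arn from the aws-vp-ec2 provider
REGION = "us-east-1"
VM_NAME = "dockertest"

# Assume the cross-account role, as salt-cloud does when creating the VM
creds = boto3.client("sts").assume_role(
    RoleArn=ROLE_ARN, RoleSessionName="salt-cloud-debug"
)["Credentials"]

# DescribeInstances using the temporary credentials for the other account
ec2 = boto3.client(
    "ec2",
    region_name=REGION,
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Name", "Values": [VM_NAME]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```

If the instance shows up here but `salt-cloud -d` still reports `No machines were found to be destroyed`, that suggests the destroy path is querying the master's own account instead of re-assuming `role_arn`.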

`aws-vp-ec2` provider (the salt master instance is in a different AWS account than the `role_arn`):

```yaml
aws-vp-ec2:
  driver: ec2

  ssh_interface: private_ips

  ebs_optimized: True

  # AWS access keys
  id: 'use-instance-role-credentials'
  key: 'use-instance-role-credentials'
  role_arn: 'arn:aws:iam::xxxxxxxxxx:role/salt'

  # ssh config
  ssh_username: ubuntu
  private_key: /rootkey  
  keyname: devops

  # minion config
  minion:
    master: 172.30.0.153

  del_root_vol_on_destroy: True
  rename_on_destroy: True

  startup_states: highstate
```

Profile being used:

```yaml
aws-vp-docker-host:
  provider: aws-vp-ec2

  size: c5.2xlarge

  # ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20210720
  image: ami-06cdbd80022d89537

  subnetname: XXXXXXX

  block_device_mappings:
    # root device
    - DeviceName: /dev/sda1
      Ebs.VolumeSize: 250
      Ebs.VolumeType: gp2

  minion:
    grains:
      roles:
        - docker
```

Steps to Reproduce the behavior

  1. Have the salt master in AWS account nnnnnnnnnn
  2. Set up a provider with `role_arn` pointing to another AWS account, xxxxxxxxxx
  3. Provision a minion with salt-cloud using that provider
  4. Try to delete the minion with `salt-cloud -d` (see the commands below)
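Concretely, the reproduction comes down to the two commands from the description (profile and VM name as configured above):

```console
# create the instance in the other AWS account (role_arn is assumed for this call)
salt-cloud -p aws-vp-docker-host dockertest

# try to destroy it afterwards
salt-cloud -d dockertest
# -> No machines were found to be destroyed
```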

Expected behavior
The minion is found and deleted properly.

Screenshots
N/A

Versions Report

salt --versions-report (Provided by running salt --versions-report. Please also mention any differences in master/minion versions.)

```
Salt Version:
          Salt: 3003.2

Dependency Versions:
          cffi: Not Installed
      cherrypy: Not Installed
      dateutil: 2.6.1
     docker-py: Not Installed
         gitdb: 2.0.3
     gitpython: 2.1.8
        Jinja2: 2.10
       libgit2: Not Installed
      M2Crypto: Not Installed
          Mako: Not Installed
       msgpack: 0.5.6
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     pycparser: Not Installed
      pycrypto: 2.6.1
  pycryptodome: 3.4.7
        pygit2: Not Installed
        Python: 3.6.9 (default, Jan 26 2021, 15:33:00)
  python-gnupg: 0.4.1
        PyYAML: 3.12
         PyZMQ: 17.1.2
         smmap: 2.0.3
       timelib: Not Installed
       Tornado: 4.5.3
           ZMQ: 4.2.5

System Versions:
          dist: ubuntu 18.04 Bionic Beaver
        locale: UTF-8
       machine: x86_64
       release: 5.4.0-1054-aws
        system: Linux
       version: Ubuntu 18.04 Bionic Beaver
```

Additional context

- xxxxxxxxxx refers to the other AWS account
- nnnnnnnnnn refers to the main AWS account where the salt master lives

I also tried specifying the profile from the original salt-cloud call with `salt-cloud -p aws-vp-docker-host -d dockertest`, but that did not work either; I believe most other flags are ignored when `-d` is specified.
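One more data point that might help narrow this down: salt-cloud's query output should show what the destroy path can actually see. This is only a guess at a diagnostic, not verified; if the assume-role credentials are skipped on the query path too, dockertest presumably will not appear under the aws-vp-ec2 provider either:

```console
salt-cloud -Q    # short query across all configured providers
salt-cloud -F    # full query with instance details
```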

I also have the salt-cloud grains enabled, and the minion reports this:

```
$ salt dockertest grains.get salt-cloud
dockertest:
    ----------
    driver:
        ec2
    profile:
        aws-vp-docker-host
    provider:
        aws-vp-ec2:ec2
```

tyhunt99 commented 3 years ago

I was wondering if there were any updates on this. It is making the management of instances in AWS rather tedious. Thanks.

tyhunt99 commented 3 years ago

I have also hit this issue when using a map file: salt-cloud does not check the correct AWS account for the presence of the servers defined in the specified map file.
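For reference, the map file in question uses the usual salt-cloud map layout, a profile name mapping to VM names (the file path and names here are illustrative, not my real map):

```yaml
# /etc/salt/cloud.map (illustrative)
aws-vp-docker-host:
  - dockertest
```

Applying it with `salt-cloud -m /etc/salt/cloud.map -P`, or destroying with `salt-cloud -d -m /etc/salt/cloud.map`, runs into the same problem: the existence check looks at the master's own account rather than the one behind `role_arn`.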

HerHde commented 2 years ago

I suspect this is the case with other providers in general, as I seem to have the same problem with different providers/credentials for Hetzner Cloud.