mitchellh / vagrant-aws

Use Vagrant to manage your EC2 and VPC instances.
MIT License

Vagrant fails when removing from chef server #444

Open · karnold opened this issue 8 years ago

karnold commented 8 years ago

I can successfully run `vagrant up` and `vagrant provision` on a machine without issue. However, when I run `vagrant destroy` I get the following error:

==> default: Terminating the instance...
==> default: Running cleanup tasks for 'chef_client' provisioner...
==> default: Deleting node "test-www1" from Chef server...
The provider for this Vagrant-managed machine is reporting that it
is not yet ready for SSH. Depending on your provider this can carry
different meanings. Make sure your machine is created and running and
try again. Additionally, check the output of `vagrant status` to verify
that the machine is in the state that you expect. If you continue to
get this error message, please view the documentation for the provider
you're using.

Note that I am provisioning this via Amazon AWS. The AWS instance is destroyed as expected; however, the node is not removed from the Chef server.

Here is my Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

hostname = "test-www1"

Vagrant.configure(2) do |config|

  config.vm.box = "dummy"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id = ENV['AWS_ACCESS_KEY']
    aws.secret_access_key = ENV['AWS_SECRET_KEY']
    aws.region = "us-west-2"
    aws.monitoring = true
    aws.keypair_name = "my_keypair"
    aws.ami = "ami-47465826"

    # Tag the instance with its hostname.
    aws.tags = {
      'Name' => hostname
    }

    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "~/.ssh/sfmoma-aws.pem"
  end

  # Provision with Chef Client against the Chef server
  config.vm.provision "chef_client" do |chef|
    #chef.log_level = :debug
    chef.chef_server_url = "https://path-to-chef-server"
    chef.validation_key_path = "/usr/lib/ssl/certs/sfmoma-validator.pem"
    chef.validation_client_name = "sfmoma-validator"

    chef.node_name = hostname
    chef.delete_node = true
    chef.delete_client = true

    chef.environment = "sfmoma-web"
    chef.add_role "sfmoma_web_general"
    chef.add_recipe "sfmoma_web::www"
  end
end
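
For reference, `chef.delete_node` and `chef.delete_client` are what should make `vagrant destroy` remove the node and client objects from the Chef server. Since that cleanup step is being skipped, the manual equivalent is roughly the sketch below: a small Ruby script shelling out to `knife` (this assumes a workstation with knife configured against the same Chef server; the node name is the one set in the Vagrantfile above).

#!/usr/bin/env ruby
# Manual fallback: remove the orphaned node and client from the Chef server
# after vagrant-aws has already terminated the instance.
node_name = "test-www1"

%w[node client].each do |object|
  # Equivalent to: knife node delete test-www1 -y  /  knife client delete test-www1 -y
  ok = system("knife", object, "delete", node_name, "-y")
  warn "Failed to delete #{object} #{node_name} from the Chef server" unless ok
end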
Sharpie commented 8 years ago

This is probably happening because the aws provider runs provisioner cleanup after the instance has been terminated (at least, as of v0.7.0):

https://github.com/mitchellh/vagrant-aws/blob/v0.7.0/lib/vagrant-aws/action.rb#L56

If the Chef cleanup routine is trying to read state from the agent, it's out of luck, because by that point everything is already gone.
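
Paraphrasing that `action_destroy` chain (simplified, not a verbatim copy of the file), the ordering is roughly the following, which is why the `chef_client` cleanup has no machine left to SSH into:

# Simplified paraphrase of vagrant-aws v0.7.0 action_destroy (not verbatim):
# the EC2 instance is terminated first, and only afterwards does the
# provisioner cleanup middleware run.
def self.action_destroy
  Vagrant::Action::Builder.new.tap do |b|
    b.use Call, DestroyConfirm do |env, b2|
      if env[:result]
        b2.use ConfigValidate
        b2.use TerminateInstance     # instance is gone here...
        b2.use ProvisionerCleanup    # ...then Chef node deletion is attempted
      else
        b2.use MessageWillNotDestroy
      end
    end
  end
end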

In contrast, core Vagrant switched to running cleanup actions for VirtualBox et al. before termination in v1.8.0:

https://github.com/mitchellh/vagrant/blob/master/CHANGELOG.md#180-december-21-2015

This also appears to be when the Chef provisioner was changed to read state from the agent.
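
Putting the two together, inside the same destroy builder shown above the two middleware lines would need to be reordered along these lines (a rough sketch of the 1.8.0-style ordering used by core providers, not the actual patch):

# Rough sketch, not the actual patch: run provisioner cleanup while the
# instance still exists and SSH still works, then terminate it.
b2.use ConfigValidate
b2.use ProvisionerCleanup, :before   # delete the Chef node/client while SSH still works
b2.use TerminateInstance             # only then tear down the EC2 instance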

Sharpie commented 8 years ago

Looks like PR #452 may address this.