jantman / vagrant-r10k

UNSUPPORTED - SEEKING MAINTAINER - Vagrant middleware plugin to retrieve puppet modules using r10k.
MIT License

Multiple deploys with vmware_workstation provider #7

Closed: ghost closed this issue 9 years ago

ghost commented 9 years ago

Hi

Firstly, thanks for the plugin!

I recently switched to the vmware_workstation provider for Vagrant and I am having an issue with the plugin: r10k deploys multiple times during the up and destroy commands. It also runs during the ssh command. I haven't yet tested resume or suspend.

There is some output below and I posted a gist with the debug flag set during an up command.

This is probably a niche issue, but if you can spot anything useful in the output, please let me know.

$ vagrant up master
Bringing machine 'master' up with 'vmware_workstation' provider...
==> master: vagrant-r10k: Beginning r10k deploy of puppet modules into ~/code/puppet-dev/modules using ~/code/puppet-dev/puppet/Puppetfile
==> master: vagrant-r10k: Deploy finished
==> master: Verifying vmnet devices are healthy...
==> master: vagrant-r10k: Beginning r10k deploy of puppet modules into ~/code/puppet-dev/modules using ~/code/puppet-dev/puppet/Puppetfile
==> master: vagrant-r10k: Deploy finished
==> master: vagrant-r10k: Beginning r10k deploy of puppet modules into ~/code/puppet-dev/modules using ~/code/puppet-dev/puppet/Puppetfile
==> master: vagrant-r10k: Deploy finished
==> master: Preparing network adapters...
==> master: Starting the VMware VM...
==> master: Waiting for machine to boot. This may take a few minutes...
$ vagrant destroy master
==> master: vagrant-r10k: Beginning r10k deploy of puppet modules into ~/code/puppet-dev/modules using ~/code/puppet-dev/puppet/Puppetfile
==> master: vagrant-r10k: Deploy finished
    master: Are you sure you want to destroy the 'master' VM? [y/N] y
==> master: vagrant-r10k: Beginning r10k deploy of puppet modules into ~/code/puppet-dev/modules using ~/code/puppet-dev/puppet/Puppetfile
==> master: vagrant-r10k: Deploy finished
==> master: Stopping the VMware VM...
==> master: Deleting the VM...
==> master: Running cleanup tasks for 'shell' provisioner...
==> master: Running cleanup tasks for 'puppet' provisioner...
==> master: Running triggers after destroy...
==> master: Updating /etc/hosts file on active guest machines...
==> master: Updating /etc/hosts file on host machine (password may be required)...
$ vagrant ssh master
==> master: vagrant-r10k: Beginning r10k deploy of puppet modules into /home/scott/code/puppet-dev/modules using /home/scott/code/puppet-dev/puppet/Puppetfile
==> master: vagrant-r10k: Deploy finished
[vagrant@master ~]$ 
jantman commented 9 years ago

@wyrie Hmmm...I think I have a vague idea of what's going on here.

I believe this is really two issues.

  1. The plugin runs at both provision and config-validation time (see the hooks; there's a rough sketch of what I mean after this list), so the duplicate run on up is known. It's only supposed to run twice, though, not three times, so something definitely is wrong here. I'm embarrassed to say that I don't remember the reason for running twice, but I believe it was the only way I could find to have the plugin both fail properly during config validation and run before puppet in the provisioning phase. I could dig deeper into this, but I'd treat it more as a feature request than a bug. If you'd like, I'll open a separate issue for this part.
  2. The plugin running at ssh time is new to me; I've never seen it before and can't seem to reproduce it, though I also haven't used the VMware provider. Could you please post a debug log of that (either with VAGRANT_LOG=debug or just by running vagrant ssh --debug)? My guess is that the vmware_workstation provider runs an action hook during the ssh phase that VirtualBox doesn't. The action-hook part of Vagrant is pretty much undocumented, so a lot of this was based on examination of debug output...
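For reference, here's a rough sketch of the double hooking I mean, using the standard Vagrant 2.x plugin API. This is illustrative only, not the actual plugin.rb; the Deploy middleware below is just a placeholder:

```ruby
# Illustrative only -- not the real vagrant-r10k plugin.rb.
# (This runs inside a Vagrant plugin, so Vagrant itself is already loaded.)

# Placeholder middleware standing in for the real r10k deploy action.
class Deploy
  def initialize(app, _env)
    @app = app
  end

  def call(env)
    env[:ui].info 'vagrant-r10k: would run r10k deploy here'
    @app.call(env)
  end
end

class Plugin < Vagrant.plugin('2')
  name 'vagrant-r10k'

  # No action name given, so this hook applies to *every* action chain --
  # which is also why it can fire on commands like `vagrant ssh` or
  # `vagrant destroy` whose chains include ConfigValidate.
  action_hook(:vagrant_r10k_deploy) do |hook|
    # Run before config validation so a bad r10k config fails fast...
    hook.before(Vagrant::Action::Builtin::ConfigValidate, Deploy)
    # ...and again before provisioning so modules exist before puppet runs.
    hook.before(Vagrant::Action::Builtin::Provision, Deploy)
  end
end
```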

Just for clarity, could you please confirm the version of Vagrant you're running, along with your OS and Ruby version (and whether you're running Ruby via rvm, rbenv, etc.)?

ghost commented 9 years ago

@jantman Thanks for checking it out.

  1. I commented out plugin.rb line 22, which probably breaks the check for plugin settings in the Vagrantfile, but the plugin now runs like it should, i.e. once on up and resume, and not on ssh, destroy, etc. (a rough sketch of the change is at the end of this comment).

So I have a workaround for now. If you want to move this to a feature request, or if you need help testing in the future, I'm happy either way.

  2. Here is a gist for the ssh command.

OS: Ubuntu 14.04 64-bit; Vagrant version: 1.7.2; Ruby version: 2.0.0 (via apt)
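Concretely, the workaround amounts to something like this in plugin.rb (a rough sketch, not a literal diff, using the same placeholder Deploy name as in the sketch above; the only thing I actually changed was commenting out the ConfigValidate hook):

```ruby
# Sketch of the workaround: disable the hook that fires at config-validation
# time, keeping only the deploy that runs before the puppet provisioner.
action_hook(:vagrant_r10k_deploy) do |hook|
  # hook.before(Vagrant::Action::Builtin::ConfigValidate, Deploy)
  hook.before(Vagrant::Action::Builtin::Provision, Deploy)
end
```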

jantman commented 9 years ago

Ok, thanks so much. It might take me a few days to get to it, but I'm going to dig through the code and see if there's a cleaner/safer way of hooking in separately for the config validation and the actual r10k run.

After doing a little more research, it appears that each of the providers handles hooks slightly differently. This isn't really well documented, so I based my code on how the VirtualBox provider works. I'll go through that debug ssh output and see if I can find something more concrete.

jantman commented 9 years ago

@wyrie I assume that you posted that gist after commenting out the hook.before Vagrant::Action::Builtin::ConfigValidate on line 22 in plugin.rb, as it only shows one deploy?

Assuming so, I'm going to close this in favor of #9.

If your offer of testing still holds, yeah, I might need some assistance with that, as it seems I can't test with the vmware-workstation provider without actually buying both the provider plugin and VMware Workstation itself.

ghost commented 9 years ago

Yes, that's right. I tested it again and there are definitely multiple deploys on the ssh command.

Just let me know when you have a commit you want to test and I'll send you the debug output.

bkc1 commented 9 years ago

I am also seeing the multiple r10k deploy issue in my Vagrant/VirtualBox environment. r10k also deploys when running a 'vagrant destroy'.

jantman commented 9 years ago

TL;DR: Yeah, you're right. The fix should be quick; I'll try my best to work on it this weekend, but my attention has been pulled to other things lately.

Yes, I can confirm this is still happening. I guess I should be really embarrassed by the amount of time it's taken me to address this, but I've gotten sidetracked on other things, and also had serious problems trying to get working acceptance tests for this project (apparently that's still a... very new... area for Vagrant plugins).

I believe the fix for this should be relatively simple, and I'll do my best to get back up to speed on this and try something this weekend, but I'll need some help testing it.

HashiCorp was nice enough to grant me a limited license to the VMware Workstation provider (to be used exclusively for testing this plugin), and I already have Workstation itself, but it appears that the automated acceptance tests for this are going to have to be completely rewritten for each provider, AND I've been having trouble getting the VMware provider working inside a bundle/rspec install...

jantman commented 9 years ago

Like #8, this should be fixed by #9 / #23, which are now merged to master but not yet released. I'll ping back once I have a release cut; I'm having issues with VMware and the Vagrant plugin (I can't get automated tests to work for them), but I'll try to do a manual confirmation.