frapposelli / vagrant-vcloud

Vagrant provider for VMware vCloud Director®
MIT License

vagrant-vcloud with windows guest #120

Closed dorheini closed 8 years ago

dorheini commented 8 years ago

Hi. I'm facing a weird issue deploying a Windows 7 guest VM with vagrant-vcloud. I have managed to create and configure the VM and it is being deployed. The following is the output from Vagrant:

==> Test: Building vApp...
==> Test: vApp Vagrant-user-ubuntu-5c4e3412 successfully created.
==> Test: Setting VM hardware...
==> Test: Powering on VM...
==> Test: Waiting for machine to boot. This may take a few minutes...
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
==> Test: Waiting for WinRM Access on {IP}:5985 ... 
    Test: WinRM address: {IP}:5985
    Test: WinRM username: vagrant
    Test: WinRM transport: plaintext
==> Test: Machine booted and ready!
==> Test: Waiting for SSH Access on {IP}:22 ... 

After the above line, it just keeps printing Test: Waiting for SSH Access on {IP}:22 ... indefinitely.
The Vagrantfile:

nodes = [
    {   :hostname => 'Test',
        :box      => 'Test',
        :box_url  => "{fileUrl}"
    }
]

Vagrant.configure('2') do |config|

  # vCloud Director provider settings
  config.vm.provider :vcloud do |vcloud|
    vcloud.hostname = '{url}'
    vcloud.username = '{user}'
    vcloud.password = '{password}'

    vcloud.org_name = '{orgName}'
    vcloud.vdc_name = '{vdcName}'
    vcloud.catalog_name = '{catalogName}'

    vcloud.network_bridge = false

    vcloud.vdc_network_name = '{networkName}'
  end
  nodes.each do |node1|
    config.vm.define node1[:hostname], autostart: false do |node1_config|
      node1_config.vm.box = node1[:box]
      node1_config.vm.box_url = node1[:box_url]

      # Use the WinRM communicator for the Windows guest instead of SSH
      node1_config.vm.communicator = "winrm"
      node1_config.winrm.username = "vagrant"
      node1_config.winrm.password = "vagrant"

      node1_config.vm.network :forwarded_port,
                              guest: 5985,
                              host: 5985,
                              id: "winrm",
                              auto_correct: true
    end
  end
end

Any idea?

StefanScherer commented 8 years ago

@dorheini the plugin uses rsync to add files to C:\vagrant, and therefore you also need SSH in the Windows guest.

If you don't need additional files from host to guest, you might try turning off the synced folder.
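
For example (untested sketch; this assumes the only share you need to disable is the default one that maps the project directory to /vagrant):

Vagrant.configure('2') do |config|
  # Disable the default synced folder so the plugin does not need rsync/SSH
  config.vm.synced_folder '.', '/vagrant', disabled: true
end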

As a first test, try a vagrant up --no-provision with a fresh vApp to see if it still needs SSH.

dorheini commented 8 years ago

@StefanScherer thanks for the quick reply. I tried your suggestion and here is what I get:

dor@ubuntu:~/Dev/vagrant$ sudo vagrant up --no-provision --provider=vcloud BBServer
[sudo] password for dor: 
Bringing machine 'BBServer' up with 'vcloud' provider...
==> BBServer: Building vApp...
==> BBServer: vApp Vagrant-dor-ubuntu-cb2dfd7d successfully created.
==> BBServer: Setting VM hardware...
==> BBServer: Powering on VM...
==> BBServer: Waiting for machine to boot. This may take a few minutes...
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
==> BBServer: Waiting for WinRM Access on {IP}:5985 ... 
    BBServer: WinRM address: {IP}:5985
    BBServer: WinRM username: vagrant
    BBServer: WinRM transport: plaintext
==> BBServer: Machine booted and ready!
==> BBServer: Waiting for SSH Access on {IP}:22 ... 
==> BBServer: Waiting for SSH Access on {IP}:22 ... 
==> BBServer: Waiting for SSH Access on {IP}:22 ... 
==> BBServer: Waiting for SSH Access on {IP}:22 ... 
==> BBServer: Waiting for SSH Access on {IP}:22 ... 
==> BBServer: Waiting for SSH Access on {IP}:22 ... 
==> BBServer: Waiting for SSH Access on {IP}:22 ... 

Just to be clear, I do need the ability to provision the Windows guest VM, so if it's a matter of customizing the guest VM to enable SSH on it (does Windows have SSH?), how can I do it? And why does it keep trying SSH to the guest VM even when --no-provision is passed to vagrant up?

After digging into action.rb and wait_for_communicator.rb and adding log entries to those files, I don't understand why, for a Windows VM, wait_for_communicator.rb is used twice (once for the WinRM communicator and once for the SSH communicator), while for a Linux guest it is used only once, for the SSH communicator.

Any advice?

StefanScherer commented 8 years ago

Probably the plugin doesn't use the latest Vagrant core features, so it does extra steps that could be handled by core.

I created some Windows boxes for vCloud a few months ago; my configs can be found in a separate branch at https://github.com/StefanScherer/packer-windows/tree/my_vagrant_vcloud

The differences from the upstream packer-windows templates are (as far as I can remember):

dorheini commented 8 years ago

@StefanScherer thanks again for the quick reply. For security reasons at my company, I can't use Packer or the Windows templates you mentioned. But after you mentioned OpenSSH, I managed to overcome the previous issue by installing OpenSSH on the Windows guest. Now I'm facing this:

==> BBServer: Warning! Folder sync disabled because the rsync binary is missing.
==> BBServer: Make sure rsync is installed and the binary can be found in the PATH.

Probably because the Windows guest doesn't have rsync yet. Can you please provide a valid link for an rsync.exe / service that I could download and add to the Windows guest?

Thanks!

StefanScherer commented 8 years ago

@dorheini You might use this script to install rsync.exe: https://github.com/StefanScherer/packer-windows/blob/my_vagrant_vcloud/scripts/rsync.bat

dorheini commented 8 years ago

@StefanScherer Thanks! It's trying to rsync now. I was also searching for the 'some kind of boot helper to work around an issue with the additional vCloud guest customization reboot' in https://github.com/StefanScherer/packer-windows/tree/my_vagrant_vcloud but I couldn't find it. Can you please also point me to this helper? It would be awesome if I could find a workaround for this auto reboot.

Thanks,

tsugliani commented 8 years ago

Thanks @StefanScherer for helping out!

I think this is the script that helps the auto reboot: https://github.com/StefanScherer/packer-windows/blob/my_vagrant_vcloud/scripts/delay-winrm-vcloud.bat

StefanScherer commented 8 years ago

Thanks @tsugliani that's the script. I probably should write a blog post about all these things :-)

StefanScherer commented 8 years ago

Plus https://github.com/StefanScherer/packer-windows/blob/my_vagrant_vcloud/scripts/enable-winrm-after-customization.bat, which will be appended as an auto-run script.

dorheini commented 8 years ago

@StefanScherer and @tsugliani thanks a lot for the support!! I have made serious progress customizing the Windows guest VM to meet Vagrant's requirements. Unfortunately I'm still facing a problem at the rsync step:

==> BBServer: Rsyncing folder: /home/dheinisc/Dev/vagrant/ => C:/vagrant
There was an error when attempting to rsync a share folder.
Please inspect the error message below for more info.

Host path: /home/dheinisc/Dev/vagrant/
Guest path: C:/vagrant
Error:     @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
22:11:91:3e:dc:fc:40:bd:15:82:ac:ae:bb:f1:b6:b9.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /root/.ssh/known_hosts:11
  remove with: ssh-keygen -f "/root/.ssh/known_hosts" -R 139.181.196.3
Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.

                            ****USAGE WARNING****

This is a private computer system. This computer system, including all
related equipment, networks, and network devices (specifically including
Internet access) are provided only for authorized use. This computer system
may be monitored for all lawful purposes, including to ensure that its use
is authorized, for management of the system, to facilitate protection against
unauthorized access, and to verify security procedures, survivability, and
operational security. Monitoring includes active attacks by authorized entities
to test or verify the security of this system. During monitoring, information
may be examined, recorded, copied and used for authorized purposes. All
information, including personal information, placed or sent over this system
may be monitored.

Use of this computer system, authorized or unauthorized, constitutes consent
to monitoring of this system. Unauthorized use may subject you to criminal
prosecution. Evidence of unauthorized use collected during monitoring may be
used for administrative, criminal, or other adverse action. Use of this system
constitutes consent to monitoring for these purposes.

Permission denied (publickey,password,keyboard-interactive).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.0]

Can you think of any other scripts / customizations that I need to run on the guest VM to solve this? I'm so close thanks to you guys!! :)

StefanScherer commented 8 years ago

But this helper depends on the original hostname of the VM in your catalog. All the packer-windows baseboxes have VAGRANT- in the hostname. The script opens the WinRM port after vCloud has renamed the VM and rebooted it. After that reboot, the vagrant-vcloud plugin is able to connect to WinRM for provisioning.

StefanScherer commented 8 years ago

@dorheini you're running into all the issues I hit while creating the packer-windows baseboxes ;-) I guess it has something to do with https://github.com/StefanScherer/packer-windows/blob/my_vagrant_vcloud/scripts/openssh.ps1#L48-L50 - newer versions of OpenSSH installed a key type that Vagrant didn't support. I don't know if this is still needed for Vagrant 1.7.4.

Otherwise, check your local ~/.ssh/known_hosts file and remove the entries with the IP address of your vCloud VM.

Probably this one:

ssh-keygen -R 139.181.196.3

dorheini commented 8 years ago

@StefanScherer I tried both of your suggestions (executing the https://github.com/StefanScherer/packer-windows/blob/my_vagrant_vcloud/scripts/openssh.ps1#L48-L50 PowerShell on the guest VM and re-creating the template for the guest VM, and also removing the generated SSH key for that host, and any other hosts, from the known_hosts file on the host machine where I run vagrant) and I get:

==> BBServer: Machine booted and ready!
==> BBServer: Rsyncing folder: /home/dheinisc/Dev/vagrant/ => /vagrant
There was an error when attempting to rsync a share folder.
Please inspect the error message below for more info.

Host path: /home/dheinisc/Dev/vagrant/
Guest path: /vagrant
Error:     @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
22:11:91:3e:dc:fc:40:bd:15:82:ac:ae:bb:f1:b6:b9.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /root/.ssh/known_hosts:17
  remove with: ssh-keygen -f "/root/.ssh/known_hosts" -R 139.181.196.11
Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.
Permission denied (publickey,password,keyboard-interactive).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]

Also, I tried removing the ~/.ssh/known_hosts file on the host, but I still get the above...

StefanScherer commented 8 years ago

@dorheini what I don't understand is that the known_hosts file is taken from /root:

Offending ECDSA key in /root/.ssh/known_hosts:17

I assume you connect from a Linux machine with vagrant + vagrant-vcloud installed.

Just out of curiosity: why does your company not allow building well-prepared Windows boxes for vCloud? You can generate them on-premises with such templates. As you are now digging through all the scripts, I think in the end you will understand the whole Packer build and can guarantee that there is no black magic inside ;-)

We have also used on-premises ISO files that we are allowed to use instead of the trial ISO from the internet. I once built an on-premises build pipeline for such boxes inside our vCloud. You might show it to the admins who provide you with your vCloud VM templates: https://github.com/StefanScherer/basebox-slave#basebox-slave

dorheini commented 8 years ago

@StefanScherer I'm not sure, but it might be taken from /root because I'm running vagrant up with sudo (otherwise it has no access to the .vagrant.d and .vagrant directories, for an unknown reason). Anyway, I'm not sure that sudo is the reason for the /root issue.

Regarding your question: the company has its own process for approving any tool that gets used internally. It's a bureaucracy issue, and to approve any outside tool (such as Packer, or operating system templates, which is even worse) I need to file a report, and it would probably take at least a few months to be approved, if it is approved at all. That is why we prefer to customize our own Windows guest instead of starting that process, since we need this ASAP and cannot wait that long. Anyway, I need to find a solution for this issue. It's very strange, because when I run vagrant up from the same host using the vCloud plugin with a CentOS guest VM, everything works... So I guess I'm missing something in the Windows guest VM.

dorheini commented 8 years ago

@StefanScherer I removed /root/.ssh/known_hosts and now I get:

==> BBServer: Machine booted and ready!
==> BBServer: Rsyncing folder: /home/dheinisc/Dev/vagrant/ => /vagrant
vagrant@139.181.196.5's password: 

Seems that it's working now!!! But it's asking for a password... Should it be picked up from somewhere? I need this process to be automatic.

StefanScherer commented 8 years ago

@dorheini I know such a situation very well. But what I have learnt from vagrant, packer, docker and such tools is that I will never again "repair" a VM to make it work. Sorry that introducing such tools is so hard, I've also learnt my lessons there. It feels like this: https://twitter.com/philipcotan/status/649727545655476224 Too busy to adopt improvements.

But back to your problem. It seems that the Vagrant public key is missing on the guest, which is what allows Vagrant to log in with its insecure private key.

https://github.com/StefanScherer/packer-windows/blob/my_vagrant_vcloud/scripts/vagrant-ssh.bat
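
Alternatively, if you'd rather install your own public key in the guest's authorized_keys instead of the well-known insecure Vagrant key, you can point Vagrant at the matching private key on the host (sketch only; the key path below is just an example):

Vagrant.configure('2') do |config|
  config.ssh.username = 'vagrant'
  # Example path - use the private key that matches the public key
  # you placed in the Windows guest's authorized_keys
  config.ssh.private_key_path = '~/.ssh/vcloud_vagrant_key'
end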

dorheini commented 8 years ago

@StefanScherer @tsugliani Thank you very much guys for all the help! It seems to be working now... Great job!!!