rdickert / project-quicksilver

Single-command High-Performance Drupal/LEMP Deployment

ansible playbook seems to hang at DigitalOcean #2

Open MidGe48 opened 11 years ago

MidGe48 commented 11 years ago

To get past the ssh issues that I thought might be related to Vagrant, I tried running the playbook on its own against an Ubuntu Server 12.04 droplet at DigitalOcean.

It all starts as expected and then seems to hang:

    <192.241.xxx.xxx> ESTABLISH CONNECTION FOR USER: vagrant
    <192.241.xxx.xxx> EXEC ['ssh', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/tmp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=2222', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PasswordAuthentication=no', '-o', 'User=vagrant', '-o', 'ConnectTimeout=10', '192.241.xxx.xxx', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1371804536.13-251237740311235 && chmod a+rx $HOME/.ansible/tmp/ansible-1371804536.13-251237740311235 && echo $HOME/.ansible/tmp/ansible-1371804536.13-251237740311235'"]
    <192.241.xxx.xxx> REMOTE_MODULE apt name='python-software-properties' state=installed update_cache=yes CHECKMODE=True
    <192.241.xxx.xxx> PUT /tmp/tmpnavu5o TO /home/vagrant/.ansible/tmp/ansible-1371804536.13-251237740311235/apt
    <192.241.xxx.xxx> EXEC ['ssh', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/tmp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=2222', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PasswordAuthentication=no', '-o', 'User=vagrant', '-o', 'ConnectTimeout=10', '192.241.xxx.xxx', '/bin/sh -c \'sudo -k && sudo -H -S -p "[sudo via ansible, key=eobedjoxbbwuvbtdqqnxmcdlajbpaklb] password: " -u root /bin/sh -c \'"\'"\'/usr/bin/python /home/vagrant/.ansible/tmp/ansible-1371804536.13-251237740311235/apt; rm -rf /home/vagrant/.ansible/tmp/ansible-1371804536.13-251237740311235/ >/dev/null 2>&1\'"\'"\'\'']

On the droplet side, the connection worked and the traces of execution are there (i.e. the directory ansible-1371804536.13-251237740311235 exists).

I have also tried running it without fireball, but I get pretty much the same result, except that the first task is different and the hang occurs while running that first task, whatever it happens to be (updating the hostname to akita, in my case).

There are no errors anywhere to be found!

Any clues?

rdickert commented 11 years ago

One thing that looks problematic is that the output indicates that you are on port 2222. For a Digital Ocean server (not VirtualBox), you need to be on port 22. I don't know why it would hang instead of giving a helpful error, but that might be worth a look.
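If you end up running ansible-playbook directly against the droplet, one way to pin the port is in the inventory. A minimal sketch (the group name and IP are placeholders; ansible_ssh_port and ansible_ssh_user are standard Ansible inventory variables):

    [quicksilver]
    192.241.xxx.xxx ansible_ssh_port=22 ansible_ssh_user=vagrant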

Also, if you are setting up the droplet without Vagrant, you'll need to make sure the user vagrant is present (or modify the Project Quicksilver scripts to use the username you prefer) and that it has your public key (not the insecure Vagrant one) so it can log in without a password. For Vagrant, you might check that the Vagrantfile line override.ssh.private_key_path = '~/.ssh/id_rsa' points to your private key (yours might have a different path/filename than ~/.ssh/id_rsa, although this is probably often correct on Macs).
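If you do go without Vagrant, the manual setup on the droplet amounts to something like this sketch (run as root; the public key filename is a placeholder, and the passwordless sudo rule mirrors what the Vagrant base boxes ship with):

    # create the user and install your public key
    adduser --disabled-password --gecos "" vagrant
    mkdir -p /home/vagrant/.ssh
    cat your_key.pub >> /home/vagrant/.ssh/authorized_keys
    chown -R vagrant:vagrant /home/vagrant/.ssh
    chmod 700 /home/vagrant/.ssh
    chmod 600 /home/vagrant/.ssh/authorized_keys
    # allow passwordless sudo so ansible can escalate to root
    echo 'vagrant ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/vagrant
    chmod 440 /etc/sudoers.d/vagrant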

I just ran the script on Digital Ocean again and noticed some changes. My default Vagrantfile setup (with id & key filled in) threw an error because they seem to have changed their list of supported OSes, so the correct string for 12.04 is now provider.image = 'Ubuntu 12.04 x64' (without the word 'Server'). With that change, it did start the droplet, but it seemed to hang for me as well. Perhaps I was just too impatient, but I stopped the script and restarted it with vagrant provision (after getting the IP address of the droplet, which had been created despite the "hang"). From there, the script worked fine. This is a sample of one only, but it seems likely that once it has kicked off the droplet, vagrant provision and ansible-playbook configure-server.yml should both work.
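For anyone following along, a sketch of the relevant provider block (credentials and key path are placeholders; exact option names depend on your vagrant-digitalocean version):

    config.vm.provider :digital_ocean do |provider, override|
      override.ssh.username         = 'vagrant'
      override.ssh.private_key_path = '~/.ssh/id_rsa'
      provider.client_id            = 'YOUR_CLIENT_ID'
      provider.api_key              = 'YOUR_API_KEY'
      provider.image                = 'Ubuntu 12.04 x64'  # note: no trailing 'Server'
    end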

If the hang during droplet creation persists, it will need to be addressed upstream of this project, either in vagrant-digitalocean or in Vagrant itself. It may be that Digital Ocean changed something besides just the OS strings.

MidGe48 commented 11 years ago

Regarding port 2222: I changed the droplet's ssh daemon to listen on port 2222 as well, and I can ssh to the droplet correctly through that port.

Out of frustration, I destroyed my droplets and all their images and decided to start again. Meanwhile, I noticed that Ansible is working on a digital_ocean module, and that module is already in the code of the devel branch. So I am now thinking this may be even better than using Vagrant (one piece of software instead of two). Of course, your playbook is still the real value for Drupal + LEMP.

I am also wondering whether it would be better to use the DO API's numeric IDs, rather than strings, for the OS and other values, the same way Ansible does:

# Create a new Droplet
# Will return the droplet details including the droplet id (used for idempotence)
# size_id 33 = 512MB; region_id 3 = San Francisco 1; image_id 455844 = Ubuntu Server 64-bit
# (comments have to stay outside the folded '>' block, or they get passed to the module as arguments)

- digital_ocean: >
      state=present
      command=droplet
      name=mydropletname
      client_id=mydigitaloceanclientid
      api_key=mydigitaloceanapikey
      size_id=33
      region_id=3
      image_id=455844
      wait_timeout=500
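And if the devel-branch docs are right, registering the result would expose the new droplet's details for later tasks, something like this (attribute names assume the module docs):

    - digital_ocean: >
          state=present
          command=droplet
          name=mydropletname
          client_id=mydigitaloceanclientid
          api_key=mydigitaloceanapikey
          size_id=33
          region_id=3
          image_id=455844
          wait_timeout=500
      register: my_droplet

    # the registered result should carry the droplet's id and IP
    - debug: msg="ID is {{ my_droplet.droplet.id }}"
    - debug: msg="IP is {{ my_droplet.droplet.ip_address }}"

That would make it easy to feed the new IP into the rest of the play instead of looking it up by hand.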