ansible / ansible

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy and maintain. Automate everything from code deployment to network configuration to cloud management, in a language that approaches plain English, using SSH, with no agents to install on remote systems. https://docs.ansible.com.
https://www.ansible.com/
GNU General Public License v3.0

[v2] delegate_to runs task on local machine instead of Vagrant VM #12817

Closed: mgedmin closed this issue 9 years ago

mgedmin commented 9 years ago

Issue Type: Bug Report

Ansible Version:

ansible 2.0.0 (devel 1280e2296c) last updated 2015/10/19 08:38:39 (GMT +300)
lib/ansible/modules/core: (detached HEAD 5da7cf696c) last updated 2015/10/19 08:39:02 (GMT +300)
lib/ansible/modules/extras: (detached HEAD 632de528a0) last updated 2015/10/19 08:39:02 (GMT +300)

Ansible Configuration:

[defaults]
inventory = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
remote_user = vagrant
private_key_file = ~/.vagrant.d/insecure_private_key
host_key_checking = false
gathering = smart
fact_caching = jsonfile
fact_caching_connection = .cache/facts/
fact_caching_timeout = 86400

[privilege_escalation]
become = true

[ssh_connection]
ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null

and the inventory file has

trusty ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201
precise ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200

Summary:

I have a role that sets up SSH-authenticated backup pushing between two hosts. One of its tasks creates a dedicated user:

- name: user for accepting pushed backups on the backup buddy
  user: name="{{ backup_user }}" state=present
  delegate_to: "{{ backup_buddy }}"
  when: backup_buddy != ""
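For context, backup_buddy is just a per-host variable naming the backup peer. A minimal sketch of how it might be supplied for this test (a hypothetical host_vars file; the issue does not show where the variable is actually defined):

# host_vars/trusty.yml (hypothetical; any inventory- or play-level variable would do)
backup_buddy: precise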

I'm testing this with a couple of Vagrant virtual machines called trusty and precise. trusty is the target; precise is the value of {{ backup_buddy }}. Here's what Ansible v2 does:

TASK [backup-pusher : user for accepting pushed backups on the backup buddy] ***
ESTABLISH LOCAL CONNECTION FOR USER: vagrant
127.0.0.1 EXEC (umask 22 && mkdir -p "$(echo $HOME/.ansible/tmp/ansible-tmp-1445235090.23-187473370409589)" && echo "$(echo $HOME/.ansible/tmp/ansible-tmp-1445235090.23-187473370409589)")
127.0.0.1 PUT /tmp/tmp5c2bRG TO /home/mg/.ansible/tmp/ansible-tmp-1445235090.23-187473370409589/user
127.0.0.1 EXEC /bin/sh -c 'sudo -H -n -S -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-lpsktbokipyfwgtigsbpkqadldelsutb; LANG=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/mg/.ansible/tmp/ansible-tmp-1445235090.23-187473370409589/user; rm -rf "/home/mg/.ansible/tmp/ansible-tmp-1445235090.23-187473370409589/" > /dev/null 2>&1'"'"''
fatal: [trusty -> precise]: FAILED! => {"changed": false, "failed": true, "msg": "sudo: a password is required\n", "parsed": false}

Note how it's using a local connection and attempting to change stuff on my laptop instead of SSHing into the Vagrant VM. This fails because sudo on my laptop requires a password (thank you, sudo!), unlike on the Vagrant VMs.

mgedmin commented 9 years ago

Steps to Reproduce:

Inventory:

vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200

test.yml:

---
- hosts: localhost
  gather_facts: no
  tasks:
    - command: hostname
      delegate_to: vagrant

Expected Results:

(The SSH attempt below fails only because I didn't bother setting up SSH keys for successful Vagrant auth; the point is that Ansible should be SSHing to the VM.)

$ ansible-playbook test.yml -vvv

PLAY [localhost] ************************************************************** 

TASK: [command hostname] ****************************************************** 
<127.0.0.1> ESTABLISH CONNECTION FOR USER: mg
<127.0.0.1> REMOTE_MODULE command hostname
<127.0.0.1> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/mg/.ansible/cp/ansible-ssh-%h-%p-%r" -o Port=2200 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 127.0.0.1 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1445236160.47-241359585522330 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1445236160.47-241359585522330 && echo $HOME/.ansible/tmp/ansible-tmp-1445236160.47-241359585522330'
The authenticity of host '[127.0.0.1]:2200 ([127.0.0.1]:2200)' can't be established.
ECDSA key fingerprint is 51:56:fb:c9:66:05:4f:1e:54:e0:ba:bb:c4:00:24:e9.
Are you sure you want to continue connecting (yes/no)? no 
fatal: [localhost -> vagrant] => SSH Error: Host key verification failed.
    while connecting to 127.0.0.1:2200
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/home/mg/test.retry

localhost                  : ok=0    changed=0    unreachable=1    failed=0   

Actual Results:

1 plays in test.yml

PLAY ***************************************************************************

TASK [command] *****************************************************************
ESTABLISH LOCAL CONNECTION FOR USER: mg
127.0.0.1 EXEC (umask 22 && mkdir -p "$(echo $HOME/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791)" && echo "$(echo $HOME/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791)")
127.0.0.1 PUT /tmp/tmp1cYUgW TO /home/mg/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791/command
127.0.0.1 EXEC LANG=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/mg/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791/command; rm -rf "/home/mg/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791/" > /dev/null 2>&1
changed: [localhost -> localhost] => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.010268", "end": "2015-10-19 09:30:41.436348", "rc": 0, "start": "2015-10-19 09:30:41.426080", "stderr": "", "stdout": "platonas", "stdout_lines": ["platonas"], "warnings": []}

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0   

(platonas is the hostname of my laptop)

jimi-c commented 9 years ago

@mgedmin this is happening because we see the host is localhost, and therefore reset the connection to local. If you add ansible_connection=ssh to the inventory vars for the vagrant host, things work as expected:

TASK [command] *****************************************************************
changed: [localhost] => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.002595", "end": "2015-10-20 02:20:00.874443", "rc": 0, "start": "2015-10-20 02:20:00.871848", "stderr": "", "stdout": "jimi", "stdout_lines": ["jimi"], "warnings": []}
TASK [command] *****************************************************************
changed: [localhost -> vagrant] => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.001528", "end": "2015-10-20 06:20:01.094318", "rc": 0, "start": "2015-10-20 06:20:01.092790", "stderr": "", "stdout": "precise64", "stdout_lines": ["precise64"], "warnings": []}

The first task runs without delegate_to; the second is as you have it above, just to show there is a difference.
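For reference, the workaround amounts to one extra variable on each generated inventory line. A minimal sketch based on the inventory entries from the report above:

trusty ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_connection=ssh
precise ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_connection=ssh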

Really, I believe this behavior (always using the local connection method for localhost) is more correct than 1.x; before, it would sometimes try to SSH to localhost, which typically failed.

mgedmin commented 9 years ago

Note: the inventory file is generated dynamically by Vagrant's Ansible provisioner, since the port numbers change all the time. This makes it hard to apply the workaround (adding ansible_connection=ssh to the inventory file). It also increases the scope of the issue (anyone using Vagrant's Ansible provisioner is affected).
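One way to apply the variable without editing the generated file might be an inventory-level variable file alongside the playbook. A sketch, assuming every host in this inventory is a Vagrant VM reached over SSH (using group_vars is my own suggestion, not something proposed in the thread):

# group_vars/all (hypothetical; forces the ssh connection plugin for all inventory hosts)
ansible_connection: ssh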

mgedmin commented 9 years ago

BTW this issue only affects delegation: when a Vagrant host is used as a regular target, Ansible uses SSH. This inconsistency bugs me.

halberom commented 9 years ago

I think if the logic is going to change, it would be nice if it took the port into account: a delegate_to host defined as an IP plus a forwarded port is pretty obviously not a localhost connection. This change will affect all multi-host Vagrant setups that use the NAT port for access.

jimi-c commented 9 years ago

Per discussion, I think if any ansible_<connection>_* variable is set, we can safely assume that <connection> is what's wanted rather than local. I'll look at doing it that way, rather than the method used in #12834, which does not take inventory variables into account.
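To illustrate the proposal as I read it (not a confirmed implementation detail): a Vagrant-generated entry already sets ansible_ssh_* variables, so under that heuristic it would select the ssh connection even without an explicit ansible_connection override:

# hypothetical behaviour under the proposed heuristic: ansible_ssh_* is set, so use ssh rather than local
trusty ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201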