Steps to Reproduce:

1. vagrant init ubuntu/trusty64 && vagrant up
2. Create an inventory file hosts containing, e.g.

       vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200

3. Create a playbook test.yml:

       ---
       - hosts: localhost
         gather_facts: no
         tasks:
           - command: hostname
             delegate_to: vagrant

4. Run ansible-playbook -i hosts test.yml -vvv
Expected Results:

An SSH connection attempt to the Vagrant VM (which fails with a host key verification error, because I didn't bother setting up SSH keys for successful Vagrant auth):

$ ansible-playbook test.yml -vvv
PLAY [localhost] **************************************************************
TASK: [command hostname] ******************************************************
<127.0.0.1> ESTABLISH CONNECTION FOR USER: mg
<127.0.0.1> REMOTE_MODULE command hostname
<127.0.0.1> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/mg/.ansible/cp/ansible-ssh-%h-%p-%r" -o Port=2200 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 127.0.0.1 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1445236160.47-241359585522330 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1445236160.47-241359585522330 && echo $HOME/.ansible/tmp/ansible-tmp-1445236160.47-241359585522330'
The authenticity of host '[127.0.0.1]:2200 ([127.0.0.1]:2200)' can't be established.
ECDSA key fingerprint is 51:56:fb:c9:66:05:4f:1e:54:e0:ba:bb:c4:00:24:e9.
Are you sure you want to continue connecting (yes/no)? no
fatal: [localhost -> vagrant] => SSH Error: Host key verification failed.
while connecting to 127.0.0.1:2200
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/mg/test.retry
localhost : ok=0 changed=0 unreachable=1 failed=0
Actual Results:
1 plays in test.yml
PLAY ***************************************************************************
TASK [command] *****************************************************************
ESTABLISH LOCAL CONNECTION FOR USER: mg
127.0.0.1 EXEC (umask 22 && mkdir -p "$(echo $HOME/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791)" && echo "$(echo $HOME/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791)")
127.0.0.1 PUT /tmp/tmp1cYUgW TO /home/mg/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791/command
127.0.0.1 EXEC LANG=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/mg/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791/command; rm -rf "/home/mg/.ansible/tmp/ansible-tmp-1445236241.37-18813461032791/" > /dev/null 2>&1
changed: [localhost -> localhost] => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.010268", "end": "2015-10-19 09:30:41.436348", "rc": 0, "start": "2015-10-19 09:30:41.426080", "stderr": "", "stdout": "platonas", "stdout_lines": ["platonas"], "warnings": []}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0
(platonas is the hostname of my laptop)
@mgedmin this is happening because we see the host is localhost, and therefore reset the connection to local. If you add ansible_connection=ssh to the inventory vars for the vagrant host, things work as expected:
TASK [command] *****************************************************************
changed: [localhost] => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.002595", "end": "2015-10-20 02:20:00.874443", "rc": 0, "start": "2015-10-20 02:20:00.871848", "stderr": "", "stdout": "jimi", "stdout_lines": ["jimi"], "warnings": []}

TASK [command] *****************************************************************
changed: [localhost -> vagrant] => {"changed": true, "cmd": ["hostname"], "delta": "0:00:00.001528", "end": "2015-10-20 06:20:01.094318", "rc": 0, "start": "2015-10-20 06:20:01.092790", "stderr": "", "stdout": "precise64", "stdout_lines": ["precise64"], "warnings": []}
The first task runs without delegate_to; the second is exactly as you have it above, just to show there is a difference.
Really, I believe this behavior (always using the local connection method for localhost) is more correct than in 1.x, where Ansible would sometimes try to SSH to localhost (which typically failed).
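For reference, applying that workaround to the inventory line from the reproduction steps would look something like this (same host alias and port as above):

    vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_connection=ssh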
Note: the inventory file is generated dynamically by Vagrant's Ansible provisioner, since the port numbers change all the time. This makes it hard to apply the workaround (adding ansible_connection=ssh to the inventory file). It also increases the scope of the issue: anyone using Vagrant's Ansible provisioner is affected.
BTW this issue only affects delegation: when a Vagrant host is used as a regular target, Ansible uses SSH. This inconsistency bugs me.
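If hand-editing the generated inventory is impractical, the variable can be injected from the Vagrantfile instead; a minimal sketch, assuming a Vagrant version whose Ansible provisioner supports the host_vars option and a machine named default:

    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      config.vm.provision "ansible" do |ansible|
        ansible.playbook = "test.yml"
        # Ask the provisioner to append ansible_connection=ssh to the
        # inventory entry it generates for this machine, so Ansible 2
        # does not reset the 127.0.0.1 connection to 'local'.
        ansible.host_vars = {
          "default" => { "ansible_connection" => "ssh" }
        }
      end
    end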
I think if the logic is going to change, it would be nice if it took the port into account: delegate_to a host defined as an IP plus a port is pretty obviously not a localhost connection. This change will affect all multi-host Vagrant setups that use the NAT port for access.
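A rough sketch of that suggestion (the names are illustrative, not actual Ansible internals): treat a delegated host as local only when it is a loopback address and no non-default SSH port is configured.

    # Hypothetical port-aware locality check.
    LOCAL_ADDRESSES = ('localhost', '127.0.0.1', '::1')

    def is_really_local(host, port=None):
        # A loopback address with a forwarded port (e.g. Vagrant's NAT
        # forward at 127.0.0.1:2200) is a remote machine in disguise.
        return host in LOCAL_ADDRESSES and port in (None, 22)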
Per discussion, I think if any ansible_<connection>_* variable is set, we can safely assume that <connection> is what's wanted rather than local. I'll look at doing it that way, rather than the method used in #12834, which does not take inventory variables into account.
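A minimal sketch of that heuristic (a hypothetical helper, not the actual patch): scan the host's inventory variables for an ansible_<connection>_* prefix and prefer that connection plugin over the implicit local reset.

    def preferred_connection(host_vars, default='local'):
        # If e.g. ansible_ssh_host or ansible_ssh_port is set in the
        # inventory, assume the 'ssh' plugin is wanted rather than 'local'.
        for connection in ('ssh', 'paramiko', 'winrm'):
            prefix = 'ansible_%s_' % connection
            if any(name.startswith(prefix) for name in host_vars):
                return connection
        return default

With the inventory from the reproduction steps, ansible_ssh_host and ansible_ssh_port both match the ansible_ssh_ prefix, so the delegated task would go over SSH.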
Issue Type: Bug Report

Ansible Version:

Ansible Configuration:
and the inventory file has

Summary:
I have a role that sets up SSH-authenticated backup pushing between two hosts. One of the tasks creates a dedicated user:
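(The original task was not captured above; a hypothetical sketch of what such a delegated user-creation task might look like, with an invented user name:)

    - name: create a dedicated backup user on the backup buddy
      user:
        name: backup
        state: present
      delegate_to: "{{ backup_buddy }}"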
I'm testing this with a couple of Vagrant virtual machines, called trusty and precise. trusty is the target; precise is the value of {{ backup_buddy }}. Here's what Ansible v2 does:

Note how it's using a local connection and attempting to change stuff on my laptop, instead of SSHing into the Vagrant VM. This fails because sudo requires a password (thank you, sudo!), unlike in Vagrant.