larsks opened this issue 2 years ago
@larsks, just curious if this pattern, as defined at https://www.vagrantup.com/docs/provisioning/ansible, works for you:
Vagrant.configure("2") do |config|
  # Vagrant 1.7+ automatically inserts a different
  # insecure keypair for each new VM created. The easiest way
  # to use the same keypair for all the machines is to disable
  # this feature and rely on the legacy insecure key.
  # config.ssh.insert_key = false
  #
  # Note:
  # As of Vagrant 1.7.3, it is no longer necessary to disable
  # the keypair creation when using the auto-generated inventory.

  N = 3
  (1..N).each do |machine_id|
    config.vm.define "machine#{machine_id}" do |machine|
      machine.vm.hostname = "machine#{machine_id}"
      machine.vm.network "private_network", ip: "192.168.77.#{20 + machine_id}"

      # Only run the Ansible provisioner once,
      # when all the machines are up and ready.
      if machine_id == N
        machine.vm.provision :ansible do |ansible|
          # Disable the default limit to connect to all the machines.
          ansible.limit = "all"
          ansible.playbook = "playbook.yml"
        end
      end
    end
  end
end
The way this pattern is supposed to work is that it defers provisioning until all the machines are running, then runs the playbook once against all of them.
This used to work fine for me, but it recently stopped working; I suspect a bug in the latest version of Vagrant.
@dkinzer that seems to work for me (using Vagrant 2.2.16 with libvirt), and would work around the race condition with collections support.
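To make the race concrete: if each machine carries its own Ansible provisioner with a galaxy requirements file, a parallel vagrant up kicks off several concurrent ansible-galaxy installs into the same destination. A rough sketch of that shape (requirements.yml and playbook.yml are illustrative names, not taken from this issue):

  Vagrant.configure("2") do |config|
    (1..3).each do |machine_id|
      config.vm.define "machine#{machine_id}" do |machine|
        # Every machine has its own provisioner, so a parallel
        # `vagrant up` shells out to `ansible-galaxy` from several
        # machines at roughly the same time.
        machine.vm.provision :ansible do |ansible|
          ansible.playbook = "playbook.yml"
          # Installs roles/collections before the play runs; concurrent
          # installs into the same path can collide.
          ansible.galaxy_role_file = "requirements.yml"
        end
      end
    end
  end

With the pattern quoted above, only the last machine carries the provisioner, so ansible-galaxy runs exactly once.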
Vagrant version
Host operating system
Fedora 35
Guest operating system
Fedora 36
Vagrantfile
Galaxy role file
Ansible playbook
Debug output
Attached to this issue: vagrant.log.txt
Expected behavior
Vagrant should successfully provision the two hosts
Actual behavior
Vagrant fails with:
Running vagrant up --no-parallel avoids this error (but can take substantially longer with more hosts and more complex playbooks).
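If serializing the whole bring-up is too slow, one alternative sketch (assuming the collision really is between concurrent ansible-galaxy runs) is to drop galaxy_role_file from the provisioner and install the requirements once on the host before vagrant up, e.g. with ansible-galaxy collection install -r requirements.yml, so there is nothing left to race on:

  Vagrant.configure("2") do |config|
    config.vm.provision :ansible do |ansible|
      ansible.playbook = "playbook.yml"
      # No ansible.galaxy_role_file here: roles/collections are
      # pre-installed on the host, so the machines never shell out
      # to ansible-galaxy during a parallel vagrant up.
    end
  end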