skotzko opened this issue 10 years ago
Have you tested the 1.6.5 release? That has fixes for SSH in it and resolves a similar issue I had with 1.6.4.
Same situation, but no gem errors. It seems to ignore config values. I have a multi-machine environment, so my Vagrantfile is slightly different.
Here is the relevant section of the Vagrantfile (I use Nugrant to fill in config values, and I have already double-checked that none of them are null):
config.vm.define :aws_development do |aws_development_config|
  aws_development_config.vm.hostname = config.user.site.hostname

  if !Vagrant.has_plugin?('vagrant-aws')
    abort('ERROR: Please install vagrant-aws. Use: vagrant plugin install vagrant-aws')
  end

  aws_development_config.vm.provider :aws do |aws, override|
    override.vm.box = "dummy"
    override.vm.box_url = "https://vagrantcloud.com/dimroc/boxes/awsdummy"
    override.ssh.username = config.user.site.aws.username
    override.ssh.private_key_path = config.user.site.aws.private_key_path

    aws.access_key_id = config.user.aws.access_key_id
    aws.secret_access_key = config.user.aws.secret_access_key
    aws.region = config.user.site.aws.region

    if config.user.site.aws.has_key?(:ami) && !config.user.site.aws.ami.nil? && !config.user.site.aws.ami.empty?
      aws.ami = config.user.site.aws.ami
    end

    aws.ebs_optimized = config.user.site.aws.ebs_optimized
    aws.associate_public_ip = config.user.site.aws.associate_public_ip
    aws.instance_type = config.user.site.aws.instance_type
    aws.keypair_name = config.user.site.aws.keypair_name || config.user.site.nodename

    if config.user.site.aws.has_key?(:elb) && !config.user.site.aws.elb.nil? && !config.user.site.aws.elb.empty?
      aws.elb = config.user.site.aws.elb
    end

    if config.user.site.aws.has_key?(:security_groups) && !config.user.site.aws.security_groups.nil? && !config.user.site.aws.security_groups.empty?
      aws.security_groups = config.user.site.aws.security_groups
    end
  end

  aws_development_config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = ["./cookbooks", "./site-cookbooks"]
    chef.roles_path = "./roles"
    chef.data_bags_path = "./data_bags"
    chef.environments_path = "./environments"
    chef.add_role "service_machine"
    chef.add_role "web_server"
    chef.environment = "development"
    chef.node_name = config.user.site.nodename
    chef.json = {
      sphinx: { enabled: false },
      site: { repository: config.user.site.repository },
      rsync_local_site: false,
      media_disk: nil
    }
  end
end
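Aside: the repeated has_key? / nil? / empty? checks above could be collapsed into a small helper at the top of the Vagrantfile. A minimal sketch, assuming the Nugrant bag supports bracket access the same way it supports has_key? (present? is a name I made up):

# Hypothetical helper; assumes bag[key] works on the Nugrant config bag.
def present?(bag, key)
  bag.has_key?(key) && !bag[key].nil? && !bag[key].empty?
end

# The three optional settings then shrink to one line each:
aws.ami             = config.user.site.aws.ami             if present?(config.user.site.aws, :ami)
aws.elb             = config.user.site.aws.elb             if present?(config.user.site.aws, :elb)
aws.security_groups = config.user.site.aws.security_groups if present?(config.user.site.aws, :security_groups)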
BTW: Vagrant 1.6.5
OK, double-checked. It does not seem to work with a multi-machine Vagrantfile config. When I "flatten" the config down to a single machine (setting the box, etc. on the top-level config object), it works.
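For anyone else hitting this, roughly the flattened single-machine shape that works for me, with placeholder credential and region values (my real values come from Nugrant):

Vagrant.configure('2') do |config|
  config.vm.box     = "dummy"
  config.vm.box_url = "https://vagrantcloud.com/dimroc/boxes/awsdummy"

  config.vm.provider :aws do |aws, override|
    # Placeholder values for illustration only.
    aws.access_key_id     = ENV['AWS_ACCESS_KEY_ID']
    aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    aws.region            = 'us-east-1'
    override.ssh.username = 'ubuntu'
  end
end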
Same here. Multiple machines do not work.
Same for me with Vagrant 1.6.5, vagrant-aws plugin 0.5, and a single-machine config.
Same problem here on Vagrant 1.7. I can have a Vagrantfile that supports VMware and VirtualBox, but as soon as I add AWS support to the same file I get this error every time:

There are errors in the configuration of this machine. Please fix
the following errors and try again:

AWS Provider:
I even tried putting logic into the file to skip the AWS stuff, but the problem still remains:
if ENV['VAGRANT_DEFAULT_PROVIDER'] == 'aws'
  w.vm.provider :aws do |aws, override|
    aws.access_key_id = ENV['AWS_ACCESS_KEY_ID']
    aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    aws.keypair_name = ENV['AWS_SSH_KEY_ID']
    aws.region = ENV['AWS_DEFAULT_REGION']
    aws.instance_type = 'm3.medium'
    aws.security_groups = ['default']
    # Make sure to pick an Amazon Linux AMI that exists for the zone you
    # want to deploy it in. This one's for us-west-2:
    aws.ami = 'ami-3d50120d'
    aws.tags = {
      'Name' => 'Training_Workstation'
    }
    override.ssh.username = 'ubuntu'
    override.ssh.private_key_path = '~/.ec2/scarolan_chef.pem'
  end
  w.vm.box = "dummy"
else
  w.vm.box = "chef/ubuntu-14.04"
end
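One thing worth double-checking with a guard like this: vagrant up --provider=aws does not set VAGRANT_DEFAULT_PROVIDER (that variable is an input you export in the shell), so the aws branch only runs if the variable is actually set. A crude belt-and-braces variant that also inspects the command line:

# Sketch: also look at ARGV, since --provider=aws on the CLI does not
# populate VAGRANT_DEFAULT_PROVIDER. (Misses the '--provider aws' spelling.)
use_aws = ENV['VAGRANT_DEFAULT_PROVIDER'] == 'aws' || ARGV.include?('--provider=aws')

if use_aws
  # ... aws provider block as above ...
end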
I just ran into this issue today as well with Vagrant 1.7.2 and vagrant-aws 0.6.0, in a multi-VM setup with one VirtualBox VM and one AWS VM.
Still running into this all the time; constantly having to downgrade to 1.6.3. We continuously deliver boxes and this is forcing anyone trying to use them to downgrade.
I am running into this right now. I can't seem to use 1.7.4 on Ubuntu 14.04 because it won't evaluate the AWS provider at all, but earlier versions (which do see the AWS provider) won't evaluate the environment variables in a multi-machine config.
Multi-machine fails to use the aws_profile and insists on explicit keys (Vagrant 1.8.1):

AWS Provider:
* An access key ID must be specified via "access_key_id"
* A secret access key is required via "secret_access_key"
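For reference, roughly the shape that triggers it; the machine and profile names below are placeholders. As I understand the plugin's aws_profile option, it should pull both keys from ~/.aws/credentials, so no explicit access_key_id / secret_access_key ought to be needed:

config.vm.define :web do |web|
  web.vm.provider :aws do |aws, override|
    # Placeholder profile; expected to supply both keys from ~/.aws/credentials.
    aws.aws_profile = 'default'
    aws.region      = 'us-east-1'
  end
end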
I upgraded to Vagrant 1.6.4 yesterday, and a functioning Vagrantfile from 1.6.3 now breaks. Here is the console output raised when I run vagrant up --provider=aws. The errors at the top about access keys and vm.box are new (those values are present), as is the error at the bottom about the communicator. And here is the formerly functioning Vagrantfile (slightly redacted):