We've hit the memory limit on our local machines and I am looking into a sustainable way to run a multi-provider development environment in Vagrant where we have 1-3 VMs locally and one node on EC2 (per developer).
The problem is not workstation => EC2 communication, but communication between the local VMs and EC2. Obviously the EC2 instance will not be on the local network. I'm trying to figure out how services and tests running on the other local VMs can learn about and route to this node.
I'm curious how best to accomplish this. Parameters:
Our Vagrantfile is checked into source control and shared.
Each user has their own AWS IAM account with permissions to a shared AWS account.
We are okay with an "online only" development environment.
We are committed to a hybrid environment (local VMs plus one EC2 instance per dev). We would prefer everything to run locally, but that ship has sailed.
Some options we considered:
Elastic IPs + DNS - Not a great use case for Elastic IPs.
Dynamic IPs + VPN + assign DNS as part of the Vagrantfile - Doesn't seem to be supported by vagrant-aws.
Run DNS locally so all boxes can resolve it, and add a DNS entry with the dynamically assigned IP on box up - Doesn't seem to be supported by vagrant-dns.
Service discovery running locally - How would the agent on the box dial back to each developer's local environment to register?
Service discovery running on AWS - If we could have the node register with Consul on each vagrant up as #{user}.remote_box.domain_name, it would be easy enough to configure the VMs to use that Consul cluster as a resolver and use the individualized FQDNs to route to each developer's box.
Centralized service-discovery may be the route we go. I'm curious how we would add the register/deregister step to a vagrant up for this node, and/or if there is an easier way that I haven't thought of?
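For the register/deregister step, here is a hedged sketch of how it might be wired up. It assumes Vagrant's built-in triggers (Vagrant 2.1+; on older Vagrant the vagrant-triggers plugin offers similar hooks), a Consul server reachable at consul.internal:8500 (a made-up address), and the EC2 instance metadata service for discovering the node's own public IP:

```ruby
# Sketch only: consul.internal is an assumed Consul address, and `user` is the
# Etc.getlogin value already set at the top of the Vagrantfile.
config.vm.define "hdp" do |hdp|
  hdp.trigger.after :up do |trigger|
    trigger.name = "consul-register"
    # Runs on the EC2 node: fetch our public IP from instance metadata, then
    # register as #{user}.remote_box.domain_name in the Consul catalog.
    trigger.run_remote = { inline: <<-SH }
      PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
      curl -s -X PUT http://consul.internal:8500/v1/catalog/register -d '{"Node": "#{user}.remote_box.domain_name", "Address": "'"$PUBLIC_IP"'"}'
    SH
  end

  hdp.trigger.before :destroy do |trigger|
    trigger.name = "consul-deregister"
    # Remove the catalog entry before the instance is torn down.
    trigger.run_remote = { inline: <<-SH }
      curl -s -X PUT http://consul.internal:8500/v1/catalog/deregister -d '{"Node": "#{user}.remote_box.domain_name"}'
    SH
  end
end
```

Registering via run_remote (on the node itself) avoids having to discover the public IP from the workstation first, since the metadata service already knows it.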
Tackling and hosting service discovery right now just seems like a lot of overhead to solve this one problem. I would prefer some method where vagrant-aws could just return the external IP as a variable that could then be configured into the other nodes. Is something like that available?
Sample Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.require_version ">= 1.5.0"

require 'etc'
user = Etc.getlogin

# A combo of the analytics box, the database server, and the cleanroom matching server
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "matcher" do |matcher|
    matcher.vm.hostname = "match.local"
    matcher.vm.box = "centos-6.5"
    matcher.vm.network "private_network", ip: "33.33.33.12"
    matcher.vm.network "forwarded_port", guest: 27017, host: 2700, auto_correct: true # Mongo
    matcher.vm.network "forwarded_port", id: "ssh", guest: 22, host: 2202 # Override SSH port.
    matcher.ssh.insert_key = false

    # NFS mounts cannot be owned by any user (adsummos) that doesn't exist on the base image.
    matcher.vm.synced_folder "../../../O2O", "/home/adsummos/analytics", type: "nfs", create: true
    matcher.vm.synced_folder "../../../chef-repo", "/var/chef-repo", type: "nfs", create: true

    matcher.vm.provider :virtualbox do |virtualbox|
      virtualbox.customize ["modifyvm", :id, "--memory", 1280]
      virtualbox.customize ["modifyvm", :id, "--name", "CleanroomMatchServer"] # Sets VM name in VirtualBox.
    end
  end

  config.vm.define "hdp" do |hdp|
    hdp.vm.box = "hdp_2.3_box"
    hdp.vm.provider :aws do |aws, override|
      aws.ami = "ami-57cd8732"            # Stock CentOS 6 with HVM
      aws.instance_type = "t2.large"
      aws.subnet_id = "subnet-aebab1da"   # Public subnet
      aws.security_groups = "sg-b69836d0" # SG name = "development_HDP_VMs"
      aws.keypair_name = "korrelate2012"  # Assign the default keypair for SSH.

      # Configure user AWS keys in .profile to be exported as ENV variables.
      aws.access_key_id = ENV['AWS_ACCESS_KEY']
      aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']

      # Tag each instance with the developer's name.
      aws.tags = {
        'Name' => "dev_HDP_#{user}"
      }

      # Must have the private component of the korrelate2012 key pair in the .ssh dir.
      override.ssh.username = "ec2-user"
      override.ssh.private_key_path = "~/.ssh/korrelate2012.pem"
    end
  end
end
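However the public IP ends up being obtained (CLI lookup, Consul, or otherwise), the minimal way to make the node routable from the local VMs is a hosts entry pushed by a shell provisioner. A sketch, where hdp_ip and the "hdp.remote" hostname are placeholder assumptions, and the EC2 security group is assumed to already allow traffic from each developer's egress IP:

```ruby
# Sketch: hdp_ip is assumed to hold the EC2 node's public IP (however
# obtained); "hdp.remote" is a made-up hostname for the node.
hdp_ip = "203.0.113.10" # placeholder (TEST-NET-3 documentation address)

config.vm.define "matcher" do |matcher|
  # Make the EC2 node resolvable from inside this VM. Runs on every up so a
  # fresh dynamic IP is picked up; only appends if the entry is missing.
  matcher.vm.provision "shell", run: "always", inline: <<-SH
    grep -q "hdp.remote" /etc/hosts || echo "#{hdp_ip} hdp.remote" >> /etc/hosts
  SH
end
```

The grep guard keeps repeated `vagrant up` / `vagrant provision` runs from stacking duplicate entries, though a stale IP would still need the old line removed first.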
Regards, Joe Reid