Closed: itsjohncs closed this issue 10 years ago.
Automating the initial installation of the CentOS 6 minimal image can be done with Red Hat's kickstart feature. I found a repo that uses Packer to set up a CentOS 6 minimal install here, but it's not licensed, so I can only really glance through it to get ideas.
I'll want to make sure to give OpenVZ its own partition since the VMs won't be running in RAM in this VM (mostly because my laptop can't handle that load).
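For reference, a minimal sketch of how a Packer `virtualbox-iso` builder could hand a kickstart file to the CentOS installer over Packer's built-in HTTP server. The ISO URL, credentials, and `ks.cfg` name are placeholders, and the separate OpenVZ partition would be declared inside the kickstart file itself (e.g. an extra `part` line), not in this template:

```json
{
    "builders": [{
        "type": "virtualbox-iso",
        "guest_os_type": "RedHat_64",
        "iso_url": "http://example.com/CentOS-6.5-x86_64-minimal.iso",
        "iso_checksum_type": "none",
        "http_directory": "http",
        "boot_command": [
            "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
        ],
        "ssh_username": "root",
        "ssh_password": "password",
        "shutdown_command": "shutdown -P now"
    }]
}
```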
Running a Packer build right now for the first time to see if it's going to work. It's trying to bring up a CentOS minimal install. No OpenVZ provisioning has been added yet; that will be done with Ansible, so there shouldn't be anything needed besides pointing Packer at the Ansible playbook I already created.
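Assuming the `ansible-local` provisioner (which runs the playbook from inside the guest), pointing Packer at the playbook should just be one stanza; the playbook and role paths below are illustrative:

```json
{
    "provisioners": [{
        "type": "ansible-local",
        "playbook_file": "provisioning/site.yml",
        "role_paths": ["provisioning/roles"]
    }]
}
```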
I enabled headless mode in the Packer configuration, which is making it difficult to see what's happening inside the VM. Probably wasn't the best idea. I'll want to disable headless mode by default, I think, and only enable it during automatic builds (which I'm not creating right now, so I don't have to worry about them).
Disabling headless mode was insanely helpful...
Anyways, the default 30s timeout for the entire install to finish and bring up SSH appears to be too short (surprise, surprise).
5 minutes isn't long enough either; I'll probably want to try a timeout of 10 minutes. Additionally, I have it set up so that a yum update and certain packages are installed as part of the kickstart process, but for transparency I think that stuff should be handled by Ansible during the provisioning.
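Both the longer timeout and the headless toggle are single keys that would sit alongside the other `virtualbox-iso` builder settings, something like this (headless left off by default):

```json
{
    "headless": false,
    "ssh_wait_timeout": "10m"
}
```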
:basketball: time.
Got Packer to create a CentOS 6 minimal VirtualBox VM :grinning:. Works great; Packer is an impressive piece of software.
Need to hook up provisioning and the Vagrant post-processor now. I'll take this as an opportunity to organize my Ansible playbooks a bit better as well, since they're now being used by a couple of different tools (Vagrant and Packer).
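The Vagrant post-processor should only need an output box name; a sketch, with a name I'm guessing at:

```json
{
    "post-processors": [{
        "type": "vagrant",
        "output": "centos-6-openvz_{{.Provider}}.box"
    }]
}
```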
Was starting to set up the Ansible playbooks to use roles and get a little more organization that way. Was also making sure the roles were idempotent so we don't get any weird issues from reapplying the playbooks (which could happen when Vagrant provisions on top of the provisioning that Packer already did on the VM images).
I'd like to keep things super simple to begin with, assuming that MongoDB and Redis aren't going to have any replication or sharding for example. We can add in more features to the playbooks as needed in the future.
Packer's Ansible provisioner requires Ansible to be installed on the system already. To test the Ansible provisioning, the whole 15-minute setup process also needs to be done first :cry:.
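Since `ansible-local` runs the playbook from inside the guest, Ansible has to get onto the box before that provisioner runs; a shell step ahead of it could handle that. The EPEL URL and the assumption that the build connects as root are mine:

```json
{
    "provisioners": [
        {
            "type": "shell",
            "inline": [
                "rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm",
                "yum install -y ansible"
            ]
        },
        {
            "type": "ansible-local",
            "playbook_file": "provisioning/site.yml"
        }
    ]
}
```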
The `vagrantbox` role I added to Ansible does not do enough. It seems that I need to actually handle installing guest additions. There was also an error that `sudo` output saying that in order to use `sudo` we must be on a tty. That's probably me misconfiguring the sudoers file. Will have to investigate.
I'll also want to look around at examples of other people who have provisioned a Vagrant box using Ansible.
The error from `sudo` is likely explained by the configuration described in this SO post. I've modified the Ansible role to deal with this (it just overrides the default in the sudoers file).
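The actual fix lives in the Ansible role, but it boils down to flipping `requiretty` off in the sudoers file. Expressed as a Packer shell step it would amount to roughly this sketch, assuming the build connects as root (otherwise this would trip over the same `sudo` restriction it's trying to remove):

```json
{
    "type": "shell",
    "inline": [
        "sed -i 's/^Defaults.*requiretty/Defaults !requiretty/' /etc/sudoers"
    ]
}
```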
I've also added code to install guest additions on the box.
Testing it now. Something I think I will do after this test is complete is split the Packer build up into two parts: generation of an OVF, then a provisioning step on that OVF. This will make testing considerably simpler now and in the future. See this page in the Packer documentation for doing this.
Ansible failed with `msg: Destination directory /root/.ssh does not exist`.
Generating the base OVA file works now. Guest additions are installed and nothing else. Now making the OpenVZ image creator.
Packer can't seem to SSH onto the machine. This looks like a networking problem... http://mmckeen.net/blog looks like it has the solution.
```json
{
    "type": "virtualbox-ovf",
    "source_path": "./openSUSE_13.1_Packer_Base-1.0.0/openSUSE_13.1_Packer_Base.x86_64-1.0.0.ovf",
    "ssh_username": "root",
    "ssh_password": "",
    "ssh_wait_timeout": "2m",
    "vboxmanage": [
        ["modifyvm", "", "--nic1", "nat"]
    ],
    "shutdown_command": "shutdown -P now"
}
```
So I'm not sure why the networking isn't working. I think it's a failure on Packer's part, so I might have to dip into its internals to figure out what it's doing differently than what it was doing with the ISO builder.
So this was a straightforward problem once I tracked it down, though the solution is a little hacky. udev writes a file at `/etc/udev/rules.d/70-persistent-net.rules` with the MAC address of `eth0` when it's set up with the `virtualbox-iso` builder. The feature this supports is that when a new network device is brought up with a different MAC address, it gets a different interface name, so the names stay sticky. VirtualBox randomizes the MAC address of the device when it's imported, therefore sadness ensues. To fix this, I've linked the rules file to `/dev/null`, which seems to be the suggested way to disable this feature according to the udev developers.

I also remove the `HWADDR="bla"` setting in the network script for `eth0`.
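For reference, the two changes amount to roughly the following and could live in a shell provisioner (or the kickstart `%post`) of the base build; a sketch assuming the build connects as root, not necessarily where my fix actually lives:

```json
{
    "type": "shell",
    "inline": [
        "ln -sf /dev/null /etc/udev/rules.d/70-persistent-net.rules",
        "sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0"
    ]
}
```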
What's been happening now is that the guest additions don't seem to be getting installed correctly; investigating right now.
The kernel getting changed when we install OpenVZ is messing it up. I'll want to create an `ova-virtualboxadditions` configuration to combat this, which is something I was thinking of doing anyways. Going home now, been a long day.
This should be fairly straightforward: create a virtual machine from the latest CentOS 6 minimal release and provision it with the Ansible scripts I've already created so that it runs the OpenVZ kernel and the latest version of the VirtualBox Guest Additions.
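A sketch of how that `ova-virtualboxadditions` build might be structured, assuming the `virtualbox-ovf` builder's `guest_additions_mode`/`guest_additions_path` options are used to upload the additions ISO so the provisioning can rebuild the modules against the OpenVZ kernel. The source path and playbook name are illustrative, and rebuilding the additions also needs the matching kernel headers and gcc on the box, which the Ansible role would have to provide:

```json
{
    "builders": [{
        "type": "virtualbox-ovf",
        "source_path": "./output-openvz/centos-6-openvz.ovf",
        "ssh_username": "root",
        "ssh_password": "",
        "guest_additions_mode": "upload",
        "guest_additions_path": "/root/VBoxGuestAdditions.iso",
        "shutdown_command": "shutdown -P now"
    }],
    "provisioners": [{
        "type": "ansible-local",
        "playbook_file": "provisioning/vagrantbox.yml"
    }]
}
```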
The Vagrant boxes, with their outdated version of the guest additions, don't come back up correctly after being suspended: their shared directories are unmounted and won't come back with a reload, so a destroy-and-remake cycle has to be done in order to start working again. That's silly given that this is a simple thing to fix.