garrettr closed this issue 9 years ago
@garrettr, to reboot the VM, you have to use `vagrant reload` (which is basically the same as `vagrant halt && vagrant up`). Otherwise some of the Vagrant-specific configuration is lost, seemingly including that synced folder.
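To make the contrast concrete, here's a sketch of the two reboot paths (assuming a VirtualBox-backed box; exact synced-folder behavior varies by provider and folder type):

```shell
# Rebooting from inside the guest loses Vagrant's transient setup,
# including vagrant-cachier's synced cache folders:
#   vagrant ssh -c 'sudo shutdown -r now'   # cache mounts are gone afterwards
#
# Rebooting through Vagrant re-runs the synced-folder setup:
vagrant reload        # roughly: vagrant halt && vagrant up
```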
In general it is advisable to avoid needing to reboot the VM in the first place, and instead build a base box with the desired kernel version etc.
I'll close this as I don't think there is anything that vagrant-cachier could do. Please reopen if I'm missing something. =)
> In general it is advisable to avoid needing to reboot the VM in the first place, and instead build a base box with the desired kernel version etc.
It is useful to use Vagrant to test things like Ansible provisioning, which may cause VMs to restart, or unattended-upgrades configuration, which necessarily causes a restart (this is how we discovered this issue).
At the least, you should document that using vagrant-cachier breaks restarting the VM normally. We wasted several hours debugging this issue, which is always frustrating.
> There is nothing that vagrant-cachier could do
Depends on the approach taken by vagrant-cachier. The current approach appears to involve replacing state folders used by package managers with shared folders, so their contents are automatically persisted on the host across VM destroy/up. I do not understand the details well enough to know why this is incompatible with the VM being restarted without `vagrant reload`. It does not seem like it should be incompatible - is the shared folder not synced unless Vagrant is able to control shutdown?
Another approach could be to hook into the network requests made by apt and provide cached responses, like squid-deb-proxy. But that seems like a fundamentally different approach than the one currently taken by vagrant-cachier.
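For illustration, a minimal sketch of that proxy-style approach: instead of sharing apt's state directories, apt on the guest is pointed at a caching proxy on the host. The IP 10.0.2.2 (the VirtualBox NAT gateway) and port 8000 (squid-deb-proxy's default) are assumptions about the setup, and this is not how vagrant-cachier actually works:

```shell
# In a real guest this would be written to /etc/apt/apt.conf.d/01proxy;
# a temp file is used here purely for illustration.
APT_CONF=$(mktemp)
cat > "$APT_CONF" <<'EOF'
Acquire::http::Proxy "http://10.0.2.2:8000";
EOF
cat "$APT_CONF"
```

With this in place, apt's own state in `/var/lib/apt` is never replaced, so an in-guest reboot has nothing extra to break.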
> It is useful to use Vagrant to test things like Ansible provisioning, which may cause VMs to restart, or unattended-upgrades configuration, which necessarily causes a restart
Well, personally I don't like that my provisioners more or less randomly restart live servers either. =)
> At the least, you should document that using vagrant-cachier breaks restarting the VM normally. We wasted several hours debugging this issue, which is always frustrating.
I'm sorry for your frustration and wasted time. :/ Still, this is not vagrant-cachier's fault per se. Vagrant does a lot of configuration when it bootstraps a VM, and some of that is transient. vagrant-cachier just relies on synced folders to work, which IMHO is a fair assumption. And while, for example, the vboxsf devices themselves probably stay configured after the VM reboots, Vagrant does the mounting over SSH. The actual behavior also depends on the provider (VirtualBox, AWS, ...), synced folder type (vboxsf, NFS, rsync), and host/guest OS.
> Another approach could be to hook into the network requests made by apt and provide cached responses, like squid-deb-proxy.
For this kind of approach you could take a look at the vagrant-proxyconf plugin with a local caching proxy, like my polipo-box setup.
> For this kind of approach you could take a look at the vagrant-proxyconf plugin with a local caching proxy, like my polipo-box setup.
That's very helpful, thank you!
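For reference, a hedged sketch of what that setup might look like, using vagrant-proxyconf's `config.apt_proxy.http` option; the box name, host IP, and polipo's default port 8123 are assumptions to adjust for your environment:

```ruby
# Hypothetical Vagrantfile fragment.
# Requires the plugin on the host: vagrant plugin install vagrant-proxyconf
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"               # assumed box
  # Route apt through a caching proxy on the host
  # (10.0.2.2 is the VirtualBox NAT gateway; 8123 is polipo's default port):
  config.apt_proxy.http = "http://10.0.2.2:8123/"
end
```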
vagrant-cachier appears to be interfering with some of apt's state in `/var/lib/apt`, which leads to apt being unable to update after rebooting the VM.

Vagrantfile for POC
Steps to reproduce:

1. `vagrant up`
2. `vagrant ssh`
3. In the VM, `sudo apt-get update`
4. `sudo shutdown -r now`
5. Once the VM has rebooted, `vagrant ssh`, then:

This error breaks apt and prevents it from being able to update the package lists.
Here's the POC with `VAGRANT_LOG=DEBUG` for every command (it's quite verbose): https://gist.github.com/garrettr/d20812c77b6d7a9dc143

If you comment out the lines that configure vagrant-cachier in the Vagrantfile (16-18) and re-run the POC, `apt-get update` works without issue after reboot.