Open marcindulak opened 7 years ago
The first error is due to the storageattach command failing to find the IDE Controller. These controller names differ depending on the underlying VirtualBox machine (some boxes call it simply "IDE", others "IDE Controller", while others use "SATA" or "SATA Controller"). A way of enumerating and/or standardizing the attachment would be very nice.
The second error is most likely because the sdb.vdi was not deleted from VirtualBox. This sometimes happens when vagrant fails and is a real annoyance imho. vagrant destroy should be able to pick it up and destroy it on its own (not sure which party performs this, but the VirtualBox machine is destroyed by it, and at the same time the vdi disappears, so...). I think if storageattach fails, the vdi will just stay there and not be deleted.
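A quick sketch of checking this, using the sdb.vdi from this report; if the failed run left the disk registered in VirtualBox's media registry, closemedium deregisters it:
# does VirtualBox still list the disk from the failed run?
vboxmanage list hdds
# if sdb.vdi shows up, deregister it (use its UUID or path); --delete also removes the file
vboxmanage closemedium disk sdb.vdi --delete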
Edit: finding the name of the storage controller can be done with:
vboxmanage showvminfo boxname|grep "Storage Controller Name"
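For scripting it may be easier to parse the machine-readable output, e.g. (boxname again being the VM name as registered in VirtualBox):
# prints one storagecontrollernameN="..." entry per controller
vboxmanage showvminfo boxname --machinereadable | grep storagecontrollername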
I have the same problem, except I can reproduce it 100% of the time. Every time I "vagrant up" this one, it says the node1_disk1.vdi file already exists. It does not, but when it dies the vdi files have been created (both of them).
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  # Spin up the VMs and make sure they are updated.
  (1..3).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.box = "ubuntu/xenial64"
      node.vm.hostname = "node#{i}"
      node.vm.network "private_network", ip: "192.168.50.1#{i}"
      # Each VM needs to have two block devices, 1GB local storage each
      # for the purpose of this exercise, attached to the VM.
      disk1 = "./node#{i}_disk1.vdi"
      disk2 = "./node#{i}_disk2.vdi"
      node.vm.provider "virtualbox" do |vb|
        # If the disks don't exist, create them
        unless FileTest.exist?(disk1)
          vb.customize ['createhd', '--filename', disk1, '--variant', 'Fixed', '--size', 1 * 1024]
        end
        unless FileTest.exist?(disk2)
          vb.customize ['createhd', '--filename', disk2, '--variant', 'Fixed', '--size', 1 * 1024]
        end
        # Attach the drives to the SCSI controller
        vb.customize ['storageattach', :id, '--storagectl', 'SCSI', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', disk1]
        vb.customize ['storageattach', :id, '--storagectl', 'SCSI', '--port', 3, '--device', 0, '--type', 'hdd', '--medium', disk2]
      end
      # Start doing some real work.
      # If the block device won't mount then it needs an FS
      node.vm.provision "shell", inline: <<-SHELL
        sudo mkdir /mnt/persistent1
        sudo mkdir /mnt/persistent2
        if ! (sudo mount /dev/sdc /mnt/persistent1); then sudo mkfs.ext4 /dev/sdc; sudo mount /dev/sdc /mnt/persistent1; fi
        if ! (sudo mount /dev/sdd /mnt/persistent2); then sudo mkfs.ext4 /dev/sdd; sudo mount /dev/sdd /mnt/persistent2; fi
        # Update and upgrade
        sudo apt-get update
        sudo apt-get upgrade -y
        # Install Docker stable
        sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
        sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
        sudo apt-get update
        sudo apt-get install -y docker-ce
      SHELL
    end
  end
end
I see the same behavior tonight. The workaround that worked was to rename the file.
And leave a wayward .vdi of who knows what size just hanging there? Not much of a workaround if you ask me...
When this happens I just "vagrant destroy", delete the whole directory, assign a new name to the disk and retry; the second time succeeds. Not a very pleasant experience indeed @flybd5
+1, same on OSX with vagrant 2.0.1 and VirtualBox 5.2.4. Happens after using the reload plugin when starting the VM for the second time. Workaround is moving the working dir...
This bug is still pretty annoying. And it has been present since December 2016? Wow...
This seems to be a bug in VirtualBox. Still present in the latest Windows version at this time, 5.2.8. The error can be reproduced just by running the command vboxmanage.exe createhd --filename foo.vdi --size 10240. If you then delete that .vdi file and run the same command again, you get VERR_ALREADY_EXISTS.
To work around this, you can run vboxmanage.exe list hdds, which will give you a list of virtual hard disks along with their UUIDs, then select the problem disk's UUID and run...
vboxmanage.exe closemedium
After doing this you can then create the disk again using the createhd option.
Just to clarify the previous comment with an example -
PS C:\Users\ksvietme\Documents\Projects\vagrant\virtualbox\ceph> vboxmanage list hdds
UUID:           73296b3f-99e2-4384-929d-f68d9b2d1633
Parent UUID:    base
State:          locked write
Type:           normal (base)
Location:       C:\Users\ksvietme\VirtualBox VMs\CentosAdminSystem\CentosAdminSystem Clone-disk1.vdi
Storage format: VDI
Capacity:       35720 MBytes
Encryption:     disabled

UUID:           90b633b2-b672-445d-9a1b-36549c370785
Parent UUID:    base
State:          inaccessible
Type:           normal (base)
Location:       C:\Users\ksvietme\Documents\Projects\vagrant\virtualbox\centos\Disk-0.vdi
Storage format: VDI
Capacity:       512 MBytes
Encryption:     disabled

PS C:\Users\ksvietme\Documents\Projects\vagrant\virtualbox\ceph> vboxmanage closemedium disk 90b633b2-b672-445d-9a1b-36549c370785 --delete
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
And the "createhd" command is wrong - the correct syntax with newer VirtualBox on Windows is:
PS C:\Users\ksvietme\Documents\Projects\vagrant\virtualbox\MultiServer> vboxmanage.exe createmedium --filename foo.vdi --size 10240
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 1d5b7e7b-0cc1-4ca5-b879-4d70bf6fb184
I don't believe this is a Virtualbox issue.
Why?
I was able to successfully run the "createmedium" subcommand in the project directory
I deleted the .vagrant directory - no change
I have other projects using the exact same syntax for disk creation that work fine
Deleting all of the stale disks did not help (see previous comment) - so it's not a VBox state issue, unless VBox is storing state somewhere else we can't see with "list hdds".
Moving the Vagrantfile to a new folder resolved the issue.
Vagrant is storing this state somewhere. I thought it might be in .vagrant but deleting and recreating it didn't help. I dug through the vagrant.d directories and didn't see anything storing disk information.
Vagrant: 2.1.2 (current_release 1530046733); VBox: 5.2.12r122591
Perhaps a separate issue, but I'm wondering if "createhd" isn't deprecated now. There is no longer a "createhd" subcommand in the latest VirtualBox, and "createhd" and "createmedium" are interchangeable in the Vagrantfile. I am using createhd in my customizations because that is what all the examples out there use.
Just to clarify, I was specifically commenting on this being an issue with VirtualBox 5.2.8, which I think was the latest version at the time I posted. Possibly things have changed on the VirtualBox side with newer versions, but I haven't checked this in a while.
Vagrant4Windows can end up leaving a bit of cruft behind. I find that removing the .vagrant folder (Windows) fixes most situations. So does making sure you use the latest VBox, plugins, and extension pack.
Vagrant 2.2.3
$ vboxmanage -v
6.0.4r128413
Host: macOS Mojave, version 10.14.2
Same as above; the bug still exists there, and it exhausts me.
Probably there are still references in the ~/.config/VirtualBox/VirtualBox.xml. Instead of using vagrant destroy, delete the machine manually in VirtualBox. Then check the output of vboxmanage list hdds and remove any stale items with vboxmanage closemedium <UUID> --delete. Normally the problem should then be solved.
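Putting the steps above together, a sketch that cleans up every medium VirtualBox reports as inaccessible; review the list first, since --delete also removes the backing file when it still exists:
vboxmanage list hdds \
  | awk -F': *' '/^UUID:/ {uuid=$2} /^State:.*inaccessible/ {print uuid}' \
  | while read -r uuid; do
      # deregister the stale medium; drop --delete if the backing file is already gone
      vboxmanage closemedium disk "$uuid" --delete
    done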
2023 and still the same.
Vagrant 2.3.4 Virtualbox 7.0
Same issue here
vagrant --version Vagrant 2.2.19
vboxmanage --version 6.1.38_Ubuntur153438
Deleting the disk or the VM from VirtualBox doesn't do the trick, but renaming the disk file in the Vagrantfile does.
Vagrant version
$ vagrant -v
Vagrant 1.9.1
$ vagrant plugin list
vagrant-share (1.1.6, system)
$ vboxmanage -v
5.1.10r112026
$ vboxmanage list extpacks
Extension Packs: 1
Pack no. 0:   Oracle VM VirtualBox Extension Pack
Version:      5.1.10
Revision:     112026
Host operating system
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
$ uname -a
Linux ubuntu 3.13.0-85-generic #129-Ubuntu SMP Thu Mar 17 20:50:15 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Guest operating system
https://atlas.hashicorp.com/centos/boxes/7/versions/1610.01
Vagrantfile
Vagrantfile available also at https://gist.githubusercontent.com/marcindulak/1b0ee3eda0bc94617023e85a62e1cac6/raw/ec8706ad136ddac8dc86adb90c30c0e76752a414/Vagrantfile
Debug output
Expected behavior
Actually I'm not sure, but I'm expecting that vagrant destroy -f should bring me back to the initial state.
Actual behavior
vagrant up followed by vagrant destroy -f and another vagrant up results in two different errors from these two vagrant up runs.
Steps to reproduce
As reported at https://github.com/mitchellh/vagrant/issues/8105, the first vagrant up results in the storageattach error (failing to find the "IDE Controller"). After vagrant destroy -f; rm -f sdb.vdi there is no sdb.vdi in the current directory and no VM under ~/VirtualBox VMs/t00_ideControllerProblem*, nevertheless the second vagrant up fails with VDI: cannot create image '*.vdi' (VERR_ALREADY_EXISTS). Note that this behavior is not deterministically reproducible, and happens more frequently when running all the vagrant commands from a script. You may need to run the script several times in order for it to happen. Inserting a sleep 10 before the second vagrant up makes the problem appear less frequently.
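A minimal sketch of such a script, using only the commands described above (sdb.vdi is the disk file created by the Vagrantfile from the gist):
#!/bin/sh
vagrant up
vagrant destroy -f
rm -f sdb.vdi
# sleep 10   # uncommenting this makes the failure appear less frequently
vagrant up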
References
The problem with VDI: cannot create image '*.vdi' (VERR_ALREADY_EXISTS) is common when attaching storage, and if the error becomes persistent (yes, it may happen) it is "solved" by changing the path to the storage file. The problem seems to disappear also when moving the whole vagrant project (Vagrantfile) to another directory (with mv).
http://stackoverflow.com/questions/36861101/vagrant-up-failing-when-calling-createhd-with-error-verr-already-exists-on-new-v
https://github.com/aidanns/vagrant-reload/issues/6
Another strange mention of this issue https://github.com/mitchellh/vagrant/issues/7743