Closed. JasonGiedymin closed this issue 11 years ago.
I tried to disable NFS and then fall back to shared folders (which I'm not sure I got working right in my branch... too tired). See my 3.1.3 branch.
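One quick way to tell whether a share actually came up over NFS or fell back to a VirtualBox shared folder is to check the mount table inside the guest. A minimal sketch, assuming the stock Vagrant CLI and our mount point (add the machine name for multi-VM setups):

vagrant ssh -c "mount | grep -E 'nfs|vboxsf'"
# an NFS share shows up with type nfs, e.g. 10.0.2.2:/path ... type nfs
# a shared-folder fallback shows up with type vboxsf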
Once we move to OpenShift this won't matter, because a lot of this app won't apply to us. Most of the rake commands will actually interact with an API behind the box rather than interacting with Vagrant, Chef, or scripts that get shelled in, scp'd, or NFS'd. However, after we move to OpenShift I want to open source parts of the 'older' app and call it vagrant cluster or yojimbo. Before we do that, this bug should be fixed.
So I created a test project to try to get at the heart of the problem (keep in mind I don't have much knowledge about filesystems and such). Here's the project: https://github.com/jamiely/vagrant_nfs_test.
Note the README.md and that both VMs were able to mount the nfs_export directory. Looking at the fairly simple Vagrantfiles, do you have an idea of what we might be doing differently in the Holobot Vagrantfile?
Also, if you look at the example and note that it's actually not illustrative of the issue, let me know.
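If it helps narrow things down, two comparisons worth making (paths below are assumptions): diff the two Vagrantfiles directly, and compare the export blocks Vagrant writes to /etc/exports on the host for each project (from memory, it brackets them with VAGRANT-BEGIN/VAGRANT-END comments):

diff ~/code/vagrant_nfs_test/Vagrantfile ~/code/Holobot/Vagrantfile
cat /etc/exports   # look at what Vagrant generated for each project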
Check the box we are using. Then make sure you're using NFS. Update us after.
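For example (stock Vagrant CLI; the grep pattern is just a sketch):

vagrant box list                 # which base boxes are actually installed
grep -n "vm.box" Vagrantfile     # which box the Vagrantfile asks for
vagrant ssh -c "mount -t nfs"    # NFS mounts inside the guest; empty output means no NFS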
-Jason
I see now.
Yeah this is wack.
I'll stew over it in a few; packing now.
-Jason
Just for clarification, the base box I used was precise64.
One thing I will try the next time I get a chance is to manually run the mount commands from both the Ubuntu and Strider boxes. Now that I know the mount shouldn't have an issue with locking, I'll have a better idea of what to expect.
Jot down your environment too.
Across versions, Jason and I have noticed inconsistent behavior across the board on all kinds of Vagrant features with VirtualBox. A thought we've shared with the CoreOS guys too. At this point it's like Chicken Little.
Here is an idea: take the Ubuntu section of our Vagrantfile and manually specify all the things we are automating.
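Concretely, it might look something like this stripped-down, hand-specified Vagrantfile (box name, IP, and paths are assumptions, not the real Holobot values):

cat > Vagrantfile <<'RUBY'
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  # NFS synced folders require a private network with a static IP
  config.vm.network "private_network", ip: "10.10.10.10"
  config.vm.synced_folder "scripts/nfs_mount", "/mnt/holobot-nfs", nfs: true
end
RUBY
vagrant up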
-Jason
My host machine is a MacBook Air running OS X 10.8, with VirtualBox 4.2.18.
I tried manually mounting today from my Ubuntu guest but got the following error:
sudo mount -v -t nfs -o vers=3 10.0.2.2:/Users/jamiely/code/Holobot/scripts/nfs_mount /mnt/holobot-nfs/
mount.nfs: timeout set for Fri Oct 18 01:18:25 2013
mount.nfs: trying text-based options 'vers=3,addr=10.0.2.2'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 10.0.2.2 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 10.0.2.2 prog 100005 vers 3 prot UDP port 835
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 10.0.2.2:/Users/jamiely/code/Holobot/scripts/nfs_mount
I tried various things, including trying to get some logs out of nfsd, without luck.
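In case it's useful: OS X's nfsd has some built-in knobs for exactly this. The subcommands below are stock (see nfsd(8) and nfs.conf(5)); the verbosity level and log location are from memory, so treat those as assumptions:

echo "nfs.server.verbose = 3" | sudo tee -a /etc/nfs.conf   # bump server logging
sudo nfsd checkexports      # syntax-check /etc/exports
sudo nfsd restart           # pick up nfs.conf and exports changes
tail -f /var/log/system.log | grep -i nfs                   # watch for denials
showmount -e                # what the server believes it exports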
I did notice when I checked out /etc/exports on the host that the guest IP specified was the one in the 10.10.10.* range. I added the guest IP in the 10.0.2.* range. This didn't seem to have an effect. (I did restart nfsd.)
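For reference, a host-side exports entry covering both guest addresses might look like the line below; the -alldirs/-mapall options mirror what Vagrant typically generates on OS X, but the exact values and ordering are assumptions per exports(5). One caveat: with VirtualBox NAT, the server may not see the request as coming from the 10.0.2.* guest address at all, which could explain why adding it had no effect.

/Users/jamiely/code/Holobot/scripts/nfs_mount -alldirs -mapall=501:20 10.10.10.10 10.0.2.15
# then verify from the Ubuntu guest (showmount ships in nfs-common):
showmount -e 10.0.2.2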
Unfortunately, when I reload the VMs, I'm getting the following error fairly consistently. It doesn't even get to mounting the NFS shares.
Guest-specific operations were attempted on a machine that is not
ready for guest communication. This should not happen and a bug
should be reported.
I'll keep at it. I think my next step is to try to continue to get the nfsd logs from the host and then rebuild the Vagrantfiles to figure out what might be the issue.
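On the 'not ready for guest communication' error: not a guaranteed fix, but the usual first thing to try is a full power cycle rather than a reload:

vagrant status               # what state VirtualBox reports for each VM
vagrant halt && vagrant up   # full stop/start instead of vagrant reload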
Is this your Vagrantfile or ours?
-Jason
Thanks for looking man!
-Jason
Closing this for now as we discussed.
Start two VMs up, each pointing to the NFS share. Only one will stat/mount the filesystem (as in, only one VM will see the files).
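Repro sketch, with hypothetical machine names, of what that looks like in practice:

vagrant up ubuntu strider                       # both point at the same NFS export
vagrant ssh ubuntu -c "ls /mnt/holobot-nfs"     # one guest sees the files...
vagrant ssh strider -c "ls /mnt/holobot-nfs"    # ...the other comes up empty or fails to mount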