ghost opened this issue 9 years ago
Can you post all the steps you did and also all their output, if you still have it?
Also, listing your windows/vbox/vagrant versions will be helpful.
Thanks!
Thanks for the quick feedback... that was fast!
I don't have all the output logged. If you can tell me how to do that, I am happy to run it all again.
Here is what I did:
This ran to completion; lots of apt-get output. If you can tell me how to get the logs of it, I can add them to this issue.
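One easy way to capture the full output next time is to pipe it through tee. A minimal sketch, assuming a bash-style shell; the log filename and the placeholder echo are just examples, substitute the real "vagrant up" command:

```shell
# mirror a command's stdout and stderr into a log file while still
# printing it to the console; swap the echo for the real "vagrant up"
(echo "provisioning output") 2>&1 | tee vagrant-up.log
cat vagrant-up.log   # the captured copy, ready to attach to the issue
```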
cheers
I am using the latest versions of Vagrant (1.6.5) and VirtualBox (4.3.16).
Please leave the postinstall script alone. It is an artifact of the vagrant base box and has nothing to do with our setup.
Since you mentioned that you had a few problems getting the boxes up, I guess they are not fully provisioned yet. Please run "vagrant provision" to do that.
Also, where in the world are you located? We sometimes run into time-outs if we can't find a fast apache mirror close to you.
OK, I have already run it :) hehe. Should I start from scratch?
I am in Berlin, Germany.
Here is the output from running "vagrant provision"
C:\_data\_temp\vagrant-cascading-hadoop-cluster>vagrant provision
==> hadoop1: Running provisioner: puppet...
Shared folders that Puppet requires are missing on the virtual machine.
This is usually due to configuration changing after already booting the
machine. The fix is to run a `vagrant reload` so that the proper shared
folders will be prepared and mounted on the VM.
C:\_data\_temp\vagrant-cascading-hadoop-cluster>
Please do a "vagrant reload --provision"; that should bring things back to a consistent state.
Since I am also in Berlin, I think the downloads will work from here...
small world....
I shut down all the VMs, ran "vagrant up" (this did its thing), then "vagrant reload --provision".
I noticed in the log that the Guest Additions version in the VMs differs from the VirtualBox version on my host. Is this the problem? Also, the command stopped at node hadoop1 (I waited 10 minutes).
LOG BELOW:
C:\_data\_temp\vagrant-cascading-hadoop-cluster>vagrant reload --provision
==> hadoop1: Attempting graceful shutdown of VM...
==> hadoop1: Clearing any previously set forwarded ports...
==> hadoop1: Clearing any previously set network interfaces...
==> hadoop1: Preparing network interfaces based on configuration...
    hadoop1: Adapter 1: nat
    hadoop1: Adapter 2: hostonly
==> hadoop1: Forwarding ports...
    hadoop1: 22 => 2222 (adapter 1)
==> hadoop1: Running 'pre-boot' VM customizations...
==> hadoop1: Booting VM...
==> hadoop1: Waiting for machine to boot. This may take a few minutes...
    hadoop1: SSH address: 127.0.0.1:2222
    hadoop1: SSH username: vagrant
    hadoop1: SSH auth method: private key
    hadoop1: Warning: Connection timeout. Retrying...
==> hadoop1: Machine booted and ready!
==> hadoop1: Checking for guest additions in VM...
    hadoop1: The guest additions on this VM do not match the installed version of
    hadoop1: VirtualBox! In most cases this is fine, but in rare cases it can
    hadoop1: prevent things such as shared folders from working properly. If you see
    hadoop1: shared folder errors, please make sure the guest additions within the
    hadoop1: virtual machine match the version of VirtualBox you have installed on
    hadoop1: your host and reload your VM.
    hadoop1:
    hadoop1: Guest Additions Version: 4.2.0
    hadoop1: VirtualBox Version: 4.3
==> hadoop1: Setting hostname...
==> hadoop1: Configuring and enabling network interfaces...
==> hadoop1: Mounting shared folders...
    hadoop1: /vagrant => C:/_data/_temp/vagrant-cascading-hadoop-cluster
    hadoop1: /tmp/vagrant-puppet-1/manifests => C:/_data/_temp/vagrant-cascading-hadoop-cluster/manifests
    hadoop1: /tmp/vagrant-puppet-1/modules-0 => C:/_data/_temp/vagrant-cascading-hadoop-cluster/modules
==> hadoop1: Running provisioner: puppet...
==> hadoop1: Running Puppet with datanode.pp...
==> hadoop1: stdin: is not a tty
==> hadoop1: notice: /Stage[main]/Base/Exec[apt-get update]/returns: executed successfully
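(Editorial note: the 4.2.0 vs. 4.3 Guest Additions mismatch shown in the log is usually harmless, but since the earlier failure was about shared folders, it may be worth ruling out. One common remedy, an assumption here rather than something this thread confirms, is the vagrant-vbguest plugin ("vagrant plugin install vagrant-vbguest"), which rebuilds the Guest Additions inside each box to match the host. A Vagrantfile sketch:)

```ruby
Vagrant.configure("2") do |config|
  # only takes effect if the vagrant-vbguest plugin is installed;
  # rebuilds the Guest Additions in the guest to match the host VirtualBox
  config.vbguest.auto_update = true if Vagrant.has_plugin?("vagrant-vbguest")
end
```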
This is odd. It could be one of the famous Windows vs. Unix line-ending problems... How did you clone the project? How is your git configured w.r.t. line endings?
OK, makes sense.
I will set the git line-ending setting globally, and then start fresh with a new git pull.
Will take a few minutes :)
BTW, I used "git config --global core.autocrlf true".
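(Editorial note: "core.autocrlf true" converts files to CRLF on checkout, which is exactly what breaks shell scripts once they are shared into the Linux guest; "input" keeps LF in the working tree. A small demonstration of the symptom, using a throwaway script name:)

```shell
# a better global setting for this use case would be:
#   git config --global core.autocrlf input
# why it matters: a script checked out with Windows CRLF endings
printf 'echo ok\r\n' > crlf.sh
tr -d '\r' < crlf.sh > lf.sh   # strip the carriage returns (what dos2unix does)
sh lf.sh                       # now runs cleanly and prints "ok"
```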
So, did it work now?
I did not get a chance. I will do it on Ubuntu instead. I must put this on the back burner for a bit, though; got to work on the GUI.
So, as an easy "fix" on Windows, just download the repo as a zip and everything will work until stuff is patched.
I booted the 4 nodes. I had to run it a few times for it to complete.
When SSHing into master, the only .sh file in the root directory is "postinstall.sh", not "prepare-cluster.sh". Also, start-all.sh is not there.
Would appreciate confirmation of this so I know I am doing things correctly...