pires / kubernetes-vagrant-coreos-cluster

Kubernetes cluster (for testing purposes) made easy with Vagrant and CoreOS.
Apache License 2.0

/path/kubernetes-vagrant-coreos-cluster/temp/calico.yaml must exist #300

Closed and1990 closed 6 years ago

and1990 commented 6 years ago

It shows '/path/kubernetes-vagrant-coreos-cluster/temp/calico.yaml must exist' when I execute 'vagrant up'. What should I do?

bmcustodio commented 6 years ago

Did you run vagrant up on a pre-existing cluster? Can you please post the output of vagrant global-status? Also, can you please try running on a fresh clone of the repo (i.e., without a pre-existing .vagrant directory)?
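For reference, starting completely fresh would look roughly like this (the fresh-cluster directory name is just an example):

  # clone into a new directory so there is no leftover .vagrant or temp state
  git clone https://github.com/pires/kubernetes-vagrant-coreos-cluster.git fresh-cluster
  cd fresh-cluster
  vagrant up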

elderbig commented 6 years ago

I'm hitting the same issue. OS=Windows 10, ARCH=x86_64. Error:

==> master: Loading metadata for box 'http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json'
    master: URL: http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json
==> master: Adding box 'coreos-alpha' (v1786.1.0) for provider: virtualbox
    master: Downloading: https://alpha.release.core-os.net/amd64-usr/1786.1.0/coreos_production_vagrant.box
    master:
    master: Calculating and comparing box checksum...
==> master: Successfully added box 'coreos-alpha' (v1786.1.0) for 'virtualbox'!
There are errors in the configuration of this machine. Please fix
the following errors and try again:

File provisioner:
* File upload source file C:/Users/com/git/kubernetes-vagrant-coreos-cluster/temp/calico.yaml must exist

When I run vagrant global-status, I get this:

C:\Users\com\git\kubernetes-vagrant-coreos-cluster>vagrant global-status
WARNING: Vagrant has detected the `vagrant-triggers` plugin. This plugin conflicts
with the internal triggers implementation. Please uninstall the `vagrant-triggers`
plugin and run the command again if you wish to use the core trigger feature. To
uninstall the plugin, run the command shown below:

  vagrant plugin uninstall vagrant-triggers

Note that the community plugin `vagrant-triggers` and the core trigger feature
in Vagrant do not have compatible syntax.

To disable this warning, set the environment variable `VAGRANT_USE_VAGRANT_TRIGGERS`.
id       name    provider   state    directory
-------------------------------------------------------------------------
ab4eebe  default virtualbox poweroff D:/vagrant/centos

The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date. To interact with any of the machines, you can go to
that directory and run Vagrant, or you can use the ID directly
with Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"

It seems there is no other VM running. Next, I deleted the .vagrant directory and retried, but the same error appeared.

elderbig commented 6 years ago

On CentOS 7, I tried working around this by editing synced_folders.yaml and the Vagrantfile so that this step is skipped, but then another error appears.

moreirfi-zz commented 6 years ago

I'm having the same problem with Calico. Is there any file we can download with the correct configuration? In plugins/calico there is a file calico.yaml.tmpl; if I rename it to calico.yaml and copy it to temp/calico.yaml, the problem is solved, but I don't know whether we can use the settings in that file as they are. Can anyone help?

matejfico commented 6 years ago

Same here, also on Windows 10. Vagrant version: 2.1.1

I've tried this multiple times with fresh clones of the repo. I get the same error as @and1990 every time.

However, if I create an empty calico.yaml file in the temp directory, I get the following error after running vagrant up:

WARNING: Vagrant has detected the `vagrant-triggers` plugin. This plugin conflicts
with the internal triggers implementation. Please uninstall the `vagrant-triggers`
plugin and run the command again if you wish to use the core trigger feature. To
uninstall the plugin, run the command shown below:

  vagrant plugin uninstall vagrant-triggers

Note that the community plugin `vagrant-triggers` and the core trigger feature
in Vagrant do not have compatible syntax.

To disable this warning, set the environment variable `VAGRANT_USE_VAGRANT_TRIGGERS`.
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'node-01' up with 'virtualbox' provider...
Bringing machine 'node-02' up with 'virtualbox' provider...
==> master: Running triggers before up...
==> master: 2018-06-17 12:24:02 +0200: setting up Kubernetes master...
==> master: Setting Kubernetes version 1.10.4
==> master: Importing base box 'coreos-alpha'...
==> master: Matching MAC address for NAT networking...
==> master: Checking if box 'coreos-alpha' is up to date...
==> master: Setting the name of the VM: kubernetes-vagrant-coreos-cluster_master_1529231059791_60014
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
    master: Adapter 1: nat
    master: Adapter 2: hostonly
==> master: Forwarding ports...
    master: 22 (guest) => 2222 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
    master: SSH address: 127.0.0.1:2222
    master: SSH username: core
    master: SSH auth method: private key
==> master: Machine booted and ready!
==> master: Setting hostname...
==> master: Configuring and enabling network interfaces...
==> master: Exporting NFS shared folders...
==> master: Preparing to edit nfs mounting file.
[NFS] Status: halted
[NFS] Start: started
==> master: Mounting NFS shared folders...
==> master: Setting time zone...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running provisioner: file...
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running triggers after up...
==> master: Waiting for Kubernetes master to become ready...
==> master: 2018-06-17 12:29:11 +0200: successfully deployed master
==> master: Installing kubectl for the Kubernetes version we just bootstrapped...
==> master: Executing remote command "sudo -u core /bin/sh /home/core/kubectlsetup install"...
==> master: Downloading and installing linux version of 'kubectl' v1.10.4 into /opt/bin. This may take a couple minutes, depending on your internet speed..
==> master: Configuring environment..
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config set-cluster default-cluster --server=https://172.17.8.101 --certificate-authority=/vagrant/artifacts/tls/ca.pem"...
==> master: Cluster "default-cluster" set.
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config set-credentials default-admin --certificate-authority=/vagrant/artifacts/tls/ca.pem --client-key=/vagrant/artifacts/tls/admin-key.pem --client-certificate=/vagrant/artifacts/tls/admin.pem"...
==> master: User "default-admin" set.
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config set-context local --cluster=default-cluster --user=default-admin"...
==> master: Context "local" created.
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config use-context local"...
==> master: Switched to context "local".
==> master: Remote command execution finished.
==> master: Configuring Calico...
==> master: Executing remote command "/opt/bin/kubectl apply -f /home/core/calico.yaml"...
==> master: error: no objects passed to apply
==> master: Remote command execution finished.
The remote command "/opt/bin/kubectl apply -f /home/core/calico.yaml" returned a failed exit
code or an exception. The error output is shown below:

error: no objects passed to apply

vagrant global-status output:


WARNING: Vagrant has detected the `vagrant-triggers` plugin. This plugin conflicts
with the internal triggers implementation. Please uninstall the `vagrant-triggers`
plugin and run the command again if you wish to use the core trigger feature. To
uninstall the plugin, run the command shown below:

  vagrant plugin uninstall vagrant-triggers

Note that the community plugin `vagrant-triggers` and the core trigger feature
in Vagrant do not have compatible syntax.

To disable this warning, set the environment variable `VAGRANT_USE_VAGRANT_TRIGGERS`.
id       name   provider   state   directory
--------------------------------------------------------------------------------------------------------
23b79b2  master virtualbox running C:/Users/ME/Desktop/vagrant/kubernetes-vagrant-coreos-cluster

So the VM is running, which can also be seen by accessing it via the VirtualBox GUI (I get a login prompt). So I assume there is an issue with the Calico configuration?

bhargavm-zymr commented 6 years ago

@bmcstdio I get the same 'File upload source file must exist' error for temp/calico.yaml. I have checked, and the file is not generated by the Vagrant setup, although according to line 311 it should be there.

gkoudjou commented 6 years ago

The only fix I found is to copy plugins/calico/calico.yaml.tmpl to temp/calico.yaml. With this done, it works nicely.
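From the repository root, that amounts to something like the following (on Windows, do the equivalent copy in Explorer or Git Bash):

  # copy the Calico template into the location the file provisioner expects
  mkdir -p temp
  cp plugins/calico/calico.yaml.tmpl temp/calico.yaml
  vagrant up

Since the .tmpl file is normally processed before it ends up in temp/, the copied file may still contain unsubstituted placeholders, so treat this as a workaround rather than a proper fix.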

NikitinIgor commented 6 years ago

@gkoudjou with your workaround I am getting the following error:

vagrant up
WARNING: Vagrant has detected the `vagrant-triggers` plugin. This plugin conflicts
with the internal triggers implementation. Please uninstall the `vagrant-triggers`
plugin and run the command again if you wish to use the core trigger feature. To
uninstall the plugin, run the command shown below:

  vagrant plugin uninstall vagrant-triggers

Note that the community plugin `vagrant-triggers` and the core trigger feature
in Vagrant do not have compatible syntax.

To disable this warning, set the environment variable `VAGRANT_USE_VAGRANT_TRIGGERS`.
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'node-01' up with 'virtualbox' provider...
Bringing machine 'node-02' up with 'virtualbox' provider...
==> master: Running triggers before up...
==> master: 2018-06-25 13:39:26 +0300: setting up Kubernetes master...
==> master: Setting Kubernetes version 1.10.4
==> master: Checking if box 'coreos-alpha' is up to date...
==> master: Clearing any previously set network interfaces...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["hostonlyif", "create"]

Stderr: 0%...
Progress state: E_INVALIDARG
VBoxManage.exe: error: Failed to create the host-only adapter
VBoxManage.exe: error: Assertion failed: [!aInterfaceName.isEmpty()] at 'F:\tinderbox\win-5.2\src\VBox\Main\src-server\HostNetworkInterfaceImpl.cpp' (76) in long __cdecl HostNetworkInterface::init(class com::Bstr,class com::Bstr,class com::Guid,enum __MIDL___MIDL_itf_VirtualBox_0000_0000_0038).
VBoxManage.exe: error: Please contact the product vendor!
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage.exe: error: Context: "enum RTEXITCODE __cdecl handleCreate(struct HandlerArg *)" at line 94 of file VBoxManageHostonly.cpp

bmcustodio commented 6 years ago

I am sorry you are having trouble. I have opened #312 in an attempt to fix this. Could one of you have a look at said PR and let us know if it works for you?

gkoudjou commented 6 years ago

Hi,

@NikitinIgor: there is already a fix for that in the troubleshooting section of the main page: https://github.com/pires/kubernetes-vagrant-coreos-cluster

That said, you have to set this environment variable: VAGRANT_USE_VAGRANT_TRIGGERS=false NODES=2 vagrant up
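
In a POSIX shell (for example Git Bash on Windows) the equivalent is:

  # same as the inline form above, just exported beforehand
  export VAGRANT_USE_VAGRANT_TRIGGERS=false
  export NODES=2
  vagrant up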

The vagrant-triggers message is just a warning and everything should work as expected. If you are on Windows (like me), you may want to kill the Ruby process to stop getting the annoying warning.

Hope it helps

matejfico commented 6 years ago

@bmcstdio

Still having issues with a fresh clone of the updated repo.

vagrant up output:

WARNING: Vagrant has detected the `vagrant-triggers` plugin. This plugin conflicts
with the internal triggers implementation. Please uninstall the `vagrant-triggers`
plugin and run the command again if you wish to use the core trigger feature. To
uninstall the plugin, run the command shown below:

  vagrant plugin uninstall vagrant-triggers

Note that the community plugin `vagrant-triggers` and the core trigger feature
in Vagrant do not have compatible syntax.

To disable this warning, set the environment variable `VAGRANT_USE_VAGRANT_TRIGGERS`.
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'node-01' up with 'virtualbox' provider...
Bringing machine 'node-02' up with 'virtualbox' provider...
==> master: Running triggers before up...
==> master: 2018-06-26 09:03:09 +0200: setting up Kubernetes master...
==> master: Setting Kubernetes version 1.10.5
==> master: Importing base box 'coreos-alpha'...
==> master: Matching MAC address for NAT networking...
==> master: Checking if box 'coreos-alpha' is up to date...
==> master: Setting the name of the VM: kubernetes-vagrant-coreos-cluster_master_1529996611539_90205
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
    master: Adapter 1: nat
    master: Adapter 2: hostonly
==> master: Forwarding ports...
    master: 22 (guest) => 2222 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
    master: SSH address: 127.0.0.1:2222
    master: SSH username: core
    master: SSH auth method: private key
==> master: Machine booted and ready!
==> master: Setting hostname...
==> master: Configuring and enabling network interfaces...
==> master: Exporting NFS shared folders...
==> master: Preparing to edit nfs mounting file.
[NFS] Status: running
==> master: Mounting NFS shared folders...
==> master: Setting time zone...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: file...
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running provisioner: file...
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running triggers after up...
==> master: Waiting for Kubernetes master to become ready...
==> master: 2018-06-26 09:06:11 +0200: successfully deployed master
==> master: Installing kubectl for the Kubernetes version we just bootstrapped...
==> master: Executing remote command "sudo -u core /bin/sh /home/core/kubectlsetup install"...
==> master: Downloading and installing linux version of 'kubectl' v1.10.5 into /opt/bin. This may take a couple minutes, depending on your internet speed..
==> master: Configuring environment..
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config set-cluster default-cluster --server=https://172.17.8.101 --certificate-authority=/vagrant/artifacts/tls/ca.pem"...
==> master: Cluster "default-cluster" set.
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config set-credentials default-admin --certificate-authority=/vagrant/artifacts/tls/ca.pem --client-key=/vagrant/artifacts/tls/admin-key.pem --client-certificate=/vagrant/artifacts/tls/admin.pem"...
==> master: User "default-admin" set.
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config set-context local --cluster=default-cluster --user=default-admin"...
==> master: Context "local" created.
==> master: Remote command execution finished.
==> master: Executing remote command "/opt/bin/kubectl config use-context local"...
==> master: Switched to context "local".
==> master: Remote command execution finished.
==> master: Configuring Calico...
==> master: Executing remote command "/opt/bin/kubectl apply -f /home/core/calico.yaml"...
==> master: configmap "calico-config" created
==> master: service "calico-typha" created
==> master: deployment.apps "calico-typha" created
==> master: daemonset.extensions "calico-node" created
==> master: customresourcedefinition.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" created
==> master: customresourcedefinition.apiextensions.k8s.io "bgppeers.crd.projectcalico.org" created
==> master: customresourcedefinition.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" created
==> master: customresourcedefinition.apiextensions.k8s.io "ippools.crd.projectcalico.org" created
==> master: customresourcedefinition.apiextensions.k8s.io "hostendpoints.crd.projectcalico.org" created
==> master: customresourcedefinition.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" created
==> master: customresourcedefinition.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" created
==> master: customresourcedefinition.apiextensions.k8s.io "globalnetworksets.crd.projectcalico.org" created
==> master: customresourcedefinition.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" created
==> master: serviceaccount "calico-node" created
==> master: Remote command execution finished.
==> master: Configuring Kubernetes DNS...
==> master: Executing remote command "/opt/bin/kubectl create -f /home/core/coredns-deployment.yaml"...
==> master: error: no objects passed to create
==> master: Remote command execution finished.
The remote command "/opt/bin/kubectl create -f /home/core/coredns-deployment.yaml" returned a failed exit
code or an exception. The error output is shown below:

error: no objects passed to create

vagrant global-status output:

WARNING: Vagrant has detected the `vagrant-triggers` plugin. This plugin conflicts
with the internal triggers implementation. Please uninstall the `vagrant-triggers`
plugin and run the command again if you wish to use the core trigger feature. To
uninstall the plugin, run the command shown below:

  vagrant plugin uninstall vagrant-triggers

Note that the community plugin `vagrant-triggers` and the core trigger feature
in Vagrant do not have compatible syntax.

To disable this warning, set the environment variable `VAGRANT_USE_VAGRANT_TRIGGERS`.
id       name   provider   state   directory
---------------------------------------------------------------------------------------------------------------
63289bc  master virtualbox running C:/Users/ME/Desktop/vagrant/vgrFix/kubernetes-vagrant-coreos-cluster

The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date. To interact with any of the machines, you can go to
that directory and run Vagrant, or you can use the ID directly
with Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"

bmcustodio commented 6 years ago

@cybermyth this error seems to be related to CoreDNS (not to Calico). In any case, I will try to understand what is happening.
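In the meantime, a quick way to confirm whether the rendered manifest on the master is empty (which would explain the "no objects passed to create" error) is to inspect it over SSH:

  # check the CoreDNS manifest that kubectl was asked to create
  vagrant ssh master -c "cat /home/core/coredns-deployment.yaml"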

NikitinIgor commented 6 years ago

@gkoudjou I get the same error:

==> master: Clearing any previously set network interfaces...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["hostonlyif", "create"]

Stderr: 0%...
Progress state: E_INVALIDARG
VBoxManage.exe: error: Failed to create the host-only adapter
VBoxManage.exe: error: Assertion failed: [!aInterfaceName.isEmpty()] at 'F:\tinderbox\win-5.2\src\VBox\Main\src-server\HostNetworkInterfaceImpl.cpp' (76) in long __cdecl HostNetworkInterface::init(class com::Bstr,class com::Bstr,class com::Guid,enum __MIDL___MIDL_itf_VirtualBox_0000_0000_0038).
VBoxManage.exe: error: Please contact the product vendor!
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage.exe: error: Context: "enum RTEXITCODE __cdecl handleCreate(struct HandlerArg *)" at line 94 of file VBoxManageHostonly.cpp