mitchds closed this issue 2 years ago
Correct. StrictHostKeyChecking is enabled now. Are you attempting a re-provision?
Assuming you have nothing you care about, you can:
vagrant destroy -f
vagrant up
I have done exactly that (twice). Same issue.
Would you paste your .env.local file? Or any env variables for STANDALONE_OPERATOR or MULTI_MASTER?
In the interest of science, kindly modify your scripts/provision.sh and comment out the following lines in the passwordless_ssh() function:
echo_do rm /vagrant/id_rsa /vagrant/id_rsa.pub
echo_do rm /vagrant/known_hosts
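If it is easier than editing by hand, the same change can be made with a quick sed. This is only a sketch, demonstrated on a throwaway sample file rather than the real script; run the equivalent command against scripts/provision.sh in your checkout (the `# &` replacement prefixes the matched text with a comment marker):

```shell
# Build a sample with the two cleanup lines from passwordless_ssh()
cat > /tmp/provision_sample.sh <<'EOF'
    echo_do rm /vagrant/id_rsa /vagrant/id_rsa.pub
    echo_do rm /vagrant/known_hosts
EOF
# Comment out both lines in place
sed -i -e 's|echo_do rm /vagrant/|# &|' /tmp/provision_sample.sh
cat /tmp/provision_sample.sh
#     # echo_do rm /vagrant/id_rsa /vagrant/id_rsa.pub
#     # echo_do rm /vagrant/known_hosts
```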
I have no .env.local and have only changed:
# Deploy the Helm module?
DEPLOY_HELM=true
HELM_MODULE_NAME="olcne-helm"
# Deploy the Istio module? Requires the Helm module and will set DEPLOY_HELM to 1 if not set.
DEPLOY_ISTIO=true
ISTIO_MODULE_NAME="olcne-istio"
I will comment out the lines above in provision.sh, re-run, and report back.
With Istio, I'm also assuming you increased:
WORKER_CPUS=2
WORKER_MEMORY=3072
(though not related to your problem with SSH)
I did not, and didn't see that in either the README or the .env file comments (I know it makes sense to do it). Maybe this should be done automatically for the user, or at least a comment added to the .env file so the user is hinted appropriately.
Failed again. I commented out the lines requested:
git diff scripts/provision.sh
diff --git a/OLCNE/scripts/provision.sh b/OLCNE/scripts/provision.sh
index 619b1cd..ffdd01f 100755
--- a/OLCNE/scripts/provision.sh
+++ b/OLCNE/scripts/provision.sh
@@ -444,7 +444,7 @@ passwordless_ssh() {
# Last node removes the key
if [[ ${OPERATOR} == 1 ]]; then
msg "Removing the shared SSH keypair"
- echo_do rm /vagrant/id_rsa /vagrant/id_rsa.pub
+# echo_do rm /vagrant/id_rsa /vagrant/id_rsa.pub
msg "Copying SSH Host Keys"
echo_do sudo cp /vagrant/known_hosts /etc/ssh/ssh_known_hosts
@@ -452,7 +452,7 @@ passwordless_ssh() {
echo_do ssh "${node}" "sudo cp /vagrant/known_hosts /etc/ssh/ssh_known_hosts"
done
msg "Removing the shared SSH Known Hosts file"
- echo_do rm /vagrant/known_hosts
+# echo_do rm /vagrant/known_hosts
fi
}
Full log below:
mitch main ~ vagrant-projects OLCNE 1 vagrant destroy -f
==> vagrant: You have requested to enabled the experimental flag with the following features:
==> vagrant:
==> vagrant: Features: disks
==> vagrant:
==> vagrant: Please use with caution, as some of the features may not be fully
==> vagrant: functional yet.
==> master1: Forcing shutdown of VM...
==> master1: Destroying VM and associated drives...
==> worker2: Forcing shutdown of VM...
==> worker2: Destroying VM and associated drives...
==> worker1: Forcing shutdown of VM...
==> worker1: Destroying VM and associated drives...
mitch main ~ vagrant-projects OLCNE vagrant up
==> vagrant: You have requested to enabled the experimental flag with the following features:
==> vagrant:
==> vagrant: Features: disks
==> vagrant:
==> vagrant: Please use with caution, as some of the features may not be fully
==> vagrant: functional yet.
Bringing machine 'worker1' up with 'virtualbox' provider...
Bringing machine 'worker2' up with 'virtualbox' provider...
Bringing machine 'master1' up with 'virtualbox' provider...
==> worker1: Importing base box 'oraclelinux/8'...
==> worker1: Matching MAC address for NAT networking...
==> worker1: Checking if box 'oraclelinux/8' version '8.5.320' is up to date...
==> worker1: Setting the name of the VM: OLCNE_worker1_1651058983168_94692
==> worker1: Clearing any previously set network interfaces...
==> worker1: Preparing network interfaces based on configuration...
worker1: Adapter 1: nat
worker1: Adapter 2: hostonly
==> worker1: Forwarding ports...
worker1: 22 (guest) => 2222 (host) (adapter 1)
==> worker1: Running 'pre-boot' VM customizations...
==> worker1: Booting VM...
==> worker1: Waiting for machine to boot. This may take a few minutes...
worker1: SSH address: 127.0.0.1:2222
worker1: SSH username: vagrant
worker1: SSH auth method: private key
worker1:
worker1: Vagrant insecure key detected. Vagrant will automatically replace
worker1: this with a newly generated keypair for better security.
worker1:
worker1: Inserting generated public key within guest...
worker1: Removing insecure key from the guest if it's present...
worker1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> worker1: Machine booted and ready!
==> worker1: Checking for guest additions in VM...
==> worker1: Setting hostname...
==> worker1: Configuring and enabling network interfaces...
==> worker1: Mounting shared folders...
worker1: /vagrant => /home/mitch/vagrant-projects/OLCNE
==> worker1: Running provisioner: shell...
worker1: Running: /tmp/vagrant-shell20220427-163506-93gb0s.sh
worker1: ===== Removing extra NetworkManager connection =====
worker1: Connection 'Wired connection 1' (479a9818-70ec-3435-80d2-5bb1e6962dad) successfully deleted.
worker1: ===== Configure repos for Oracle Linux Cloud Native Environment =====
worker1: sudo dnf install -y oracle-olcne-release-el8
worker1: sudo dnf config-manager --enable ol8_olcne14 ol8_baseos_latest ol8_appstream ol8_addons ol8_UEKR6
worker1: sudo dnf config-manager --disable ol8_olcne12 ol8_olcne13
worker1: ===== Fulfil requirements =====
worker1: sudo swapoff -a
worker1: sudo sed -i '/ swap /d' /etc/fstab
worker1: sudo modprobe br_netfilter
worker1: sudo sh -c 'echo br_netfilter > /etc/modules-load.d/br_netfilter.conf'
worker1: net.bridge.bridge-nf-call-ip6tables = 1
worker1: net.bridge.bridge-nf-call-iptables = 1
worker1: net.ipv4.ip_forward = 1
worker1: sudo /sbin/sysctl -p /etc/sysctl.d/k8s.conf
worker1: sudo systemctl enable --now firewalld
worker1: sudo firewall-cmd --zone=public --add-interface=eth0 --permanent
worker1: ===== Installing the Oracle Linux Cloud Native Environment Platform Agent =====
worker1: sudo dnf install -y olcne-agent olcne-utils
worker1: sudo systemctl enable olcne-agent.service
worker1: sudo firewall-cmd --add-masquerade --permanent
worker1: sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
worker1: sudo firewall-cmd --add-port=8090/tcp --permanent
worker1: sudo firewall-cmd --add-port=10250/tcp --permanent
worker1: sudo firewall-cmd --add-port=10255/tcp --permanent
worker1: sudo firewall-cmd --add-port=8472/udp --permanent
worker1: sudo firewall-cmd --add-port=30000-32767/tcp --permanent
worker1: sudo firewall-cmd --reload
worker1: ===== Allow passwordless ssh between VMs =====
worker1: ===== Generating shared SSH keypair in PEM format =====
worker1: ssh-keygen -m PEM -t rsa -f /vagrant/id_rsa -q -N '' -C 'vagrant@olcne'
worker1: cp /vagrant/id_rsa /home/vagrant/.ssh
worker1: cp /vagrant/id_rsa.pub /home/vagrant/.ssh
worker1: cat /vagrant/id_rsa.pub >> ~/.ssh/authorized_keys
worker1: chmod 0700 /home/vagrant/.ssh
worker1: chmod 0600 /home/vagrant/.ssh/authorized_keys /home/vagrant/.ssh/id_rsa
worker1: chmod 0644 /home/vagrant/.ssh/authorized_keys /home/vagrant/.ssh/id_rsa.pub
worker1: eval echo "`hostname -s`,`hostname -f`,`hostname -i` `cat /etc/ssh/ssh_host_ed25519_key.pub`" >> /vagrant/known_hosts
worker1: ===== Oracle Linux base software installation complete. =====
==> worker2: Importing base box 'oraclelinux/8'...
==> worker2: Matching MAC address for NAT networking...
==> worker2: Checking if box 'oraclelinux/8' version '8.5.320' is up to date...
==> worker2: Setting the name of the VM: OLCNE_worker2_1651059237972_77368
==> worker2: Fixed port collision for 22 => 2222. Now on port 2200.
==> worker2: Clearing any previously set network interfaces...
==> worker2: Preparing network interfaces based on configuration...
worker2: Adapter 1: nat
worker2: Adapter 2: hostonly
==> worker2: Forwarding ports...
worker2: 22 (guest) => 2200 (host) (adapter 1)
==> worker2: Running 'pre-boot' VM customizations...
==> worker2: Booting VM...
==> worker2: Waiting for machine to boot. This may take a few minutes...
worker2: SSH address: 127.0.0.1:2200
worker2: SSH username: vagrant
worker2: SSH auth method: private key
worker2:
worker2: Vagrant insecure key detected. Vagrant will automatically replace
worker2: this with a newly generated keypair for better security.
worker2:
worker2: Inserting generated public key within guest...
worker2: Removing insecure key from the guest if it's present...
worker2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> worker2: Machine booted and ready!
==> worker2: Checking for guest additions in VM...
==> worker2: Setting hostname...
==> worker2: Configuring and enabling network interfaces...
==> worker2: Mounting shared folders...
worker2: /vagrant => /home/mitch/vagrant-projects/OLCNE
==> worker2: Running provisioner: shell...
worker2: Running: /tmp/vagrant-shell20220427-163506-172ga6.sh
worker2: ===== Removing extra NetworkManager connection =====
worker2: Connection 'Wired connection 1' (5220fb93-1630-3a1c-b985-80a5452c02c4) successfully deleted.
worker2: ===== Configure repos for Oracle Linux Cloud Native Environment =====
worker2: sudo dnf install -y oracle-olcne-release-el8
worker2: sudo dnf config-manager --enable ol8_olcne14 ol8_baseos_latest ol8_appstream ol8_addons ol8_UEKR6
worker2: sudo dnf config-manager --disable ol8_olcne12 ol8_olcne13
worker2: ===== Fulfil requirements =====
worker2: sudo swapoff -a
worker2: sudo sed -i '/ swap /d' /etc/fstab
worker2: sudo modprobe br_netfilter
worker2: sudo sh -c 'echo br_netfilter > /etc/modules-load.d/br_netfilter.conf'
worker2: net.bridge.bridge-nf-call-ip6tables = 1
worker2: net.bridge.bridge-nf-call-iptables = 1
worker2: net.ipv4.ip_forward = 1
worker2: sudo /sbin/sysctl -p /etc/sysctl.d/k8s.conf
worker2: sudo systemctl enable --now firewalld
worker2: sudo firewall-cmd --zone=public --add-interface=eth0 --permanent
worker2: ===== Installing the Oracle Linux Cloud Native Environment Platform Agent =====
worker2: sudo dnf install -y olcne-agent olcne-utils
worker2: sudo systemctl enable olcne-agent.service
worker2: sudo firewall-cmd --add-masquerade --permanent
worker2: sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
worker2: sudo firewall-cmd --add-port=8090/tcp --permanent
worker2: sudo firewall-cmd --add-port=10250/tcp --permanent
worker2: sudo firewall-cmd --add-port=10255/tcp --permanent
worker2: sudo firewall-cmd --add-port=8472/udp --permanent
worker2: sudo firewall-cmd --add-port=30000-32767/tcp --permanent
worker2: sudo firewall-cmd --reload
worker2: ===== Allow passwordless ssh between VMs =====
worker2: cp /vagrant/id_rsa /home/vagrant/.ssh
worker2: cp /vagrant/id_rsa.pub /home/vagrant/.ssh
worker2: cat /vagrant/id_rsa.pub >> ~/.ssh/authorized_keys
worker2: chmod 0700 /home/vagrant/.ssh
worker2: chmod 0600 /home/vagrant/.ssh/authorized_keys /home/vagrant/.ssh/id_rsa
worker2: chmod 0644 /home/vagrant/.ssh/authorized_keys /home/vagrant/.ssh/id_rsa.pub
worker2: eval echo "`hostname -s`,`hostname -f`,`hostname -i` `cat /etc/ssh/ssh_host_ed25519_key.pub`" >> /vagrant/known_hosts
worker2: ===== Oracle Linux base software installation complete. =====
==> master1: Importing base box 'oraclelinux/8'...
==> master1: Matching MAC address for NAT networking...
==> master1: Checking if box 'oraclelinux/8' version '8.5.320' is up to date...
==> master1: Setting the name of the VM: OLCNE_master1_1651059485247_75024
==> master1: Fixed port collision for 22 => 2222. Now on port 2201.
==> master1: Clearing any previously set network interfaces...
==> master1: Preparing network interfaces based on configuration...
master1: Adapter 1: nat
master1: Adapter 2: hostonly
==> master1: Forwarding ports...
master1: 22 (guest) => 2201 (host) (adapter 1)
==> master1: Running 'pre-boot' VM customizations...
==> master1: Booting VM...
==> master1: Waiting for machine to boot. This may take a few minutes...
master1: SSH address: 127.0.0.1:2201
master1: SSH username: vagrant
master1: SSH auth method: private key
master1:
master1: Vagrant insecure key detected. Vagrant will automatically replace
master1: this with a newly generated keypair for better security.
master1:
master1: Inserting generated public key within guest...
master1: Removing insecure key from the guest if it's present...
master1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> master1: Machine booted and ready!
==> master1: Checking for guest additions in VM...
==> master1: Setting hostname...
==> master1: Configuring and enabling network interfaces...
==> master1: Mounting shared folders...
master1: /vagrant => /home/mitch/vagrant-projects/OLCNE
==> master1: Running provisioner: shell...
master1: Running: /tmp/vagrant-shell20220427-163506-17g1k5d.sh
master1: ===== Removing extra NetworkManager connection =====
master1: Connection 'Wired connection 1' (f113148d-b1a2-3f0d-b6b1-f016ef3f0dd1) successfully deleted.
master1: ===== Configure repos for Oracle Linux Cloud Native Environment =====
master1: sudo dnf install -y oracle-olcne-release-el8
master1: sudo dnf config-manager --enable ol8_olcne14 ol8_baseos_latest ol8_appstream ol8_addons ol8_UEKR6
master1: sudo dnf config-manager --disable ol8_olcne12 ol8_olcne13
master1: ===== Fulfil requirements =====
master1: sudo swapoff -a
master1: sudo sed -i '/ swap /d' /etc/fstab
master1: sudo modprobe br_netfilter
master1: sudo sh -c 'echo br_netfilter > /etc/modules-load.d/br_netfilter.conf'
master1: net.bridge.bridge-nf-call-ip6tables = 1
master1: net.bridge.bridge-nf-call-iptables = 1
master1: net.ipv4.ip_forward = 1
master1: sudo /sbin/sysctl -p /etc/sysctl.d/k8s.conf
master1: sudo systemctl enable --now firewalld
master1: sudo firewall-cmd --zone=public --add-interface=eth0 --permanent
master1: ===== Installing the Oracle Linux Cloud Native Environment Platform API Server and Platform CLI tool to the operator node. =====
master1: sudo dnf install -y olcnectl olcne-api-server olcne-utils
master1: sudo systemctl enable olcne-api-server.service
master1: sudo firewall-cmd --add-port=8091/tcp --permanent
master1: sudo firewall-cmd --add-masquerade --permanent
master1: ===== Installing the Oracle Linux Cloud Native Environment Platform Agent =====
master1: sudo dnf install -y olcne-agent olcne-utils
master1: sudo systemctl enable olcne-agent.service
master1: sudo firewall-cmd --add-masquerade --permanent
master1: sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
master1: sudo firewall-cmd --add-port=8090/tcp --permanent
master1: sudo firewall-cmd --add-port=10250/tcp --permanent
master1: sudo firewall-cmd --add-port=10255/tcp --permanent
master1: sudo firewall-cmd --add-port=8472/udp --permanent
master1: sudo firewall-cmd --add-port=30000-32767/tcp --permanent
master1: sudo dnf install -y bash-completion
master1: sudo firewall-cmd --add-port=6443/tcp --permanent
master1: sudo firewall-cmd --add-port=8001/tcp --permanent
master1: sudo firewall-cmd --add-port=10251/tcp --permanent
master1: sudo firewall-cmd --add-port=10252/tcp --permanent
master1: sudo firewall-cmd --add-port=2379/tcp --permanent
master1: sudo firewall-cmd --add-port=2380/tcp --permanent
master1: sudo firewall-cmd --add-port=6444/tcp --permanent
master1: sudo firewall-cmd --add-protocol=vrrp --permanent
master1: sudo firewall-cmd --reload
master1: ===== Allow passwordless ssh between VMs =====
master1: cp /vagrant/id_rsa /home/vagrant/.ssh
master1: cp /vagrant/id_rsa.pub /home/vagrant/.ssh
master1: cat /vagrant/id_rsa.pub >> ~/.ssh/authorized_keys
master1: chmod 0700 /home/vagrant/.ssh
master1: chmod 0600 /home/vagrant/.ssh/authorized_keys /home/vagrant/.ssh/id_rsa
master1: chmod 0644 /home/vagrant/.ssh/authorized_keys /home/vagrant/.ssh/id_rsa.pub
master1: eval echo "`hostname -s`,`hostname -f`,`hostname -i` `cat /etc/ssh/ssh_host_ed25519_key.pub`" >> /vagrant/known_hosts
master1: ===== Removing the shared SSH keypair =====
master1: ===== Copying SSH Host Keys =====
master1: sudo cp /vagrant/known_hosts /etc/ssh/ssh_known_hosts
master1: ssh 192.168.99.101 sudo cp /vagrant/known_hosts /etc/ssh/ssh_known_hosts
master1: Returned a non-zero code: 255
master1: Last output lines:
master1: Host key verification failed.
master1: See /var/tmp/cmd_dPZIH.log for details
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
mitch main ~ vagrant-projects OLCNE 1
Thank you @mitchds. Would you kindly send me the content of your known_hosts file from your OLCNE Vagrant project folder? I suspect it's corrupt.
worker1,worker1.vagrant.vm,127.0.1.1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICFNVKbd0ki99Hkvf6cYAOvXrFMEfGSmzTKs377Dcmxg
worker2,worker2.vagrant.vm,127.0.1.1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ18vqS5Os1AEAMVT6x1JXfwGfH5EvbAtG8VWd1nozIw
master1,master1.vagrant.vm,127.0.1.1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEo0BJGaEeoxPrPt1/n1ikI3EuppiAQwLy8Go55n4G4W
worker1,worker1.vagrant.vm,127.0.1.1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICXJDlsiUsjREJtd2XpLC57tdwdpZnluz30SN21MZjCn
worker2,worker2.vagrant.vm,127.0.1.1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJHXvc1J1HqbpLYIkVwDN75Tud1Qrfzn5KMoqZk/VgHg
master1,master1.vagrant.vm,127.0.1.1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICYTZmYYsHx5Mlp730H8QRLAExJ3F/DIyNZy9PTV006v
worker1,worker1.vagrant.vm,127.0.1.1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKbuzdbWISwHfOkCMePs9kLQnGPw9gmlc8wo+nrClUYe
worker2,worker2.vagrant.vm,127.0.1.1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOaPEbT5mFAoijXMS2D2UGhmhf/vE02tUXY4BiXbQI5H
master1,master1.vagrant.vm,127.0.1.1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDrU8YNly/lKIWV5Z2iru6aGSA4wVC1CjnzNG6PiwzID
Maybe this should be done automatically for the user, or at least add a comment in the .env file so the user is hinted appropriately.
It's mentioned in the .env file.
You are right. I missed that.
worker1,worker1.vagrant.vm,127.0.1.1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICFNVKbd0ki99Hkvf6cYAOvXrFMEfGSmzTKs377Dcmxg
Thanks. That explains the problem. hostname -i is returning 127.0.1.1 in your case.
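For context (an assumption about the box image, not something confirmed in the thread): hostname -i resolves the machine's own hostname through the resolver, and many distributions ship an /etc/hosts loopback alias such as:

```
127.0.1.1   worker1.vagrant.vm worker1
```

which is why the loopback address ends up in known_hosts, while hostname -I lists the addresses actually assigned to the interfaces.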
I have a way to fix this. To help me out, would you log in to one of your VMs and run hostname -I, e.g.:
$ vagrant ssh worker1
[vagrant@worker1 ~]$ hostname -i
[vagrant@worker1 ~]$ hostname -I
mitch main ~ vagrant-projects OLCNE vagrant ssh worker1
==> vagrant: You have requested to enabled the experimental flag with the following features:
==> vagrant:
==> vagrant: Features: disks
==> vagrant:
==> vagrant: Please use with caution, as some of the features may not be fully
==> vagrant: functional yet.
Welcome to Oracle Linux Server release 8.5 (GNU/Linux 5.4.17-2136.302.7.2.2.el8uek.x86_64)
The Oracle Linux End-User License Agreement can be viewed here:
* /usr/share/eula/eula.en_US
For additional packages, updates, documentation and community help, see:
* https://yum.oracle.com/
[vagrant@worker1 ~]$ hostname -i
127.0.1.1
[vagrant@worker1 ~]$ hostname -I
10.0.2.15 192.168.99.111
Perfect. For now:
1) Please delete id_rsa, id_rsa.pub and known_hosts from your Vagrant project directory.
2) Put back the previously commented-out lines:
echo_do rm /vagrant/id_rsa /vagrant/id_rsa.pub
echo_do rm /vagrant/known_hosts
3) Replace the following line:
echo_do eval 'echo "`hostname -s`,`hostname -f`,`hostname -i` `cat /etc/ssh/ssh_host_ed25519_key.pub`" >> /vagrant/known_hosts'
with:
echo_do eval 'echo "`hostname -s`,`hostname -f`,`hostname -I|tr " " ","|sed "s/,$//"` `cat /etc/ssh/ssh_host_ed25519_key.pub`" >> /vagrant/known_hosts'
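To see what the replacement produces, here is the same pipeline run on stand-in values (the hostnames, addresses and key below are illustrative samples based on the thread, not live output):

```shell
short=worker1
fqdn=worker1.vagrant.vm
ips="10.0.2.15 192.168.99.111 "     # `hostname -I` leaves a trailing space
key="ssh-ed25519 AAAA...key..."     # stand-in for the host public key
# tr turns the space-separated address list into commas;
# sed then drops the trailing comma left by the trailing space
echo "$short,$fqdn,$(echo "$ips" | tr ' ' ',' | sed 's/,$//') $key"
# → worker1,worker1.vagrant.vm,10.0.2.15,192.168.99.111 ssh-ed25519 AAAA...key...
```

This yields a known_hosts entry whose comma-separated alias list matches the node by short name, FQDN, or any of its real IP addresses.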
4) Perform
vagrant destroy -f
vagrant up
And report back, so I can submit a PR.
Yes, it works now. Though your sed/tr seems a bit of overkill. This works better with just one sed:
echo "`hostname -s`,`hostname -f`,`hostname -I|sed 's/ $//;s/ /,/g'`"
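Both variants produce the same comma-separated list; a quick check on a sample hostname -I string (addresses assumed from earlier in the thread):

```shell
addrs="10.0.2.15 192.168.99.111 "            # hostname -I ends with a trailing space
echo "$addrs" | tr " " "," | sed "s/,$//"    # original tr+sed version
echo "$addrs" | sed 's/ $//;s/ /,/g'         # single-sed version
# both print: 10.0.2.15,192.168.99.111
```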
I learn something new every day, thanks. I really wanted to use ssh-keyscan but couldn't figure out an intelligent way of concatenating all the hostnames and IP addresses on a single line.
No probs. My build finished, but I didn't see Helm or Istio installed.
[vagrant@master1 ~]$ olcnectl module instances -E olcne-env
INSTANCE MODULE STATE
192.168.99.101:8090 node installed
192.168.99.111:8090 node installed
192.168.99.112:8090 node installed
olcne-cluster kubernetes installed
[vagrant@master1 ~]$ kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
externalip-validation-system externalip-validation-webhook-7988bff847-hcq4w 1/1 Running 0 16m
kube-system coredns-66f6bdb7bc-cbblb 1/1 Running 0 17m
kube-system coredns-66f6bdb7bc-mgs6b 1/1 Running 0 17m
kube-system etcd-master1.vagrant.vm 1/1 Running 0 17m
kube-system kube-apiserver-master1.vagrant.vm 1/1 Running 0 17m
kube-system kube-controller-manager-master1.vagrant.vm 1/1 Running 0 17m
kube-system kube-flannel-ds-7p5q4 1/1 Running 0 16m
kube-system kube-flannel-ds-j97mh 1/1 Running 0 16m
kube-system kube-flannel-ds-rvgpr 1/1 Running 0 16m
kube-system kube-proxy-2fd7r 1/1 Running 0 17m
kube-system kube-proxy-494j5 1/1 Running 0 17m
kube-system kube-proxy-mvlm4 1/1 Running 0 17m
kube-system kube-scheduler-master1.vagrant.vm 1/1 Running 0 17m
kubernetes-dashboard kubernetes-dashboard-5d5d4947b5-cp7tg 1/1 Running 0 16m
No obvious errors during install (though I didn't increase CPU or memory). I will update those and try again.
helm would be a command-line utility, not a pod. What happens when you type helm in the terminal?
Yes, I know that. helm in the CLI shows the usual "command not found". I'm rebuilding now with increased CPU and memory settings.
I suspect you don't have the vagrant-env plug-in installed. Set VERBOSE=true in your .env.local; if you don't have the vagrant-env plug-in installed, make changes to the Vagrantfile directly or set the appropriate environment variable.
Yup, no vagrant-env. Ta. Let's close this issue and get this committed to git.
Describe the issue
vagrant up fails after c330fff8039bc3ac389eb77ef14a588b1ef4e8ac
I believe we enabled StrictHostKeyChecking in that commit