Closed: pzg250 closed this issue 1 year ago
I assume that the failure at step 7 is caused by the abnormal status at step 6. Any advice? Thanks in advance!
Note: when I run setup_node there is an error; I am not sure whether it is the root cause of this issue:
`sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument`
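As a first diagnostic step for a sysctl error like this, it can help to check whether the running kernel actually exposes the key and what value it currently holds. A minimal sketch (assuming a Linux host with `/proc` mounted; this does not attempt the write itself):

```shell
# Check whether the kernel exposes the sysctl key that setup_node.sh tries to set.
# If the file is absent, or the write is rejected, that narrows down the cause
# of the "Invalid argument" error on this particular kernel/AMI.
KEY=/proc/sys/net/ipv4/conf/all/promote_secondaries
if [ -e "$KEY" ]; then
    echo "key present, current value: $(cat "$KEY")"
else
    echo "key not exposed by this kernel"
fi
```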
@pzg250 thank you for the details. Can you try the quickstart guide first, and confirm whether it works?
You seem to be combining incompatible technologies: you run the setup scripts with the stock-only option, i.e., for containers, and with estargz (which works only for containers), but you also run firecracker-containerd. Then you install Flannel, even though our scripts install Calico.
May I ask what you are trying to achieve?
Hi @ustiugov, thank you for your response. Yes, I will try to reinstall. So if I use stock-only, I should run the following steps, right?
- On both nodes
  1. `git clone --depth=1 https://github.com/vhive-serverless/vhive.git`
  2. `cd vhive`
  3. `mkdir -p /tmp/vhive-logs`
  4. `./scripts/cloudlab/setup_node.sh stock-only use-stargz > >(tee -a /tmp/vhive-logs/setup_node.stdout) 2> >(tee -a /tmp/vhive-logs/setup_node.stderr >&2)`
- On the worker node
  1. `./scripts/cluster/setup_worker_kubelet.sh stock-only > >(tee -a /tmp/vhive-logs/setup_worker_kubelet.stdout) 2> >(tee -a /tmp/vhive-logs/setup_worker_kubelet.stderr >&2)`
  2. `sudo screen -dmS containerd bash -c "containerd > >(tee -a /tmp/vhive-logs/containerd.stdout) 2> >(tee -a /tmp/vhive-logs/containerd.stderr >&2)"`
- On the master node
  1. `sudo screen -dmS containerd bash -c "containerd > >(tee -a /tmp/vhive-logs/containerd.stdout) 2> >(tee -a /tmp/vhive-logs/containerd.stderr >&2)"`
  2. `./scripts/cluster/create_multinode_cluster.sh stock-only > >(tee -a /tmp/vhive-logs/create_multinode_cluster.stdout) 2> >(tee -a /tmp/vhive-logs/create_multinode_cluster.stderr >&2)`
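All of these commands share the same bash logging idiom: `> >(tee -a …)` duplicates stdout to a log file via process substitution while keeping it on the terminal, and `2> >(tee -a … >&2)` does the same for stderr. A minimal sketch of the pattern (the `demo` function and log file names are made up for illustration; process substitution requires bash, not POSIX sh):

```shell
#!/usr/bin/env bash
mkdir -p /tmp/vhive-logs

# A stand-in for any of the setup commands: writes to both streams.
demo() { echo "to stdout"; echo "to stderr" >&2; }

# stdout goes to the terminal AND is appended to demo.stdout;
# stderr goes to the terminal AND is appended to demo.stderr.
demo > >(tee -a /tmp/vhive-logs/demo.stdout) 2> >(tee -a /tmp/vhive-logs/demo.stderr >&2)

sleep 1   # the tee processes run asynchronously; give them a moment to flush
grep -q "to stdout" /tmp/vhive-logs/demo.stdout && echo "stdout logged"
grep -q "to stderr" /tmp/vhive-logs/demo.stderr && echo "stderr logged"
```

The `>&2` inside the second substitution puts stderr back on stderr after tee, so error output stays distinguishable on the terminal while still being captured.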
> may I ask what you are trying to achieve?

Someone asked me to help him set up the vHive environment; I think he wants to run some algorithms on vHive.
It seems to work when following these steps. Thanks @ustiugov. I also hit an error in the example test; I will open a new ticket for that. Closing this.
**Describe the bug**
After installing vHive on two AWS EC2 instances (Ubuntu 20.04), functions cannot be deployed.
**To Reproduce**
1. On the master and worker instances:
   - `git clone --depth=1 https://github.com/vhive-serverless/vhive.git`
   - `cd vhive`
   - `mkdir -p /tmp/vhive-logs`
   - `./scripts/cloudlab/setup_node.sh stock-only use-stargz > >(tee -a /tmp/vhive-logs/setup_node.stdout) 2> >(tee -a /tmp/vhive-logs/setup_node.stderr >&2)`
2. On the worker instance:
   - `./scripts/cluster/setup_worker_kubelet.sh stock-only > >(tee -a /tmp/vhive-logs/setup_worker_kubelet.stdout) 2> >(tee -a /tmp/vhive-logs/setup_worker_kubelet.stderr >&2)`
   - `sudo screen -dmS containerd bash -c "containerd > >(tee -a /tmp/vhive-logs/containerd.stdout) 2> >(tee -a /tmp/vhive-logs/containerd.stderr >&2)"`
   - `sudo PATH=$PATH screen -dmS firecracker bash -c "/usr/local/bin/firecracker-containerd --config /etc/firecracker-containerd/config.toml > >(tee -a /tmp/vhive-logs/firecracker.stdout) 2> >(tee -a /tmp/vhive-logs/firecracker.stderr >&2)"`
   - `source /etc/profile && go build`
   - `sudo screen -dmS vhive bash -c "./vhive > >(tee -a /tmp/vhive-logs/vhive.stdout) 2> >(tee -a /tmp/vhive-logs/vhive.stderr >&2)"`
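Before the `go build` step, it may be worth confirming that the Go toolchain is actually reachable, since a fresh shell only picks it up after `source /etc/profile`. A small sanity check, as a sketch:

```shell
# Verify that Go is on PATH before building vHive; if it is not,
# print a hint instead of failing with "go: command not found".
if command -v go >/dev/null 2>&1; then
    go version
else
    echo "go not found on PATH; try 'source /etc/profile' first" >&2
fi
```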
3. On the master instance:
   - `sudo screen -dmS containerd bash -c "containerd > >(tee -a /tmp/vhive-logs/containerd.stdout) 2> >(tee -a /tmp/vhive-logs/containerd.stderr >&2)"`
   - `./scripts/cluster/create_multinode_cluster.sh stock-only > >(tee -a /tmp/vhive-logs/create_multinode_cluster.stdout) 2> >(tee -a /tmp/vhive-logs/create_multinode_cluster.stderr >&2)`
4. On another master-instance terminal:
   - `mkdir -p $HOME/.kube`
   - `sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`
   - `sudo chown $(id -u):$(id -g) $HOME/.kube/config`
   - `kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml`
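The kubeconfig copy above can be made re-run-safe. This is a sketch, not part of the vHive scripts; the guard against a missing `admin.conf` is my addition:

```shell
# Copy the admin kubeconfig into $HOME/.kube, but only if kubeadm has
# already generated it; otherwise print a hint instead of failing.
SRC=/etc/kubernetes/admin.conf
mkdir -p "$HOME/.kube"
if [ -f "$SRC" ]; then
    sudo cp -i "$SRC" "$HOME/.kube/config"
    sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
else
    echo "$SRC not found; run create_multinode_cluster.sh on this node first" >&2
fi
```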
5. On the worker instance:
   - `kubeadm join 172.31.31.170:6443 --token zu75b2.gcq3a7pgf6rt17zz --discovery-token-ca-cert-hash sha256:7ebc70e3c5fd3672183b2ac41c0e0f136dca8cbc77f918c9419931ca4e177dec`
6. On the original master-instance terminal:
   - press `y`
   - `watch kubectl get pods --all-namespaces`
7. On the master-instance terminal:
   - `source /etc/profile && pushd ./examples/deployer && go build && popd && ./examples/deployer/deployer`
**Expected behavior**
A clear and concise description of what you expected to happen.
**Logs**
step 4 logs
step 5 logs
step 6 logs
step 7 logs
**Notes**
Currently, we support only Ubuntu 18 (x86) bare-metal hosts; however, we encourage users to report issues that appear in different settings. We will try to help, and we may include these scenarios in our CI given enough interest from the community.