LandRover / StaffjoyV2

Staffjoy V2 - all microservices in a monorepo
MIT License

Issues running dev environment #19

Open 0x41mmarVM opened 2 years ago

0x41mmarVM commented 2 years ago

We use Github to track bugs in Staffjoy. Please answer these questions before submitting your issue. All of our code is in one place, so please preface the title with the system where the bug is (e.g. "company api" or "www"). Thanks!

What environment did you encounter the issue in?

Development. I'm trying to run it on a fresh Ubuntu 18.04 host machine; VirtualBox 6.0 and Vagrant installed successfully.

What did you do?

Cloned the repo, installed VirtualBox and Vagrant, then ran make dev.

What did you see instead?

First, there were issues with ubuntu_mirror_replace_to_fastest.sh: it would choose http://uk.mirror.worldbus.ge/, which does not have packages for Jammy, causing provisioning to fail. I fixed that manually.
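For reference, the manual fix was along these lines. This is only a sketch of the workaround: the file path and seeded contents below are stand-ins for demonstration (on the VM the file would be /etc/apt/sources.list), not the script's actual logic.

```shell
# Sketch of the manual workaround (assumed paths): rewrite the auto-selected
# mirror back to the official Ubuntu archive, which does carry Jammy packages.
SOURCES=./sources.list   # on the VM: /etc/apt/sources.list
# Seed a demo file resembling what the mirror script produced:
echo "deb http://uk.mirror.worldbus.ge/ubuntu jammy main restricted" > "$SOURCES"
# Point every occurrence of the broken mirror at archive.ubuntu.com instead:
sed -i 's|http://uk.mirror.worldbus.ge/ubuntu|http://archive.ubuntu.com/ubuntu|g' "$SOURCES"
cat "$SOURCES"
```

After this, `apt-get update` should resolve Jammy package lists again.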

Next, there were issues with Go installing packages:

go: 'go install' requires a version when current directory is not in a module

Cool, no problem. I edited golang.sh to add "@latest" at the end of every go install command, and the error messages went away. Now provisioning ends with no clear error message, and I get:

==> default: Attempting graceful shutdown of VM...

Attempting to manually bring the dev VM up produces no better results. I can SSH into it, but nothing runs on port 80 except a gateway error.
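For reference, the golang.sh workaround amounts to appending @latest to every bare go install line, since Go 1.16+ refuses a versionless install outside a module. A hedged sketch (the script path and the package name here are stand-ins, not the repo's actual contents):

```shell
# Demo of the workaround: append @latest to bare `go install` lines so they
# work in module-aware mode outside a module (required since Go 1.16).
SCRIPT=./golang.sh   # stand-in copy for demonstration
printf 'go install golang.org/x/tools/cmd/goimports\n' > "$SCRIPT"
# Only touch lines that do not already carry an @version suffix:
sed -i 's|^\(go install [^@ ]*\)$|\1@latest|' "$SCRIPT"
cat "$SCRIPT"
```

Pinning explicit versions instead of @latest would make the provisioning reproducible, but @latest is the minimal change.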

Are there any logs indicating an issue?

   default: alias docker-rmall='docker-rma && docker-rmia'
    default: [v] /home/vagrant/.bash_aliases path docker-rmall added.
    default: + echo '[v] /home/vagrant/.bash_aliases path docker-rmall added.'
    default: + addAlias /home/vagrant/.bash_aliases docker-nuke 'alias docker-nuke='\''docker-rmall; docker-rmnet; docker-rmvol'\'''
    default: + local FILENAME=/home/vagrant/.bash_aliases
    default: + local MATCH_PATTREN=docker-nuke
    default: + local 'CMD_EXPORT=alias docker-nuke='\''docker-rmall; docker-rmnet; docker-rmvol'\'''
    default: + local SUDO=false
    default: + grep -q docker-nuke /home/vagrant/.bash_aliases
    default: + [[ false == \t\r\u\e ]]
    default: + echo 'alias docker-nuke='\''docker-rmall; docker-rmnet; docker-rmvol'\'''
    default: + tee -a /home/vagrant/.bash_aliases
    default: alias docker-nuke='docker-rmall; docker-rmnet; docker-rmvol'
    default: [v] /home/vagrant/.bash_aliases path docker-nuke added.
    default: + echo '[v] /home/vagrant/.bash_aliases path docker-nuke added.'
    default: + addAlias /home/vagrant/.bash_aliases docker-clean 'alias docker-clean='\''docker-rma -f status=exited; docker-rmia -f dangling=true; docker-rmnet; docker-rmvol -f dangling=true'\'''
    default: + local FILENAME=/home/vagrant/.bash_aliases
    default: + local MATCH_PATTREN=docker-clean
    default: + local 'CMD_EXPORT=alias docker-clean='\''docker-rma -f status=exited; docker-rmia -f dangling=true; docker-rmnet; docker-rmvol -f dangling=true'\'''
    default: + local SUDO=false
    default: + grep -q docker-clean /home/vagrant/.bash_aliases
    default: + [[ false == \t\r\u\e ]]
    default: + tee -a /home/vagrant/.bash_aliases
    default: + echo 'alias docker-clean='\''docker-rma -f status=exited; docker-rmia -f dangling=true; docker-rmnet; docker-rmvol -f dangling=true'\'''
    default: alias docker-clean='docker-rma -f status=exited; docker-rmia -f dangling=true; docker-rmnet; docker-rmvol -f dangling=true'
    default: [v] /home/vagrant/.bash_aliases path docker-clean added.
    default: + echo '[v] /home/vagrant/.bash_aliases path docker-clean added.'
==> default: Attempting graceful shutdown of VM...
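From the `set -x` trace above, the provisioning's addAlias helper appears to work roughly as follows. This is a reconstruction inferred from the trace, not the script's verbatim source; argument names follow the trace (which spells MATCH_PATTREN with a typo).

```shell
# Reconstruction of the addAlias helper implied by the trace: append an alias
# definition to a file unless a matching pattern is already present.
addAlias() {
  local FILENAME=$1
  local MATCH_PATTERN=$2   # spelled MATCH_PATTREN in the original trace
  local CMD_EXPORT=$3
  local SUDO=${4:-false}
  # Skip if the alias is already registered in the target file:
  if ! grep -q "$MATCH_PATTERN" "$FILENAME" 2>/dev/null; then
    if [[ $SUDO == "true" ]]; then
      echo "$CMD_EXPORT" | sudo tee -a "$FILENAME"
    else
      echo "$CMD_EXPORT" | tee -a "$FILENAME"
    fi
    echo "[v] $FILENAME path $MATCH_PATTERN added."
  fi
}

addAlias ./bash_aliases docker-nuke "alias docker-nuke='docker-rmall; docker-rmnet; docker-rmvol'"
addAlias ./bash_aliases docker-nuke "alias docker-nuke='docker-rmall; docker-rmnet; docker-rmvol'"  # idempotent: second call is a no-op
```

The idempotence check is why re-running provisioning does not duplicate alias lines.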

If I SSH into the env and curl port 80:

vagrant@staffjoy-v2:~$ curl 127.0.0.1
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.18.0 (Ubuntu)</center>
</body>
</html>

No response on localhost:8001 or http://10.0.0.99:80 either.

Am I missing something? How do I get this to run?

Many thanks for your time.

0x41mmarVM commented 2 years ago

Trying a manual build inside the environment ends with:

+ echo 'Running database migration'
Running database migration
+ migrate '-database=mysql://root:SHIBBOLETH@tcp(10.0.0.100:3306)/account' -path=/home/vagrant/golang/src/v2.staffjoy.com/account/migrations/ up
error: dial tcp 10.0.0.100:3306: connect: connection timed out
make: *** [makefile:48: dev-build] Error 1
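The timeout suggests nothing is listening at 10.0.0.100:3306, or the route to it is down. A hypothetical pre-check (not part of the repo's makefile) to verify the endpoint answers at the TCP level before re-running the migration, using only bash built-ins:

```shell
# Hypothetical pre-check: confirm the MySQL endpoint the makefile targets
# accepts TCP connections before invoking `migrate`. Uses bash's /dev/tcp
# pseudo-device so no extra tools (nc, mysql client) are required.
check_db() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if check_db 10.0.0.100 3306; then
  echo "mysql reachable"
else
  echo "mysql unreachable: is the DB VM/container up and routable?" >&2
fi
```

If this fails, the migrate invocation cannot succeed regardless of credentials, so it narrows the problem to networking rather than the database schema.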
0x41mmarVM commented 2 years ago

Managed to narrow it down further: the docker.sh script fails to find dh-systemd, and then the minikube script produces:

 minikube v1.25.2 on Ubuntu 22.04 (vbox/amd64)
 Specified Kubernetes version 1.23.6 is newer than the newest supported version: v1.23.4-rc.0
 Using the none driver based on user configuration 
👍 Starting control plane node minikube in cluster minikube
 Running on localhost (CPUs=2, Memory=5935MB, Disk=40818MB) ... 
 OS release is Ubuntu 22.04 LTS
 Preparing Kubernetes v1.23.6 on Docker 20.10.17 ...
 kubelet.cluster-dns=10.0.0.10 ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf ▪ kubelet.housekeeping-interval=5m E0620 13:26:34.340353 3995 kubeadm.go:682
 sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml failed - will try once more: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": exit status 1 stdout: 
[certs]  Using certificateDir folder "/var/lib/minikube/certs" 
[certs]  Using existing ca certificate authority 
[certs]  Using existing apiserver certificate and key on disk  stderr: error execution phase certs/apiserver-kubelet-client: 
[certs]  certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA") To see the stack trace of this error execute with --v=5 or higher 🤦 Unable to restart cluster, will reset it: run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": exit status 1 
stdout: 
[certs]  Using certificateDir folder "/var/lib/minikube/certs" 
[certs]  Using existing ca certificate authority 
[certs]  Using existing apiserver certificate and key on disk  stderr: error execution phase certs/apiserver-kubelet-client: 
[certs]  certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA") To see the stack trace of this error execute with --v=5 or higher  ▪ Generating certificates and keys ... 💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-mani
fests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1 stdout: [init
 Using Kubernetes version: v1.23.6 
[preflight]  Running pre-flight checks 
[preflight]  Pulling images required for setting up a Kubernetes cluster 
[preflight]  This might take a minute or two, depending on the speed of your internet connection 
[preflight]  You can also perform this action in beforehand using 'kubeadm config images pull' 
[certs]  Using certificateDir folder "/var/lib/minikube/certs" 
[certs]  Using existing ca certificate authority 
[certs]  Using existing apiserver certificate and key on disk  stderr: 
[WARNING FileExisting-socat]: socat not found in system path 
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase certs/apiserver-kubelet-client: 
[certs]  certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA") To see the stack trace of this error execute with --v=5 or higher  ▪ Generating certificates and keys ... 
 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-mani
fests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1 stdout: [init
 Using Kubernetes version: v1.23.6 
[preflight]  Running pre-flight checks 
[preflight]  Pulling images required for setting up a Kubernetes cluster 
[preflight]  This might take a minute or two, depending on the speed of your internet connection 
[preflight]  You can also perform this action in beforehand using 'kubeadm config images pull' 
[certs]  Using certificateDir folder "/var/lib/minikube/certs" 
[certs]  Using existing ca certificate authority 
[certs]  Using existing apiserver certificate and key on disk stderr: 
[WARNING FileExisting-socat]: socat not found in system path 
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase certs/apiserver-kubelet-client: 
[certs]  certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA") To see the stack trace of this error execute with --v=5 or higher ▪ Generating certificates and keys ... 💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvaila
ble--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1 stdout: [init
 Using Kubernetes version: v1.23.6 
[preflight]  Running pre-flight checks 
[preflight]  Pulling images required for setting up a Kubernetes cluster 
[preflight]  This might take a minute or two, depending on the speed of your internet connection 
[preflight]  You can also perform this action in beforehand using 'kubeadm config images pull' 
[certs]  Using certificateDir folder "/var/lib/minikube/certs"

[certs]  Using existing ca certificate authority

[certs]  Using existing apiserver certificate and key on disk

stderr:
 [WARNING FileExisting-socat: socat not found in system path
 [WARNING Service-Kubelet: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase certs/apiserver-kubelet-client: 
[certs]  certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
To see the stack trace of this error execute with --v=5 or higher
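The repeated x509 failure means kubeadm keeps reusing existing certificate material that no longer matches the "minikubeCA" it is validating against. A hypothetical helper (not from the repo) to spot this symptom in a captured log; the remedy it suggests, `minikube delete` followed by re-running the provisioning so certificates are regenerated, is an assumption on my part and not confirmed in this thread:

```shell
# Hypothetical helper: detect the stale-CA symptom in a saved minikube log.
needs_cert_reset() {
  grep -q 'certificate signed by unknown authority' "$1"
}

# Demo input resembling the error above:
printf 'x509: certificate signed by unknown authority ("minikubeCA")\n' > ./minikube.log

if needs_cert_reset ./minikube.log; then
  echo "stale CA detected: run 'minikube delete' and re-provision"
fi
```

Destroying and re-creating the VM would have the same effect of discarding the stale certificates.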
LandRover commented 2 years ago

Hey, about ubuntu_mirror_replace_to_fastest.sh: thanks, I'll look into the mirror list.

About the issue: the final steps of the provisioning actually look really good. The line default: + echo '[v] /home/vagrant/.bash_aliases path docker-clean added.' is a good sign that everything finished.

The part I don't understand is why it shut down. What triggered ==> default: Attempting graceful shutdown of VM...?

Another thing you can do is SSH into the VM, cd $STAFFJOY, and run ./vagrant/minikube.sh.

Please share the results.

Oleg.

0x41mmarVM commented 2 years ago

Thanks for your response. The output is above. That said, somehow, after destroying and re-creating the VM multiple times, then re-running all of the provisioning scripts and compiling the Go app again, it's up! I still have no idea what happened.

LandRover commented 2 years ago

Interesting. If you can shed some light, I'll give it a try too.

Was anything different on the run that succeeded? I also run it in a loop, and I get very high success rates.

Thanks,