Open JodelLisa2Point0 opened 3 years ago
@richmandlx Yes, exactly. The frontend pipeline should work. However, I still have to run some tests after adding the Expo token as a secret to our repository.
I managed to get basic Kubernetes working. The API Gateway and the Bikenest service are running correctly (0e4a0d4). I had to create Services so that the containers can communicate. IP discovery works via environment variables: for example, if the Service bikenest-db exists, the other containers get an environment variable BIKENEST_DB_SERVICE_HOST that contains the IP of the Bikenest DB. This environment variable has to be used inside the Bikenest service.
So basically we have to integrate the other services and change the structure of the docker-compose file (rename the environment variables).
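As a minimal sketch of how the injected variable could be consumed inside a service (the port variable and the localhost/5432 fallbacks for local development are my assumptions, not something from our setup):

```shell
#!/bin/sh
# Kubernetes injects <SERVICE_NAME>_SERVICE_HOST / _SERVICE_PORT for every
# Service that already exists when a pod starts; for a Service named
# "bikenest-db" that becomes BIKENEST_DB_SERVICE_HOST.
# The fallbacks below are assumed values for running outside the cluster.
DB_HOST="${BIKENEST_DB_SERVICE_HOST:-localhost}"
DB_PORT="${BIKENEST_DB_SERVICE_PORT:-5432}"
echo "bikenest service connecting to ${DB_HOST}:${DB_PORT}"
```

Note that the env vars only exist for Services created before the pod started, which is one reason DNS-based discovery (bikenest-db as a hostname) is often preferred.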
Vote @richmandlx for President!
I'm leaving this video here because it gives a solid understanding of using AWS with Kubernetes. Basically he sets up Kubernetes manually on AWS servers. AWS also offers ready-made managed Kubernetes clusters. What is still not perfectly clear to me is how we would actually deploy our app there. In the video he just executes commands on the AWS server that deploy an nginx instance... https://www.youtube.com/watch?v=vpEDUmt_WKA
On the other hand, I just found this: https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/ which would make things a lot easier. Could we just rent an Ubuntu server, install Docker there, and deploy using the server as a remote context?
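For the record, the remote-context approach from that blog post boils down to roughly the following (server address, user, and context name are placeholders; it needs SSH key access to the server and a docker-compose recent enough to support --context):

```shell
# Create a Docker context pointing at the rented server over SSH
# ("deploy@203.0.113.10" is a placeholder address).
docker context create bikenest-remote --docker "host=ssh://deploy@203.0.113.10"

# Run the existing compose file against the remote Docker daemon.
docker-compose --context bikenest-remote up -d
```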
I tried it with two PCs locally and I can't get it to work. Our best bet is probably still Kubernetes.
See the latest commit: I managed to get the kubemanifests.yml working so that I can deploy the backend using my local Kubernetes cluster. One open point is building the Docker images: the kubemanifests.yml holds instructions, for example, to use the backend_booking image. This image is already built on my local machine, so the cluster just takes it. How would this work with a remote deployment? Do we build the images on the remote server or something like that? It is possible to push built images to Docker Hub; Kubernetes could then pull them from there. The CD pipeline would then probably build the images and push them to Docker Hub? If the images should not be publicly available, there is the option to use private container registries. It seems like AWS, for example, would support this.
https://kubernetes.io/de/docs/concepts/containers/images/
https://dzone.com/articles/running-local-docker-images-in-kubernetes-1
https://github.com/marketplace/actions/push-to-amazon-ecr
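As a sketch of what the push step in a CD pipeline could look like (the Docker Hub account name and the build-context path are placeholders; the build/push commands are commented out because they need credentials):

```shell
#!/bin/sh
# Derive a unique image tag from a commit SHA; in CI this would come from
# $(git rev-parse --short HEAD), here it is fixed to the commit above.
REGISTRY_USER="bikenest"            # placeholder Docker Hub account
GIT_SHA="0e4a0d4"
IMAGE_TAG="${REGISTRY_USER}/backend_booking:${GIT_SHA}"
echo "$IMAGE_TAG"

# In the actual pipeline (requires docker and a prior docker login):
#   docker build -t "$IMAGE_TAG" ./booking   # build-context path assumed
#   docker push "$IMAGE_TAG"
```

Tagging with the commit SHA instead of latest makes rollbacks and cache behavior predictable, since the manifest then pins an exact build.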
Kubernetes also works with the private repository from Docker Hub: http://docs.heptio.com/content/private-registries/pr-docker-hub.html First push the built images to the private registry with docker-compose push (the image tags have to be specified inside the docker-compose.yml file). Then use those image tags inside the kubemanifests.yml and specify the secret for Docker Hub using
kubectl create secret docker-registry $SECRETNAME \
--docker-username=$USERNAME \
--docker-password=$PW \
--docker-email=$EMAIL
and
kubectl patch serviceaccount default \
-p "{\"imagePullSecrets\": [{\"name\": \"$SECRETNAME\"}]}"
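The kubemanifests.yml would then reference the pushed tag; as an alternative to patching the default service account, the secret can also be listed per pod. A fragment with assumed names:

```yaml
# Fragment of a Deployment pod spec -- image tag and secret name are examples.
spec:
  containers:
    - name: booking
      image: bikenest/backend_booking:latest   # tag pushed via docker-compose push
  imagePullSecrets:
    - name: dockerhub-secret                   # the $SECRETNAME created above
```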
@s-kruschel already created a CD pipeline for the frontend, right?
We also just talked a lot about deploying the backend. Summary: it seems like microservices are not really deployable with Heroku, so we are now looking into Kubernetes. Basically we can host a single-node Kubernetes cluster on our local machine and try to get the deployment working. If it works, we could start using a Kubernetes provider.
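One way to get that local single-node cluster (minikube is just one option; kind or Docker Desktop's built-in Kubernetes would work too):

```shell
# Start a local single-node cluster (requires minikube to be installed).
minikube start

# Point docker builds at the cluster's Docker daemon, so locally built
# images are visible to Kubernetes without pushing to a registry.
eval "$(minikube docker-env)"

# Deploy the manifests from the repo.
kubectl apply -f kubemanifests.yml
```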