vardius / go-api-boilerplate

Go Server/API boilerplate using best practices DDD CQRS ES gRPC
https://go-api-boilerplate.local
MIT License
934 stars · 137 forks

Documentation #15

Open · vardius opened this issue 5 years ago

vardius commented 5 years ago

If

  1. You have some questions
  2. You had a problem with setup
  3. You had a problem with configuration
  4. You don't know how to do something
  5. You have ideas or propositions
  6. You think something could be explained better or in more detail

Let me know so I can update the documentation and help other people get past the same issues faster. Please comment here with suggestions about what should or could be added or updated in the documentation, and I will try to explain or implement it as soon as possible.

mar1n3r0 commented 4 years ago

Hello and thanks for the fantastic boilerplate. I am using it as a starting point for my own project and am currently testing with minikube locally. I will try to summarize my discoveries so far:

  1. After the apiVersion breaking changes in Kubernetes 1.16, some of the Helm charts don't work for me since they were not updated according to the deprecation guidelines. See: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/ Example after originally running make helm-install:
    
    helm install --name go-api-boilerplate --namespace go-api-boilerplate helm/app/
    Error: validation failed: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
    make: *** [Makefile:64: helm-install] Error 1

As you can see, with the current script it's impossible to tell which chart specifically failed (a quick grep over the chart sources helps here, see the snippet after this list). Do you observe the same behavior by any chance?
2. Because of that I have heavily modified the helm scripts to fetch the latest packages from Helm Hub rather than use the archives included in the repo.
3. Even some of the latest stable charts have not yet been updated to the 1.16 guidelines, namely magic-namespace and heapster so far. Because of this I have disabled them for the time being.
4. Since I am installing the charts one by one and not as one whole package like the original, I am facing some unexpected issues. Mostly, some Kubernetes service names don't match the nginx ingress definitions. For example, go-api-boilerplate.user, go-api-boilerplate.auth, etc. show up as microservice-user, microservice-auth in my services list.
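
To pinpoint the offending charts, searching the chart sources for the API groups removed in 1.16 works; the path of the bundled .tgz archives below is an assumption, and the fix is usually just bumping the affected manifests to apiVersion: apps/v1 as described in the deprecation guide:

    # extract the bundled dependency archives (path to the .tgz files assumed), then
    # list templates that still reference API groups removed in Kubernetes 1.16
    mkdir -p /tmp/charts && for f in helm/app/charts/*.tgz; do tar -xzf "$f" -C /tmp/charts; done
    grep -rn --include='*.yaml' -E 'extensions/v1beta1|apps/v1beta[12]' /tmp/charts helm/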

As soon as I have a decent workaround for the hardcoded charts I can create a PR; meanwhile I would appreciate any ideas on how to make those dependencies more flexible instead of shipping .tgz packages in the repo.
hmersin commented 4 years ago

Hi,

Thank you for the great repo. I think it has a lot of potential, but it also has a steep learning curve.

I would like to ask for architectural documents: one that shows the components and one for the data flow. Also, some docs about AWS ECR would be nice. I was able to build images, tag them, and push them to ECR, but I couldn't make k8s pull the images from ECR, I am guessing due to some Docker registry secret issue.

Please keep up the great work

vardius commented 4 years ago

Hi @hmersin,

Yes, if your Docker registry is private you need to add a secret to your k8s cluster so it can pull images. More info here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
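
For AWS ECR specifically, a minimal sketch of creating such a pull secret looks like the following; the secret name regcred, the account id, and the region are placeholders, and the command assumes AWS CLI v2:

    # create a docker-registry secret in the app namespace using a fresh ECR token
    kubectl create secret docker-registry regcred \
      --namespace go-api-boilerplate \
      --docker-server=<aws_account_id>.dkr.ecr.<region>.amazonaws.com \
      --docker-username=AWS \
      --docker-password="$(aws ecr get-login-password --region <region>)"

The secret then has to be referenced from the pod spec (or the chart values) via imagePullSecrets. Keep in mind that ECR auth tokens expire after roughly 12 hours, so in practice the secret needs to be refreshed periodically.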

As for a brief explanation of the architectural concepts, I can list some key points.

The boilerplate comes with two example microservices, auth and user, with gRPC communication between them, and a UI service, web, that communicates with the API via HTTP.

The API allows dispatching commands following the CQRS pattern and exposes data to the UI using a view model. The request flow looks as follows:

dispatch command request -> command handler -> aggregate root -> event store -> event handler

Later on the data can be queried via the API using the desired persistence model.
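
To make that flow a bit more concrete, here is a minimal sketch in Go; the types and names below are illustrative only, not the boilerplate's actual API:

    package main

    import (
        "context"
        "fmt"
    )

    // RegisterUser is a command: a plain DTO describing what the system should do.
    type RegisterUser struct {
        Email string
    }

    // UserRegistered is the event recorded after the command has been handled.
    type UserRegistered struct {
        Email string
    }

    // handleRegisterUser plays the role of the command handler: it lets the
    // aggregate root validate the command and appends the resulting event to
    // the event store (modelled here as a simple channel).
    func handleRegisterUser(ctx context.Context, cmd RegisterUser, eventStore chan<- UserRegistered) error {
        if cmd.Email == "" { // aggregate root invariant
            return fmt.Errorf("email is required")
        }
        eventStore <- UserRegistered{Email: cmd.Email}
        return nil
    }

    func main() {
        eventStore := make(chan UserRegistered, 1)

        // dispatch command request -> command handler -> aggregate root -> event store
        if err := handleRegisterUser(context.Background(), RegisterUser{Email: "john@example.com"}, eventStore); err != nil {
            panic(err)
        }

        // event handler: projects the event into the read model later queried via the API
        event := <-eventStore
        fmt.Printf("read model updated for user %s\n", event.Email)
    }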

As for the directory structure, the boilerplate setup should be easy enough to follow the flow. I hope it's not as hard to understand as it might seem. If you have specific questions please ask and I will try to answer to the best of my ability; I hope this little explanation attempt helps a bit.

hmersin commented 4 years ago

Thank you for the super fast response and the architectural explanation.

When I run make helm-install my pods start to fire up except the web one. I think putting the web section into requirements.yaml will fix it.

Again thanks for the great work,

vardius commented 4 years ago

@hmersin Ohh yes, very possible that it is missing. If you improve/fix anything, a PR is welcome.

luisliz commented 4 years ago

Hello, I'm trying to work with this locally but I don't understand how to do it; the only thing that I can successfully run is the docker-run BIN= target. To develop, would I use Helm or Telepresence?

I'm also a bit confused about how a user is registered with email, or is this not implemented?

p.s. Love the boilerplate, I hope to use it in a project if I can get the hang of it.

vardius commented 4 years ago

@luisliz Let me answer about user registration first: it is designed to work as a password-less signup, meaning that every time you want to log in/sign up you basically just provide an email address and get an email message with a link which includes an auth token; after clicking the button you get logged in automatically.

Of course this is only a boilerplate, and I did it this way because it seems kind of trendy nowadays. If needed it shouldn't be hard to change/extend it to a normal email/password login process. And yes, it does work right now.

The k8s setup comes with a dev mailbox, https://github.com/maildev/maildev, which is preinstalled in the go-api-boilerplate namespace. So here we come to the first question: how to run it locally.

To do so, ideally you have some Kubernetes cluster; Docker Desktop comes with a local k8s cluster which works great: https://www.docker.com/products/docker-desktop

After that you can simply follow the readme steps at https://github.com/vardius/go-api-boilerplate#quick-start which will guide you through deploying this boilerplate to your local k8s cluster.

As @hmersin mentioned earlier (https://github.com/vardius/go-api-boilerplate/issues/15#issuecomment-694443127), there was one requirement missing for the web service, which I added here: https://github.com/vardius/go-api-boilerplate/commit/86b2aaca5f71f88149bfa962f430b8f0ed11a367

Once you get it running you can open the dev mailbox via its web UI, or just go to http://maildev.go-api-boilerplate.local if you have updated your /etc/hosts correctly.
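
For reference, with Docker Desktop's local cluster the ingress is reachable on localhost, so the /etc/hosts entries can look something like this (add any other *.go-api-boilerplate.local hosts your ingress defines; the exact host list here is an assumption):

    # /etc/hosts
    127.0.0.1    go-api-boilerplate.local
    127.0.0.1    maildev.go-api-boilerplate.local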

As for debugging/developing code locally, the way I usually do it is to have a develop version running on my cluster and then run the service I am working on locally via go run cmd/user/main.go, with the env set to the cluster so it works against the same database. If you are using GoLand or VS Code it is easy to run the debugger, but most of the time it's unnecessary; in any case you don't need Telepresence.

If you want to override the default env variables from the config you can do it as follows: MYSQL_HOST=mysql.go-api-boilerplate.local go run -race cmd/user/main.go

Let me know if you have further questions or my explanation is not enough; I am happy to try and explain even further.

If you find it not working the first time, you might want to open the API request in a new tab and make the browser accept the self-signed certificate.

luisliz commented 4 years ago

Thank you very much!

xtay315 commented 4 years ago

We should put commands into the application layer; the domain layer should know nothing about transactions and the DB.

vardius commented 4 years ago

@xtay315 why commands?

Commands are something you want the system to do, and I believe they should be kept in the domain layer; commands are simply DTOs and know nothing about the transaction layer.

I think we could move the command handlers to the application layer, but definitely not the commands themselves.

xtay315 commented 4 years ago

> @xtay315 why commands?
>
> Commands are something you want the system to do, and I believe they should be kept in the domain layer; commands are simply DTOs and know nothing about the transaction layer.
>
> I think we could move the command handlers to the application layer, but definitely not the commands themselves.

Commands are simply DTOs; they are not a domain concept. When a command goes into the domain layer, you should translate it into a domain concept (entity, value object, or aggregate). A command is just the contract/interface/API of the application facade.
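
A rough sketch of that separation in Go, with purely illustrative names (not the boilerplate's actual packages):

    package main

    import (
        "errors"
        "fmt"
    )

    // application layer: the command is just a DTO, the contract of the application facade
    type RegisterUserCommand struct {
        Email string
    }

    // domain layer: knows nothing about commands, transactions or the database
    type EmailAddress string

    func NewEmailAddress(raw string) (EmailAddress, error) {
        if raw == "" {
            return "", errors.New("empty email address")
        }
        return EmailAddress(raw), nil
    }

    type User struct {
        Email EmailAddress
    }

    // RegisterUser is pure domain behaviour expressed with domain concepts only.
    func RegisterUser(email EmailAddress) User {
        return User{Email: email}
    }

    // application layer handler: translates the DTO into domain concepts (value
    // objects, entities, aggregates) and coordinates transactions/persistence
    // around the domain call.
    func HandleRegisterUser(cmd RegisterUserCommand) (User, error) {
        email, err := NewEmailAddress(cmd.Email) // DTO -> value object
        if err != nil {
            return User{}, err
        }
        return RegisterUser(email), nil
    }

    func main() {
        user, err := HandleRegisterUser(RegisterUserCommand{Email: "john@example.com"})
        if err != nil {
            panic(err)
        }
        fmt.Printf("registered %s\n", user.Email)
    }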

vardius commented 4 years ago

Could you maybe do a simple PR? It would be much easier for me to picture things in my head while reading the code.

beautyfree commented 3 years ago

@vardius I am interested in the development pipeline. What is your development setup, and how do you start and restart the service you are currently working on?

vardius commented 3 years ago

This is only an example and is meant to be modified by other developers and adjusted to their needs. However, I do run it on my home Kubernetes cluster, and my setup in this case is simple. I have both the mono repo (go-api-boilerplate) and additional single-service repositories. How it works is that I am using GitHub Actions triggered by a release; you can find examples here:

https://github.com/vardius/go-api-boilerplate/blob/master/.github/workflows/user-publish.yaml
https://github.com/vardius/go-api-boilerplate/releases/tag/v0.0.0%2Buser
https://github.com/vardius/go-api-boilerplate/actions/runs/276702097

In these examples the flow is simple: after creating a release, the action builds a Docker image. Because my home server doesn't have an external IP address, I then manually tell it to pull the new version, but in real life you would probably update the values in another action (triggered by the successful Docker build) and deploy it to the Kubernetes cluster.

make helm-upgrade is the command I am running from my local network after updating the version in the values file https://github.com/vardius/go-api-boilerplate/blob/master/helm/app/values.yaml
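
In other words, the manual part boils down to something like this (assuming the release action has already pushed the new image):

    # manual roll-forward from the local network:
    # 1. update the service image version in helm/app/values.yaml
    # 2. upgrade the release in the cluster
    make helm-upgrade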

In real life you probably want that process to be automated and would add more GitHub Actions to continue the flow. You might even host your own Helm chart for https://github.com/vardius/go-api-boilerplate/tree/master/helm/microservice and then split the app values into many small ones per service; this way you could deploy services one by one and get rid of the mono-repo completely.

For my home server I was thinking of hosting a self-hosted GitHub Actions runner which would deploy automatically (no need for an external IP then), removing my involvement completely: https://github.com/actions/runner

omerdn1 commented 3 years ago

@vardius Hey bud, great work on this, I really appreciate it!

Are you familiar with DevSpace? I have been using their tools, specifically the cluster development features (including hot reloading), in some personal projects and I'm super happy with them. I was wondering if you ever set it up in this environment and if you can help with the configuration of the devspace.yaml file.

vardius commented 3 years ago

Hi @omerdn1

Thanks for asking; I am not familiar with it, thanks for sharing. I will look into it. I would also strongly encourage you to try and integrate it with this boilerplate. Any contributions are more than welcome!

ppusapati commented 3 years ago

@vardius Thank you for the great work.

Can we integrate this with go-micro and nats?

vardius commented 3 years ago

@ppusapati

> @vardius Thank you for the great work.
>
> Can we integrate this with go-micro and nats?

Would you like to submit a proof of concept as a new PR?