eliassal opened this issue 3 years ago
Hey @eliassal ! Thanks for pointing that out. For Zipkin, when prompted use http://zipkin:9411
and for Redis use redis:6379.
Updated the docs. Let me know if it works now?
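For reference, the interactive deploy exchange looks roughly like this (the prompt wording is a sketch and may differ slightly between Tye versions; the key point is that in-cluster services resolve by their Kubernetes service name, not localhost):

```shell
tye deploy -i
# Enter the connection string to use for service 'redis':
#   redis:6379
# Enter the connection string to use for service 'zipkin':
#   http://zipkin:9411
```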
So many thanks for your prompt response, you are the guy :-). It is working like a charm, despite the strange "... service does not have a project associated" messages you can notice in the snapshot. Also, maybe it would be useful to update the doc regarding the first input "Enter container Registry" and tell beginners whether they need to enter their Docker Hub username.
oooouh, a lot of hard work man, hats off. I was able to reach the service mesh step, but when I tried to understand the Grafana dashboard, really all this is new to me and I am not able to decipher what those graphs mean in this context.
In Configuring Resiliency, after executing
kubectl apply -f .\backendServiceProfile.yaml
you indicate "Forward port to your frontend service and navigate to the site to see the partial failures go away", but the port-forward is already running, as we did this in an earlier step:
kubectl port-forward svc/frontend 3000:80
Should I stop it and run it again, or is it something else?
Then you indicate "navigate to the site to see the partial failures go away...". Where? I don't see any place where I can see failures.
@eliassal When you apply the ServiceProfile configuration and refresh the page, you won't see the error page again. This is because the sidecar within the mesh retries before throwing errors. So, unless all the retries fail, you will not see the error page. If you've not exited the window where you ran the port-forward, you don't have to execute it again. Watch my Linkerd demo on YouTube: https://youtu.be/zW4INO353Xg?t=3319
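In other words, the sequence is (file and service names taken from the thread):

```shell
# Apply the ServiceProfile; an already-running port-forward keeps working.
kubectl apply -f ./backendServiceProfile.yaml

# Only if you had exited the earlier port-forward, start it again:
kubectl port-forward svc/frontend 3000:80

# Then refresh http://localhost:3000 - the mesh retries hide the interim failures.
```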
What you see is expected behavior - you don't see failures because we have handled them. In a microservices world, there is a high chance of interim failures. So, whenever there is a failure in service-to-service communication, we make the services resilient enough to handle it. One way to do that is to retry before throwing the exception back to the calling service. We also must be careful about how many times we retry so we don't choke the servers; that's why we configure it to a number we think is right. In the case of Linkerd, we configure this in YAML files and don't bake it into code. Hope that helps.
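As a sketch, a Linkerd ServiceProfile that marks a route as retryable and caps retries with a budget looks like this (the service name and route here are assumptions for illustration, not the tutorial's exact file):

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # <service>.<namespace>.svc.cluster.local
  name: backend.default.svc.cluster.local
spec:
  routes:
    - name: GET /weatherforecast      # hypothetical route
      condition:
        method: GET
        pathRegex: /weatherforecast
      isRetryable: true               # let the proxy retry this route
  retryBudget:
    retryRatio: 0.2                   # at most 20% extra load from retries
    minRetriesPerSecond: 10
    ttl: 10s
```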
Also, check this out: https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing
Answering your question on Grafana - one of the key features of a service mesh is to understand how the services are communicating with each other. Observability helps in understanding how systems work together and how things can be improved. The dashboard gives you a peek into how the systems are communicating with each other. Think of a scenario where there are hundreds of microservices working with each other. These dashboards start to make sense in such scenarios, and help you understand the root of a problem.
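If the Grafana graphs feel overwhelming, the same golden metrics (success rate, request rate, latency percentiles) are also available from the CLI, which can be easier to read at first (these commands are from Linkerd 2.x; newer releases moved them under `linkerd viz`):

```shell
# Per-deployment success rate, RPS and latency percentiles
linkerd stat deployments -n default

# Open the bundled dashboards in a browser
linkerd dashboard
```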
So many thanks again for all those details. After reaching the end of the video, the graphs and the different tools started to make sense to me. One thing I am still not able to understand: at the end, you add in backendProfile.yaml
retryBudget: retryRatio: 0.2.....
so does this allow us to comment out/remove the AddPolicyHandler lines (GetRetryPolicy and GetCircuit...) in the Startup of the frontend? Thanks
Yes, the Polly code is not needed when using service mesh.
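Concretely, that means the Polly registration in the frontend's Startup can be dropped. A rough sketch (the client name and policy method names are assumptions based on the identifiers mentioned above, not the tutorial's exact code):

```csharp
// Before: resiliency handled in code with Polly
services.AddHttpClient("backend")
    .AddPolicyHandler(GetRetryPolicy())           // remove when the mesh retries
    .AddPolicyHandler(GetCircuitBreakerPolicy()); // remove when the mesh handles it

// After: plain client; Linkerd's ServiceProfile supplies retries and budgets
services.AddHttpClient("backend");
```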
Nish, the code worked like a charm until I reached the step Managing resiliency using Linkerd Service Mesh. The two kubectl apply commands work fine.
Then I type tye deploy -i. I am asked to enter the connection string to Redis, so I enter tcp://localhost:6379; I am asked to enter the connection string to Zipkin, so I enter http://localhost:9411.
In spite of the fact that the 4 pods are running,
I then get some strange messages for both: "....does not have a project associated, skipping", as you can notice in the 1st snapshot. When I browse to localhost:3000, I get the following error. After googling, I was not able to find any help.