muteor / microservice_experiment

Playing with microservice tools - nothing to see here

Clarification on your linkerd routing. #1

Open zoza1982 opened 7 years ago

zoza1982 commented 7 years ago

Hi, I read your blog about this experiment and found it very interesting, but one thing puzzles me a lot. I'm having a hard time understanding this example.

For example:

"Use this router for linkerd-to-service. This server should be registered in service discovery so that incoming traffic is served here."

```
baseDtab: |
  /local => /$/inet/127.1/80;
  /http/1.1// => /local;
```

My question is: if this is my linkerd-to-service router and my application runs on port 80, what's the point of defining the port here when you could also use Consul to figure that out automatically, just like you did in the router for outbound traffic?

It doesn't make sense to document each and every application port in the linkerd conf for the "linkerd-to-service" router when you can just rely on the linkerd Consul namer to do that automatically for you.

Does that make sense? Maybe I am not understanding something here correctly....
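(For reference, an outgoing router that resolves destinations through the Consul namer might look roughly like this. This is a sketch, not the config from the repo; the datacenter name `dc1`, the ports, and the `/http/1.1/*` rule are assumptions:)

```yaml
# Sketch: outgoing router where destinations are resolved
# dynamically via the Consul namer instead of hard-coded ports.
namers:
- kind: io.l5d.consul
  host: 127.0.0.1
  port: 8500

routers:
- protocol: http
  label: outgoing
  baseDtab: |
    /svc => /#/io.l5d.consul/dc1;
    /http/1.1/* => /svc;
  servers:
  - ip: 127.0.0.1
    port: 4140
```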

muteor commented 7 years ago

That is the local incoming config. As I used a sidecar approach, each service has its own linkerd instance, which means for incoming traffic all we need to do is route it to the local "thing"; in this example that's an nginx web server on port 80, but it could be anything you like.

This config is internal to the service; nothing else will ever know about it, so it doesn't need to involve any service discovery, because the service already knows where it is.
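(Put together, the incoming router might look roughly like this. A sketch only; the label, server port, and the `/http/1.1/*` rule are assumptions, not the exact config from the repo:)

```yaml
# Sketch: linkerd-to-service (incoming) router for a sidecar.
# Everything arriving here is forwarded to the co-located
# service on localhost, so no service discovery is involved.
routers:
- protocol: http
  label: incoming
  baseDtab: |
    /local => /$/inet/127.1/80;
    /http/1.1/* => /local;
  servers:
  - ip: 0.0.0.0
    port: 4141
```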

Hope that makes sense :)

Thanks

Keith

zoza1982 commented 7 years ago

Oh I see, so the sidecar container is linkerd itself, one per app container. Interesting.... One thing that potentially worries me (it has to be tested) is efficiency: is it worth it resource-wise if you have several app containers on the same host, each also running its own linkerd sidecar? So linkerd's resource consumption is the question mark, considering it's Java/Scala and those kinds of apps do like eating resources.

Another concern that naturally comes in is whether it's scalable and makes sense at large scale.

muteor commented 7 years ago

Yeah, that's a good point. I did use the optimised JVM stack, however I think others have found per-instance sidecar'ing problematic.

You should check out:

https://blog.buoyant.io/2016/10/14/a-service-mesh-for-kubernetes-part-ii-pods-are-great-until-theyre-not/

I am actually working on updating the example to use pure Kubernetes, because Rancher 1.3 changed the way networking works and consul-registrator no longer works.

zoza1982 commented 7 years ago

Very cool article, thanks for sharing. When you say consul-registrator, are you referring to https://github.com/gliderlabs/registrator? Or are you using something else?

muteor commented 7 years ago

Yeah, that's the one; it's pretty cool. It's a shame it doesn't work with Rancher now.

zoza1982 commented 7 years ago

What exactly is the problem with registrator? I've dealt with it before on CoreOS with Calico overlay networking, and there are tricks you can do to make it work and hook it up to the right network.

Specifically, what is Rancher changing in comparison to vanilla Kubernetes?