berkeli / immersive-go

Creative Commons Zero v1.0 Universal

Distributed Systems pt 3 #53

Open berkeli opened 1 year ago

berkeli commented 1 year ago

Read: https://docs.google.com/document/d/1WoOTLTdtDqnL3fv3YVfI32kfySHqh7y1UfLizBJ3LXY/edit#heading=h.ep7stp2jwevq

berkeli commented 1 year ago

Name 5 functions of load balancers

- Load balancing: ensuring load is distributed across the available backends
- Service discovery
- Health checking (active/passive)
- Sticky sessions: making sure requests from the same client keep being routed to the same backend
- Observability

Why might you use a Layer 7 load balancer instead of a Layer 4?

L7 load balancers see much more information about each request than L4 (e.g. the HTTP path, headers, and cookies), which allows for more fine-tuned load balancing.

When might you use a Layer 4 load balancer?

When you only need simple TCP/UDP load balancing that doesn't require routing connections based on their content.

Give a reason to use a sidecar proxy architecture rather than a middle proxy

A middle proxy can be a single point of failure even if it is distributed, and it also makes it harder to work out where the fault lies when there's a problem in the system.

Why would you use Direct Server Return?

Load balancing can be quite expensive if every response from the server has to travel back through the load balancer on its way to the client. DSR eliminates this overhead by having the server send its response directly to the client, so only the inbound request passes through the load balancer.

What is a Service Mesh? What are the advantages and disadvantages of a service mesh?

A service mesh is an infrastructure layer in your app in which services do not communicate with each other directly but via sidecar proxies. Each service has its own sidecar proxy, which allows more control over what kinds of requests can be made. It also simplifies the services themselves, as this logic doesn't need to be coded into the application. The downsides are added operational complexity and the extra latency of the additional hop through each proxy.

What is a VIP? What is Anycast?

VIP - a virtual IP address: an IP not tied to one physical network interface, often used as the stable front-end address of a load balancer. Anycast - a routing methodology where a single IP is shared by multiple devices; a request is typically routed to the closest server, or to the one reachable in the fewest hops.

**Why doesn’t autoscaling work to redistribute load in systems with long-lived connections?**

Auto-scaling only helps new connections, because long-lived connections are assigned to a backend at the time the connection is established. Migrating a live connection from one backend to another isn't something auto-scaling or a load balancer can do.

How can we make these kinds of systems robust?

One solution is client-side load balancing: obtain the list of backends and keep it refreshed, then establish a long-lived connection to each backend. Requests can then be spread across those connections on a round-robin basis, or with more sophisticated load balancing.
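The round-robin selection step could look like this in Go; the pool here is a bare sketch that rotates over a fixed list of backend addresses, standing in for a refreshed service-discovery list of live connections.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// rrPool rotates over a fixed list of backends; in a real client the list
// would be refreshed from service discovery and each entry would hold a
// long-lived connection rather than just an address.
type rrPool struct {
	backends []string
	next     uint64
}

// pick returns the next backend in round-robin order; the atomic counter
// makes it safe to call from many goroutines.
func (p *rrPool) pick() string {
	n := atomic.AddUint64(&p.next, 1)
	return p.backends[(n-1)%uint64(len(p.backends))]
}

func main() {
	pool := &rrPool{backends: []string{"a:9000", "b:9000", "c:9000"}}
	for i := 0; i < 4; i++ {
		fmt.Println(pool.pick()) // a:9000, b:9000, c:9000, then wraps to a:9000
	}
}
```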