servicemeshinterface / smi-spec

Service Mesh Interface
https://smi-spec.io
Apache License 2.0

Multi-cluster discussion - WIP #212

Closed nicholasjackson closed 1 year ago

nicholasjackson commented 3 years ago

This issue is to start the discussion on how SMI can support multi-cluster connectivity. It is becoming more and more common for users of service mesh to have more than one cluster. This broadly falls into the following categories:

  1. Multiple Kubernetes clusters using the same service mesh
  2. Heterogeneous workloads (Kubernetes, VMs, bare metal) using the same service mesh
  3. Multiple Kubernetes clusters using different service meshes (e.g. Istio and Linkerd)
  4. Heterogeneous workloads using different service meshes (e.g. Istio, Consul)

In addition to this, each of these clusters is often in a different datacenter (or region), part of a multi-cloud deployment, or part of a hybrid setup spanning a private datacenter and the public cloud. In all of these scenarios, connectivity can be challenging due to network address translation (NAT) and the links between datacenters.

To provide multi-cluster capabilities, two core features are broadly required, providing the ability to discover services across different service meshes and to authenticate traffic between them:

  1. Service catalog federation
  2. Identity federation

I would like to start a discussion on how SMI can support multi-cluster operations. Possible levels of involvement are:

  1. SMI supports management planes like Gloo and Meshery by providing abstraction from the underlying service mesh and ensuring resources exist to enable configuration for multi-cluster operations.
  2. SMI adopts and contributes to a specification like Hamlet from VMware (https://github.com/vmware/hamlet)
  3. SMI develops a custom standard and specification

It is my current belief that facilitating federation requires both common configuration and an interoperation standard. I also believe that any standard should be designed to support ecosystem developers like Meshery and Gloo, and not be purely service-mesh-provider controlled. Configuration without a standard would mean that, while ecosystem providers would have a common interface for configuration, they would still have to do the heavy lifting of integrating with each service mesh. Other SMI specifications require that the vendor manage the implementation; providing an interface in the same way here would let ecosystem developers support every service mesh that supports SMI, without needing knowledge of the underlying implementation. As the number of service meshes grows, the ability to support ecosystem applications becomes increasingly important.

Call for help

In order for any standard to be widely adopted, it needs to achieve consensus from everyone who has a vested interest in its use. I therefore ask all members of the Service Mesh Interface community, and vendors of service mesh and ecosystem products, to contribute their opinions and expertise to this discussion.

ngehani commented 3 years ago

What about clusters across regions?

What about clusters across providers? or is this out of scope?

nicholasjackson commented 3 years ago

I grouped region under the logical term datacenter; it's a fairly broad term, but I added region to the original text to make clear that it is included too.

With regard to providers, what are you thinking of here: clouds like Microsoft, AWS, and GCP, or meshes like Istio, AppMesh, and OSM?

sergiopozoh commented 3 years ago

What about clusters across regions?

What about clusters across providers? or is this out of scope?

I think these are implementation details related to where the clusters are located: on-prem, cloud, or hybrid. Implicitly, in some cases these locations will be tied to a provider (Google, AWS, etc.).

nicholasjackson commented 3 years ago

Gotcha,

I have covered all these points but will update tomorrow to make this clearer.

sergiopozoh commented 3 years ago

I generally agree with @nicholasjackson's comments regarding the needs. I would only add that, in addition to service catalog federation and identity federation, we have to normalize on the identity format (SPIFFE, for example), and catalog federation has to be complemented with routing configuration automation (it's not only about discovery but also about reachability).

Hamlet already supports service catalog federation and routing config automation for all the use cases Nic described, when services use mTLS and when they don't. But Hamlet does not support identity federation yet.

We worked with the SPIFFE team a couple of months ago and have a draft architecture for the integration of Hamlet and SPIFFE identity federation (assuming normalization of identity to SPIFFE). CC @anvega
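For concreteness, normalizing on SPIFFE would mean every workload identity takes the form `spiffe://<trust-domain>/<path>` (Istio, for example, already issues IDs in this shape). A sketch with hypothetical trust-domain names, where each cluster/mesh is its own trust domain and federation amounts to exchanging trust bundles between domains:

```text
# One SPIFFE trust domain per cluster/mesh; the domain names and
# workload paths below are illustrative, not from any real deployment.
spiffe://clusterX.example.com/ns/my-ns/sa/website
spiffe://clusterY.example.com/ns/my-ns/sa/website
```

With a common format like this, identity federation reduces to distributing each domain's trust bundle to its peers, rather than translating between mesh-specific identity schemes.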

rcernich commented 3 years ago

I think it's important to distinguish between multi-cluster and federation.

Multi-cluster seems to be more directly related to supporting HA (high-availability) and FO (fail-over) use cases, where the mesh is deployed identically across different k8s clusters (at least in the Istio sense of multi-cluster). There is also a related Kubernetes KEP for clustering service endpoints across k8s clusters.

Federation seems to be aimed more at integrating services across different meshes (e.g. across different departments, divisions, companies).

Another aspect to consider is the degree of trust between the meshes. Multi-cluster tends to require a high level of trust, whereas federation seems to imply a lower level of trust. This makes sense if you think about aggregating endpoints for a service across clusters: you need to trust that they're interchangeable. This may not be the case if you're just adding/exposing services (federation).

Of your list:

  1. Multiple Kubernetes clusters using the same service mesh

Seems like a multi-cluster scenario (same mesh, different k8s).

  2. Heterogeneous workloads (Kubernetes, VMs, Bare metal) using the same service mesh

Seems like an internal mesh detail. Although, this might end up being a reductive case for either federation or multi-cluster, depending on whether or not you're adding the external workload or a new service.

  3. Multiple Kubernetes clusters using different service meshes (e.g. Istio and Linkerd)

This sounds like federation to me. It may be possible to identify a standard configuration that could be used for things like service import/export, where service export might be used to configure some sort of ingress gateway, while service import identifies the external service and how to access it. If we're also talking about automating some of this, e.g. discovering exported services and their endpoints, then there would also be a need to define a standard protocol for exchanging such information.

  4. Heterogeneous workloads using different service meshes (e.g. Istio, Consul)

I'm not sure how this is much different from item three, other than perhaps implying that you want to incorporate workloads from another mesh as endpoints for locally defined services, in which case it seems to overlap with item two. That said, is there an assumption that these are using some sort of compatible proxy?

Having said all that, I'd be curious to know if any folks have actually tried to get anything like this working with existing configuration elements of any of the meshes (e.g. using ingress/egress gateways plus service entries in istio).
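On the service import/export idea above: the Kubernetes Multi-Cluster Services API (the KEP mentioned earlier, KEP-1645) sketches roughly this shape. A hedged illustration using that API's resource kinds, with hypothetical service and namespace names:

```yaml
# Applied in the cluster that owns the workload. An implementation-managed
# ServiceImport (with the aggregated endpoints) would then appear in peer
# clusters. Modeled on the Kubernetes Multi-Cluster Services API.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: website
  namespace: my-ns
```

Under that model, the export is an explicit opt-in per service, which fits the lower-trust federation case: nothing crosses the cluster boundary unless it is deliberately exported.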

Regarding identity format (popped up while I was typing), I think that's a more complicated topic, and I think SPIFFE is a possible solution, but not the only solution (e.g. using JWT for authn/z in addition to/in place of client certificates with SPIFFE ids). It does however highlight how this could balloon from a simple way of configuring interactions, to actually being able to implement them in some sort of standard fashion. (As an aside, at what point does this start looking like a general API management solution?)

My two cents. FWIW

FWiesner commented 3 years ago

It would be cool to have a first-class concept of core and extension clusters formalized. This would be great for extending a SaaS solution with custom code/components: the SaaS solution would run on the protected core cluster, and tenants would have a standardized way to join their extension clusters, with custom code/components, to the service mesh.

steeling commented 3 years ago

I'm also interested in how the SMI APIs will work in terms of referencing remote clusters.

For example: if I have a TrafficTarget, will I be able to apply a policy that allows traffic to access my-service in ClusterX, but not ClusterY?

How do I set a TrafficSplit for migrations between clusters? Should I specify a different FQDN, or should SMI add a field that denotes the cluster name, allowing a wildcard for all clusters?

```yaml
kind: TrafficSplit
metadata:
  name: ab-test
spec:
  service: website
  matches:
  - kind: HTTPRouteGroup
    name: ab-test
  backends:
  - service: website-v1.my-ns.svc.cluster.clusterX
    weight: 0
  - service: website-v2.my-ns.svc.cluster.clusterY
    weight: 100
---
kind: HTTPRouteGroup
metadata:
  name: ab-test
matches:
- name: firefox-users
  headers:
  - user-agent: ".*Firefox.*"
```

steeling commented 3 years ago

In fact if we can agree on the following:

  1. Users should be able to specify either a specific cluster, the cluster set, or the local cluster for all APIs.
  2. There is a concept of service equivalence, which creates a mechanism by which two services are aggregated under an FQDN representing the cluster set. For example: my-svc in clusterX and my-svc in clusterY can be aggregated under my-svc.my-ns.svc.cluster.global (or some other FQDN -- this comment makes no assumption on the format, the need for namespaces, or what determines service equivalence).

Then I think we can make a proposal on the API changes required to support this, prior to any solution for federated identity and federated service discovery. In fact, agreeing on this API change would allow implementers to create solutions, backed by these APIs, that supply their own answers to those two problems.

With that, I'd propose the concrete solution from my previous comment: adding a cluster field with the following requirements.

The cluster field would be added to:

  1. resource in TrafficMetrics
  2. source and destination in TrafficTarget
  3. backend in TrafficSplit
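A sketch of what the proposed cluster field might look like on a TrafficTarget. The cluster field and its "*" wildcard are the hypothetical additions being proposed here; the rest follows the existing access.smi-spec.io schema, and the names are illustrative:

```yaml
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: website-access
  namespace: my-ns
spec:
  destination:
    kind: ServiceAccount
    name: website
    namespace: my-ns
    cluster: clusterX      # proposed field: only the workload in clusterX
  sources:
  - kind: ServiceAccount
    name: frontend
    namespace: my-ns
    cluster: "*"           # proposed wildcard: any cluster in the cluster set
  rules:
  - kind: HTTPRouteGroup
    name: website-routes
    matches:
    - firefox-users
```

Omitting cluster could then default to the local cluster, which keeps existing single-cluster resources valid unchanged.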