kumahq / kuma

🐻 The multi-zone service mesh for containers, Kubernetes and VMs. Built with Envoy. CNCF Sandbox Project.
https://kuma.io/install
Apache License 2.0

Have ExternalService mTLS feature to support same hosts with multiple mTLS credentials. #4400

Open rohank2002 opened 2 years ago

rohank2002 commented 2 years ago

Description

This is a case where a user wants to set up mTLS ExternalServices pointing to the same external host with multiple certificates. The example configurations below depict the use case: two different ExternalServices point to the same external host but hold different mTLS credentials. Kuma should present the respective certificate based on which ExternalService is called via its internal mesh URL (.mesh). Example:

apiVersion: kuma.io/v1alpha1
kind: ExternalService
mesh: default
metadata:
  name: certauth1
spec:
  tags:
    kuma.io/service: certauth1
    kuma.io/protocol: http
  networking:
    address: certauth.cryptomix.com:443
    tls: # optional
      enabled: true
      allowRenegotiation: false
      sni: certauth.cryptomix.com # optional
      caCert: # one of inline, inlineString, secret
        inline: ca_cert
      clientCert: # one of inline, inlineString, secret
        inline: cert_set1
      clientKey: # one of inline, inlineString, secret
        inline: cert_set1
---
apiVersion: kuma.io/v1alpha1
kind: ExternalService
mesh: default
metadata:
  name: certauth2
spec:
  tags:
    kuma.io/service: certauth2
    kuma.io/protocol: http
  networking:
    address: certauth.cryptomix.com:443
    tls: # optional
      enabled: true
      allowRenegotiation: false
      sni: certauth.cryptomix.com # optional
      caCert: # one of inline, inlineString, secret
        inline: ca_cert
      clientCert: # one of inline, inlineString, secret
        inline: cert_set2
      clientKey: # one of inline, inlineString, secret
        inline: cert_set2

Slack Conversation from Kuma Workspace for reference: https://kuma-mesh.slack.com/archives/CN2GN4HE1/p1653682254570689?thread_ts=1653678943.023659&cid=CN2GN4HE1

jakubdyszkiewicz commented 2 years ago

Triage: We need to handle the case when there are 2 external services with the same address. We could do a validation by listing all the external services and prevent the user from doing so. We could not generate a VIP for the certauth.cryptomix.com hostname.

Additionally, we could add a flag skipVIPGeneration (name not final) to generate a VIP for the main hostname.
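A hedged sketch of what such a flag might look like on the ExternalService (the field name and placement are hypothetical; the triage note itself says the name is not final):

```yaml
apiVersion: kuma.io/v1alpha1
kind: ExternalService
mesh: default
metadata:
  name: certauth2
spec:
  tags:
    kuma.io/service: certauth2
    kuma.io/protocol: http
  networking:
    address: certauth.cryptomix.com:443
    # Hypothetical flag (name not final): skip generating the VIP/DNS entry
    # for the original hostname, so only certauth2.mesh resolves to this
    # service and another ExternalService can keep certauth.cryptomix.com.
    skipVIPGeneration: true
    tls:
      enabled: true
```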

rohank2002 commented 2 years ago

Hi @jakubdyszkiewicz, I'm not sure I conveyed it correctly. I was requesting a feature wherein there could be 2 ExternalServices pointing to the same address but holding different mTLS credentials, and Kuma would supply certs based on the internal mesh DNS address. E.g., if from a pod inside the mesh I hit certauth1.mesh, Kuma should present the certs mentioned in the certauth1 ExternalService, and likewise for the 2nd ExternalService.

github-actions[bot] commented 2 years ago

This issue was inactive for 30 days. It will be reviewed in the next triage meeting and might be closed. If you think this issue is still relevant, please comment on it promptly or attend the next triage meeting.

github-actions[bot] commented 1 year ago

This issue was inactive for 90 days. It will be reviewed in the next triage meeting and might be closed. If you think this issue is still relevant, please comment on it or attend the next triage meeting.

slonka commented 1 year ago

I was requesting a feature wherein there could be 2 ExternalServices pointing to the same address but holding different mTLS credentials, and Kuma would supply certs based on the internal mesh DNS address. E.g., if from a pod inside the mesh I hit certauth1.mesh, Kuma should present the certs mentioned in the certauth1 ExternalService, and likewise for the 2nd ExternalService.

This is how it will work. Generally, we generate two DNS entries: one for [external-service-name].mesh (no problem here) and a second for the original name/port taken from networking.address (the problematic one). The clash is that networking.address is the same in both ExternalServices (it's certauth.cryptomix.com:443). So the triage entry suggested that we could skip generating the second entry based on a flag defined in the ExternalService.
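The clash described above can be sketched as follows (illustrative Python, not Kuma's actual implementation; all names are made up):

```python
# Sketch: each ExternalService yields two DNS entries -- <name>.mesh and the
# original hostname from networking.address. Two services sharing an address
# produce a clash on the second entry.

external_services = [
    {"name": "certauth1", "address": "certauth.cryptomix.com:443"},
    {"name": "certauth2", "address": "certauth.cryptomix.com:443"},
]

def build_dns_entries(services):
    """Return (hostname -> owning service) plus any clashing hostnames."""
    entries = {}
    clashes = []
    for svc in services:
        # First entry: <service-name>.mesh -- always unique per service.
        entries[svc["name"] + ".mesh"] = svc["name"]
        # Second entry: the original hostname from networking.address.
        host = svc["address"].rsplit(":", 1)[0]
        if host in entries:
            clashes.append(host)  # two services claim the same original hostname
        else:
            entries[host] = svc["name"]
    return entries, clashes

entries, clashes = build_dns_entries(external_services)
print(sorted(entries))  # ['certauth.cryptomix.com', 'certauth1.mesh', 'certauth2.mesh']
print(clashes)          # ['certauth.cryptomix.com']
```

The .mesh names never collide; only the shared original hostname does, which is why the proposals below all revolve around controlling who gets that second entry.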

slonka commented 1 year ago

We could do a validation by listing all the external services and prevent the user from doing so.

In my opinion this alone does not solve the problem (though it is a valid way to prevent the error). I think this is a valid use case, and just preventing the error does not solve the underlying issue. Validation should be done, but we also need to provide a way to solve the actual problem.

We could not generate a VIP for the certauth.cryptomix.com hostname.

This is not specific enough: it does not mention for which ES. We could generate the VIP only for the first one and silently stop generating it for the other ES, but that might be confusing to the user. So the next part:

Additionally, we could add a flag skipVIPGeneration (name not final) to generate a VIP for the main hostname.

to me, is not additional but essential. A flag would steer which ES generates the original host/port entry. There are a couple of alternatives here:

So to sum up, here are the possible options:

1. Validation only.
2. Validation + skipVIPGeneration + suggest name in comments (picked by internal poll).
3. Validation + generateVIP + suggest name in comments.
4. Validation + internalAddress + suggest name & special value in comments.
5. Edit and add your suggestion.
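For comparison, option 4 might look something like this (the internalAddress field and its special value are hypothetical; the option itself says both would be suggested in review comments):

```yaml
apiVersion: kuma.io/v1alpha1
kind: ExternalService
mesh: default
metadata:
  name: certauth2
spec:
  tags:
    kuma.io/service: certauth2
    kuma.io/protocol: http
  networking:
    address: certauth.cryptomix.com:443
    # Hypothetical field with a special value: expose this service only via
    # certauth2.mesh instead of also claiming the original hostname.
    internalAddress: mesh-name-only
```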