Closed: jamesdbloom closed this issue 4 months ago
Such a feature makes sense. I'd go with overriding with an annotation
We already have this type of annotation for the protocol: `80.service.kuma.io/protocol: http`.

So you'd have something like: `80.service.kuma.io/service: myService`.
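As a sketch of how this proposed annotation might look on a Kubernetes Service (the `80.service.kuma.io/service` override key is hypothetical here; only the protocol annotation exists today):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: team-a
  annotations:
    # Existing Kuma annotation: declares the protocol served on port 80.
    80.service.kuma.io/protocol: http
    # Proposed (hypothetical) annotation: overrides the generated
    # kuma.io/service name for traffic on port 80.
    80.service.kuma.io/service: myService
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```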
WDYT?
@lahabana I think that sounds like a good option.
Hey James,
I hear your arguments and I totally agree with them. I'd love to fix this problem, but there is a caveat here - security.
Let's say we allow `80.service.kuma.io/service: myService`. Assume there is a service `redis` in namespace `team-a`. Nothing stops me from creating a malicious service named `redis` in another namespace `team-b` and collecting the traffic that is intended to go to the original `redis`.
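The attack described above could be sketched like this (names and namespaces are from the example; the override annotation itself is hypothetical):

```yaml
# Legitimate service owned by team-a.
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: team-a
spec:
  selector:
    app: redis
  ports:
    - port: 6379
---
# Malicious service in team-b: with an unrestricted name override,
# it could claim the same kuma.io/service identity as team-a's redis
# and receive traffic meant for it.
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: team-b
  annotations:
    6379.service.kuma.io/service: redis  # hypothetical override
spec:
  selector:
    app: impostor
  ports:
    - port: 6379
```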
On Universal, this problem is solved because we generate a dataplane token for `kuma.io/service: redis`.
Correct me if I'm wrong, but so far I've seen that the security scope of Kubernetes is bound to a namespace. What I mean is that a team member usually has access only to the team's namespace; that's why the service name is also bound to a namespace.
We could introduce a service name override, but we would also need a mechanism that restricts who may use it. Otherwise, it defeats mTLS. We have the same problem on Kubernetes with labels (there is no mechanism restricting how users place them), and also with Mesh.
One of the options that comes to my mind is Service/Labels/Mesh Affinity, which would be Kubernetes-only objects. For example, if I know that `team-a` can use Service `redis`, as a Mesh operator I could place this:

```yaml
kind: ServiceAffinity
metadata:
  name: redis-affinity
spec:
  affinities:
    - service: redis
      namespaces: ["team-a"]
```
Given your experience, would something like this be a viable option?
@jakubdyszkiewicz you make a good point about security.
The ServiceAffinity could work, but I suspect it would also add an extra level of complexity and coordination between mesh operators and teams. So although it technically works and would meet the security requirements, I'm not sure it's a good option.
I guess what would help is support for RBAC and namespaces in Kuma. Similar to what you can do in Kubernetes, but applied to service configuration and service deployment in Kuma, across both K8s and Universal mode. This would also help to reduce coordination between mesh operators and teams, because teams could be self-service.
So perhaps this should be parked until RBAC and namespaces could be supported.
> I guess what would help is support for RBAC and namespaces in kuma
Could you expand on "namespaces" here? Did you mean to somehow support Kubernetes namespaces in Kuma? Or to have a concept of a Kuma Namespace, so that a Mesh can be split into several Kuma Namespaces, while at the same time one Kuma Namespace can gather many Kubernetes Namespaces?
The coupling between a kubernetes namespace and a kuma namespace isn't something I'd thought about too much but I agree it could potentially cause issues. The basic idea I was thinking about in relation to RBAC and namespaces was as follows:
- RBAC to control service configuration, service deployment, and zones joining the mesh
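Purely as an illustration of that idea (this is not an existing Kuma resource; the type and field names below are invented), such a rule might look like:

```yaml
# Hypothetical Kuma RBAC resource: lets team-a configure and deploy
# the redis service, but not admit zones to the mesh.
type: AccessRole          # invented type name, for illustration only
name: team-a-redis
rules:
  - resources: ["services"]
    names: ["redis"]
    actions: ["configure", "deploy"]
```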
Does that make sense?
This is related to: https://github.com/kumahq/kuma/issues/3238
This issue was inactive for 30 days it will be reviewed in the next triage meeting and might be closed. If you think this issue is still relevant please comment on it promptly or attend the next triage meeting.
Is there any plan to address this issue?
This issue was inactive for 90 days. It will be reviewed in the next triage meeting and might be closed. If you think this issue is still relevant, please comment on it or attend the next triage meeting.
This story is improved and fixed with #9706 and other MeshService stories. Closing this as a duplicate.
Summary
Problem
Currently the service name in K8s is generated as follows:
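The generated format is not reproduced here, but per my understanding of Kuma's documentation, the `kuma.io/service` tag on Kubernetes follows this pattern:

```
<kubernetes-service-name>_<namespace>_svc_<port>

# e.g. a Service "redis" in namespace "team-a" exposing port 6379 becomes:
redis_team-a_svc_6379
```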
I believe this breaks encapsulation, as clients need to know which namespace the pod is deployed in for that K8s cluster. A service cannot be deployed both to K8s and onto VMs (i.e. Universal mode), because the service names won't match. A service also can't be deployed to different namespaces in different K8s clusters, because the service names won't match.
It doesn't seem correct that clients need to know which namespace a service is deployed into as this is a leaky abstraction.
This means there is no way to fail over from K8s to EC2 or between K8s with different namespaces.
I understand Kubernetes namespaces are used in domain names for services internal to a K8s cluster, but I don't think copying this pattern to something exposed outside the cluster makes sense. A Kubernetes Ingress Controller is a better analogy, where the service name can be completely decoupled from the namespace and from the details of how and where a service is deployed.
Proposed Solution
Ideally the default name could be overridden by an annotation on the K8s pod/service. If the service were exposing multiple ports, there are several strategies that could be adopted:
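For example, one possible strategy (sketched here; both annotation keys are hypothetical) is a per-port override with a service-wide fallback:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: team-a
  annotations:
    # Hypothetical service-wide default override.
    service.kuma.io/service: myService
    # Hypothetical per-port override, taking precedence for port 8080.
    8080.service.kuma.io/service: myService-admin
spec:
  selector:
    app: backend
  ports:
    - port: 80
    - port: 8080
```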