Open · yu289333 opened this issue 5 years ago
@yu289333 In the last weeks I tried to figure out how this could be solved, but I was not successful.
1) @marwinski do you know if Gardener has the control plane in a separate subnet? 2) Multiple users could be separated by namespace. I talked to @mvladev; he said that namespaces are only logical groups and subnetting would therefore not be possible.
@yu289333

> As a Kubernetes cluster administrator, I want to have the control plane communication on a separate network from the pod services in order to protect the control plane from rogue pods by default (before implementing any network policy in addition).

In most cases the control plane of a cluster is not visible to end-user workloads (managed clusters like GKE).

> As a customer of a multi-tenant Kubernetes cluster running one application, I want to have my pods on a separate network from the pods used by other customers.

This one completely depends on the CNI used in the cluster. In some cases you are not allowed to change it at all. For example, with the Calico CNI you can assign individual Pods or Namespaces to use a specific IP pool - see https://www.projectcalico.org/calico-ipam-explained-and-enhanced/
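To make the Calico option concrete, here is a minimal sketch of the per-namespace pool assignment described in that post. The pool name `tenant-a-pool`, the CIDR, and the namespace name are illustrative assumptions, not values from this thread:

```yaml
# Hypothetical Calico IP pool for one tenant (name and CIDR are assumptions).
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: tenant-a-pool
spec:
  cidr: 10.123.0.0/24        # illustrative tenant subnet
  ipipMode: Always
  natOutgoing: true
---
# Namespace pinned to that pool via Calico's IPAM annotation, so every
# pod created in it gets an address from 10.123.0.0/24.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  annotations:
    cni.projectcalico.org/ipv4pools: '["tenant-a-pool"]'
```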
> In most cases the control plane of a cluster is not visible to end-user workloads (managed clusters like GKE).

I assume "not visible" is due to network policy blocking. But network policies are easy to get wrong, especially when they are changed frequently. Separating the control plane onto a different network makes the isolation not vulnerable to admin changes.

> This one completely depends on the CNI used in the cluster. In some cases you are not allowed to change it at all. For example, with the Calico CNI you can assign individual Pods or Namespaces to use a specific IP pool - see https://www.projectcalico.org/calico-ipam-explained-and-enhanced/

Indeed, the solution is CNI dependent. If there is no solution in the form of code, can we provide guidelines on how to separate workloads into different networks with the popular CNIs?
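As a starting point for such guidelines, the same Calico mechanism also works at pod rather than namespace scope; the pool name below reuses the illustrative `tenant-a-pool` from the sketch above, and the pod name and image are placeholders:

```yaml
# Hypothetical pod pinned to a specific Calico IP pool via the same
# IPAM annotation, applied per pod instead of per namespace.
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-app        # illustrative name
  annotations:
    cni.projectcalico.org/ipv4pools: '["tenant-a-pool"]'
spec:
  containers:
    - name: app
      image: nginx:1.25     # placeholder image
```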
> I assume "not visible" is due to network policy blocking.

This is only the case for self-hosted clusters. With those you normally dedicate 1 or 3 machines to host the control plane of your cluster.
However, with most hosted Kubernetes solutions the control plane is not under your control, and your workload can only access the Kubernetes API server, which can be hosted anywhere. In this case the control plane is isolated via IaaS-specific security groups / firewall rules.
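For the self-hosted case, one defense-in-depth sketch (independent of IaaS firewalls) is a namespace-wide egress policy that keeps pods away from an assumed control-plane subnet; the CIDRs and namespace name below are illustrative assumptions:

```yaml
# Hypothetical policy: allow all egress from pods in tenant-a EXCEPT
# to an assumed control-plane subnet (192.168.0.0/24 is an assumption).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-control-plane-egress
  namespace: tenant-a
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 192.168.0.0/24   # assumed control-plane subnet
```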
It seems that the default Gardener shoot cluster is configured with multiple subnets. I've not found any official documentation/code though.
This note https://github.com/gardener/gardener/issues/895 suggests Gardener has a default subnetting scheme.
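For reference, a Gardener Shoot spec does let you declare disjoint CIDRs for nodes, pods, and services. The snippet below is a hedged sketch of just the networking section (a real Shoot needs provider, region, etc.), and the CIDR values are illustrative assumptions, not documented defaults:

```yaml
# Sketch of the networking section of a Gardener Shoot manifest.
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-shoot              # hypothetical shoot name
spec:
  networking:
    type: calico
    nodes: 10.250.0.0/16      # node network (control-plane-facing)
    pods: 100.96.0.0/11       # pod overlay network
    services: 100.64.0.0/13   # cluster service network
```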
Description
Network security should be considered at all 7 OSI layers. Among them:

- Layer 3 (network layer) is defined by the Kubernetes CNI (container network interface).
- Layer 4 (transport layer) is determined by ingresses, services, and network policies.

Layer 3 security measures, i.e. separating endpoints into distinct networks or subnets, are often ignored. For example, Kubernetes control plane communication and pod service (data plane) communication should be on separate networks, yet all endpoints are on the same (CIDR) subnet by default. Layer 3 measures are, in reality, much easier to implement than layer 4 measures, and separating pods into separate subnets can greatly simplify network policies. Layer 4 measures are often burdened by complexity and are prone to mistakes: a network policy becomes difficult to manage and error-prone once it grows past about 10 rules/lines, and adding TLS protection to services requires a PKI (public key infrastructure), which may be costly and tedious to manage.
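As one illustration of how subnet separation can shrink a policy: if a tenant's pods are confined to a known subnet, an ingress allow list can be a single ipBlock rule instead of a long list of pod/namespace selectors. The subnet and names below are assumptions carried over from the earlier sketches:

```yaml
# With tenant-a's pods confined to 10.123.0.0/24 (an assumed subnet),
# intra-tenant traffic needs only one ipBlock rule rather than a
# per-application list of pod selectors.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-tenant-a-only
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.123.0.0/24   # the tenant's own subnet
```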
User Story
- As a Kubernetes cluster administrator, I want to have the control plane communication on a separate network from the pod services in order to protect the control plane from rogue pods by default (before implementing any network policy in addition).
- As a customer of a multi-tenant Kubernetes cluster running one application, I want to have my pods on a separate network from the pods used by other customers.
Implementation ideas
I suggest the Karydia project.
John