loft-sh / vcluster

vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
https://www.vcluster.com
Apache License 2.0

Syncer forces unwanted defaults to services #233

Closed olljanat closed 2 years ago

olljanat commented 2 years ago

This might or might not be related to #211, but as a new case I noticed that I cannot use Kyverno on the host cluster to set some environment-specific settings, because the syncer forces some defaults there.

Here is my Kyverno policy:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: set-lb-service
spec:
  validationFailureAction: enforce
  background: false
  rules:
  - name: lb-service
    match:
      resources:
        kinds:
        - Service
        namespaces:
        - "vcluster-*"
    preconditions:
      all:
      - key: "{{request.operation}}"
        operator: In
        value:
        - CREATE
      - key: "{{request.object.spec.type}}"
        operator: Equals
        value: LoadBalancer
    mutate:
      patchStrategicMerge:
        metadata:
          annotations:
            metallb.universe.tf/address-pool: "{{ request.namespace }}"
        spec:
          allocateLoadBalancerNodePorts: false
          externalTrafficPolicy: Local
          sessionAffinity: ClientIP

The MetalLB annotation and allocateLoadBalancerNodePorts: false get added just fine, but the other settings do not get applied. As far as I understand, this is because Kubernetes inside the vcluster adds these defaults to services:

spec:
  externalTrafficPolicy: Cluster
  sessionAffinity: None

and then the syncer logic at https://github.com/loft-sh/vcluster/blob/f3a2d29dd8de1b6773797ebc18621ebdecdbe5d1/pkg/controllers/resources/services/translate.go#L89-L99 copies them over the values set by Kyverno.
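
So the synced service in the host namespace ends up roughly like this (a sketch of what I observe; the annotation value is just illustrative):

apiVersion: v1
kind: Service
metadata:
  annotations:
    # merged from the Kyverno mutation
    metallb.universe.tf/address-pool: vcluster-example
spec:
  type: LoadBalancer
  # kept from the Kyverno mutation
  allocateLoadBalancerNodePorts: false
  # reset by the syncer to the virtual service defaults
  externalTrafficPolicy: Cluster
  sessionAffinity: None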

FabianKramm commented 2 years ago

@olljanat thanks for creating this issue! Currently the logic tries to avoid drift between the virtual and physical cluster, which is why vcluster will "own" certain fields and change them back to match the virtual service, which is usually regarded as the source of truth for most fields. Annotations are an exception and are merged with the ones set in the host cluster. Is there a reason why you cannot set those fields within vcluster, or deny them via Kyverno? Essentially, the service would behave differently from what is expected if you configure those fields differently in the virtual cluster.
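
For reference, setting those fields directly on the service manifest applied inside the vcluster might look roughly like this (a sketch; the name, selector, and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app   # illustrative
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  # set explicitly in the virtual cluster, so the syncer keeps them
  externalTrafficPolicy: Local
  sessionAffinity: ClientIP
  allocateLoadBalancerNodePorts: false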

olljanat commented 2 years ago

Is there a reason why you cannot set those fields within vcluster, or deny them via Kyverno? Essentially, the service would behave differently from what is expected if you configure those fields differently in the virtual cluster.

The reason is that I'm trying to make this as simple as possible for the people who are actually deploying things to the vcluster, so that they don't need to care about how the networking setup is done.

Reason to change these settings:

Also, as context: I'm building a K8s environment where the host cluster is spread across three datacenters, such that each DC runs one manager node (shared by all vclusters) and one worker node per vcluster (three workers in total per vcluster). Each DC has its own internet connection, load is balanced between them with Azure Traffic Manager/Amazon Route 53, and the goal is that the services needed by a single application (one public URL) are served primarily from the same DC where the initial connection was created.

Currently the logic tries to avoid drift between the virtual and physical cluster, which is why vcluster will "own" certain fields and change them back to match the virtual service, which is usually regarded as the source of truth for most fields.

I'm only trying to change the defaults to fit our environment, and vcluster users would most likely still be allowed to change those settings if/when needed, so maybe one solution would be to allow setting different defaults for these two values in vcluster? Of course, the alternative would be to run a separate Kyverno instance in each vcluster, but that feels like overkill.

olljanat commented 2 years ago

Currently the logic tries to avoid drift between the virtual and physical cluster, which is why vcluster will "own" certain fields and change them back to match the virtual service, which is usually regarded as the source of truth for most fields. Annotations are an exception and are merged with the ones set in the host cluster. Is there a reason why you cannot set those fields within vcluster, or deny them via Kyverno? Essentially, the service would behave differently from what is expected if you configure those fields differently in the virtual cluster.

Or well, it looks like we will end up with a solution where we run Traefik as the ingress controller inside each vcluster instead of using the one on the host cluster, so the only services of type LoadBalancer will be the Traefik instances themselves. It is therefore not too bad if Kyverno simply denies LoadBalancer services that are missing the externalTrafficPolicy: Local and allocateLoadBalancerNodePorts: false settings.
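
Something like this validation policy should do it (a rough, untested sketch adapted from the mutation policy above; the policy name is illustrative):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-lb-service-settings   # illustrative name
spec:
  validationFailureAction: enforce
  background: false
  rules:
  - name: lb-service-settings
    match:
      resources:
        kinds:
        - Service
        namespaces:
        - "vcluster-*"
    preconditions:
      all:
      - key: "{{request.object.spec.type}}"
        operator: Equals
        value: LoadBalancer
    validate:
      message: "LoadBalancer services must set externalTrafficPolicy: Local and allocateLoadBalancerNodePorts: false"
      pattern:
        spec:
          externalTrafficPolicy: Local
          allocateLoadBalancerNodePorts: false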

Sorry about going back and forth on this one.