solo-io / gloo

The Cloud-Native API Gateway and AI Gateway
https://docs.solo.io/
Apache License 2.0

Last applied federated resource "takes ownership" of the resource in Edge cluster #7476

Open huzlak opened 1 year ago

huzlak commented 1 year ago

Gloo Edge Version

1.12.x (latest stable)

Kubernetes Version

1.22.x

Describe the bug

With 2 fed deployments in different namespaces that have the same clusters registered, applying exactly the same federated resource in the 2 namespaces makes the deployment where it was applied later take "ownership" of the resource in the edge cluster. That later federated resource can't be removed without also removing the resource in the edge cluster; only the original one can be removed without affecting the resource in the edge cluster.
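
After running the steps below, one way to observe which federation instance currently "owns" the placed resource is to dump it on an edge cluster and inspect its metadata. Whether Gloo Fed records ownership labels or annotations there is version-dependent, so this is only an inspection aid; the resource, namespace, and context names match the repro below:

kubectl get upstream fed-upstream -n gloo-system --context remote1 -o yaml   ## check metadata.labels / metadata.annotations for any ownership markers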

Steps to reproduce the bug

## PREREQS:
## 3 clusters called:
## - local - to deploy fed controller
## - remote1 - 1st edge cluster
## - remote2 - 2nd edge cluster

## Deploy the original Gloo Fed instance:
helm upgrade -i gloo glooe/gloo-ee --namespace gloo-system --version 1.12.37 --create-namespace --set-string license_key="$LICENSE_KEY"  --kube-context local --values - <<EOF
global:
  glooRbac:
    create: true
    namespaced: true
grafana: # The grafana settings can be removed for Gloo Edge OSS
  defaultInstallationEnabled: false
gloo-fed:
  enabled: true
  glooFedApiserver:
    enable: true
prometheus:
  enabled: false
observability:
  enabled: false
gloo:
  settings:
    create: true
    writeNamespace: gloo-system
    watchNamespaces:
    - gloo-system
EOF

## Deploy 2 edge clusters
helm upgrade -i gloo glooe/gloo-ee --namespace gloo-system --version 1.12.37 --create-namespace --set-string license_key="$LICENSE_KEY"  --kube-context remote1 --values - <<EOF
prometheus:
  enabled: false
observability:
  enabled: false
gloo-fed:
  enabled: false
  glooFedApiserver:
    enable: false
gloo:
  gloo:
    logLevel: debug
  gatewayProxies:
    gatewayProxy:
      failover:
        enabled: true
EOF
helm upgrade -i gloo glooe/gloo-ee --namespace gloo-system --version 1.12.37 --create-namespace --set-string license_key="$LICENSE_KEY"  --kube-context remote2 --values - <<EOF
prometheus:
  enabled: false
observability:
  enabled: false
gloo-fed:
  enabled: false
  glooFedApiserver:
    enable: false
gloo:
  gloo:
    logLevel: debug
  gatewayProxies:
    gatewayProxy:
      failover:
        enabled: true
EOF

## Register clusters in original fed deployment
kubectl config use-context local
glooctl cluster register --federation-namespace gloo-system --cluster-name remote1 --remote-context remote1
glooctl cluster register --federation-namespace gloo-system --cluster-name remote2 --remote-context remote2

## Verify they are registered:
kubectl config use-context local
glooctl cluster list --federation-namespace gloo-system

## Create federated resources:
kubectl apply --context local -n gloo-system -f- <<EOF
apiVersion: fed.gloo.solo.io/v1
kind: FederatedUpstream
metadata:
  name: my-federated-upstream
spec:
  placement:
    clusters:
      - remote1
      - remote2
    namespaces:
      - gloo-system
  template:
    spec:
      static:
        hosts:
          - addr: solo.io
            port: 80
    metadata:
      name: fed-upstream
EOF
kubectl apply --context local -n gloo-system -f- <<EOF
apiVersion: fed.gateway.solo.io/v1
kind: FederatedVirtualService
metadata:
  name: my-federated-vs
spec:
  placement:
    clusters:
      - remote1
      - remote2
    namespaces:
      - gloo-system
  template:
    spec:
      virtualHost:
        domains:
          - "*"
        routes:
          - matchers:
              - exact: /solo
            options:
              prefixRewrite: /
            routeAction:
              single:
                upstream:
                  name: fed-upstream
                  namespace: gloo-system
    metadata:
      name: fed-virtualservice
EOF

## Check that the upstream and vs are created in the remote clusters
kubectl get upstreams -n gloo-system  --context remote1 fed-upstream 
kubectl get upstreams -n gloo-system  --context remote2 fed-upstream
kubectl get vs -n gloo-system  --context remote1 fed-virtualservice 
kubectl get vs -n gloo-system  --context remote2 fed-virtualservice

## Verify that connection works:
kubectl config use-context remote1
curl $(glooctl proxy url)/solo -H "Host: solo.io" -w "Response code: %{http_code}\n" ## You should get response code 301 because we receive a redirect to HTTPS
kubectl config use-context remote2
curl $(glooctl proxy url)/solo -H "Host: solo.io" -w "Response code: %{http_code}\n" ## You should get response code 301 because we receive a redirect to HTTPS
kubectl config use-context local

## Deploy a new Gloo Fed release called gloo-new into the gloo-system-new namespace with namespace-scoped values:
helm upgrade -i gloo-new glooe/gloo-ee --namespace gloo-system-new --version 1.12.37 --create-namespace --set-string license_key="$LICENSE_KEY"  --kube-context local --values - <<EOF
global:
  glooRbac:
    create: true
    nameSuffix: "-new"
    namespaced: true
grafana: # The grafana settings can be removed for Gloo Edge OSS
  defaultInstallationEnabled: false
gloo-fed:
  enabled: true
  glooFedApiserver:
    enable: true
prometheus:
  enabled: false
observability:
  enabled: false
gloo:
  settings:
    create: true
    writeNamespace: gloo-system-new
    watchNamespaces:
    - gloo-system-new
EOF

## Register clusters in new fed deployment:
glooctl cluster register --federation-namespace gloo-system-new --cluster-name remote1 --remote-context remote1
glooctl cluster register --federation-namespace gloo-system-new --cluster-name remote2 --remote-context remote2

## Create the same federated upstream and virtualservice in new fed deployment's namespace:
kubectl apply --context local -n gloo-system-new -f- <<EOF
apiVersion: fed.gloo.solo.io/v1
kind: FederatedUpstream
metadata:
  name: my-federated-upstream
spec:
  placement:
    clusters:
      - remote1
      - remote2
    namespaces:
      - gloo-system
  template:
    spec:
      static:
        hosts:
          - addr: solo.io
            port: 80
    metadata:
      name: fed-upstream
EOF
kubectl apply --context local -n gloo-system-new -f- <<EOF
apiVersion: fed.gateway.solo.io/v1
kind: FederatedVirtualService
metadata:
  name: my-federated-vs
spec:
  placement:
    clusters:
      - remote1
      - remote2
    namespaces:
      - gloo-system
  template:
    spec:
      virtualHost:
        domains:
          - "*"
        routes:
          - matchers:
              - exact: /solo
            options:
              prefixRewrite: /
            routeAction:
              single:
                upstream:
                  name: fed-upstream
                  namespace: gloo-system
    metadata:
      name: fed-virtualservice
EOF

kubectl delete federatedupstream -n gloo-system-new --context local my-federated-upstream
kubectl get upstream -n gloo-system --context remote1  
## Upstream does not exist anymore
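
## The original FederatedUpstream in gloo-system was never deleted, so it can still be listed,
## even though the upstream it placed on the edge cluster is now gone:
kubectl get federatedupstream -n gloo-system --context local my-federated-upstream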

Expected Behavior

When I delete a federated resource but another federated resource still points to the same resource in the edge cluster, I expect the edge resource to stay there as long as at least 1 federated resource instance still references it.
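
A minimal sketch of the expected sequence, using the same resources and contexts as the repro above: the placed upstream should only disappear from the edge cluster once no federated resource references it anymore.

kubectl delete federatedupstream -n gloo-system-new --context local my-federated-upstream
kubectl get upstream fed-upstream -n gloo-system --context remote1   ## expected: still present, since the copy in gloo-system still references it
kubectl delete federatedupstream -n gloo-system --context local my-federated-upstream
kubectl get upstream fed-upstream -n gloo-system --context remote1   ## expected: NotFound, since no federated resource references it anymore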

Additional Context

No response

jenshu commented 1 year ago

A couple of notes:

github-actions[bot] commented 5 months ago

This issue has been marked as stale because of no activity in the last 180 days. It will be closed in the next 180 days unless it is tagged "no stalebot" or other activity occurs.