Gloo Edge Version
1.12.x (latest stable)
Kubernetes Version
1.22.x
Describe the bug
With 2 fed deployments in different namespaces, both with the same clusters registered, applying exactly the same federated resource in the 2 namespaces causes the copy applied later to take "ownership" of the resource in the edge cluster. That later copy cannot be deleted without removing the resource from the edge cluster; only the original copy can be deleted without affecting the resource in the edge cluster.
Steps to reproduce the bug
## PREREQS:
## 3 clusters called:
## - local - to deploy fed controller
## - remote1 - 1st edge cluster
## - remote2 - 2nd edge cluster
## Deploy the original Gloo Fed instance:
helm upgrade -i gloo glooe/gloo-ee --namespace gloo-system --version 1.12.37 --create-namespace --set-string license_key="$LICENSE_KEY" --kube-context local --values - <<EOF
global:
  glooRbac:
    create: true
    namespaced: true
grafana: # The grafana settings can be removed for Gloo Edge OSS
  defaultInstallationEnabled: false
gloo-fed:
  enabled: true
  glooFedApiserver:
    enable: true
prometheus:
  enabled: false
observability:
  enabled: false
gloo:
  settings:
    create: true
    writeNamespace: gloo-system
    watchNamespaces:
      - gloo-system
EOF
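## Optional sanity check (not part of the original report): confirm the management-cluster
## install is healthy before continuing; the gloo-fed pods should be Running alongside the
## usual Gloo Edge pods.
kubectl get pods -n gloo-system --context local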
## Install Gloo Edge on the 2 edge clusters
helm upgrade -i gloo glooe/gloo-ee --namespace gloo-system --version 1.12.37 --create-namespace --set-string license_key="$LICENSE_KEY" --kube-context remote1 --values - <<EOF
prometheus:
  enabled: false
observability:
  enabled: false
gloo-fed:
  enabled: false
  glooFedApiserver:
    enable: false
gloo:
  gloo:
    logLevel: debug
  gatewayProxies:
    gatewayProxy:
      failover:
        enabled: true
EOF
helm upgrade -i gloo glooe/gloo-ee --namespace gloo-system --version 1.12.37 --create-namespace --set-string license_key="$LICENSE_KEY" --kube-context remote2 --values - <<EOF
prometheus:
  enabled: false
observability:
  enabled: false
gloo-fed:
  enabled: false
  glooFedApiserver:
    enable: false
gloo:
  gloo:
    logLevel: debug
  gatewayProxies:
    gatewayProxy:
      failover:
        enabled: true
EOF
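## Optional sanity check: confirm both edge installs are healthy before registering them.
kubectl get pods -n gloo-system --context remote1
kubectl get pods -n gloo-system --context remote2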
## Register clusters in original fed deployment
kubectl config use-context local
glooctl cluster register --federation-namespace gloo-system --cluster-name remote1 --remote-context remote1
glooctl cluster register --federation-namespace gloo-system --cluster-name remote2 --remote-context remote2
## Verify that the clusters are registered:
kubectl config use-context local
glooctl cluster list --federation-namespace gloo-system
## Create federated resources:
kubectl apply --context local -n gloo-system -f- <<EOF
apiVersion: fed.gloo.solo.io/v1
kind: FederatedUpstream
metadata:
  name: my-federated-upstream
spec:
  placement:
    clusters:
      - remote1
      - remote2
    namespaces:
      - gloo-system
  template:
    spec:
      static:
        hosts:
          - addr: solo.io
            port: 80
    metadata:
      name: fed-upstream
EOF
kubectl apply --context local -n gloo-system -f- <<EOF
apiVersion: fed.gateway.solo.io/v1
kind: FederatedVirtualService
metadata:
  name: my-federated-vs
spec:
  placement:
    clusters:
      - remote1
      - remote2
    namespaces:
      - gloo-system
  template:
    spec:
      virtualHost:
        domains:
          - "*"
        routes:
          - matchers:
              - exact: /solo
            options:
              prefixRewrite: /
            routeAction:
              single:
                upstream:
                  name: fed-upstream
                  namespace: gloo-system
    metadata:
      name: fed-virtualservice
EOF
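## Optionally inspect the placement status reported on the management cluster; it should
## eventually show the resources as placed on both remote clusters.
kubectl get federatedupstream my-federated-upstream -n gloo-system --context local -o yaml
kubectl get federatedvirtualservice my-federated-vs -n gloo-system --context local -o yaml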
## Check that the upstream and vs are created in the remote clusters
kubectl get upstreams -n gloo-system --context remote1 fed-upstream
kubectl get upstreams -n gloo-system --context remote2 fed-upstream
kubectl get vs -n gloo-system --context remote1 fed-virtualservice
kubectl get vs -n gloo-system --context remote2 fed-virtualservice
## Verify that connection works:
kubectl config use-context remote1
curl $(glooctl proxy url)/solo -H "Host: solo.io" -w "Response code: %{http_code}\n" ## You should get response code 301, as solo.io redirects to https
kubectl config use-context remote2
curl $(glooctl proxy url)/solo -H "Host: solo.io" -w "Response code: %{http_code}\n" ## You should get response code 301, as solo.io redirects to https
kubectl config use-context local
## Deploy a new Gloo Fed release called gloo-new into the gloo-system-new namespace with namespace-scoped values:
helm upgrade -i gloo-new glooe/gloo-ee --namespace gloo-system-new --version 1.12.37 --create-namespace --set-string license_key="$LICENSE_KEY" --kube-context local --values - <<EOF
global:
  glooRbac:
    create: true
    nameSuffix: "-new"
    namespaced: true
grafana: # The grafana settings can be removed for Gloo Edge OSS
  defaultInstallationEnabled: false
gloo-fed:
  enabled: true
  glooFedApiserver:
    enable: true
prometheus:
  enabled: false
observability:
  enabled: false
gloo:
  settings:
    create: true
    writeNamespace: gloo-system-new
    watchNamespaces:
      - gloo-system-new
EOF
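## Optional sanity check: confirm the second fed install came up in its own namespace.
kubectl get pods -n gloo-system-new --context local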
## Register clusters in new fed deployment:
glooctl cluster register --federation-namespace gloo-system-new --cluster-name remote1 --remote-context remote1
glooctl cluster register --federation-namespace gloo-system-new --cluster-name remote2 --remote-context remote2
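## Verify the registrations in the new federation namespace as well:
glooctl cluster list --federation-namespace gloo-system-new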
## Create the same federated upstream and virtualservice in new fed deployment's namespace:
kubectl apply --context local -n gloo-system-new -f- <<EOF
apiVersion: fed.gloo.solo.io/v1
kind: FederatedUpstream
metadata:
  name: my-federated-upstream
spec:
  placement:
    clusters:
      - remote1
      - remote2
    namespaces:
      - gloo-system
  template:
    spec:
      static:
        hosts:
          - addr: solo.io
            port: 80
    metadata:
      name: fed-upstream
EOF
kubectl apply --context local -n gloo-system-new -f- <<EOF
apiVersion: fed.gateway.solo.io/v1
kind: FederatedVirtualService
metadata:
  name: my-federated-vs
spec:
  placement:
    clusters:
      - remote1
      - remote2
    namespaces:
      - gloo-system
  template:
    spec:
      virtualHost:
        domains:
          - "*"
        routes:
          - matchers:
              - exact: /solo
            options:
              prefixRewrite: /
            routeAction:
              single:
                upstream:
                  name: fed-upstream
                  namespace: gloo-system
    metadata:
      name: fed-virtualservice
EOF
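## Both fed installs now hold an identical federated resource, and the copies on the edge
## clusters are written by whichever instance reconciled last. Inspecting a remote copy's full
## manifest may help spot which install currently "owns" it (assumption: ownership is visible
## in the written metadata, which may not be the case).
kubectl get upstream fed-upstream -n gloo-system --context remote1 -o yaml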
kubectl delete federatedupstream -n gloo-system-new --context local my-federated-upstream
kubectl get upstream -n gloo-system --context remote1
## The upstream no longer exists in the edge cluster
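## The FederatedUpstream in the original gloo-system namespace is still present on the
## management cluster, yet the upstream it placed is gone; check the other edge cluster as well:
kubectl get federatedupstream my-federated-upstream -n gloo-system --context local
kubectl get upstream fed-upstream -n gloo-system --context remote2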
Expected Behavior
When I delete a federated resource but another one still points to the same resource in the edge cluster, I expect the edge resource to remain as long as at least 1 federated resource still points to it.
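Concretely, with both federated copies in place, the expectation is that the following sequence leaves the remote upstream intact, because gloo-system/my-federated-upstream still targets it:
kubectl delete federatedupstream -n gloo-system-new --context local my-federated-upstream
kubectl get upstream fed-upstream -n gloo-system --context remote1 ## expected to still exist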
Both fed instances have registered the same set of remote clusters (i.e. parallel canary of gloo fed and gloo edge), which is not something that we support yet. Both fed instances will try to overwrite the same resources on the remote clusters.
All federated resources are consumed by all fed instances (they do not read federated resources only from their own install namespace), so I would expect duplicate federated resources to result in duplicate write attempts on the remote clusters.
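One way to see the overlap from the management cluster is to list the federated resources across all namespaces; both copies are visible to (and reconciled by) both fed installs:
kubectl get federatedupstreams,federatedvirtualservices --all-namespaces --context local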