netscaler / netscaler-helm-charts

NetScaler helm charts
https://github.com/netscaler/netscaler-helm-charts
Apache License 2.0

one Cluster, multiple ADC/VPX #136

Closed · patsch9 closed 2 years ago

patsch9 commented 2 years ago

Hello,

we want to install multiple ingress controllers for different kinds of services (e.g. one ADC/VPX for external services and one ADC/VPX for internal services).

The first app can be installed via the cloud-native chart. When I install the second one with a different namespace and different serviceAccount names, the following error comes up: Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "kube-cnc-router" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "citrix-ingress-internal": current value is "citrix-ingress-external"

Is the deployment of multiple instances for connecting different ADCs/VPXs in the same cluster supported?

Thanks!
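For context, the collision is on a cluster-scoped resource whose name never varies between releases. A minimal sketch of the object both releases try to own (the fixed name is taken from the chart; everything else is trimmed):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # ClusterRoles are cluster-scoped, and this name is hardcoded in the
  # chart, so the second release fails Helm's ownership-metadata check.
  name: kube-cnc-router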

apoorvak-citrix commented 2 years ago

@patsch9 We do support multiple Citrix ingress controllers (CIC) in the same cluster, configuring different VPXs. But we currently don't support multiple Citrix Node Controllers (CNC) in the same cluster.

What does your topology look like? Are any of the VPXs in the same network as the Kubernetes cluster nodes? Can you please try not enabling the Citrix Node Controller (CNC) while deploying the second helm chart?
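As a rough sketch, assuming the cloud-native chart bundles CNC as a subchart with an enabled-style toggle (both key names here are assumptions, please check the chart's values.yaml), the values for the second release could look like:

# values for the second release (hypothetical keys)
cic:
  enabled: true    # second ingress controller, pointing at the second VPX
cnc:
  enabled: false   # skip the Citrix Node Controller to avoid the ClusterRole clash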

patsch9 commented 2 years ago

@apoorva-05 The cluster is in its own subnet and the two VPXs are in different subnets too (example: cluster 10.1.1.0/24, VPX1 10.1.2.0/24, VPX2 10.1.3.0/24). One VPX is for in-house traffic and one is for internet publishing.

When I deploy the second helm chart, I have to disable CRD creation and set a different serviceAccount name. Everything is fine up to this point, but when I enable the node controller so the VPX gets access to the container network, the first helm chart is fine but the second fails with the message I described above, because the chart wants to create a ClusterRole with the name kube-cnc-router that already exists. In the helm chart there is no option to change the ClusterRole name of kube-cnc-router. I think the next step would fail as well, because the CNC controller wants to create new pods for the CNC router with the name "kube-cnc-router-HOSTNAME..." that I can't change either.

I think if there were an option to set the ClusterRole name of kube-cnc-router and a prefix for the pod name of kube-cnc-router, it would be possible to create a second node controller instance.

patsch9 commented 2 years ago

citrix-node-controller/templates/serviceaccount.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "citrix-k8s-node-controller.cncrouterserviceAccountName" . }}
rules:
  - apiGroups: ["*"]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "patch", "delete", "update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "patch", "delete", "update"]
  - apiGroups: ["crd.projectcalico.org"]
    resources: ["ipamblocks"]
    verbs: ["get", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "citrix-k8s-node-controller.cncrouterserviceAccountName" . }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ include "citrix-k8s-node-controller.cncrouterserviceAccountName" . }}
subjects:
- kind: ServiceAccount
  name: {{ include "citrix-k8s-node-controller.cncrouterserviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "citrix-k8s-node-controller.cncrouterserviceAccountName" . }}
  namespace: {{ .Release.Namespace }}

citrix-node-controller/values.yaml

-----------------------------------------------------------
# Default values for citrix-node-controller.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

image: quay.io/citrix/citrix-k8s-node-controller:2.2.9
pullPolicy: IfNotPresent
license:
  accept: no
nsIP:
adcCredentialSecret:
network:
vtepIP:
vxlan:
  id:
  port: 
cniType:
dsrIPRange:
clusterName:
cncRouterImage:
cncrouterserviceAccountName:
cncrouterNameprefix: "VPX1"

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and `create` is true, a name is generated using the fullname template
  # name:

With these edits you would maybe only have to adjust the creation process of the cnc-router pod regarding the association with the service account, the pod name, and the nodeid value.
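
As a usage sketch with the proposed values (the key names are the ones proposed above; the release suffix and file name are illustrative), each instance would then get its own override file, passed e.g. via helm install -f values-vpx2.yaml:

# values-vpx2.yaml (hypothetical second instance)
cncrouterserviceAccountName: kube-cnc-router-vpx2
cncrouterNameprefix: "VPX2"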

apoorvak-citrix commented 2 years ago

@patsch9 Along with the changes you mentioned in the yaml and the helm chart, an enhancement is required in the CNC code as well, which we will work on.

If this requirement is a blocker for you, we can try looking for a workaround for your use case.

patsch9 commented 2 years ago

@apoorva-05 Actually this is a blocker for us, because we have use cases in our Kubernetes environments with workloads that have to be published both in-house and to the internet. Our networking department has the restriction that one NetScaler is for internal use and one NetScaler is for external use cases. A workaround would be great for the time being.

mayurmohanpatil commented 2 years ago

@patsch9 We would like to connect with you over email. Could you send us an email to appmodernization@citrix.com, where we will assist you further? Would you be interested in joining our Slack community?

patsch9 commented 2 years ago

@mayurmohanpatil I have sent an email for contact.

apoorvak-citrix commented 2 years ago

@patsch9 We have addressed this issue in the latest release of Citrix Node Controller (v2.2.10). The use case of deploying multiple instances of Citrix Node Controller in the same cluster is now supported. For more details, please have a look at the Helm Chart Release Notes. Closing this issue.
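
For reference, a sketch of how two instances could be configured after upgrading, with one values file per release; only keys from the chart's values.yaml shown above are used, and all addresses, secret names, and VXLAN IDs are illustrative placeholders matching the example subnets earlier in this thread:

# values-vpx1.yaml (external VPX)
license:
  accept: yes
nsIP: 10.1.2.10                       # management IP of VPX1 (example)
adcCredentialSecret: vpx1-credentials # secret with VPX1 login (example)
vtepIP: 10.1.2.11                     # VXLAN tunnel endpoint on VPX1 (example)
vxlan:
  id: 500
  port: 3267

# values-vpx2.yaml (internal VPX)
license:
  accept: yes
nsIP: 10.1.3.10                       # management IP of VPX2 (example)
adcCredentialSecret: vpx2-credentials # secret with VPX2 login (example)
vtepIP: 10.1.3.11                     # VXLAN tunnel endpoint on VPX2 (example)
vxlan:
  id: 501
  port: 3267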