fluxcd / flux2

Open and extensible continuous delivery solution for Kubernetes. Powered by GitOps Toolkit.
https://fluxcd.io

How to deploy multiple operators on one cluster #2012

Closed mhsh64 closed 1 year ago

mhsh64 commented 2 years ago

Here is the plan:

Flux namespace 1: flux-system

Flux namespace 2: flux-system-dedicated

In each namespace we are deploying the source and kustomize operators. We also deployed Kustomization and GitRepository resources under the flux-system namespace.

But we observed that even the operators in flux-system-dedicated are watching the custom resources in the flux-system namespace.

Is it possible to have multiple Flux v2 operator sets on one cluster, each dedicated to its own Kustomization and GitRepository custom resources?

kingdonb commented 2 years ago

No, this is not possible. Flux is meant to be installed cluster-wide, and the multi-tenancy features of Flux are designed so that there is one Flux installation per cluster, or one Flux installation managing multiple clusters.

One kustomize-controller manages all instances of the Kustomization CRD, one helm-controller manages all HelmReleases, and so on. If multiple replicas of the Flux controllers are started, they perform a leader election and only one will be active. If you need to scale Flux up for throughput, there is the --concurrent flag on each controller that you can use to increase the number of reconciliations processed in parallel.

Can you please tell us more about your specific goals and why exactly you are trying to run Flux this way?

mhsh64 commented 2 years ago

Thanks @kingdonb for your response

We have a specific repository in which we keep manifests for deployments. This repo is updated frequently, since it is the source of all the app teams' custom resources.

So we need a short interval, such as 30 seconds, to pull this repo and apply the manifests with kustomize.

The reasons for using a dedicated operator for this repo are: first, it needs to pull and apply manifests frequently; second, performance; and third, reading logs.

When we check the logs of the main operator, they include entries for all Git repositories, which makes reading them a bit hard.

kingdonb commented 2 years ago

That's good feedback! Thank you for the added clarity.

I should mention this, maybe you haven't seen it already:

The flux2-multi-tenancy example (https://github.com/fluxcd/flux2-multi-tenancy) provides the information you need to run a multi-tenant installation like this safely. It covers details such as how to enforce tenant isolation through policy, and how to make sure the policies needed for safety are applied before tenants get a chance to apply manifests that could otherwise bypass policies that weren't installed yet.

Ordered installation of Flux Kustomizations, where needed, is provided through the dependsOn feature, which is also described as part of the multi-tenancy example linked above.
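
For illustration, here is a minimal sketch of dependsOn, not taken from the multi-tenancy repo; the resource names (rbac-policies, tenants) are placeholders. The tenants Kustomization is only applied after the policy Kustomization has reconciled successfully:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: tenants             # hypothetical name
  namespace: flux-system
spec:
  dependsOn:
    - name: rbac-policies   # hypothetical Kustomization that installs the admission policies first
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./tenants
  prune: true
```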

The repo shows how one Flux installation can be used with multiple tenants in the same cluster. This is the expected, best-practices way to install Flux with more than one tenant on a single cluster, which I think is what you're describing. It is not recommended (or currently possible) to run multiple Flux instances, but this model should work for any use case.

There are two guides like this that show advanced use cases of Flux; we recommend reading both for a better understanding of all of Flux's advanced features. (The other one: https://github.com/fluxcd/flux2-kustomize-helm-example)

Regarding your other concern, there is also flux logs, which can filter logs by namespace or by resource: https://fluxcd.io/docs/cmd/flux_logs/#examples

Right now the advice is to use --namespace or -n for resources that are not in the flux-system namespace, and we also generally advise users to keep any "tenant" resources somewhere else besides flux-system for security and tidiness/orderliness reasons.
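
For example, following the flux logs documentation linked above (the Kustomization name apps and the namespace tenant-a are placeholders):

```sh
# tail logs from all Flux controllers, across all namespaces
flux logs --all-namespaces --follow

# show only the log lines related to one Kustomization in a tenant namespace
flux logs --kind=Kustomization --name=apps --namespace=tenant-a
```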

This is guidance that is in line with the flux2-multi-tenancy repo. Please check it out and let us know if it helps.

As for performance, the expectation is as described in the example above: you use one Flux Kustomization resource per "application" or logical grouping, one for each tenant, and they can all be reconciled in parallel, each on its own interval. The platform team, which manages CRDs and has cluster-admin, is basically another separate tenant; they will be the only ones that can manage CRDs, since other tenants are restricted to a namespace.

Flux Kustomizations all reconcile on their own cycles and won't block each other unless you design your repo structure that way, so resources that are independent are reconciled concurrently; this is the default.

Depending on how many tenants you have, you may need to increase the RAM limits or requests, and/or change the --concurrent parameter to allow more than the default 4 concurrent reconcilers. There is no scaling Flux horizontally: you cannot increase the number of Flux instances or the replica count. Only one replica will be effective; the others, if scaled up, will be locked out from taking action through a leader election protocol.

But you can raise --concurrent to a higher number and let one Flux instance manage more resources concurrently; it will just increase the resource requirements to do so.
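
As a sketch (not something from this thread): one common way to raise that setting is a patch in the flux-system kustomization.yaml that flux bootstrap generates; the value 10 below is just an example, and the same patch can be repeated for the other controllers.

```yaml
# excerpt from flux-system/kustomization.yaml
patches:
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --concurrent=10
    target:
      kind: Deployment
      name: kustomize-controller
```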

kingdonb commented 2 years ago

Another piece of advice: you can definitely set 30s as a reconcile interval, and that should be fine, but as the number of resources in the Kustomization grows larger, especially if they are complex ones and CRDs, you may find it takes longer to reconcile. If it is very big, every 30s may be too frequent compared to the number of changes it actually receives. You can reconcile the GitRepository more frequently and the Kustomization less frequently, and you will see performance benefits from doing it that way: when a new revision of the Kustomization's GitRepository is detected, a downstream sync of the Kustomization is automatically triggered right away, as soon as the source fetch succeeds.
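
A sketch of that split, using a hypothetical source/Kustomization pair named apps (API versions roughly as of Flux 0.19): the Git source is polled every 30s, while the Kustomization's own re-check interval is relaxed to 10m; a new revision detected on the GitRepository still triggers the Kustomization right away.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: apps                # hypothetical
  namespace: flux-system
spec:
  interval: 30s             # poll Git frequently
  url: https://github.com/example/apps
  ref:
    branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m             # full re-apply / drift check runs less often
  sourceRef:
    kind: GitRepository
    name: apps
  path: ./deploy
  prune: true
```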

Performance has been improved somewhat through Server-Side Apply that landed in Flux 0.18, so this optimization is less important than it used to be, but we haven't tested scaling across every dimension, and it may still be important at some scales.

If the resources that you are defining and syncing don't really change that much, setting your Kustomization interval to 30s will be wasteful in terms of performance. That is the lowest setting allowed, because more frequent reconciliation will usually not perform acceptably: on each cycle the controller has to check whether the state in the cluster still matches the state in the repo, and this is really only necessary if you are using the interval as a way to correct drift.

Unless people are changing resources in the cluster outside of the GitOps workflow (introducing drifts), this aggressive syncing rate should really never be needed.

You can further improve performance and avoid rate limits from your Git provider by polling the GitRepository less frequently (a longer interval), without any loss in time-to-deploy, by adding webhooks that ensure a sync happens immediately whenever a change is pushed.

The webhook receiver guide describes how to accomplish this. That way, you can set a 10m0s reconciling interval but your users will not wait 10 minutes for changes to be reflected in the cluster. Actually, they will start syncing and take effect almost immediately.
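
A sketch of what the guide describes, assuming a GitHub-hosted repo and a pre-created webhook-token Secret (all names here are placeholders): notification-controller exposes a Receiver endpoint that, when called by the Git provider's push webhook, tells source-controller to fetch the GitRepository immediately.

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Receiver
metadata:
  name: github-receiver     # hypothetical
  namespace: flux-system
spec:
  type: github
  events:
    - "push"
  secretRef:
    name: webhook-token     # Secret holding the shared webhook secret
  resources:
    - kind: GitRepository
      name: flux-system
```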

I use this in my own repo and it works great. The only caveat is that you will need to expose some part of your cluster to the internet so that the webhook can report when a change is pushed, but the webhook receiver is a very small surface: it only understands a single "sync" instruction with no parameters, so it hardly even counts as an API.

mhsh64 commented 2 years ago

@kingdonb Thank you for the detailed information. I will check all of these recommendations and come back to you.

mhsh64 commented 2 years ago

Do you have any idea why flux logs --all-namespaces does not return any information?

kingdonb commented 2 years ago

What version of flux cli are you using, and what type of cluster are you running on? How did you install Flux?

We had issues with older versions of the Operator Hub flux for example, that have all been resolved in recent versions.

The flux logs command has several more options you may need to be aware of, including --flux-namespace, which must be set if you have installed Flux somewhere other than flux-system.
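
For example (the namespace name is only a placeholder), assuming Flux was installed somewhere other than the default:

```sh
flux logs --all-namespaces --flux-namespace=flux-system-dedicated
```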

Besides which namespace you have installed Flux into, it can also matter how it was installed: if you used any method other than flux bootstrap (or possibly the Terraform provider, which should also be considered a supported install method) to generate and install your manifests, there could be different results, and you could also have found a bug.

If you have installed Flux into a different namespace than flux-system, note there are labels that should match the installed namespace:

https://github.com/fluxcd/flux2/blob/1d1d4bbf4b234031bc25a09a252bfb9fbe369f9d/cmd/flux/logs.go#L96

If the labels do not match, then setting --flux-namespace correctly likely won't help either, since the labels also need to match according to this code. I'm not entirely sure that installing Flux into a namespace other than flux-system is currently documented or supported, but it seems to be on the roadmap (or at least moving in that direction 👍).

mhsh64 commented 2 years ago

flux -v: flux version 0.19.1

Cluster: AKS, Kubernetes 1.21.1

All Flux resources are deployed in the flux-system namespace, along with all the source and kustomize CRDs.

Still not able to get any logs.

stefanprodan commented 2 years ago

@mhsh64 can you please post the output of kubectl get ns flux-system -oyaml, flux check, and flux get all --all-namespaces here? Feel free to censor any private info.

mhsh64 commented 2 years ago
flux get all --all-namespaces:

```
NAMESPACE    NAME              READY  MESSAGE                                                            REVISION                                         SUSPENDED
flux-system  gitrepository/x1  True   Fetched revision: master/a5efba86dd38b9e87ac97e8bb24e7a048654479b  master/a5efba86dd38b9e87ac97e8bb24e7a048654479b  False
flux-system  gitrepository/x2  True   Fetched revision: master/6197cf7b208183b922bfaf2a4c2abbfd084a69f8  master/6197cf7b208183b922bfaf2a4c2abbfd084a69f8  False
flux-system  gitrepository/x3  True   Fetched revision: master/eda43356784e217d76deebb580b4d72ef1c68e3f  master/eda43356784e217d76deebb580b4d72ef1c68e3f  False
flux-system  gitrepository/x4  True   Fetched revision: master/3313fca75c07217fcda882156576b9ee8e9e2066  master/3313fca75c07217fcda882156576b9ee8e9e2066  False
```

kubectl get ns flux-system -oyaml:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
  labels:
    app.kubernetes.io/component: flux-operator
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: flux
    app.kubernetes.io/part-of: fluxcd
    app.kubernetes.io/version: 0.17.1
    kubernetes.io/metadata.name: flux-system
    name: flux
  managedFields:
  - manager: kubectl-client-side-apply
    operation: Update
    time: "2021-10-27T16:43:10Z"
  name: flux-system
  resourceVersion: "4677763"
  uid: 22eeba49-03d3-48e0-90d6-456108f47d0b
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
```

flux check:

```
► checking prerequisites
✔ Kubernetes 1.21.1 >=1.19.0-0
► checking controllers
✔ all checks passed
```

@stefanprodan thanks

mhsh64 commented 2 years ago

@stefanprodan do you know why this does not show anything?

stefanprodan commented 2 years ago

Yes, you don't have our labels on the namespace, and probably not on the deployments either. See https://github.com/fluxcd/flux2/blob/main/manifests/install/labels.yaml
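
For reference, a sketch of the label pairs that file applies to the Flux namespace and controller Deployments (quoted from memory, so please verify against the linked labels.yaml); flux logs selects the controller pods by these labels:

```yaml
labels:
  app.kubernetes.io/instance: flux-system
  app.kubernetes.io/part-of: flux
```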

mhsh64 commented 2 years ago

Cool, yeah, after applying the labels it worked as expected. Thanks!

davinkevin commented 2 years ago

I have the same needs as the title describes, but for other purpose.

As cluster maintainers, we try to keep all of our systems available during "upgrades" and simple to roll back, for us and for our users. So our simplest (and best?) solution is to deploy some tools "twice" and switch which namespaces a given version is responsible for, on demand.

To detail it, it would look like this for Flux:

This is like what Istio provides for canary releases, by the way, which is very helpful, especially for something as critical as networking or, in the Flux context, application delivery.

stefanprodan commented 2 years ago

@davinkevin you could do a canary deployment of Flux using RBAC to restrict what a Flux instance can see in the cluster. For example, deny the installed version access to some namespace, then deploy the new Flux instance with RBAC that can access only that namespace.

davinkevin commented 2 years ago

Thanks @stefanprodan ! It's a good trick to achieve that.

To be able to delegate this operation (service account RBAC modification) to a (human) namespace operator, we have to create a set of roles attached to each namespace. It's possible, but hard to manage at scale.

Do you think it would be possible, at the Flux level, to use labels to do this at the application level, exactly like Istio does? It's a powerful solution and really simple to delegate in a "shared cluster" with hundreds of teams.

But again, thank you for the intermediate solution, it can help 👍

nikodemjedynak-dnv commented 1 year ago

@stefanprodan How would you recommend achieving the RBAC separation of what a Flux instance can see in the cluster? I have reused the ClusterRole from the RBAC in the community-supported Helm chart and bound it with RoleBindings to the namespaces I would prefer to use; however, all of the controllers still seem to require the ability to view CRDs at cluster scope. Is it possible to override such behaviour? Thanks for any tips!

stefanprodan commented 1 year ago

We now support deploying multiple Flux controllers via sharding; docs here: https://fluxcd.io/flux/cheatsheets/sharding/
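
A sketch based on that cheatsheet (the shard key value shard1 and the GitRepository are placeholders): each extra controller set is started with a --watch-label-selector flag that matches a sharding label, and resources are assigned to a shard by carrying that label.

```yaml
# the shard's controllers are started with:
#   --watch-label-selector=sharding.fluxcd.io/key=shard1
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: apps                         # hypothetical
  namespace: flux-system
  labels:
    sharding.fluxcd.io/key: shard1   # reconciled only by the shard1 controllers
spec:
  interval: 5m
  url: https://github.com/example/apps
  ref:
    branch: main
```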