tektoncd / pipeline

A cloud-native Pipeline resource.
https://tekton.dev
Apache License 2.0

Install multiple instances of Tekton on a single K8s cluster #4605

Open aiden-deloryn opened 2 years ago

aiden-deloryn commented 2 years ago

Feature request

Currently there can only be one instance of Tekton installed on a K8s cluster. It would be useful if we could install multiple instances on a cluster (in different namespaces).

Use case

This would be beneficial in a multi-tenancy cluster environment where each tenant is provisioned with a unique namespace. Ideally, we would install a separate instance for each tenant which would run in their provisioned namespace and not interfere with other instances on the cluster. It would make sense in this scenario that each instance only watches one namespace (the namespace it belongs to).

tekton-robot commented 2 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

vdemeester commented 2 years ago

@aiden-deloryn so, in theory this is possible by customizing the "installation" process (i.e. modifying the release.yaml), but only if each instance watches a single namespace.
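The customization described above essentially amounts to renaming the hard-coded `tekton-pipelines` namespace throughout a copy of release.yaml. A minimal sketch of that rewrite, using a tiny hypothetical sample manifest rather than the real release.yaml (which also contains RBAC, webhook, and ConfigMap objects that need the same treatment):

```shell
# Hypothetical sketch: rename the default namespace in a copy of the release
# manifest so a second instance can live in its own namespace.
# The sample below is illustrative, NOT the real release.yaml.
cat > /tmp/sample-release.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: tekton-pipelines
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tekton-pipelines-controller
  namespace: tekton-pipelines
EOF

# Rewrite every reference to the default namespace for the second instance.
# Note this also renames resource names that embed the prefix, which is
# usually what you want so the instances do not collide.
sed 's/tekton-pipelines/tekton-pipelines-2/g' /tmp/sample-release.yaml \
  > /tmp/release-2.yaml

grep 'namespace: tekton-pipelines-2' /tmp/release-2.yaml
```

A blanket rename like this is only a starting point; cluster-scoped objects (CRDs, webhook configurations, ClusterRoles) still overlap between instances and need per-instance handling.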

aiden-deloryn commented 2 years ago

@vdemeester thanks for the tip. I did test this out back in February but there are still some outstanding issues which prevent it from working as expected. My memory is a little fuzzy on the details, but here are some notes I took from the testing I did which might be helpful.

How to install multiple (namespaced) instances of Tekton Pipelines

Modify release.yaml:

Problems encountered when running multiple instances:

tekton-robot commented 2 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten with a justification. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

tekton-robot commented 1 year ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen with a justification. Mark the issue as fresh with /remove-lifecycle rotten with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

tekton-robot commented 1 year ago

@tekton-robot: Closing this issue.

In response to [this](https://github.com/tektoncd/pipeline/issues/4605#issuecomment-1207508135):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen` with a justification.
> Mark the issue as fresh with `/remove-lifecycle rotten` with a justification.
> If this issue should be exempted, mark the issue as frozen with `/lifecycle frozen` with a justification.
>
> /close
>
> Send feedback to [tektoncd/plumbing](https://github.com/tektoncd/plumbing).

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
rafalbigaj commented 1 year ago

@vdemeester @pritidesai @afrittoli installation of multiple instances of Tekton on a single K8s cluster is a highly requested feature.

Could we reopen this issue? Do we have any plan to address it?

pritidesai commented 1 year ago

@rafalbigaj @aiden-deloryn this is a reasonable feature request and aligns with the maturity of the pipelines project.

Thank you @aiden-deloryn for the detailed analysis and sharing your findings.

There is no near-term plan to address this (as far as I know), but it is something we can definitely look into.

@aiden-deloryn we would appreciate it if you could run your tests on the latest pipelines release, especially with the V1 CRDs.

Just to note here, multiple instances of Tekton might also mean multiple versions of Tekton pipelines on a single K8S cluster.

/remove-lifecycle rotten

aiden-deloryn commented 1 year ago

@rafalbigaj @pritidesai thank you for responding to this feature request. Unfortunately I don't have any spare cycles at this time to investigate further as our internal development priorities have shifted and this is no longer on the critical path for us.

However, if somebody who is interested in this feature would like to test running multiple instances for the latest release and post the outcome here, I would be curious to see the results! I hope the information I provided previously might be of some help.

pritidesai commented 1 year ago

Thank you @aiden-deloryn for providing a list of changes needed to experiment with this feature, along with an updated release.yaml.

I was able to start experimenting with two deployments:

```shell
k create namespace tekton-pipelines-1
k create namespace tekton-pipelines-1-resolvers
k create namespace tekton-pipelines-2
k create namespace tekton-pipelines-2-resolvers
k apply -f https://raw.githubusercontent.com/TensorWorks/pipeline/issue-4605/release.yaml
k apply -f https://raw.githubusercontent.com/TensorWorks/pipeline/issue-4605/release-2.yaml
```

I am troubleshooting issues caused by revoking the cluster-wide access needed to list cluster-scoped admission webhook configurations:

```
W0110 23:18:28.482747       1 reflector.go:424] k8s.io/client-go@v0.25.4/tools/cache/reflector.go:169: failed to list *v1.ValidatingWebhookConfiguration: validatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "system:serviceaccount:tekton-pipelines-2:tekton-pipelines-webhook" cannot list resource "validatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope
```
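One possible direction for the error above, sketched under assumptions: webhook configurations are cluster-scoped even when everything else is namespaced, so each instance's webhook service account (the log names `tekton-pipelines-2:tekton-pipelines-webhook`) could be granted cluster-scope read access via a per-instance ClusterRole. The role names below are hypothetical, and `list`/`watch` alone may not be sufficient if the webhook also reconciles those objects.

```shell
# Hypothetical RBAC workaround for the second instance; role names are
# illustrative. Grants the webhook service account read access to
# cluster-scoped webhook configurations.
kubectl create clusterrole tekton-pipelines-2-webhook-cluster-access \
  --verb=list,watch \
  --resource=validatingwebhookconfigurations.admissionregistration.k8s.io,mutatingwebhookconfigurations.admissionregistration.k8s.io

kubectl create clusterrolebinding tekton-pipelines-2-webhook-cluster-access \
  --clusterrole=tekton-pipelines-2-webhook-cluster-access \
  --serviceaccount=tekton-pipelines-2:tekton-pipelines-webhook
```

The alternative direction implied by the comment above, removing the need for cluster-wide access entirely, would require code changes in the webhook rather than RBAC adjustments.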
tekton-robot commented 1 year ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

tekton-robot commented 1 year ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten with a justification. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

vdemeester commented 1 year ago

/lifecycle frozen

Maybe what we can do "as a quick win" is to publish a special release.yaml tailored to run in a single namespace.

taylor-schneider commented 1 week ago

Any progress here? If I understand the implication, there are currently no multi-tenant capabilities; I would need each "application" or "team" to have its own K8s cluster.

vdemeester commented 1 week ago

Not necessarily. If you have multiple teams split into different namespaces (for other things, like services, deployments, …), why not do the same for tektoncd/pipeline? In both cases, there is "one" controller managing the objects for the whole cluster.

taylor-schneider commented 1 week ago

@vdemeester I want a secure self-service workflow where users can define event listeners, triggers, and interceptors so that their pipelines run when their repos are modified. The pipelines will need access to secrets, and users will want them to run as a service account. But I cannot figure out how to prevent malicious users from running pipelines as elevated service accounts.