sbose78 opened 4 years ago
I think the lifecycles of a Controller and an Operator are different.

If it is an Operator, like the Tekton Operator, then you can manage the controller and CRDs inside of the Operator: update it, or otherwise operate on it. But I think this repo is a build controller; it includes the functional logic, not the operational logic.
If there is no easy way, then like other controllers we can document in the README how to install or update it, and the operations team will handle that.

On our side, for example, our admin manages the controller lifecycle with a Concourse pipeline: whenever a Git release is published, the pipeline redoes the deploy steps

```shell
kubectl apply -f deploy/crds/
kubectl apply -f deploy/
# etc.
```

to update the CRDs and the controller with the new image.
So what we call an operator is just code that creates and initializes controllers (via the manager). From my experience, it is a best practice to also allow the operator to generate on the fly & install the required controllers' CRDs. I guess this is why the Tekton Operator does it, and I would prefer that this build operator does the same. I think it is better to reduce dependencies prior to the build operator's initialization.
I'm not sure the above counts as lifecycle management of CRDs. I think it does not, since it just installs them, and any future change to a CRD definition will imply a restart of the operator.
@sbose78 if you are after controller lifecycle management, I think you need a Helm chart, and the ability to package the operator into an image for versioning. The chart should reference that image, and the user should be able to define which version of the operator they want based on the image digest (or similar).

I might be missing something here, so please keep me honest.

Also, what about an image mechanism (store in a registry with some semantic versioning), if the Helm chart is too much? Or is this something already in place?
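To illustrate the Helm idea, here is a rough sketch. The chart name, the `image.repository`/`image.digest` values keys, and the registry path are all hypothetical; nothing like this exists in the repo today:

```shell
# Hypothetical: install a specific operator version by pinning the image
# digest via chart values. The "build-operator" chart and the "image.digest"
# values key are assumptions for illustration only.
helm install build-operator ./charts/build-operator \
  --namespace build-system --create-namespace \
  --set image.repository=quay.io/example/build-operator \
  --set image.digest=sha256:0123abcd...

# Upgrading to a new version is then just a values change:
helm upgrade build-operator ./charts/build-operator \
  --namespace build-system \
  --set image.digest=sha256:4567ef01...
```

The point being that the chart release, not the controller itself, carries the version, so rollbacks and upgrades go through standard `helm upgrade`/`helm rollback`.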
Thanks!
So here are the things that need to be "managed and lifecycle'd":
1. A `builder` or `pipeline` Service Account: a default service account created in every namespace, which Builds would run as. It is similar to the `builder` sa[1] in OpenShift, or the `pipeline` sa which OpenShift Pipelines sets up for us in every namespace (I've documented the manual steps[2] to set it up today), or the `default` sa that exists in every namespace in Kubernetes and can be used.
2. Controller image: the image which runs the controllers we have in this repo.
3. CRDs (3 or 4 CRDs).
4. [Cluster]RoleBindings, [Cluster]Roles & the Service Account the controller would run as.
(1), (2) & (3) as a whole bundle needs to be versioned and managed by something on the cluster.
[1] https://docs.openshift.com/container-platform/3.6/dev_guide/service_accounts.html#default-service-accounts-and-roles [2] https://github.com/redhat-developer/build/blob/master/HACK.md#running-the-operator
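As a sketch of what the per-namespace service account setup amounts to (the names `pipeline`, `my-app`, and `build-runner` below are illustrative assumptions, not something this repo prescribes):

```shell
# Hypothetical per-namespace setup for the service account Builds run as.
# The SA name "pipeline" and ClusterRole "build-runner" are assumptions.
kubectl create serviceaccount pipeline --namespace my-app

# Bind whatever role the builds need in that namespace:
kubectl create rolebinding pipeline-build-runner \
  --namespace my-app \
  --clusterrole=build-runner \
  --serviceaccount=my-app:pipeline
```

Whatever manages the lifecycle would need to repeat this for every namespace where Builds run, which is part of why an operator (or OLM) is attractive here.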
- OLM can manage bundles like (1), (2) and (3) as a whole. Example: https://github.com/sbose78/buildv2-olm-csv-sample/blob/master/0.0.4/buildv2-operator.v0.0.1.clusterserviceversion.yaml
- Just like typical package managers, OLM has a way to define dependencies: https://github.com/operator-framework/operator-lifecycle-manager/blob/master/README.md#dependency-resolution
- The Tekton operator isn't on OperatorHub yet, hence dependency resolution doesn't happen for free: https://github.com/tektoncd/operator
Option 1: We tag and create a release.yaml at frequent intervals (like Tekton).

Option 2: A dedicated operator for setting things up. I would typically keep the "business logic" controllers separate from the one that sets those up.
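For reference, letting OLM lifecycle the bundle would look roughly like this. This is a sketch; the package name, channel, and catalog source below are assumptions, since no such package is published yet:

```shell
# Hypothetical: subscribe to the operator so OLM installs and upgrades the
# whole bundle (CSV + CRDs + RBAC). Package, channel, and source names are
# placeholders, not a published package.
kubectl apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: buildv2-operator
  namespace: operators
spec:
  name: buildv2-operator
  channel: alpha
  source: operatorhubio-catalog
  sourceNamespace: olm
EOF
```

With a Subscription in place, OLM resolves dependencies and rolls out new CSV versions from the channel automatically.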
> From my experience, it is a best practice to also allow the operator to generate on the fly & install the required controllers' CRDs
Based on my analysis, I like option 2 :)
And I think I'm going to submit 'Option 2' to the list of Kubernetes operators on operatorhub.io (https://github.com/operator-framework/community-operators/tree/master/upstream-community-operators) anyway, but that will not affect your custom way of managing this dedicated operator, if you want to have one.
( more to be added )
Thanks for the analysis.

I like option 2 and also hope to see a dedicated Build Operator in OperatorHub :)
Of course, we have the `default` sa in every namespace in Kubernetes that can be used.
https://github.com/tektoncd/operator/issues/3 shows that adding the upstream Tekton operator to OperatorHub is marked as important/soon. If that's done, then we get this easy OperatorHub experience too (https://www.youtube.com/watch?v=M90IKYKm2aI), which is feasible on OpenShift (https://github.com/sbose78/buildv2-olm-csv-sample/blob/master/0.0.5/buildv2-operator.v0.0.1.clusterserviceversion.yaml#L15).
I also like option 2! Thanks for the detailed explanation!
How would we lifecycle the controller/CRDs?